bmad-method 6.2.1-next.20 → 6.2.1-next.22

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "$schema": "https://json.schemastore.org/package.json",
  "name": "bmad-method",
- "version": "6.2.1-next.20",
+ "version": "6.2.1-next.22",
  "description": "Breakthrough Method of Agile AI-driven Development",
  "keywords": [
  "agile",
@@ -2,6 +2,7 @@
  diff_output: '' # set at runtime
  spec_file: '' # set at runtime (path or empty)
  review_mode: '' # set at runtime: "full" or "no-spec"
+ story_key: '' # set at runtime when discovered from sprint status
  ---
 
  # Step 1: Gather Context
@@ -23,8 +24,8 @@ review_mode: '' # set at runtime: "full" or "no-spec"
  - When multiple phrases match, prefer the most specific match (e.g., "branch diff" over bare "diff").
  - **If a clear match is found:** Announce the detected mode (e.g., "Detected intent: review staged changes only") and proceed directly to constructing `{diff_output}` using the corresponding sub-case from instruction 3. Skip to instruction 4 (spec question).
  - **If no match from invocation text, check sprint tracking.** Look for a sprint status file (`*sprint-status*`) in `{implementation_artifacts}` or `{planning_artifacts}`. If found, scan for any story with status `review`. Handle as follows:
- - **Exactly one `review` story:** Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, fall through to instruction 2.
- - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. Then use the selected story's context to determine the diff source as in the single-story case above, and proceed to instruction 3.
+ - **Exactly one `review` story:** Set `{story_key}` to the story's key (e.g., `1-2-user-auth`). Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, clear `{story_key}` and fall through to instruction 2.
+ - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. If the user selects a story, set `{story_key}` to the selected story's key and use the selected story's context to determine the diff source as in the single-story case above, and proceed to instruction 3. If the user selects the manual choice, clear `{story_key}` and fall through to instruction 2.
  - **If no match and no sprint tracking:** Fall through to instruction 2.
 
  2. HALT. Ask the user: **What do you want to review?** Present these options:
@@ -14,7 +14,7 @@ deferred_work_file: '{implementation_artifacts}/deferred-work.md'
 
  ### 1. Clean review shortcut
 
- If zero findings remain after triage (all dismissed or none raised): state that and end the workflow.
+ If zero findings remain after triage (all dismissed or none raised): state that and proceed to section 6 (Sprint Status Update).
 
  ### 2. Write findings to the story file
 
@@ -82,3 +82,48 @@ If `{spec_file}` is **not** set, present only options 1 and 3 (omit option 2 —
  - Patches handled: <P>
  - Deferred: <W>
  - Dismissed: <R>
+
+ ### 6. Update story status and sync sprint tracking
+
+ Skip this section if `{spec_file}` is not set.
+
+ #### Determine new status based on review outcome
+
+ - If all `decision-needed` and `patch` findings were resolved (fixed or dismissed) AND no unresolved HIGH/MEDIUM issues remain: set `{new_status}` = `done`. Update the story file Status section to `done`.
+ - If `patch` findings were left as action items, or unresolved issues remain: set `{new_status}` = `in-progress`. Update the story file Status section to `in-progress`.
+
+ Save the story file.
+
+ #### Sync sprint-status.yaml
+
+ If `{story_key}` is not set, skip this subsection and note that sprint status was not synced because no story key was available.
+
+ If `{sprint_status}` file exists:
+
+ 1. Load the FULL `{sprint_status}` file.
+ 2. Find the `development_status` entry matching `{story_key}`.
+ 3. If found: update `development_status[{story_key}]` to `{new_status}`. Update `last_updated` to current date. Save the file, preserving ALL comments and structure including STATUS DEFINITIONS.
+ 4. If `{story_key}` not found in sprint status: warn the user that the story file was updated but sprint-status sync failed.
+
+ If `{sprint_status}` file does not exist, note that story status was updated in the story file only.
+
+ #### Completion summary
+
+ > **Review Complete!**
+ >
+ > **Story Status:** `{new_status}`
+ > **Issues Fixed:** <fixed_count>
+ > **Action Items Created:** <action_count>
+ > **Deferred:** <W>
+ > **Dismissed:** <R>
+
+ ### 7. Next steps
+
+ Present the user with follow-up options:
+
+ > **What would you like to do next?**
+ > 1. **Start the next story** — run `dev-story` to pick up the next `ready-for-dev` story
+ > 2. **Re-run code review** — address findings and review again
+ > 3. **Done** — end the workflow
+
+ **HALT** — I am waiting for your choice. Do not proceed until the user selects an option.
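The sprint-status file that the sync step above reads and writes is assumed to look roughly like the following sketch. Only `development_status`, the story keys, the status values, and `last_updated` are named by the workflow; every other detail here is illustrative:

```yaml
# STATUS DEFINITIONS (comments like this must be preserved when the file is saved)
# ready-for-dev -> in-progress -> review -> done
last_updated: 2024-01-15
development_status:
  1-2-user-auth: review          # {story_key} entries map story key -> status
  1-3-session-mgmt: ready-for-dev
```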
@@ -44,6 +44,7 @@ Load and read full config from `{main_config}` and resolve:
  - `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
  - `communication_language`, `document_output_language`, `user_skill_level`
  - `date` as system-generated current datetime
+ - `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
  - `project_context` = `**/project-context.md` (load if exists)
  - CLAUDE.md / memory files (load if exist)
 
@@ -1,6 +1,137 @@
  ---
  name: bmad-advanced-elicitation
  description: 'Push the LLM to reconsider, refine, and improve its recent output. Use when user asks for deeper critique or mentions a known deeper critique method, e.g. socratic, first principles, pre-mortem, red team.'
+ agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
  ---
 
- Follow the instructions in ./workflow.md.
+ # Advanced Elicitation
+
+ **Goal:** Push the LLM to reconsider, refine, and improve its recent output.
+
+ ---
+
+ ## CRITICAL LLM INSTRUCTIONS
+
+ - **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
+ - DO NOT skip steps or change the sequence
+ - HALT immediately when halt-conditions are met
+ - Each action within a step is a REQUIRED action to complete that step
+ - Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
+ - **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
+
+ ---
+
+ ## INTEGRATION (When Invoked Indirectly)
+
+ When invoked from another prompt or process:
+
+ 1. Receive or review the current section content that was just generated
+ 2. Apply elicitation methods iteratively to enhance that specific content
+ 3. Return the enhanced version back when user selects 'x' to proceed and return back
+ 4. The enhanced content replaces the original section content in the output document
+
+ ---
+
+ ## FLOW
+
+ ### Step 1: Method Registry Loading
+
+ **Action:** Load and read `./methods.csv` and `{agent_party}`
+
+ #### CSV Structure
+
+ - **category:** Method grouping (core, structural, risk, etc.)
+ - **method_name:** Display name for the method
+ - **description:** Rich explanation of what the method does, when to use it, and why it's valuable
+ - **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
+
+ #### Context Analysis
+
+ - Use conversation history
+ - Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
+
+ #### Smart Selection
+
+ 1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
+ 2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
+ 3. Select 5 methods: Choose methods that best match the context based on their descriptions
+ 4. Balance approach: Include mix of foundational and specialized techniques as appropriate
+
+ ---
+
+ ### Step 2: Present Options and Handle Responses
+
+ #### Display Format
+
+ ```
+ **Advanced Elicitation Options**
+ _If party mode is active, agents will join in._
+ Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
+
+ 1. [Method Name]
+ 2. [Method Name]
+ 3. [Method Name]
+ 4. [Method Name]
+ 5. [Method Name]
+ r. Reshuffle the list with 5 new options
+ a. List all methods with descriptions
+ x. Proceed / No Further Actions
+ ```
+
+ #### Response Handling
+
+ **Case 1-5 (User selects a numbered method):**
+
+ - Execute the selected method using its description from the CSV
+ - Adapt the method's complexity and output format based on the current context
+ - Apply the method creatively to the current section content being enhanced
+ - Display the enhanced version showing what the method revealed or improved
+ - **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
+ - **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try best to follow the instructions given by the user.
+ - **CRITICAL:** Re-present the same 1-5,r,x prompt to allow additional elicitations
+
+ **Case r (Reshuffle):**
+
+ - Select 5 random methods from methods.csv, present new list with same prompt format
+ - When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discovered
+
+ **Case x (Proceed):**
+
+ - Complete elicitation and proceed
+ - Return the fully enhanced content back to the invoking skill
+ - The enhanced content becomes the final version for that section
+ - Signal completion back to the invoking skill to continue with next section
+
+ **Case a (List All):**
+
+ - List all methods with their descriptions from the CSV in a compact table
+ - Allow user to select any method by name or number from the full list
+ - After selection, execute the method as described in the Case 1-5 above
+
+ **Case: Direct Feedback:**
+
+ - Apply changes to current section content and re-present choices
+
+ **Case: Multiple Numbers:**
+
+ - Execute methods in sequence on the content, then re-offer choices
+
+ ---
+
+ ### Step 3: Execution Guidelines
+
+ - **Method execution:** Use the description from CSV to understand and apply each method
+ - **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
+ - **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
+ - **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
+ - Focus on actionable insights
+ - **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
+ - **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
+ - **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
+ - Continue until user selects 'x' to proceed with enhanced content, confirm or ask the user what should be accepted from the session
+ - Each method application builds upon previous enhancements
+ - **Content preservation:** Track all enhancements made during elicitation
+ - **Iterative enhancement:** Each selected method (1-5) should:
+   1. Apply to the current enhanced version of the content
+   2. Show the improvements made
+   3. Return to the prompt for additional elicitations or completion
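The `methods.csv` registry that Step 1 loads might contain rows shaped like the following. These entries are purely illustrative, not the package's actual registry, and only the four column names are taken from the CSV structure described above:

```csv
category,method_name,description,output_pattern
risk,Pre-Mortem,"Imagine the work has already failed and reason backward to the most likely causes","failure scenario -> causes -> mitigations"
core,First Principles,"Strip the output down to its fundamental assumptions and rebuild it from them","assumptions -> rebuild -> refined output"
```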
@@ -1,135 +0,0 @@
- ---
- agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
- ---
-
- # Advanced Elicitation Workflow
-
- **Goal:** Push the LLM to reconsider, refine, and improve its recent output.
-
- ---
-
- ## CRITICAL LLM INSTRUCTIONS
-
- - **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
- - DO NOT skip steps or change the sequence
- - HALT immediately when halt-conditions are met
- - Each action within a step is a REQUIRED action to complete that step
- - Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
- - **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
-
- ---
-
- ## INTEGRATION (When Invoked Indirectly)
-
- When invoked from another prompt or process:
-
- 1. Receive or review the current section content that was just generated
- 2. Apply elicitation methods iteratively to enhance that specific content
- 3. Return the enhanced version back when user selects 'x' to proceed and return back
- 4. The enhanced content replaces the original section content in the output document
-
- ---
-
- ## FLOW
-
- ### Step 1: Method Registry Loading
-
- **Action:** Load and read `./methods.csv` and `{agent_party}`
-
- #### CSV Structure
-
- - **category:** Method grouping (core, structural, risk, etc.)
- - **method_name:** Display name for the method
- - **description:** Rich explanation of what the method does, when to use it, and why it's valuable
- - **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
-
- #### Context Analysis
-
- - Use conversation history
- - Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
-
- #### Smart Selection
-
- 1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
- 2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
- 3. Select 5 methods: Choose methods that best match the context based on their descriptions
- 4. Balance approach: Include mix of foundational and specialized techniques as appropriate
-
- ---
-
- ### Step 2: Present Options and Handle Responses
-
- #### Display Format
-
- ```
- **Advanced Elicitation Options**
- _If party mode is active, agents will join in._
- Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
-
- 1. [Method Name]
- 2. [Method Name]
- 3. [Method Name]
- 4. [Method Name]
- 5. [Method Name]
- r. Reshuffle the list with 5 new options
- a. List all methods with descriptions
- x. Proceed / No Further Actions
- ```
-
- #### Response Handling
-
- **Case 1-5 (User selects a numbered method):**
-
- - Execute the selected method using its description from the CSV
- - Adapt the method's complexity and output format based on the current context
- - Apply the method creatively to the current section content being enhanced
- - Display the enhanced version showing what the method revealed or improved
- - **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
- - **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try best to follow the instructions given by the user.
- - **CRITICAL:** Re-present the same 1-5,r,x prompt to allow additional elicitations
-
- **Case r (Reshuffle):**
-
- - Select 5 random methods from methods.csv, present new list with same prompt format
- - When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discovered
-
- **Case x (Proceed):**
-
- - Complete elicitation and proceed
- - Return the fully enhanced content back to the invoking skill
- - The enhanced content becomes the final version for that section
- - Signal completion back to the invoking skill to continue with next section
-
- **Case a (List All):**
-
- - List all methods with their descriptions from the CSV in a compact table
- - Allow user to select any method by name or number from the full list
- - After selection, execute the method as described in the Case 1-5 above
-
- **Case: Direct Feedback:**
-
- - Apply changes to current section content and re-present choices
-
- **Case: Multiple Numbers:**
-
- - Execute methods in sequence on the content, then re-offer choices
-
- ---
-
- ### Step 3: Execution Guidelines
-
- - **Method execution:** Use the description from CSV to understand and apply each method
- - **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
- - **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
- - **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
- - Focus on actionable insights
- - **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
- - **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
- - **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
- - Continue until user selects 'x' to proceed with enhanced content, confirm or ask the user what should be accepted from the session
- - Each method application builds upon previous enhancements
- - **Content preservation:** Track all enhancements made during elicitation
- - **Iterative enhancement:** Each selected method (1-5) should:
-   1. Apply to the current enhanced version of the content
-   2. Show the improvements made
-   3. Return to the prompt for additional elicitations or completion