bmad-method 6.2.1-next.21 → 6.2.1-next.23

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "$schema": "https://json.schemastore.org/package.json",
  "name": "bmad-method",
- "version": "6.2.1-next.21",
+ "version": "6.2.1-next.23",
  "description": "Breakthrough Method of Agile AI-driven Development",
  "keywords": [
  "agile",
@@ -1,6 +1,137 @@
  ---
  name: bmad-advanced-elicitation
  description: 'Push the LLM to reconsider, refine, and improve its recent output. Use when user asks for deeper critique or mentions a known deeper critique method, e.g. socratic, first principles, pre-mortem, red team.'
+ agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
  ---

- Follow the instructions in ./workflow.md.
+ # Advanced Elicitation
+
+ **Goal:** Push the LLM to reconsider, refine, and improve its recent output.
+
+ ---
+
+ ## CRITICAL LLM INSTRUCTIONS
+
+ - **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
+ - DO NOT skip steps or change the sequence
+ - HALT immediately when halt-conditions are met
+ - Each action within a step is a REQUIRED action to complete that step
+ - Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
+ - **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
+
+ ---
+
+ ## INTEGRATION (When Invoked Indirectly)
+
+ When invoked from another prompt or process:
+
+ 1. Receive or review the current section content that was just generated
+ 2. Apply elicitation methods iteratively to enhance that specific content
+ 3. Return the enhanced version when the user selects 'x' to proceed
+ 4. The enhanced content replaces the original section content in the output document
+
+ ---
+
+ ## FLOW
+
+ ### Step 1: Method Registry Loading
+
+ **Action:** Load and read `./methods.csv` and `{agent_party}`
+
+ #### CSV Structure
+
+ - **category:** Method grouping (core, structural, risk, etc.)
+ - **method_name:** Display name for the method
+ - **description:** Rich explanation of what the method does, when to use it, and why it's valuable
+ - **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
+
+ #### Context Analysis
+
+ - Use conversation history
+ - Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
+
+ #### Smart Selection
+
+ 1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
+ 2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
+ 3. Select 5 methods: Choose methods that best match the context based on their descriptions
+ 4. Balance approach: Include mix of foundational and specialized techniques as appropriate
+
+ ---
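The registry-loading and selection steps above can be sketched in code. This is a minimal illustration, not part of the skill itself: the CSV columns are the ones documented in the CSV Structure section, but the keyword-scoring heuristic and function names are hypothetical stand-ins for the LLM's context analysis.

```python
import csv
import random

def load_methods(path="methods.csv"):
    """Load the elicitation method registry (columns per the CSV Structure above)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def select_methods(methods, keywords, n=5):
    """Pick up to n methods whose descriptions match context keywords,
    padding with random methods to keep a balanced menu of n options."""
    def score(m):
        return sum(kw in m["description"].lower() for kw in keywords)
    matched = sorted((m for m in methods if score(m) > 0), key=score, reverse=True)[:n]
    rest = [m for m in methods if m not in matched]
    # Pad with random picks so the menu always offers n options when possible
    matched += random.sample(rest, min(n - len(matched), len(rest)))
    return matched
```

In the actual skill the "scoring" is done by the LLM reading the rich descriptions, so this sketch only shows the data flow: rows in, five candidates out.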
+
+ ### Step 2: Present Options and Handle Responses
+
+ #### Display Format
+
+ ```
+ **Advanced Elicitation Options**
+ _If party mode is active, agents will join in._
+ Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
+
+ 1. [Method Name]
+ 2. [Method Name]
+ 3. [Method Name]
+ 4. [Method Name]
+ 5. [Method Name]
+ r. Reshuffle the list with 5 new options
+ a. List all methods with descriptions
+ x. Proceed / No Further Actions
+ ```
+
+ #### Response Handling
+
+ **Case 1-5 (User selects a numbered method):**
+
+ - Execute the selected method using its description from the CSV
+ - Adapt the method's complexity and output format based on the current context
+ - Apply the method creatively to the current section content being enhanced
+ - Display the enhanced version showing what the method revealed or improved
+ - **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
+ - **CRITICAL:** ONLY if Yes, apply the changes. If No, discard your memory of the proposed changes. For any other reply, follow the user's instructions as best you can.
+ - **CRITICAL:** Re-present the same 1-5,r,a,x prompt to allow additional elicitations
+
+ **Case r (Reshuffle):**
+
+ - Select 5 random methods from methods.csv, present new list with same prompt format
+ - When selecting, pick a diverse set of methods covering different categories and approaches, with options 1 and 2 being the most likely to be useful for the document or section under discussion
+
+ **Case x (Proceed):**
+
+ - Complete elicitation and proceed
+ - Return the fully enhanced content back to the invoking skill
+ - The enhanced content becomes the final version for that section
+ - Signal completion back to the invoking skill to continue with next section
+
+ **Case a (List All):**
+
+ - List all methods with their descriptions from the CSV in a compact table
+ - Allow user to select any method by name or number from the full list
+ - After selection, execute the method as described in Case 1-5 above
+
+ **Case: Direct Feedback:**
+
+ - Apply changes to current section content and re-present choices
+
+ **Case: Multiple Numbers:**
+
+ - Execute methods in sequence on the content, then re-offer choices
+
+ ---
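The response-handling cases above amount to a small dispatch loop. A minimal sketch follows; the handler callables (`apply_method`, `reshuffle`) are hypothetical placeholders for the LLM behaviors described above, and the 'a' and direct-feedback branches are omitted for brevity:

```python
def elicitation_loop(methods, content, apply_method, reshuffle, prompt=input):
    """Re-offer the 1-5/r/a/x menu until the user proceeds with 'x'."""
    while True:
        choice = prompt("Choose a number (1-5), [r] to Reshuffle, or [x] to Proceed: ").strip().lower()
        if choice == "x":
            return content                      # final enhanced content
        if choice == "r":
            methods = reshuffle()               # 5 new method options
        elif choice.isdigit() and 1 <= int(choice) <= len(methods):
            # Each application builds on the previously enhanced version
            content = apply_method(methods[int(choice) - 1], content)
        # 'a' (list all) and direct-feedback replies are not modeled here
```

The key property the sketch shows is the loop invariant: every selection rewrites `content` in place, and only 'x' exits with the accumulated enhancements.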
+
+ ### Step 3: Execution Guidelines
+
+ - **Method execution:** Use the description from CSV to understand and apply each method
+ - **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
+ - **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
+ - **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
+ - Focus on actionable insights
+ - **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
+ - **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
+ - **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
+ - Continue until the user selects 'x' to proceed with enhanced content, then confirm with the user what from the session should be accepted
+ - Each method application builds upon previous enhancements
+ - **Content preservation:** Track all enhancements made during elicitation
+ - **Iterative enhancement:** Each selected method (1-5) should:
+   1. Apply to the current enhanced version of the content
+   2. Show the improvements made
+   3. Return to the prompt for additional elicitations or completion
@@ -3,4 +3,84 @@ name: bmad-editorial-review-prose
  description: 'Clinical copy-editor that reviews text for communication issues. Use when user says review for prose or improve the prose'
  ---

- Follow the instructions in ./workflow.md.
+ # Editorial Review - Prose
+
+ **Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table.
+
+ **Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step.
+
+ **CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed.
+
+ **Inputs:**
+ - **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML)
+ - **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices.
+ - **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus
+
+
+ ## PRINCIPLES
+
+ 1. **Minimal intervention:** Apply the smallest fix that achieves clarity
+ 2. **Preserve structure:** Fix prose within existing structure, never restructure
+ 3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup
+ 4. **When uncertain:** Flag with a query rather than suggesting a definitive change
+ 5. **Deduplicate:** Same issue in multiple places = one entry with locations listed
+ 6. **No conflicts:** Merge overlapping fixes into single entries
+ 7. **Respect author voice:** Preserve intentional stylistic choices
+
+ > **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins.
+
+
+ ## STEPS
+
+ ### Step 1: Validate Input
+
+ - Check if content is empty or contains fewer than 3 words
+ - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)"
+ - Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`)
+ - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'"
+ - Identify content type (markdown, plain text, XML with text)
+ - Note any code blocks, frontmatter, or structural markup to skip
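The validation rules above are mechanical enough to state as code. A minimal sketch, assuming the error messages exactly as written in the step (the function name is an illustrative choice, not part of the skill):

```python
def validate_input(content, reader_type="humans"):
    """Input checks mirroring Step 1: minimum length and reader_type."""
    if len(content.split()) < 3:
        raise ValueError(
            "Content too short for editorial review (minimum 3 words required)"
        )
    if reader_type not in ("humans", "llm"):
        raise ValueError("Invalid reader_type. Must be 'humans' or 'llm'")
    return reader_type
```

Word-splitting on whitespace is a simplification; the skill itself would also detect content type and markup to skip.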
+
+ ### Step 2: Analyze Style
+
+ - Analyze the style, tone, and voice of the input text
+ - Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns)
+ - Calibrate review approach based on reader_type:
+   - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging
+   - If `humans`: Prioritize clarity, flow, readability, natural progression
+
+ ### Step 3: Editorial Review (CRITICAL)
+
+ - If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review
+ - Review all prose sections (skip code blocks, frontmatter, structural markup)
+ - Identify communication issues that impede comprehension
+ - For each issue, determine the minimal fix that achieves clarity
+ - Deduplicate: If same issue appears multiple times, create one entry listing all locations
+ - Merge overlapping issues into single entries (no conflicting suggestions)
+ - For uncertain fixes, phrase as query: "Consider: [suggestion]?" rather than definitive change
+ - Preserve author voice — do not "improve" intentional stylistic choices
+
+ ### Step 4: Output Results
+
+ - If issues found: Output a three-column markdown table with all suggested fixes
+ - If no issues found: Output "No editorial issues identified"
+
+ **Output format:**
+
+ | Original Text | Revised Text | Changes |
+ |---------------|--------------|---------|
+ | The exact original passage | The suggested revision | Brief explanation of what changed and why |
+
+ **Example:**
+
+ | Original Text | Revised Text | Changes |
+ |---------------|--------------|---------|
+ | The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" |
+ | Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) |
+
+
+ ## HALT CONDITIONS
+
+ - HALT with error if content is empty or fewer than 3 words
+ - HALT with error if reader_type is not `humans` or `llm`
+ - If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error)
@@ -3,4 +3,177 @@ name: bmad-editorial-review-structure
  description: 'Structural editor that proposes cuts, reorganization, and simplification while preserving comprehension. Use when user requests structural review or editorial review of structure'
  ---

- Follow the instructions in ./workflow.md.
+ # Editorial Review - Structure
+
+ **Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing.
+
+ **Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step.
+
+ > **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins.
+
+ **Inputs:**
+ - **content** (required) -- Document to review (markdown, plain text, or structured content)
+ - **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices.
+ - **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview')
+ - **target_audience** (optional) -- Who reads this? (e.g., 'new users', 'experienced developers', 'decision makers')
+ - **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density
+ - **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit')
+
+ ## Principles
+
+ - Comprehension through calibration: Optimize for the minimum words needed to maintain understanding
+ - Front-load value: Critical information comes first; nice-to-know comes last (or goes)
+ - One source of truth: If information appears identically twice, consolidate
+ - Scope discipline: Content that belongs in a different document should be cut or linked
+ - Propose, don't execute: Output recommendations -- user decides what to accept
+ - **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.**
+
+ ## Human-Reader Principles
+
+ These elements serve human comprehension and engagement -- preserve unless clearly wasteful:
+
+ - Visual aids: Diagrams, images, and flowcharts anchor understanding
+ - Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place
+ - Reader's Journey: Organize content biologically (linear progression), not logically (database)
+ - Mental models: Overview before details prevents cognitive overload
+ - Warmth: Encouraging tone reduces anxiety for new users
+ - Whitespace: Admonitions and callouts provide visual breathing room
+ - Summaries: Recaps help retention; they're reinforcement, not redundancy
+ - Examples: Concrete illustrations make abstract concepts accessible
+ - Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention
+
+ ## LLM-Reader Principles
+
+ When reader_type='llm', optimize for PRECISION and UNAMBIGUITY:
+
+ - Dependency-first: Define concepts before usage to minimize hallucination risk
+ - Cut emotional language, encouragement, and orientation sections
+ - IF concept is well-known from training (e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly.
+ - Use consistent terminology -- same word for same concept throughout
+ - Eliminate hedging ("might", "could", "generally") -- use direct statements
+ - Prefer structured formats (tables, lists, YAML) over prose
+ - Reference known standards ("conventional commits", "Google style guide") to leverage training
+ - STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation
+ - Unambiguous references -- no unclear antecedents ("it", "this", "the above")
+ - Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth)
+
+ ## Structure Models
+
+ ### Tutorial/Guide (Linear)
+ **Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs
+ - Prerequisites: Setup/Context MUST precede action
+ - Sequence: Steps must follow strict chronological or logical dependency order
+ - Goal-oriented: clear 'Definition of Done' at the end
+
+ ### Reference/Database
+ **Applicability:** API docs, glossaries, configuration references, cheat sheets
+ - Random Access: No narrative flow required; user jumps to specific item
+ - MECE: Topics are Mutually Exclusive and Collectively Exhaustive
+ - Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns)
+
+ ### Explanation (Conceptual)
+ **Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context
+ - Abstract to Concrete: Definition to Context to Implementation/Example
+ - Scaffolding: Complex ideas built on established foundations
+
+ ### Prompt/Task Definition (Functional)
+ **Applicability:** BMAD tasks, prompts, system instructions, XML definitions
+ - Meta-first: Inputs, usage constraints, and context defined before instructions
+ - Separation of Concerns: Instructions (logic) separate from Data (content)
+ - Step-by-step: Execution flow must be explicit and ordered
+
+ ### Strategic/Context (Pyramid)
+ **Applicability:** PRDs, research reports, proposals, decision records
+ - Top-down: Conclusion/Status/Recommendation starts the document
+ - Grouping: Supporting context grouped logically below the headline
+ - Ordering: Most critical information first
+ - MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive
+ - Evidence: Data supports arguments, never leads
+
+ ## STEPS
+
+ ### Step 1: Validate Input
+
+ - Check if content is empty or contains fewer than 3 words
+ - If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)"
+ - Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans")
+ - If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'"
+ - Identify document type and structure (headings, sections, lists, etc.)
+ - Note the current word count and section count
+
+ ### Step 2: Understand Purpose
+
+ - If purpose was provided, use it; otherwise infer from content
+ - If target_audience was provided, use it; otherwise infer from content
+ - Identify the core question the document answers
+ - State in one sentence: "This document exists to help [audience] accomplish [goal]"
+ - Select the most appropriate structural model from Structure Models based on purpose/audience
+ - Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles)
+
+ ### Step 3: Structural Analysis (CRITICAL)
+
+ - If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis
+ - Map the document structure: list each major section with its word count
+ - Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid)
+ - For each section, answer: Does this directly serve the stated purpose?
+ - If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged?
+ - Identify sections that could be: cut entirely, merged with another, moved to a different location, or split
+ - Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement)
+ - Identify scope violations: content that belongs in a different document
+ - Identify burying: critical information hidden deep in the document
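The structure-mapping step ("list each major section with its word count") can be sketched mechanically. A rough illustration only, assuming markdown ATX headings; the function name and the `(preamble)` label are hypothetical conveniences:

```python
import re

def map_sections(markdown):
    """Split a markdown document on headings and return words per section."""
    sections, current, name = {}, [], "(preamble)"
    for line in markdown.splitlines():
        m = re.match(r"#{1,6}\s+(.*)", line)
        if m:
            sections[name] = len(" ".join(current).split())
            current, name = [], m.group(1).strip()
        else:
            current.append(line)
    sections[name] = len(" ".join(current).split())  # flush the last section
    return sections
```

The per-section counts feed directly into the "Does this directly serve the stated purpose?" check and the word-saving estimates in Step 5.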
+
+ ### Step 4: Flow Analysis
+
+ - Assess the reader's journey: Does the sequence match how readers will use this?
+ - Identify premature detail: explanation given before the reader needs it
+ - Identify missing scaffolding: complex ideas without adequate setup
+ - Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim
+ - If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention?
+
+ ### Step 5: Generate Recommendations
+
+ - Compile all findings into prioritized recommendations
+ - Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension)
+ - For each recommendation, state the rationale in one sentence
+ - Estimate impact: how many words would this save (or cost, for PRESERVE)?
+ - If length_target was provided, assess whether recommendations meet it
+ - If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement"
+
+ ### Step 6: Output Results
+
+ - Output document summary (purpose, audience, reader_type, current length)
+ - Output the recommendation list in priority order
+ - Output estimated total reduction if all recommendations accepted
+ - If no recommendations, output: "No substantive changes recommended -- document structure is sound"
+
+ Use the following output format:
+
+ ```markdown
+ ## Document Summary
+ - **Purpose:** [inferred or provided purpose]
+ - **Audience:** [inferred or provided audience]
+ - **Reader type:** [selected reader type]
+ - **Structure model:** [selected structure model]
+ - **Current length:** [X] words across [Y] sections
+
+ ## Recommendations
+
+ ### 1. [CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name]
+ **Rationale:** [One sentence explanation]
+ **Impact:** ~[X] words
+ **Comprehension note:** [If applicable, note impact on reader understanding]
+
+ ### 2. ...
+
+ ## Summary
+ - **Total recommendations:** [N]
+ - **Estimated reduction:** [X] words ([Y]% of original)
+ - **Meets length target:** [Yes/No/No target specified]
+ - **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity]
+ ```
+
+ ## HALT CONDITIONS
+
+ - HALT with error if content is empty or fewer than 3 words
+ - HALT with error if reader_type is not "humans" or "llm"
+ - If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error)
@@ -3,4 +3,90 @@ name: bmad-help
  description: 'Analyzes current state and user query to answer BMad questions or recommend the next workflow or agent. Use when user says what should I do next, what do I do now, or asks a question about BMad'
  ---

- Follow the instructions in ./workflow.md.
+ # Task: BMAD Help
+
+ ## ROUTING RULES
+
+ - **Empty `phase` = anytime** — Universal tools work regardless of workflow state
+ - **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module)
+ - **Phase with no required steps** — If an entire phase has no `required=true` items, the whole phase is optional. If it sits sequentially before another phase, it can be recommended, but always be clear with the user about what the next truly required item is.
+ - **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering
+ - **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed")
+ - **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases
+ - **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows
+
+ ## DISPLAY RULES
+
+ ### Command-Based Workflows
+ When `command` field has a value:
+ - Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`)
+
+ ### Skill-Referenced Workflows
+ When `workflow-file` starts with `skill:`:
+ - The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path
+ - Do NOT attempt to resolve or load it as a file path
+ - Display using the `command` column value as a skill name in backticks (same as command-based workflows)
+
+ ### Agent-Based Workflows
+ When `command` field is empty:
+ - User loads agent first by invoking the agent skill (e.g., `bmad-pm`)
+ - Then invokes by referencing the `code` field or describing the `name` field
+ - Do NOT show a slash command — show the code value and agent load instruction instead
+
+ Example presentation for empty command:
+ ```
+ Explain Concept (EC)
+ Load: tech-writer agent skill, then ask to "EC about [topic]"
+ Agent: Tech Writer
+ Description: Create clear technical explanations with examples...
+ ```
+
+ ## MODULE DETECTION
+
+ - **Empty `module` column** → universal tools (work across all modules)
+ - **Named `module`** → module-specific workflows
+
+ Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user.
+
+ ## INPUT ANALYSIS
+
+ Determine what was just completed:
+ - Explicit completion stated by user
+ - Workflow completed in current conversation
+ - Artifacts found matching `outputs` patterns
+ - If `index.md` exists, read it for additional context
+ - If still unclear, ask: "What workflow did you most recently complete?"
+
+ ## EXECUTION
+
+ 1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv`
+
+ 2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config.
+
+ 3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so.
+
+ 4. **Detect active module** — Use MODULE DETECTION above
+
+ 5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above.
+
+ 6. **Present recommendations** — Show next steps based on:
+    - Completed workflows detected
+    - Phase/sequence ordering (ROUTING RULES)
+    - Artifact presence
+
+    **Optional items first** — List optional workflows until a required step is reached
+    **Required items next** — List the next required workflow
+
+    For each item, apply DISPLAY RULES above and include:
+    - Workflow **name**
+    - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES)
+    - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)")
+    - Brief **description**
+
+ 7. **Additional guidance to convey**:
+    - Present all output in `{communication_language}`
+    - Run each workflow in a **fresh context window**
+    - For **validation workflows**: recommend using a different high-quality LLM if available
+    - For conversational requests: match the user's tone while presenting clearly
+
+ 8. Return to the calling process after presenting recommendations.
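Steps 1 and 2 of the execution flow above can be sketched with the standard library. A minimal illustration under stated assumptions: the catalog path and the `config.yaml` scan match the steps above, but the function name is hypothetical, and a real implementation would need a YAML parser to resolve `output-location` variables rather than just collecting config paths:

```python
import csv
from pathlib import Path

def load_catalog(project_root):
    """Load bmad-help.csv and locate each module's config.yaml."""
    root = Path(project_root) / "_bmad"
    with open(root / "_config" / "bmad-help.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # Scan every module folder (except _config) for its config.yaml
    configs = {
        p.name: p / "config.yaml"
        for p in root.iterdir()
        if p.is_dir() and p.name != "_config" and (p / "config.yaml").exists()
    }
    return rows, configs
```

From here, the task would parse each config for `communication_language` and `project_knowledge`, then match artifact files on disk against each row's `outputs` patterns.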
@@ -3,4 +3,64 @@ name: bmad-index-docs
  description: 'Generates or updates an index.md to reference all docs in the folder. Use if user requests to create or update an index of all files in a specific folder'
  ---

- Follow the instructions in ./workflow.md.
+ # Index Docs
+
+ **Goal:** Generate or update an index.md to reference all docs in a target folder.
+
+
+ ## EXECUTION
+
+ ### Step 1: Scan Directory
+
+ - List all files and subdirectories in the target location
+
+ ### Step 2: Group Content
+
+ - Organize files by type, purpose, or subdirectory
+
+ ### Step 3: Generate Descriptions
+
+ - Read each file to understand its actual purpose and create brief (3-10 word) descriptions based on the content, not just the filename
+
+ ### Step 4: Create/Update Index
+
+ - Write or update index.md with organized file listings
+
+
+ ## OUTPUT FORMAT
+
+ ```markdown
+ # Directory Index
+
+ ## Files
+
+ - **[filename.ext](./filename.ext)** - Brief description
+ - **[another-file.ext](./another-file.ext)** - Brief description
+
+ ## Subdirectories
+
+ ### subfolder/
+
+ - **[file1.ext](./subfolder/file1.ext)** - Brief description
+ - **[file2.ext](./subfolder/file2.ext)** - Brief description
+
+ ### another-folder/
+
+ - **[file3.ext](./another-folder/file3.ext)** - Brief description
+ ```
+
+
+ ## HALT CONDITIONS
+
+ - HALT if target directory does not exist or is inaccessible
+ - HALT if user does not have write permissions to create index.md
+
+
+ ## VALIDATION
+
+ - Use relative paths starting with ./
+ - Group similar files together
+ - Read file contents to generate accurate descriptions - don't guess from filenames
+ - Keep descriptions concise but informative (3-10 words)
+ - Sort alphabetically within groups
+ - Skip hidden files (starting with .) unless specified
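The scan/sort/skip rules above can be sketched as a small generator for the output format shown. This is an illustration only: the function name is hypothetical, and it leaves "Brief description" as a placeholder because the real step reads each file's content to write the description:

```python
from pathlib import Path

def build_index(folder):
    """Render an index.md body: sorted files, then subdirectories, hidden entries skipped."""
    folder = Path(folder)
    files = sorted(p for p in folder.iterdir()
                   if p.is_file() and not p.name.startswith(".") and p.name != "index.md")
    subdirs = sorted(p for p in folder.iterdir()
                     if p.is_dir() and not p.name.startswith("."))
    lines = ["# Directory Index", "", "## Files", ""]
    lines += [f"- **[{p.name}](./{p.name})** - Brief description" for p in files]
    if subdirs:
        lines += ["", "## Subdirectories"]
        for d in subdirs:
            lines += ["", f"### {d.name}/", ""]
            lines += [f"- **[{p.name}](./{d.name}/{p.name})** - Brief description"
                      for p in sorted(d.iterdir())
                      if p.is_file() and not p.name.startswith(".")]
    return "\n".join(lines) + "\n"
```

Note the relative `./` link style and alphabetical sort, matching the VALIDATION rules.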
@@ -3,4 +3,35 @@ name: bmad-review-adversarial-general
  description: 'Perform a Cynical Review and produce a findings report. Use when the user requests a critical review of something'
  ---

- Follow the instructions in ./workflow.md.
+ # Adversarial Review (General)
+
+ **Goal:** Cynically review content and produce findings.
+
+ **Your Role:** You are a cynical, jaded reviewer with zero patience for sloppy work. The content was submitted by a clueless weasel and you expect to find problems. Be skeptical of everything. Look for what's missing, not just what's wrong. Use a precise, professional tone — no profanity or personal attacks.
+
+ **Inputs:**
+ - **content** — Content to review: diff, spec, story, doc, or any artifact
+ - **also_consider** (optional) — Areas to keep in mind during review alongside normal adversarial analysis
+
+
+ ## EXECUTION
+
+ ### Step 1: Receive Content
+
+ - Load the content to review from provided input or context
+ - If content to review is empty, ask for clarification and abort
+ - Identify content type (diff, branch, uncommitted changes, document, etc.)
+
+ ### Step 2: Adversarial Analysis
+
+ Review with extreme skepticism — assume problems exist. Find at least ten issues to fix or improve in the provided content.
+
+ ### Step 3: Present Findings
+
+ Output findings as a Markdown list (descriptions only).
+
+
+ ## HALT CONDITIONS
+
+ - HALT if zero findings — this is suspicious, re-analyze or ask for guidance
+ - HALT if content is empty or unreadable