bmad-method 6.2.1-next.21 → 6.2.1-next.23

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -3,4 +3,65 @@ name: bmad-review-edge-case-hunter
  description: 'Walk every branching path and boundary condition in content, report only unhandled edge cases. Orthogonal to adversarial review - method-driven, not attitude-driven. Use when you need exhaustive edge-case analysis of code, specs, or diffs.'
  ---
 
- Follow the instructions in ./workflow.md.
+ # Edge Case Hunter Review
+
+ **Goal:** You are a pure path tracer. Never comment on whether code is good or bad; only list missing handling.
+ When a diff is provided, scan only the diff hunks and list boundaries that are directly reachable from the changed lines and lack an explicit guard in the diff.
+ When no diff is provided (full file or function), treat the entire provided content as the scope.
+ Ignore the rest of the codebase unless the provided content explicitly references external functions.
+
+ **Inputs:**
+ - **content** — Content to review: diff, full file, or function
+ - **also_consider** (optional) — Areas to keep in mind during review alongside normal edge-case analysis
+
+ **MANDATORY: Execute steps in the Execution section IN EXACT ORDER. DO NOT skip steps or change the sequence. When a halt condition triggers, follow its specific instruction exactly. Each action within a step is a REQUIRED action to complete that step.**
+
+ **Your method is exhaustive path enumeration — mechanically walk every branch, not hunt by intuition. Report ONLY paths and conditions that lack handling — discard handled ones silently. Do NOT editorialize or add filler — findings only.**
+
+
+ ## EXECUTION
+
+ ### Step 1: Receive Content
+
+ - Load the content to review strictly from provided input
+ - If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop
+ - Identify content type (diff, full file, or function) to determine scope rules
+
+ ### Step 2: Exhaustive Path Analysis
+
+ **Walk every branching path and boundary condition within scope — report only unhandled ones.**
+
+ - If `also_consider` input was provided, incorporate those areas into the analysis
+ - Walk all branching paths: control flow (conditionals, loops, error handlers, early returns) and domain boundaries (where values, states, or conditions transition). Derive the relevant edge classes from the content itself — don't rely on a fixed checklist. Examples: missing else/default, unguarded inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps
+ - For each path: determine whether the content handles it
+ - Collect only the unhandled paths as findings — discard handled ones silently
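Applied to a toy snippet, the walk in Step 2 might look like this (a hypothetical Python illustration; the reviewed function, the location string, and the guard text are all invented for the example):

```python
def average(values):
    # Content under review: divides without guarding the empty-input boundary.
    return sum(values) / len(values)

# Step 2 walk: every reachable branch/boundary, with its handling status.
walked = [
    ("values is empty", False),     # len(values) == 0 raises ZeroDivisionError, no guard
    ("values is non-empty", True),  # normal path, handled
]

# Keep only unhandled paths as findings; handled ones are discarded silently.
findings = [
    {
        "location": "example.py:1-3",
        "trigger_condition": condition,
        "guard_snippet": "if not values: raise ValueError('empty input')",
        "potential_consequence": "ZeroDivisionError crashes the caller",
    }
    for condition, handled in walked
    if not handled
]
```

The handled path never appears in the output, matching the "discard handled ones silently" rule.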
+
+ ### Step 3: Validate Completeness
+
+ - Revisit every edge class from Step 2 — e.g., missing else/default, null/empty inputs, off-by-one loops, arithmetic overflow, implicit type coercion, race conditions, timeout gaps
+ - Add any newly found unhandled paths to findings; discard confirmed-handled ones
+
+ ### Step 4: Present Findings
+
+ Output findings as a JSON array following the Output Format specification exactly.
+
+
+ ## OUTPUT FORMAT
+
+ Return ONLY a valid JSON array of objects. Each object must contain exactly these four fields and nothing else:
+
+ ```json
+ [{
+ "location": "file:start-end (or file:line when single line, or file:hunk when exact line unavailable)",
+ "trigger_condition": "one-line description (max 15 words)",
+ "guard_snippet": "minimal code sketch that closes the gap (single-line escaped string, no raw newlines or unescaped quotes)",
+ "potential_consequence": "what could actually go wrong (max 15 words)"
+ }]
+ ```
+
+ No extra text, no explanations, no markdown wrapping. An empty array `[]` is valid when no unhandled paths are found.
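A consumer of this contract could check the shape like so (a minimal sketch, assuming Python; the helper name is invented, and only the four field names come from the format above):

```python
import json

REQUIRED_FIELDS = {"location", "trigger_condition", "guard_snippet", "potential_consequence"}

def is_valid_report(raw: str) -> bool:
    """Return True when raw is a JSON array whose objects carry exactly the four fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, list):
        return False
    # Every element must be an object with exactly the required keys, nothing else.
    return all(isinstance(item, dict) and set(item) == REQUIRED_FIELDS for item in data)
```

Note that `is_valid_report("[]")` is true: an empty array is a valid report.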
+
+
+ ## HALT CONDITIONS
+
+ - If content is empty or cannot be decoded as text, return `[{"location":"N/A","trigger_condition":"Input empty or undecodable","guard_snippet":"Provide valid content to review","potential_consequence":"Review skipped — no analysis performed"}]` and stop
@@ -3,4 +3,103 @@ name: bmad-shard-doc
  description: 'Splits large markdown documents into smaller, organized files based on level 2 (default) sections. Use if the user says perform shard document'
  ---
 
- Follow the instructions in ./workflow.md.
+ # Shard Document
+
+ **Goal:** Split large markdown documents into smaller, organized files based on level 2 sections using `npx @kayvan/markdown-tree-parser`.
+
+ ## CRITICAL RULES
+
+ - MANDATORY: Execute ALL steps in the EXECUTION section IN EXACT ORDER
+ - DO NOT skip steps or change the sequence
+ - HALT immediately when halt-conditions are met
+ - Each action within a step is a REQUIRED action to complete that step
+
+ ## EXECUTION
+
+ ### Step 1: Get Source Document
+
+ - Ask user for the source document path if not provided already
+ - Verify file exists and is accessible
+ - Verify file is markdown format (.md extension)
+ - If file not found or not markdown: HALT with error message
+
+ ### Step 2: Get Destination Folder
+
+ - Determine default destination: same location as source file, folder named after source file without .md extension
+ - Example: `/path/to/architecture.md` --> `/path/to/architecture/`
+ - Ask user for the destination folder path (`[y]` to confirm use of default: `[suggested-path]`, else enter a new path)
+ - If user accepts default: use the suggested destination path
+ - If user provides custom path: use the custom destination path
+ - Verify destination folder exists or can be created
+ - Check write permissions for destination
+ - If permission denied: HALT with error message
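The default-destination rule above can be sketched as follows (a minimal Python sketch; the function name is invented for illustration):

```python
from pathlib import Path

def default_destination(source: str) -> Path:
    """Same directory as the source file, folder named after it minus the .md extension."""
    src = Path(source)
    # /path/to/architecture.md -> /path/to/architecture
    return src.parent / src.stem
```

For example, `default_destination("/path/to/architecture.md")` yields `/path/to/architecture`.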
+
+ ### Step 3: Execute Sharding
+
+ - Inform user that sharding is beginning
+ - Execute command: `npx @kayvan/markdown-tree-parser explode [source-document] [destination-folder]`
+ - Capture command output and any errors
+ - If command fails: HALT and display error to user
+
+ ### Step 4: Verify Output
+
+ - Check that destination folder contains sharded files
+ - Verify index.md was created in destination folder
+ - Count the number of files created
+ - If no files created: HALT with error message
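The verification in Step 4 might be sketched like this (it assumes the tool writes flat `.md` shards plus an `index.md` into the destination; the helper is illustrative, not part of the task):

```python
from pathlib import Path

def verify_shards(destination: str) -> int:
    """Return the shard count, or raise when the tool produced nothing usable."""
    dest = Path(destination)
    shards = sorted(dest.glob("*.md"))
    if not shards:
        raise RuntimeError(f"No files created in {dest}: halting")
    if not (dest / "index.md").exists():
        raise RuntimeError(f"index.md missing from {dest}: halting")
    return len(shards)
```

The returned count feeds the Step 5 completion report; either failure maps to a HALT.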
+
+ ### Step 5: Report Completion
+
+ - Display completion report to user including:
+   - Source document path and name
+   - Destination folder path
+   - Number of section files created
+   - Confirmation that index.md was created
+   - Any tool output or warnings
+ - Inform user that sharding completed successfully
+
+ ### Step 6: Handle Original Document
+
+ > **Critical:** Keeping both the original and sharded versions defeats the purpose of sharding and can cause confusion.
+
+ Present user with options for the original document:
+
+ > What would you like to do with the original document `[source-document-name]`?
+ >
+ > Options:
+ > - `[d]` Delete - Remove the original (recommended - shards can always be recombined)
+ > - `[m]` Move to archive - Move original to a backup/archive location
+ > - `[k]` Keep - Leave original in place (NOT recommended - defeats sharding purpose)
+ >
+ > Your choice (d/m/k):
+
+ #### If user selects `d` (delete)
+
+ - Delete the original source document file
+ - Confirm deletion to user: "Original document deleted: [source-document-path]"
+ - Note: The document can be reconstructed from shards by concatenating all section files in order
+
+ #### If user selects `m` (move)
+
+ - Determine default archive location: same directory as source, in an `archive` subfolder
+ - Example: `/path/to/architecture.md` --> `/path/to/archive/architecture.md`
+ - Ask: Archive location (`[y]` to use default: `[default-archive-path]`, or provide custom path)
+ - If user accepts default: use default archive path
+ - If user provides custom path: use custom archive path
+ - Create archive directory if it does not exist
+ - Move original document to archive location
+ - Confirm move to user: "Original document moved to: [archive-path]"
+
+ #### If user selects `k` (keep)
+
+ - Display warning to user:
+   - Keeping both original and sharded versions is NOT recommended
+   - The discover_inputs protocol may load the wrong version
+   - Updates to one will not reflect in the other
+   - Duplicate content takes up space
+   - Consider deleting or archiving the original document
+ - Confirm user choice: "Original document kept at: [source-document-path]"
+
+ ## HALT CONDITIONS
+
+ - HALT if npx command fails or produces no output files
@@ -1,135 +0,0 @@
- ---
- agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
- ---
-
- # Advanced Elicitation Workflow
-
- **Goal:** Push the LLM to reconsider, refine, and improve its recent output.
-
- ---
-
- ## CRITICAL LLM INSTRUCTIONS
-
- - - **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
- - - DO NOT skip steps or change the sequence
- - - HALT immediately when halt-conditions are met
- - - Each action within a step is a REQUIRED action to complete that step
- - - Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
- - - **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
-
- ---
-
- ## INTEGRATION (When Invoked Indirectly)
-
- When invoked from another prompt or process:
-
- 1. Receive or review the current section content that was just generated
- 2. Apply elicitation methods iteratively to enhance that specific content
- 3. Return the enhanced version when the user selects 'x' to proceed
- 4. The enhanced content replaces the original section content in the output document
-
- ---
-
- ## FLOW
-
- ### Step 1: Method Registry Loading
-
- **Action:** Load and read `./methods.csv` and `{agent_party}`
-
- #### CSV Structure
-
- - - **category:** Method grouping (core, structural, risk, etc.)
- - - **method_name:** Display name for the method
- - - **description:** Rich explanation of what the method does, when to use it, and why it's valuable
- - - **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
-
- #### Context Analysis
-
- - - Use conversation history
- - - Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
-
- #### Smart Selection
-
- 1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
- 2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
- 3. Select 5 methods: Choose methods that best match the context based on their descriptions
- 4. Balance approach: Include mix of foundational and specialized techniques as appropriate
-
- ---
-
- ### Step 2: Present Options and Handle Responses
-
- #### Display Format
-
- ```
- **Advanced Elicitation Options**
- _If party mode is active, agents will join in._
- Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
-
- 1. [Method Name]
- 2. [Method Name]
- 3. [Method Name]
- 4. [Method Name]
- 5. [Method Name]
- r. Reshuffle the list with 5 new options
- a. List all methods with descriptions
- x. Proceed / No Further Actions
- ```
-
- #### Response Handling
-
- **Case 1-5 (User selects a numbered method):**
-
- - - Execute the selected method using its description from the CSV
- - - Adapt the method's complexity and output format based on the current context
- - - Apply the method creatively to the current section content being enhanced
- - - Display the enhanced version showing what the method revealed or improved
- - - **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
- - - **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try best to follow the instructions given by the user.
- - - **CRITICAL:** Re-present the same 1-5,r,a,x prompt to allow additional elicitations
-
- **Case r (Reshuffle):**
-
- - - Select 5 random methods from methods.csv, present new list with same prompt format
- - - When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discovered
-
- **Case x (Proceed):**
-
- - - Complete elicitation and proceed
- - - Return the fully enhanced content back to the invoking skill
- - - The enhanced content becomes the final version for that section
- - - Signal completion back to the invoking skill to continue with next section
-
- **Case a (List All):**
-
- - - List all methods with their descriptions from the CSV in a compact table
- - - Allow user to select any method by name or number from the full list
- - - After selection, execute the method as described in Case 1-5 above
-
- **Case: Direct Feedback:**
-
- - - Apply changes to current section content and re-present choices
-
- **Case: Multiple Numbers:**
-
- - - Execute methods in sequence on the content, then re-offer choices
-
- ---
-
- ### Step 3: Execution Guidelines
-
- - - **Method execution:** Use the description from CSV to understand and apply each method
- - - **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
- - - **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
- - - **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
- - - Focus on actionable insights
- - - **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
- - - **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
- - - **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
- - - Continue until the user selects 'x' to proceed with enhanced content; then confirm or ask the user what should be accepted from the session
- - - Each method application builds upon previous enhancements
- - - **Content preservation:** Track all enhancements made during elicitation
- - - **Iterative enhancement:** Each selected method (1-5) should:
- 1. Apply to the current enhanced version of the content
- 2. Show the improvements made
- 3. Return to the prompt for additional elicitations or completion
@@ -1,81 +0,0 @@
- # Editorial Review - Prose
-
- **Goal:** Review text for communication issues that impede comprehension and output suggested fixes in a three-column table.
-
- **Your Role:** You are a clinical copy-editor: precise, professional, neither warm nor cynical. Apply Microsoft Writing Style Guide principles as your baseline. Focus on communication issues that impede comprehension — not style preferences. NEVER rewrite for preference — only fix genuine issues. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step.
-
- **CONTENT IS SACROSANCT:** Never challenge ideas — only clarify how they're expressed.
-
- **Inputs:**
- - - **content** (required) — Cohesive unit of text to review (markdown, plain text, or text-heavy XML)
- - - **style_guide** (optional) — Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices.
- - - **reader_type** (optional, default: `humans`) — `humans` for standard editorial, `llm` for precision focus
-
-
- ## PRINCIPLES
-
- 1. **Minimal intervention:** Apply the smallest fix that achieves clarity
- 2. **Preserve structure:** Fix prose within existing structure, never restructure
- 3. **Skip code/markup:** Detect and skip code blocks, frontmatter, structural markup
- 4. **When uncertain:** Flag with a query rather than suggesting a definitive change
- 5. **Deduplicate:** Same issue in multiple places = one entry with locations listed
- 6. **No conflicts:** Merge overlapping fixes into single entries
- 7. **Respect author voice:** Preserve intentional stylistic choices
-
- > **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including the Microsoft Writing Style Guide baseline and reader_type-specific priorities). The ONLY exception is CONTENT IS SACROSANCT — never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins.
-
-
- ## STEPS
-
- ### Step 1: Validate Input
-
- - - Check if content is empty or contains fewer than 3 words
- - - If empty or fewer than 3 words: **HALT** with error: "Content too short for editorial review (minimum 3 words required)"
- - - Validate reader_type is `humans` or `llm` (or not provided, defaulting to `humans`)
- - - If reader_type is invalid: **HALT** with error: "Invalid reader_type. Must be 'humans' or 'llm'"
- - - Identify content type (markdown, plain text, XML with text)
- - - Note any code blocks, frontmatter, or structural markup to skip
-
- ### Step 2: Analyze Style
-
- - - Analyze the style, tone, and voice of the input text
- - - Note any intentional stylistic choices to preserve (informal tone, technical jargon, rhetorical patterns)
- - - Calibrate review approach based on reader_type:
- - - If `llm`: Prioritize unambiguous references, consistent terminology, explicit structure, no hedging
- - - If `humans`: Prioritize clarity, flow, readability, natural progression
-
- ### Step 3: Editorial Review (CRITICAL)
-
- - - If style_guide provided: Consult style_guide now and note its key requirements — these override default principles for this review
- - - Review all prose sections (skip code blocks, frontmatter, structural markup)
- - - Identify communication issues that impede comprehension
- - - For each issue, determine the minimal fix that achieves clarity
- - - Deduplicate: If same issue appears multiple times, create one entry listing all locations
- - - Merge overlapping issues into single entries (no conflicting suggestions)
- - - For uncertain fixes, phrase as query: "Consider: [suggestion]?" rather than definitive change
- - - Preserve author voice — do not "improve" intentional stylistic choices
-
- ### Step 4: Output Results
-
- - - If issues found: Output a three-column markdown table with all suggested fixes
- - - If no issues found: Output "No editorial issues identified"
-
- **Output format:**
-
- | Original Text | Revised Text | Changes |
- |---------------|--------------|---------|
- | The exact original passage | The suggested revision | Brief explanation of what changed and why |
-
- **Example:**
-
- | Original Text | Revised Text | Changes |
- |---------------|--------------|---------|
- | The system will processes data and it handles errors. | The system processes data and handles errors. | Fixed subject-verb agreement ("will processes" to "processes"); removed redundant "it" |
- | Users can chose from options (lines 12, 45, 78) | Users can choose from options | Fixed spelling: "chose" to "choose" (appears in 3 locations) |
-
-
- ## HALT CONDITIONS
-
- - - HALT with error if content is empty or fewer than 3 words
- - - HALT with error if reader_type is not `humans` or `llm`
- - - If no issues found after thorough review, output "No editorial issues identified" (this is valid completion, not an error)
@@ -1,174 +0,0 @@
- # Editorial Review - Structure
-
- **Goal:** Review document structure and propose substantive changes to improve clarity and flow -- run this BEFORE copy editing.
-
- **Your Role:** You are a structural editor focused on HIGH-VALUE DENSITY. Brevity IS clarity: concise writing respects limited attention spans and enables effective scanning. Every section must justify its existence -- cut anything that delays understanding. True redundancy is failure. Follow ALL steps in the STEPS section IN EXACT ORDER. DO NOT skip steps or change the sequence. HALT immediately when halt-conditions are met. Each action within a step is a REQUIRED action to complete that step.
-
- > **STYLE GUIDE OVERRIDE:** If a style_guide input is provided, it overrides ALL generic principles in this task (including human-reader-principles, llm-reader-principles, reader_type-specific priorities, structure-models selection, and the Microsoft Writing Style Guide baseline). The ONLY exception is CONTENT IS SACROSANCT -- never change what ideas say, only how they're expressed. When style guide conflicts with this task, style guide wins.
-
- **Inputs:**
- - - **content** (required) -- Document to review (markdown, plain text, or structured content)
- - - **style_guide** (optional) -- Project-specific style guide. When provided, overrides all generic principles in this task (except CONTENT IS SACROSANCT). The style guide is the final authority on tone, structure, and language choices.
- - - **purpose** (optional) -- Document's intended purpose (e.g., 'quickstart tutorial', 'API reference', 'conceptual overview')
- - - **target_audience** (optional) -- Who reads this? (e.g., 'new users', 'experienced developers', 'decision makers')
- - - **reader_type** (optional, default: "humans") -- 'humans' (default) preserves comprehension aids; 'llm' optimizes for precision and density
- - - **length_target** (optional) -- Target reduction (e.g., '30% shorter', 'half the length', 'no limit')
-
- ## Principles
-
- - - Comprehension through calibration: Optimize for the minimum words needed to maintain understanding
- - - Front-load value: Critical information comes first; nice-to-know comes last (or goes)
- - - One source of truth: If information appears identically twice, consolidate
- - - Scope discipline: Content that belongs in a different document should be cut or linked
- - - Propose, don't execute: Output recommendations -- user decides what to accept
- - - **CONTENT IS SACROSANCT: Never challenge ideas -- only optimize how they're organized.**
-
- ## Human-Reader Principles
-
- These elements serve human comprehension and engagement -- preserve unless clearly wasteful:
-
- - - Visual aids: Diagrams, images, and flowcharts anchor understanding
- - - Expectation-setting: "What You'll Learn" helps readers confirm they're in the right place
- - - Reader's Journey: Organize content chronologically (linear progression), not logically (database)
- - - Mental models: Overview before details prevents cognitive overload
- - - Warmth: Encouraging tone reduces anxiety for new users
- - - Whitespace: Admonitions and callouts provide visual breathing room
- - - Summaries: Recaps help retention; they're reinforcement, not redundancy
- - - Examples: Concrete illustrations make abstract concepts accessible
- - - Engagement: "Flow" techniques (transitions, variety) are functional, not "fluff" -- they maintain attention
-
- ## LLM-Reader Principles
-
- When reader_type='llm', optimize for PRECISION and UNAMBIGUITY:
-
- - - Dependency-first: Define concepts before usage to minimize hallucination risk
- - - Cut emotional language, encouragement, and orientation sections
- - - IF concept is well-known from training (e.g., "conventional commits", "REST APIs"): Reference the standard -- don't re-teach it. ELSE: Be explicit -- don't assume the LLM will infer correctly.
- - - Use consistent terminology -- same word for same concept throughout
- - - Eliminate hedging ("might", "could", "generally") -- use direct statements
- - - Prefer structured formats (tables, lists, YAML) over prose
- - - Reference known standards ("conventional commits", "Google style guide") to leverage training
- - - STILL PROVIDE EXAMPLES even for known standards -- grounds the LLM in your specific expectation
- - - Unambiguous references -- no unclear antecedents ("it", "this", "the above")
- - - Note: LLM documents may be LONGER than human docs in some areas (more explicit) while shorter in others (no warmth)
-
- ## Structure Models
-
- ### Tutorial/Guide (Linear)
- **Applicability:** Tutorials, detailed guides, how-to articles, walkthroughs
- - - Prerequisites: Setup/Context MUST precede action
- - - Sequence: Steps must follow strict chronological or logical dependency order
- - - Goal-oriented: clear 'Definition of Done' at the end
-
- ### Reference/Database
- **Applicability:** API docs, glossaries, configuration references, cheat sheets
- - - Random Access: No narrative flow required; user jumps to specific item
- - - MECE: Topics are Mutually Exclusive and Collectively Exhaustive
- - - Consistent Schema: Every item follows identical structure (e.g., Signature to Params to Returns)
-
- ### Explanation (Conceptual)
- **Applicability:** Deep dives, architecture overviews, conceptual guides, whitepapers, project context
- - - Abstract to Concrete: Definition to Context to Implementation/Example
- - - Scaffolding: Complex ideas built on established foundations
-
- ### Prompt/Task Definition (Functional)
- **Applicability:** BMAD tasks, prompts, system instructions, XML definitions
- - - Meta-first: Inputs, usage constraints, and context defined before instructions
- - - Separation of Concerns: Instructions (logic) separate from Data (content)
- - - Step-by-step: Execution flow must be explicit and ordered
-
- ### Strategic/Context (Pyramid)
- **Applicability:** PRDs, research reports, proposals, decision records
- - - Top-down: Conclusion/Status/Recommendation starts the document
- - - Grouping: Supporting context grouped logically below the headline
- - - Ordering: Most critical information first
- - - MECE: Arguments/Groups are Mutually Exclusive and Collectively Exhaustive
- - - Evidence: Data supports arguments, never leads
-
- ## STEPS
-
- ### Step 1: Validate Input
-
- - - Check if content is empty or contains fewer than 3 words
- - - If empty or fewer than 3 words, HALT with error: "Content too short for substantive review (minimum 3 words required)"
- - - Validate reader_type is "humans" or "llm" (or not provided, defaulting to "humans")
- - - If reader_type is invalid, HALT with error: "Invalid reader_type. Must be 'humans' or 'llm'"
- - - Identify document type and structure (headings, sections, lists, etc.)
- - - Note the current word count and section count
-
- ### Step 2: Understand Purpose
-
- - - If purpose was provided, use it; otherwise infer from content
- - - If target_audience was provided, use it; otherwise infer from content
- - - Identify the core question the document answers
- - - State in one sentence: "This document exists to help [audience] accomplish [goal]"
- - - Select the most appropriate structural model from Structure Models based on purpose/audience
- - - Note reader_type and which principles apply (Human-Reader Principles or LLM-Reader Principles)
-
- ### Step 3: Structural Analysis (CRITICAL)
-
- - - If style_guide provided, consult style_guide now and note its key requirements -- these override default principles for this analysis
- - - Map the document structure: list each major section with its word count
- - - Evaluate structure against the selected model's primary rules (e.g., 'Does recommendation come first?' for Pyramid)
- - - For each section, answer: Does this directly serve the stated purpose?
- - - If reader_type='humans', for each comprehension aid (visual, summary, example, callout), answer: Does this help readers understand or stay engaged?
- - - Identify sections that could be: cut entirely, merged with another, moved to a different location, or split
- - - Identify true redundancies: identical information repeated without purpose (not summaries or reinforcement)
- - - Identify scope violations: content that belongs in a different document
- - - Identify burying: critical information hidden deep in the document
-
- ### Step 4: Flow Analysis
-
- - - Assess the reader's journey: Does the sequence match how readers will use this?
- - - Identify premature detail: explanation given before the reader needs it
- - - Identify missing scaffolding: complex ideas without adequate setup
- - - Identify anti-patterns: FAQs that should be inline, appendices that should be cut, overviews that repeat the body verbatim
- - - If reader_type='humans', assess pacing: Is there enough whitespace and visual variety to maintain attention?
-
- ### Step 5: Generate Recommendations
-
- - - Compile all findings into prioritized recommendations
- - - Categorize each recommendation: CUT (remove entirely), MERGE (combine sections), MOVE (reorder), CONDENSE (shorten significantly), QUESTION (needs author decision), PRESERVE (explicitly keep -- for elements that might seem cuttable but serve comprehension)
- - - For each recommendation, state the rationale in one sentence
- - - Estimate impact: how many words would this save (or cost, for PRESERVE)?
- - - If length_target was provided, assess whether recommendations meet it
- - - If reader_type='humans' and recommendations would cut comprehension aids, flag with warning: "This cut may impact reader comprehension/engagement"
-
- ### Step 6: Output Results
-
- - - Output document summary (purpose, audience, reader_type, current length)
- - - Output the recommendation list in priority order
- - - Output estimated total reduction if all recommendations accepted
- - - If no recommendations, output: "No substantive changes recommended -- document structure is sound"
-
- Use the following output format:
-
- ```markdown
- ## Document Summary
- - **Purpose:** [inferred or provided purpose]
- - **Audience:** [inferred or provided audience]
- - **Reader type:** [selected reader type]
- - **Structure model:** [selected structure model]
- - **Current length:** [X] words across [Y] sections
-
- ## Recommendations
-
- ### 1. [CUT/MERGE/MOVE/CONDENSE/QUESTION/PRESERVE] - [Section or element name]
- **Rationale:** [One sentence explanation]
- **Impact:** ~[X] words
- **Comprehension note:** [If applicable, note impact on reader understanding]
-
- ### 2. ...
-
- ## Summary
- - **Total recommendations:** [N]
- - **Estimated reduction:** [X] words ([Y]% of original)
- - **Meets length target:** [Yes/No/No target specified]
- - **Comprehension trade-offs:** [Note any cuts that sacrifice reader engagement for brevity]
- ```
-
- ## HALT CONDITIONS
-
- - - HALT with error if content is empty or fewer than 3 words
- - - HALT with error if reader_type is not "humans" or "llm"
- - - If no structural issues found, output "No substantive changes recommended" (this is valid completion, not an error)
@@ -1,88 +0,0 @@
-
- # Task: BMAD Help
-
- ## ROUTING RULES
-
- - **Empty `phase` = anytime** — Universal tools work regardless of workflow state
- - **Numbered phases indicate sequence** — Phases like `1-discover` → `2-define` → `3-build` → `4-ship` flow in order (naming varies by module)
- - **Phase with no Required Steps** — If an entire phase has no items marked `required=true`, the entire phase is optional. If it is sequentially before another phase, it can be recommended, but always be clear with the user about what the true next required item is.
- - **Stay in module** — Guide through the active module's workflow based on phase+sequence ordering
- - **Descriptions contain routing** — Read for alternate paths (e.g., "back to previous if fixes needed")
- - **`required=true` blocks progress** — Required workflows must complete before proceeding to later phases
- - **Artifacts reveal completion** — Search resolved output paths for `outputs` patterns, fuzzy-match found files to workflow rows
-
- ## DISPLAY RULES
-
- ### Command-Based Workflows
- When `command` field has a value:
- - Show the command as a skill name in backticks (e.g., `bmad-bmm-create-prd`)
-
- ### Skill-Referenced Workflows
- When `workflow-file` starts with `skill:`:
- - The value is a skill reference (e.g., `skill:bmad-quick-dev`), NOT a file path
- - Do NOT attempt to resolve or load it as a file path
- - Display using the `command` column value as a skill name in backticks (same as command-based workflows)
-
- ### Agent-Based Workflows
- When `command` field is empty:
- - User loads the agent first by invoking the agent skill (e.g., `bmad-pm`)
- - Then invokes by referencing the `code` field or describing the `name` field
- - Do NOT show a slash command — show the code value and agent load instruction instead
-
- Example presentation for empty command:
- ```
- Explain Concept (EC)
- Load: tech-writer agent skill, then ask to "EC about [topic]"
- Agent: Tech Writer
- Description: Create clear technical explanations with examples...
- ```
-
- ## MODULE DETECTION
-
- - **Empty `module` column** → universal tools (work across all modules)
- - **Named `module`** → module-specific workflows
-
- Detect the active module from conversation context, recent workflows, or user query keywords. If ambiguous, ask the user.
-
- ## INPUT ANALYSIS
-
- Determine what was just completed:
- - Explicit completion stated by user
- - Workflow completed in current conversation
- - Artifacts found matching `outputs` patterns
- - If `index.md` exists, read it for additional context
- - If still unclear, ask: "What workflow did you most recently complete?"
-
- ## EXECUTION
-
- 1. **Load catalog** — Load `{project-root}/_bmad/_config/bmad-help.csv`
-
- 2. **Resolve output locations and config** — Scan each folder under `{project-root}/_bmad/` (except `_config`) for `config.yaml`. For each workflow row, resolve its `output-location` variables against that module's config so artifact paths can be searched. Also extract `communication_language` and `project_knowledge` from each scanned module's config.
-
- 3. **Ground in project knowledge** — If `project_knowledge` resolves to an existing path, read available documentation files (architecture docs, project overview, tech stack references) for grounding context. Use discovered project facts when composing any project-specific output. Never fabricate project-specific details — if documentation is unavailable, state so.
-
- 4. **Detect active module** — Use MODULE DETECTION above
-
- 5. **Analyze input** — Task may provide a workflow name/code, conversational phrase, or nothing. Infer what was just completed using INPUT ANALYSIS above.
-
- 6. **Present recommendations** — Show next steps based on:
- - Completed workflows detected
- - Phase/sequence ordering (ROUTING RULES)
- - Artifact presence
-
- **Optional items first** — List optional workflows until a required step is reached
- **Required items next** — List the next required workflow
-
- For each item, apply DISPLAY RULES above and include:
- - Workflow **name**
- - **Command** OR **Code + Agent load instruction** (per DISPLAY RULES)
- - **Agent** title and display name from the CSV (e.g., "🎨 Alex (Designer)")
- - Brief **description**
-
- 7. **Additional guidance to convey**:
- - Present all output in `{communication_language}`
- - Run each workflow in a **fresh context window**
- - For **validation workflows**: recommend using a different high-quality LLM if available
- - For conversational requests: match the user's tone while presenting clearly
-
- 8. Return to the calling process after presenting recommendations.