@codihaus/claude-skills 1.6.22 → 1.6.24

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@codihaus/claude-skills",
-  "version": "1.6.22",
+  "version": "1.6.24",
   "description": "Claude Code skills for software development workflow",
   "main": "src/index.js",
   "bin": {
@@ -1,7 +1,7 @@
 ---
 name: debrief
 description: Customer requirements → market-validated BRD with tiered features
-version: 4.0.0
+version: 4.2.0
 ---
 
 # /debrief - Business Requirements Document
@@ -24,10 +24,11 @@ version: 4.0.0
 ## Usage
 
 ```bash
-/debrief "Customer wants..."              # New project
-/debrief "Add {feature}"                  # Add feature
-/debrief --answers questionnaire.xlsx     # Process answers
-/debrief --questionnaire-only             # Questions only
+/debrief "Customer wants..."                  # New project
+/debrief "Add {feature}"                      # Add feature
+/debrief --answers questionnaire.xlsx         # Process answers
+/debrief --generate-brd questionnaire.xlsx    # Generate BRD from existing research
+/debrief --questionnaire-only                 # Questions only
 ```
 
 ---
@@ -50,51 +51,49 @@ version: 4.0.0
 
 ## Key Principles
 
-### 1. Comparison-First Discovery
+### 1. Default Output: Questionnaire Only
 
-**Don't** scan competitors one-by-one.
-**Do** use comparison/alternative pages first:
-- `"{category} comparison"` → feature matrices from 10+ competitors
-- `"best {category} alternatives"` → what features matter
-- `"{competitor A} vs {competitor B}"` → deal-breaker features
+**ALWAYS output**: Summary + Questionnaire (3-sheet Excel)
+**NEVER auto-create**: BRD files, use cases, feature folders
 
-Then deep revalidation per feature (ecosystem, user signals).
+**BRD creation only via**:
+- `--answers questionnaire.xlsx` (after customer fills it)
+- `--generate-brd questionnaire.xlsx` (from existing research)
 
-### 2. Always Generate Questionnaire
+**Why**: User reviews research first, validates with customer, then creates BRD.
 
-Include both:
-- **Validation questions**: "We found 9/10 competitors offer SSO. Confirm this is needed?"
-- **Open questions**: "What's max team size per account?"
+### 2. Business Focus ONLY
 
-Customer validates assumptions + fills gaps.
+**Do**:
+- WHAT features users need (login, checkout, reports)
+- WHY they need it (business value, user goal)
+- WHEN it's needed (MVP, Standard, Advanced)
 
-### 3. Scope-Based Organization
+**Never do**:
+- HOW to code it (files, functions, APIs)
+- WHICH technology (React, Vue, database)
+- WHERE to put code (components, services)
 
-**Project-wide** (brd/references.md):
-- Industry landscape
-- Compliance (GDPR, PCI)
-- Competitor company profiles
+**This is NOT dev-specs**. Stay business level. No technical suggestions.
 
-**Feature-specific** (features/{name}/references.md):
-- Market research for THIS feature
-- Tier validation evidence (MVP/Standard/Advanced)
-- Sequenced features with dependencies
+**If you mention files, code, APIs, or tech → YOU ARE DOING IT WRONG.**
 
-### 4. Lean Use Cases (80/20)
+### 3. Comparison-First Research
 
-**Include (20%)**:
-- Story (1 line)
-- Critical acceptance criteria (3-5 bullets)
-- Key business rules (constraints, limits)
-- Open questions
+Use comparison pages first (5-10 competitors on one page):
+- `"{category} comparison"`
+- `"best {category} alternatives"`
 
-**Defer to /dev-specs (80%)**:
-- Detailed flows
-- Integration points
-- Edge cases
-- UI/UX details
+Then validate per feature (Quick or Deep).
 
-Keep use cases ~30 lines max. Business stakeholders scan in 30 seconds.
+### 4. Questionnaire = Research Output
+
+**3 sheets**:
+- Summary (stats, tiers)
+- Questions (validation + open)
+- References (URLs)
+
+Customer reviews, answers, returns file. Then create BRD via `--answers`.
 
 ---
 
@@ -106,6 +105,7 @@ Keep use cases ~30 lines max. Business stakeholders scan in 30 seconds.
 - No `plans/brd/` → New Project
 - `plans/brd/` exists → Add Feature
 - `--answers {file}` → Process Answers
+- `--generate-brd {file}` → Generate BRD from research
 - `--questionnaire-only` → Questions Only
 
 ---
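
The mode-detection bullets above are instructions to the agent rather than shipped code, but the ordering is load-bearing: explicit flags must take precedence over directory state, otherwise `--generate-brd` run inside an existing project would be misread as Add Feature. A minimal Python sketch of that precedence, purely illustrative (the function name, argument handling, and mode strings are assumptions, not names from this package):

```python
from pathlib import Path

def detect_mode(args: list[str]) -> str:
    """Illustrative sketch: explicit flags win over directory state."""
    if "--answers" in args:
        return "process-answers"
    if "--generate-brd" in args:
        return "generate-brd"
    if "--questionnaire-only" in args:
        return "questionnaire-only"
    # No flag given: fall back to whether a BRD already exists
    return "add-feature" if Path("plans/brd").exists() else "new-project"
```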
@@ -14,6 +14,7 @@
 
 **Detection**:
 - `--answers {file}` → Process Answers mode
+- `--generate-brd {file}` → Generate BRD mode
 - `--questionnaire-only` → Questionnaire Only mode
 - No `plans/brd/` → New Project mode
 - `plans/brd/` exists → Add Feature mode
@@ -31,12 +32,18 @@
 
 ### Context Gathering
 
-**Ask 5 questions** (AskUserQuestion):
+**Check for existing questionnaire**:
+- If found: Ask "Found existing research. Use it or start fresh?"
+  - Reuse → Switch to Generate BRD mode
+  - Fresh → Continue with new research
+
+**Ask 6 questions** (AskUserQuestion):
 1. Project type (new/existing codebase)
 2. Industry (SaaS/E-commerce/Marketplace/etc.)
 3. Target users (B2B/B2C/Internal)
 4. Constraints (timeline/budget/compliance/integrations)
 5. Scope tier (Core/Standard/Full)
+6. Research depth (Quick 5 min / Deep 15-20 min)
 
 **If existing codebase**: Scan for docs + features (Glob, Read)
 
@@ -48,15 +55,15 @@
 
 **Methodology**: See `research.md` for full process.
 
-**Comparison-first**:
+**Quick** (5 min):
 - Find 2-3 comparison/alternative pages
 - Extract feature matrix
 - Initial tier hypothesis
 
-**Deep validation**:
-- Per feature: cross-validate with ecosystem + user signals
-- Classify tier (MVP/Standard/Advanced)
-- Collect evidence
+**Deep** (15-20 min):
+- Comparison pages (initial)
+- Cross-validate with ecosystem + user signals
+- Full tier classification with evidence
 
 **Output**: Features with tier classification and evidence
 
@@ -117,34 +124,34 @@
 
 ---
 
-### Questionnaire
+### Generate Questionnaire (ALWAYS)
 
-**Principle**: Always generate. Validate assumptions + fill gaps.
+**Principle**: Default output. No BRD files created yet.
 
-**Include**:
-- **Validation questions**: "We found 9/10 competitors offer SSO. Confirm?"
-- **Open questions**: "What's max team size per account?"
+**3 sheets**:
+- **Summary**: Feature count, tier distribution, research stats
+- **Questions**: Validation + open questions
+- **References**: URLs by feature (comparison pages, reviews, ecosystems)
 
-**Categories**: Validation, Business, Requirements, Constraints, Integration
+**Output**: `questionnaire-{date}.xlsx` in current directory
 
-**Output**: `features/{feature}/questionnaire-{date}.xlsx`
+**NO BRD files created**. User reviews questionnaire first.
 
 ---
 
-### Summary
+### Summary Output
 
 **Provide**:
-- Files created (brd/, features/)
-- Use case count (lean format)
 - Research summary (comparison pages, evidence sources)
 - Tier distribution (MVP: X, Standard: Y, Advanced: Z)
-- Questionnaire location and question count
+- Questionnaire location: `questionnaire-{date}.xlsx`
+- Question count (validation + open)
 
 **Next steps**:
-- Review BRD with stakeholders
-- Send questionnaire to customer
-- Process with `/debrief --answers {path}`
-- Then `/dev-specs {feature}` for implementation
+1. Review questionnaire
+2. Send to customer for answers
+3. After answers: `/debrief --answers questionnaire-{date}.xlsx`
+4. BRD will be created from answers
 
 ---
 
@@ -156,9 +163,10 @@
 
 ### Context Gathering
 
-**Ask 2 questions**:
+**Ask 3 questions**:
 1. Feature name
 2. Scope tier (Core/Standard/Full)
+3. Research depth (Quick / Deep)
 
 **Duplicate check**: Read `docs-graph.json` if exists, warn if similar feature found
 
@@ -166,49 +174,46 @@
 
 ---
 
-### Research & Document
+### Research & Generate Questionnaire
 
 **Same as New Project**:
 - Market research (comparison-first, see `research.md`)
-- Feature sequencing (dependencies)
-- Lean use cases (templates/use-case.md)
+- Generate questionnaire (validation + open questions)
 
-**Update**:
-- `brd/README.md` (add to index)
-- `brd/changelog.md` (log addition)
-- Create `features/{feature}/` folder
-
-**Questionnaire**: Always generate (validation + open questions)
+**Output**: `features/{feature}/questionnaire-{date}.xlsx`
 
-**Output**: Feature added to BRD with research and questionnaire
+**NO BRD files created**. User reviews questionnaire, gets answers, then runs `--answers`.
 
 ---
 
 ## Process Answers Mode
 
-**Goal**: Update BRD with customer responses.
+**Goal**: Create BRD from customer-answered questionnaire.
 
-**Mindset**: Integration mode. Incorporate feedback, resolve gaps.
+**Mindset**: BRD creation mode. Customer validated research, now create structure.
 
 ### Process
 
 **Read questionnaire** (Excel file):
-- Extract answers from "Answer" column
-- Match to source use cases via "Context/Source"
+- Summary sheet (features, tiers, research)
+- Questions sheet (customer answers)
+- References sheet (source URLs)
 
-**Update use cases**:
-- Remove questions from "Open Questions" section
-- Add answers to relevant sections (Acceptance Criteria, Business Rules)
+**Create BRD structure**:
+- `brd/` files (README, context, references, changelog)
+- `features/{feature}/` folders
+- Use cases (lean format, sequenced)
 
-**Update feature README**:
-- Log questionnaire as processed
-- Track history
+**Incorporate answers**:
+- Update use cases with customer answers
+- Remove questions that were answered
+- Keep unanswered questions in "Open Questions"
 
 **Check completeness**:
-- If gaps remain → generate new questionnaire (new date)
-- If complete → update UC status to "Confirmed"
+- If gaps remain → note in use cases
+- If complete → mark UC status "Confirmed"
 
-**Output**: Updated use cases, questionnaire history, completion status
+**Output**: Full BRD structure + use cases
 
 ---
 
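
Process Answers mode now consumes all three sheets rather than a single "Answer" column. A minimal openpyxl sketch of the read step, assuming the sheet names the generator below creates ("Summary", "References") plus a "Questions" sheet with a header row; the Questions sheet name and its column layout never appear in this diff, so they are assumptions:

```python
from openpyxl import load_workbook

def read_questionnaire(path: str) -> dict:
    """Illustrative sketch: pull the three sheets back out of the workbook."""
    wb = load_workbook(path, read_only=True)
    data = {"summary": [], "questions": [], "references": []}

    # Summary sheet: two-column (label, value) rows
    for label, value in wb["Summary"].iter_rows(max_col=2, values_only=True):
        if label:
            data["summary"].append((label, value))

    # Questions sheet: assumes a header row, then one record per question
    rows = wb["Questions"].iter_rows(values_only=True)
    header = next(rows)
    data["questions"] = [dict(zip(header, r)) for r in rows]

    # References sheet is optional (only written when references exist)
    if "References" in wb.sheetnames:
        for feature, src_type, url in wb["References"].iter_rows(
                min_row=2, max_col=3, values_only=True):
            data["references"].append(
                {"feature": feature, "type": src_type, "url": url})
    return data
```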
@@ -233,6 +238,32 @@
 
 ---
 
+## Generate BRD Mode
+
+**Goal**: Generate full BRD from existing questionnaire research.
+
+**Mindset**: Resumption mode. User reviewed Summary + Questionnaire, ready for full BRD.
+
+### Process
+
+**Check questionnaire validity**:
+- Ask: "Found existing research in questionnaire. Use it or start fresh?"
+  - Reuse (generate from questionnaire)
+  - Fresh (ignore questionnaire, do new research)
+
+**If reuse**:
+- Read Summary sheet (features, tiers, research stats)
+- Read References sheet (source URLs)
+- Generate BRD structure (no re-research)
+
+**If fresh**:
+- Ignore questionnaire
+- Run full research flow (New Project or Add Feature mode)
+
+**Output**: BRD files + feature folders
+
+---
+
 ## Questionnaire Only Mode
 
 **Goal**: Generate questions without creating BRD.
@@ -303,6 +334,7 @@
 **Workflow adapts to mode**:
 - New Project: Full discovery and research
 - Add Feature: Focused research, duplicate check
+- Generate BRD: Reads existing questionnaire research
 - Process Answers: Integration and validation
 - Change Request: Impact tracking
 - Questionnaire Only: Quick question generation
@@ -10,6 +10,18 @@ questions.json format:
 {
   "project_name": "Project Name",
   "date": "2024-01-20",
+  "research": {
+    "depth": "Deep",
+    "features_count": 12,
+    "tiers": {"MVP": 5, "Standard": 4, "Advanced": 3},
+    "sources_count": 23
+  },
+  "references": {
+    "Authentication": [
+      {"type": "Comparison", "url": "https://..."},
+      {"type": "G2 Reviews", "url": "https://..."}
+    ]
+  },
   "questions": [
     {
       "category": "Requirements",
@@ -119,14 +131,24 @@ def create_questionnaire(output_path: str, questions_data: dict):
 
     # Summary sheet
     ws_summary = wb.create_sheet("Summary", 0)
+    research = questions_data.get('research', {})
+    tiers = research.get('tiers', {})
+
     summary_data = [
         ("Project Questionnaire", ""),
         ("", ""),
         ("Project:", questions_data.get('project_name', '')),
         ("Generated:", questions_data.get('date', '')),
-        ("Total Questions:", len(questions)),
         ("", ""),
-        ("Priority Breakdown:", ""),
+        ("Research Summary:", ""),
+        ("Depth:", research.get('depth', 'N/A')),
+        ("Features:", research.get('features_count', 0)),
+        ("  MVP:", tiers.get('MVP', 0)),
+        ("  Standard:", tiers.get('Standard', 0)),
+        ("  Advanced:", tiers.get('Advanced', 0)),
+        ("Sources:", research.get('sources_count', 0)),
+        ("", ""),
+        ("Questions:", len(questions)),
     ]
 
 
@@ -134,28 +156,8 @@ def create_questionnaire(output_path: str, questions_data: dict):
     optional_count = len(questions) - required_count
 
     summary_data.extend([
-        ("Required:", required_count),
-        ("Optional:", optional_count),
-        ("", ""),
-        ("Categories:", ""),
-    ])
-
-    # Count categories
-    categories = {}
-    for q in questions:
-        cat = q.get('category', 'General')
-        categories[cat] = categories.get(cat, 0) + 1
-
-    for cat, count in sorted(categories.items()):
-        summary_data.append((f"  {cat}:", count))
-
-    summary_data.extend([
-        ("", ""),
-        ("Instructions:", ""),
-        ("1.", "Fill in the 'Answer' column for each question"),
-        ("2.", "Required questions must be answered before proceeding"),
-        ("3.", "Context/Source shows where this question originated"),
-        ("4.", "Save and return this file when complete"),
+        ("  Required:", required_count),
+        ("  Optional:", optional_count),
     ])
 
     for row, (col1, col2) in enumerate(summary_data, 1):
@@ -169,6 +171,29 @@ def create_questionnaire(output_path: str, questions_data: dict):
     ws_summary.column_dimensions['A'].width = 25
     ws_summary.column_dimensions['B'].width = 40
 
+    # References sheet
+    references = questions_data.get('references', {})
+    if references:
+        ws_refs = wb.create_sheet("References")
+        ws_refs.cell(1, 1, "Feature").font = header_font
+        ws_refs.cell(1, 2, "Source Type").font = header_font
+        ws_refs.cell(1, 3, "URL").font = header_font
+        ws_refs.cell(1, 1).fill = header_fill
+        ws_refs.cell(1, 2).fill = header_fill
+        ws_refs.cell(1, 3).fill = header_fill
+
+        row = 2
+        for feature, sources in references.items():
+            for source in sources:
+                ws_refs.cell(row, 1, feature)
+                ws_refs.cell(row, 2, source.get('type', ''))
+                ws_refs.cell(row, 3, source.get('url', ''))
+                row += 1
+
+        ws_refs.column_dimensions['A'].width = 20
+        ws_refs.column_dimensions['B'].width = 20
+        ws_refs.column_dimensions['C'].width = 60
+
     # Save
     wb.save(output_path)
     print(f"Created: {output_path}")
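
For completeness, a hedged usage sketch for the updated generator. The top-level keys mirror the questions.json format above; the per-question fields ("category", "priority", "question") are assumptions inferred from the category and priority counting elsewhere in the script, not a schema this diff confirms:

```python
# Illustrative call; assumes create_questionnaire and its openpyxl helpers
# (header_font, header_fill) are importable from the package's script.
sample = {
    "project_name": "Demo Project",
    "date": "2024-01-20",
    "research": {
        "depth": "Quick",
        "features_count": 3,
        "tiers": {"MVP": 2, "Standard": 1, "Advanced": 0},
        "sources_count": 5,
    },
    "references": {
        "Checkout": [
            {"type": "Comparison", "url": "https://example.com/compare"},
        ],
    },
    "questions": [
        {
            "category": "Requirements",
            "priority": "Required",
            "question": "Confirm guest checkout is in scope for MVP?",
        },
    ],
}

# Summary sheet then reports: Depth Quick, Features 3 (MVP 2 / Standard 1 /
# Advanced 0), Sources 5, Questions 1; a References sheet is also written
# because sample["references"] is non-empty.
create_questionnaire("questionnaire-2024-01-20.xlsx", sample)
```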