claude-all-hands 1.0.5 → 1.0.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude/agents/documentation-taxonomist.md +30 -24
- package/.claude/agents/documentation-writer.md +19 -17
- package/.claude/commands/{docs-audit.md → audit-docs.md} +9 -3
- package/.claude/commands/continue.md +34 -13
- package/.claude/commands/{docs-init.md → document.md} +3 -3
- package/.claude/envoy/package-lock.json +3 -0
- package/.claude/envoy/package.json +3 -0
- package/.claude/envoy/src/commands/docs.ts +3 -10
- package/.claude/envoy/src/commands/plan/index.ts +6 -0
- package/.claude/envoy/src/commands/plan/lifecycle.ts +85 -0
- package/.claude/envoy/src/lib/ast-queries.ts +83 -41
- package/.claude/envoy/src/lib/tree-sitter-utils.ts +40 -11
- package/package.json +1 -1
- package/.claude/commands/docs-adjust.md +0 -214
@@ -35,14 +35,14 @@ Your assignments should guide writers toward capturing KNOWLEDGE that isn't obvi
 - `mode`: "init"
 - `scope_paths`: optional paths to scope (default: entire codebase)
 - `user_request`: optional user-specified context
-- `feature_branch`: branch name for
+- `feature_branch`: branch name (used for context only)
 
 **OUTPUTS** (to main agent):
-- `{ success: true,
+- `{ success: true, structure_created: true, assignments: [...] }` - ready for writers
 
 **STEPS:**
 
-1. **Analyze codebase AND existing docs** - Run
+1. **Analyze codebase AND existing docs** - Run as parallel tool calls or join with `;` (not `&&`, want all outputs):
    ```bash
    # Understand codebase structure
    envoy docs tree <path> --depth 4
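The `;`-vs-`&&` guidance in the hunk above hinges on shell short-circuiting. A minimal, envoy-independent illustration (plain POSIX shell; the variable names are ours):

```shell
# `&&` short-circuits: the right-hand command only runs if the left succeeds,
# so one failing envoy call would hide the output of every call after it.
# `;` is an unconditional separator: every command runs and prints its output.
a=$(false && echo "ran") || true  # `&&`: echo skipped, a is empty
b=$(false ; echo "ran")           # `;`: echo still runs, b is "ran"
echo "after &&: [$a]"             # prints: after &&: []
echo "after ;: [$b]"              # prints: after ;: [ran]
```

This is why the taxonomist prompt prefers `;` joins (or separate parallel tool calls): a non-zero exit from one discovery command must not suppress the rest.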
@@ -85,22 +85,20 @@ Your assignments should guide writers toward capturing KNOWLEDGE that isn't obvi
 - Subdomains should represent distinct subsystems, not directories
 - One writer can handle parent + children if simple enough
 
-4. **Create
+4. **Create directory structure:**
    ```bash
    mkdir -p docs/<product>/<subdomain>
-   git add docs/
-   git commit -m "docs: create documentation structure for <products>"
    ```
 
    This happens BEFORE delegation - writers receive existing directories.
+   Note: Do NOT create .gitkeep files. Writers will add content shortly - empty directories are fine temporarily.
 
 5. **Assign writers to directories:**
    ```yaml
-
+   structure_created: true
    assignments:
      - directory: "docs/<product>/"
        files: ["<source-glob-patterns>"]
-       worktree_branch: "<feature_branch>/docs-<product>"
        responsibilities:
         - "Key design decisions and rationale"
         - "Patterns observers should know"
@@ -110,7 +108,6 @@ Your assignments should guide writers toward capturing KNOWLEDGE that isn't obvi
 
      - directory: "docs/<product>/<subdomain>/"
        files: ["<source-glob-patterns>"]
-       worktree_branch: "<feature_branch>/docs-<product>-<subdomain>"
        responsibilities:
         - "Implementation rationale for subsystem"
         - "Key patterns with reference examples"
@@ -133,13 +130,23 @@ Your assignments should guide writers toward capturing KNOWLEDGE that isn't obvi
 - `use_diff`: boolean - if true, get changed files from git
 - `scope_paths`: optional list of paths to scope
 - `user_request`: optional user-specified context
-- `feature_branch`: branch name for
+- `feature_branch`: branch name (used for context only)
+- `walkthroughs`: optional array from `envoy plan get-all-walkthroughs` containing:
+  - `prompt_num`, `variant`, `id`: prompt identifiers
+  - `description`: what the prompt implemented
+  - `walkthrough`: array of implementation iterations with decisions/rationale
+  - `relevant_files`: files affected by this prompt
 
 **OUTPUTS** (to main agent):
-- `{ success: true,
+- `{ success: true, structure_created: true, assignments: [...] }` - targeted updates
 
 **STEPS:**
-1. **
+1. **Analyze walkthroughs for rationale** (if provided):
+   - Extract design decisions, patterns chosen, and rationale from walkthrough entries
+   - Map prompts to affected files via `relevant_files`
+   - This context informs what knowledge to capture (WHY decisions were made)
+
+2. **Discover what needs documenting:**
    ```bash
    # If use_diff is true, get changed files from git
    envoy git diff-base --name-only
@@ -153,25 +160,24 @@ Your assignments should guide writers toward capturing KNOWLEDGE that isn't obvi
 
    # Check if changed concepts are already documented
    envoy knowledge search "<changed-feature>" --metadata-only
-   envoy knowledge search "<affected-product>" --metadata-only
    ```
 
-
-
-
-
+3. Identify affected products/features from changes + walkthrough context
+4. Check existing doc structure - which directories need updates vs new sections
+5. Create any new directories needed
+6. Assign writers with walkthrough rationale included in notes:
 
    ```yaml
-
+   structure_created: true
    assignments:
      - directory: "docs/<product>/"
        files: ["<changed-source-patterns>"]
-       worktree_branch: "<feature_branch>/docs-<product>"
        responsibilities:
         - "update README.md for new features"
         - "add documentation for new commands"
        action: "update"
-       notes: "<what changed
+       notes: "<what changed, plus rationale from walkthroughs>"
+       walkthrough_context: "<relevant decisions/rationale from prompt walkthroughs>"
    ```
 </adjust_workflow>
 
@@ -230,9 +236,9 @@ assignments:
 - MUST run `envoy docs tree docs/` to see existing documentation hierarchies before planning
 - MUST use `envoy knowledge search` to check if concepts are already documented
 - MUST use product/feature names, not directory names
-- MUST create
+- MUST create directory structure BEFORE returning assignments
 - MUST assign writers to existing directories with clear responsibilities
-- MUST run envoy commands
+- MUST run envoy commands via parallel tool calls or `;` joins (avoid `&&` - want all outputs)
 - MUST use --metadata-only for knowledge searches
 - NEVER mirror source directory structure in domain names
 - NEVER over-distribute - prefer fewer writers handling more
@@ -244,12 +250,12 @@ assignments:
 **Init workflow complete when:**
 - Products/features identified (not directories)
 - Meaningful domain names chosen
-- Directory structure created
+- Directory structure created
 - Writer assignments defined with responsibilities
 - Each assignment has directory, files, responsibilities, depth
 
 **Adjust workflow complete when:**
 - Affected products identified
-- New directories created if needed
+- New directories created if needed
 - Writer assignments target specific update responsibilities
 </success_criteria>
@@ -2,7 +2,7 @@
 name: documentation-writer
 description: |
   Documentation writer specialist. Writes knowledge-base documentation using file references. Triggers: "write docs", "document domain".
-tools: Read, Glob, Grep, Bash, Write, Edit
+tools: Read, Glob, Grep, Bash, Write, Edit, LSP
 model: inherit
 color: yellow
 ---
@@ -121,7 +121,7 @@ Authentication uses JWT for stateless sessions. The signing implementation [ref:
 - `notes`: guidance from taxonomist
 
 **OUTPUTS** (to main agent):
-- `{ success: true }` - documentation written
+- `{ success: true }` - documentation written (main agent commits after all writers complete)
 
 **STEPS:**
 1. Search existing knowledge: `envoy knowledge search "<domain> decisions patterns"`
@@ -158,7 +158,7 @@ Authentication uses JWT for stateless sessions. The signing implementation [ref:
 - `detailed`: + rationale, tradeoffs, edge cases
 - `comprehensive`: + all major patterns, troubleshooting
 
-6. Validate before
+6. Validate before returning:
 
    a. Run: `envoy docs validate --path docs/<domain>/`
    b. Check response:
@@ -171,18 +171,14 @@ Authentication uses JWT for stateless sessions. The signing implementation [ref:
    d. If any check fails:
      - Fix the issue
      - Re-validate
-     - Do NOT commit until all checks pass
 
-7.
-   - `git add docs/`
-   - `git commit -m "docs(<domain>): <summary>"`
-   - Commit hook validates references
+7. Return `{ success: true }`
 
-
+**IMPORTANT:** Do NOT commit. Main agent commits all writer changes together after parallel execution completes.
 
 **On failure:**
 - If AST symbol not found: use file-only ref `[ref:file::hash]`
-- If
+- If validation fails: fix references, re-validate
 </write_workflow>
 
 <fix_workflow>
@@ -207,9 +203,7 @@ Authentication uses JWT for stateless sessions. The signing implementation [ref:
 - If file moved: update file path
 - If file deleted: remove ref, update knowledge
 
-3.
-
-4. Return fix summary
+3. Return fix summary (main agent commits)
 </fix_workflow>
 
 <audit_fix_workflow>
@@ -291,9 +285,7 @@ changes:
 
 5. Validate changes: `envoy docs validate --path <doc_file>`
 
-6.
-
-7. Return changes summary
+6. Return changes summary (main agent commits)
 </audit_fix_workflow>
 
 <documentation_format>
@@ -304,6 +296,15 @@ description: 1-2 sentence summary enabling semantic search discovery
 ---
 ```
 
+**Description quality guidelines:**
+- GOOD: "Authentication system using JWT tokens with refresh rotation and Redis session storage"
+- GOOD: "CLI argument parsing with subcommand routing and help generation"
+- BAD: "Documentation for auth" (too vague)
+- BAD: "Code documentation" (useless for search)
+- BAD: "This file documents the system" (describes the doc, not the code)
+
+The description should answer: "What would someone search for to find this?"
+
 **Structure (REQUIRED sections marked with *):**
 
 ```markdown
@@ -348,7 +349,8 @@ Adjust structure based on domain. The structure serves knowledge transfer, not c
 - MUST include `description` in front-matter
 - MUST include Overview, Key Decisions, and Use Cases sections
 - MUST focus on decisions, rationale, patterns - NOT capabilities
-- MUST validate with `envoy docs validate` before
+- MUST validate with `envoy docs validate` before returning
+- MUST NOT commit - main agent commits after all writers complete
 - NEVER write inline code blocks (zero fenced blocks allowed)
 - NEVER document what's obvious from reading code
 - NEVER create capability tables (Command/Purpose, Option/Description)
@@ -129,14 +129,20 @@ changes:
 <step name="verify_and_report">
 After all agents complete:
 
-1.
+1. Commit all documentation fixes:
+   ```bash
+   git add docs/
+   git commit -m "docs: fix stale/invalid references"
+   ```
+
+2. Run validation again:
    ```bash
    envoy docs validate [--path <docs_path>]
    ```
 
-
+3. If issues remain, report which failed and why
 
-
+4. Report completion:
    ```markdown
    ## Audit Complete
 
@@ -42,17 +42,8 @@ For each prompt from next:
 3. **Parallel execution**: If multiple prompts/variants returned, delegate all in parallel
 </step>
 
-<step name="extract_documentation">
-After each specialist returns (prompt merged):
-
-Call `/docs adjust --diff` to update documentation based on changes.
-* Uses taxonomy-based approach to identify changed areas
-* Writers update relevant documentation with symbol references
-* Returns: `{ success: true }`
-</step>
-
 <step name="loop">
-Repeat steps 1-
+Repeat steps 1-2 until:
 - No more prompts returned from next
 - No prompts in_progress status
 </step>
@@ -80,8 +71,38 @@ If verdict = "failed" OR suggested_fixes exist:
 4. Rerun full review (repeat until passes)
 </step>
 
+<step name="extract_documentation">
+After full review passes, delegate to **documentation-taxonomist** agent with adjust mode:
+
+1. Get prompt walkthroughs for rationale context:
+   ```bash
+   envoy plan get-all-walkthroughs
+   ```
+
+2. Delegate to taxonomist with inputs:
+   ```yaml
+   mode: "adjust"
+   use_diff: true
+   feature_branch: "<current_branch>"
+   walkthroughs: <output from get-all-walkthroughs>
+   ```
+
+3. Taxonomist identifies affected domains, delegates to writers (writers do NOT commit)
+
+4. After ALL writers complete, commit documentation changes:
+   ```bash
+   git add docs/
+   git commit -m "docs: update documentation for feature"
+   ```
+
+5. Mark prompts documented:
+   ```bash
+   envoy plan mark-all-documented
+   ```
+</step>
+
 <step name="mandatory_doc_audit">
-Call `/docs
+Call `/audit-docs` to validate all documentation symbol references.
 * Checks for stale (hash changed) and invalid (symbol deleted) references
 * If issues found: fix automatically or present to user
 * Returns: `{ success: true }`
@@ -104,7 +125,7 @@ Call /whats-next command
 <success_criteria>
 - All prompts implemented via specialist delegation
 - Variants executed in parallel
-- Documentation
+- Documentation updated from all changes (once, after review passes)
 - Full review passes
 - Documentation audit completed
 - Plan marked complete with PR created
@@ -114,7 +135,7 @@ Call /whats-next command
 <constraints>
 - MUST respect prompt dependencies (use envoy next)
 - MUST run all variants in parallel
-- MUST extract documentation after
+- MUST extract documentation once after full review passes
 - MUST loop until no prompts remain
 - MUST pass full review before completing
 - MUST run doc audit before completion
@@ -120,7 +120,7 @@ notes: "<segment.notes>"
 success: true
 ```
 
-Writers work directly on the branch. Taxonomist ensures non-overlapping output directories, so no conflicts occur.
+Writers work directly on the branch without committing. Taxonomist ensures non-overlapping output directories, so no conflicts occur. Main agent commits all changes after writers complete.
 </step>
 
 <step name="validate_docs">
@@ -132,7 +132,7 @@ If stale/invalid refs found:
 </step>
 
 <step name="commit_documentation">
-Commit
+Commit ALL documentation changes from parallel writers:
 
 1. Check for uncommitted changes in docs/:
    ```bash
@@ -168,7 +168,7 @@ Update semantic search index with new documentation:
 
 2. Call reindex:
    ```bash
-   envoy knowledge reindex-from-changes
+   envoy knowledge reindex-from-changes --files '<json_array>'
    ```
 
 3. If reindex reports missing references:
@@ -16,7 +16,6 @@ import matter from "gray-matter";
 import { BaseCommand, CommandResult } from "./base.js";
 import {
   findSymbol,
-  symbolExists,
   getFileComplexity,
 } from "../lib/tree-sitter-utils.js";
 import { getSupportedExtensions } from "../lib/ast-queries.js";
@@ -514,9 +513,9 @@ const codeBlockPattern = /^```[a-z0-9_+-]*$/gm;
       continue;
     }
 
-    // Check if symbol exists
-    const
-    if (!
+    // Check if symbol exists and get its location
+    const symbol = await findSymbol(absoluteRefFile, ref.refSymbol!);
+    if (!symbol) {
       const reason = "Symbol not found";
       invalid.push({
         doc_file: ref.file,
@@ -530,12 +529,6 @@ const codeBlockPattern = /^```[a-z0-9_+-]*$/gm;
       continue;
     }
 
-    // Get current hash for symbol
-    const symbol = await findSymbol(absoluteRefFile, ref.refSymbol!);
-    if (!symbol) {
-      continue; // Already validated above
-    }
-
     const { hash: mostRecentHash, success } = getMostRecentHashForRange(
       absoluteRefFile,
       symbol.startLine,
@@ -35,6 +35,8 @@ export {
   CompletePromptCommand,
   GetPromptWalkthroughCommand,
   MarkPromptExtractedCommand,
+  GetAllWalkthroughsCommand,
+  MarkAllDocumentedCommand,
   ReleaseAllPromptsCommand,
   CompleteCommand,
 } from "./lifecycle.js";
@@ -77,6 +79,8 @@ import {
   CompletePromptCommand,
   GetPromptWalkthroughCommand,
   MarkPromptExtractedCommand,
+  GetAllWalkthroughsCommand,
+  MarkAllDocumentedCommand,
   ReleaseAllPromptsCommand,
   CompleteCommand,
 } from "./lifecycle.js";
@@ -121,6 +125,8 @@ export const COMMANDS = {
   "complete-prompt": CompletePromptCommand,
   "get-prompt-walkthrough": GetPromptWalkthroughCommand,
   "mark-prompt-extracted": MarkPromptExtractedCommand,
+  "get-all-walkthroughs": GetAllWalkthroughsCommand,
+  "mark-all-documented": MarkAllDocumentedCommand,
   "release-all-prompts": ReleaseAllPromptsCommand,
   complete: CompleteCommand,
   // Gates
@@ -452,6 +452,91 @@ export class MarkPromptExtractedCommand extends BaseCommand {
   }
 }
 
+/**
+ * Get all walkthroughs from prompts not yet documented.
+ * Returns walkthroughs for all prompts where documentation_extracted is false.
+ */
+export class GetAllWalkthroughsCommand extends BaseCommand {
+  readonly name = "get-all-walkthroughs";
+  readonly description = "Get walkthroughs from all undocumented prompts";
+
+  defineArguments(_cmd: Command): void {
+    // No arguments
+  }
+
+  async execute(_args: Record<string, unknown>): Promise<CommandResult> {
+    const branch = getBranch();
+    if (!branch) {
+      return this.error("no_branch", "Not in a git repository or no branch checked out");
+    }
+
+    if (!planExists()) {
+      return this.error("no_plan", "No plan directory exists for this branch");
+    }
+
+    const prompts = readAllPrompts();
+    const undocumented = prompts.filter((p) => !p.frontMatter.documentation_extracted);
+
+    const walkthroughs = undocumented.map((p) => ({
+      prompt_num: p.number,
+      variant: p.variant,
+      id: getPromptId(p.number, p.variant),
+      description: p.frontMatter.description,
+      walkthrough: p.frontMatter.walkthrough || [],
+      relevant_files: p.frontMatter.relevant_files || [],
+    }));
+
+    return this.success({
+      count: walkthroughs.length,
+      walkthroughs,
+    });
+  }
+}
+
+/**
+ * Mark all undocumented prompts as documentation_extracted.
+ */
+export class MarkAllDocumentedCommand extends BaseCommand {
+  readonly name = "mark-all-documented";
+  readonly description = "Mark all undocumented prompts as documentation extracted";
+
+  defineArguments(_cmd: Command): void {
+    // No arguments
+  }
+
+  async execute(_args: Record<string, unknown>): Promise<CommandResult> {
+    const branch = getBranch();
+    if (!branch) {
+      return this.error("no_branch", "Not in a git repository or no branch checked out");
+    }
+
+    if (!planExists()) {
+      return this.error("no_plan", "No plan directory exists for this branch");
+    }
+
+    const prompts = readAllPrompts();
+    const undocumented = prompts.filter((p) => !p.frontMatter.documentation_extracted);
+    const marked: string[] = [];
+
+    for (const p of undocumented) {
+      const prompt = readPrompt(p.number, p.variant);
+      if (prompt) {
+        const updatedFrontMatter: PromptFrontMatter = {
+          ...prompt.frontMatter,
+          documentation_extracted: true,
+        };
+        writePrompt(p.number, p.variant, updatedFrontMatter, prompt.content);
+        marked.push(getPromptId(p.number, p.variant));
+      }
+    }
+
+    return this.success({
+      marked_count: marked.length,
+      marked_prompts: marked,
+    });
+  }
+}
+
 /**
  * Release all prompts from in_progress status.
  */
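The two new lifecycle commands above share a filter-then-act pattern over prompt front matter. A standalone sketch of that pattern, with hypothetical minimal types standing in for the real envoy helpers (readAllPrompts, writePrompt, and getPromptId are not reproduced; the `${number}${variant}` id format is our invention):

```typescript
interface PromptFrontMatter {
  description: string;
  documentation_extracted?: boolean;
  walkthrough?: string[];
  relevant_files?: string[];
}

interface Prompt {
  number: number;
  variant: string;
  frontMatter: PromptFrontMatter;
}

// Select prompts still awaiting documentation extraction.
function undocumented(prompts: Prompt[]): Prompt[] {
  return prompts.filter((p) => !p.frontMatter.documentation_extracted);
}

// Flag every undocumented prompt as extracted; returns the ids touched.
function markAllDocumented(prompts: Prompt[]): string[] {
  return undocumented(prompts).map((p) => {
    p.frontMatter = { ...p.frontMatter, documentation_extracted: true };
    return `${p.number}${p.variant}`; // hypothetical id format
  });
}
```

GetAllWalkthroughsCommand uses the same `undocumented` selection but maps to a read-only projection instead of writing the flag back, which is why the two commands compose cleanly: get, document, then mark.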
@@ -8,6 +8,7 @@ export type SymbolType = "function" | "class" | "variable" | "type" | "export" |
 export interface SymbolQuery {
   query: string;
   nameCapture: string;
+  defCapture?: string; // Capture for full definition range (optional, falls back to name)
 }
 
 export interface LanguageQueries {
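The optional `defCapture` added above lets callers prefer the full-definition range while older query entries keep working. A small hypothetical sketch of that fallback; `Capture` and `pickRange` are invented stand-ins, not the real tree-sitter-utils API:

```typescript
interface SymbolQuery {
  query: string;
  nameCapture: string;
  defCapture?: string; // optional full-definition capture
}

interface Capture {
  name: string;
  startLine: number;
  endLine: number;
}

// Prefer the @def capture when the query defines one; otherwise fall back
// to @name, which is what pre-defCapture entries effectively returned.
function pickRange(q: SymbolQuery, captures: Capture[]): Capture | undefined {
  const wanted = q.defCapture ?? q.nameCapture;
  return (
    captures.find((c) => c.name === wanted) ??
    captures.find((c) => c.name === q.nameCapture)
  );
}
```

With the `@def` captures in the hunk below, this yields the whole declaration span (useful for hashing a symbol's full body) rather than just the identifier token.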
@@ -21,195 +22,236 @@ export interface LanguageQueries {
 export const languageQueries: Record<string, LanguageQueries> = {
   typescript: {
     function: {
-      query: `(function_declaration name: (identifier) @name)`,
+      query: `(function_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     class: {
-      query: `(class_declaration name: (type_identifier) @name)`,
+      query: `(class_declaration name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     variable: {
-      query: `(variable_declarator name: (identifier) @name)`,
+      query: `(variable_declarator name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     type: {
-      query: `(type_alias_declaration name: (type_identifier) @name)`,
+      query: `(type_alias_declaration name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     interface: {
-      query: `(interface_declaration name: (type_identifier) @name)`,
+      query: `(interface_declaration name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     method: {
-      query: `(method_definition name: (property_identifier) @name)`,
+      query: `(method_definition name: (property_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     export: {
-      query: `(export_statement declaration: (function_declaration name: (identifier) @name))`,
+      query: `(export_statement declaration: (function_declaration name: (identifier) @name)) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     arrowFunction: {
       query: `(lexical_declaration
         (variable_declarator
           name: (identifier) @name
-          value: (arrow_function)))`,
+          value: (arrow_function))) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   javascript: {
     function: {
-      query: `(function_declaration name: (identifier) @name)`,
+      query: `(function_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     class: {
-      query: `(class_declaration name: (identifier) @name)`,
+      query: `(class_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     variable: {
-      query: `(variable_declarator name: (identifier) @name)`,
+      query: `(variable_declarator name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     method: {
-      query: `(method_definition name: (property_identifier) @name)`,
+      query: `(method_definition name: (property_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     arrowFunction: {
       query: `(lexical_declaration
         (variable_declarator
           name: (identifier) @name
-          value: (arrow_function)))`,
+          value: (arrow_function))) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   python: {
     function: {
-      query: `(function_definition name: (identifier) @name)`,
+      query: `(function_definition name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     class: {
-      query: `(class_definition name: (identifier) @name)`,
+      query: `(class_definition name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     variable: {
-      query: `(assignment left: (identifier) @name)`,
+      query: `(assignment left: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     method: {
-      query: `(function_definition name: (identifier) @name)`,
+      query: `(function_definition name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   go: {
     function: {
-      query: `(function_declaration name: (identifier) @name)`,
+      query: `(function_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     type: {
-      query: `(type_declaration (type_spec name: (type_identifier) @name))`,
+      query: `(type_declaration (type_spec name: (type_identifier) @name)) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     method: {
-      query: `(method_declaration name: (field_identifier) @name)`,
+      query: `(method_declaration name: (field_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     variable: {
-      query: `(var_declaration (var_spec name: (identifier) @name))`,
+      query: `(var_declaration (var_spec name: (identifier) @name)) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     const: {
-      query: `(const_declaration (const_spec name: (identifier) @name))`,
+      query: `(const_declaration (const_spec name: (identifier) @name)) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   rust: {
     function: {
-      query: `(function_item name: (identifier) @name)`,
+      query: `(function_item name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     struct: {
-      query: `(struct_item name: (type_identifier) @name)`,
+      query: `(struct_item name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     enum: {
-      query: `(enum_item name: (type_identifier) @name)`,
+      query: `(enum_item name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     impl: {
-      query: `(impl_item type: (type_identifier) @name)`,
+      query: `(impl_item type: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     trait: {
-      query: `(trait_item name: (type_identifier) @name)`,
+      query: `(trait_item name: (type_identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     const: {
-      query: `(const_item name: (identifier) @name)`,
+      query: `(const_item name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   java: {
     class: {
-      query: `(class_declaration name: (identifier) @name)`,
+      query: `(class_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     interface: {
-      query: `(interface_declaration name: (identifier) @name)`,
+      query: `(interface_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     method: {
-      query: `(method_declaration name: (identifier) @name)`,
+      query: `(method_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     field: {
-      query: `(field_declaration declarator: (variable_declarator name: (identifier) @name))`,
+      query: `(field_declaration declarator: (variable_declarator name: (identifier) @name)) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     enum: {
-      query: `(enum_declaration name: (identifier) @name)`,
+      query: `(enum_declaration name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   ruby: {
     function: {
-      query: `(method name: (identifier) @name)`,
+      query: `(method name: (identifier) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     class: {
-      query: `(class name: (constant) @name)`,
+      query: `(class name: (constant) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
     module: {
-      query: `(module name: (constant) @name)`,
+      query: `(module name: (constant) @name) @def`,
       nameCapture: "name",
+      defCapture: "def",
     },
   },
 
   swift: {
     function: {
-      query: `(function_declaration name: (simple_identifier) @name)`,
|
|
232
|
+
query: `(function_declaration name: (simple_identifier) @name) @def`,
|
|
196
233
|
nameCapture: "name",
|
|
234
|
+
defCapture: "def",
|
|
197
235
|
},
|
|
198
236
|
class: {
|
|
199
|
-
query: `(class_declaration name: (type_identifier) @name)`,
|
|
237
|
+
query: `(class_declaration name: (type_identifier) @name) @def`,
|
|
200
238
|
nameCapture: "name",
|
|
239
|
+
defCapture: "def",
|
|
201
240
|
},
|
|
202
241
|
struct: {
|
|
203
|
-
query: `(struct_declaration name: (type_identifier) @name)`,
|
|
242
|
+
query: `(struct_declaration name: (type_identifier) @name) @def`,
|
|
204
243
|
nameCapture: "name",
|
|
244
|
+
defCapture: "def",
|
|
205
245
|
},
|
|
206
246
|
enum: {
|
|
207
|
-
query: `(enum_declaration name: (type_identifier) @name)`,
|
|
247
|
+
query: `(enum_declaration name: (type_identifier) @name) @def`,
|
|
208
248
|
nameCapture: "name",
|
|
249
|
+
defCapture: "def",
|
|
209
250
|
},
|
|
210
251
|
protocol: {
|
|
211
|
-
query: `(protocol_declaration name: (type_identifier) @name)`,
|
|
252
|
+
query: `(protocol_declaration name: (type_identifier) @name) @def`,
|
|
212
253
|
nameCapture: "name",
|
|
254
|
+
defCapture: "def",
|
|
213
255
|
},
|
|
214
256
|
},
|
|
215
257
|
};
|
|
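Every hunk in this file makes the same two-line change: append a trailing `@def` capture to the pattern and record its name as `defCapture`. A minimal sketch of the resulting shape (the `QueryDef` type here is illustrative, mirroring the diff rather than the package's actual exports):

```typescript
// Shape of a query definition after this change. The outer `@def` capture
// grabs the whole definition node; `@name` still grabs just the identifier.
interface QueryDef {
  query: string;       // tree-sitter S-expression pattern
  nameCapture: string; // capture holding the symbol's identifier
  defCapture?: string; // capture holding the full definition node
}

const rustQueries: Record<string, QueryDef> = {
  function: {
    query: "(function_item name: (identifier) @name) @def",
    nameCapture: "name",
    defCapture: "def",
  },
};

console.log(rustQueries.function.query.endsWith("@def")); // true
```

Because `@def` wraps the entire node, a match reports the definition's full start/end range instead of just the identifier's single line.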
@@ -23,6 +23,7 @@ export interface ParseResult {
   language?: string;
   symbols?: SymbolLocation[];
   error?: string;
+  warnings?: string[]; // Non-fatal issues during parsing (e.g., query failures)
 }
 
 // Tree-sitter Query types
@@ -111,7 +112,23 @@ async function getParser(language: string): Promise<ParserData | null> {
     parserCache.set(language, cached);
     return cached;
   } catch (e) {
-
+    // Detect native binding failures
+    const errorMsg = e instanceof Error ? e.message : String(e);
+    if (
+      errorMsg.includes("MODULE_NOT_FOUND") ||
+      errorMsg.includes("invalid ELF header") ||
+      errorMsg.includes("was compiled against a different Node.js version") ||
+      errorMsg.includes("dlopen") ||
+      errorMsg.includes(".node")
+    ) {
+      console.error(
+        `Native binding error for ${language} parser. ` +
+          `This usually means tree-sitter was compiled for a different Node.js version. ` +
+          `Try: rm -rf node_modules && npm install`
+      );
+    } else {
+      console.error(`Failed to load parser for ${language}:`, e);
+    }
     return null;
   }
 }
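The new `catch` block classifies loader failures by substring. A standalone sketch of that check (the helper name and marker list are hypothetical; the substrings are exactly the ones the diff tests for):

```typescript
// Substrings that indicate a broken native module rather than an ordinary
// runtime error: missing compiled artifact, wrong-architecture binary, or a
// Node ABI mismatch.
const NATIVE_BINDING_MARKERS = [
  "MODULE_NOT_FOUND",
  "invalid ELF header",
  "was compiled against a different Node.js version",
  "dlopen",
  ".node",
];

function isNativeBindingError(message: string): boolean {
  return NATIVE_BINDING_MARKERS.some(marker => message.includes(marker));
}

console.log(isNativeBindingError("dlopen failed: image not found")); // true
console.log(isNativeBindingError("Unexpected token in JSON"));       // false
```

Separating the two cases lets the error message suggest the likely fix (reinstalling `node_modules`) only when a binding problem is actually plausible.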
@@ -157,12 +174,13 @@ export async function parseFile(filePath: string): Promise<ParseResult> {
     };
   }
 
-  const symbols = extractSymbols(tree, queries, parserData.grammar, parserData.QueryClass);
+  const { symbols, warnings } = extractSymbols(tree, queries, parserData.grammar, parserData.QueryClass);
 
   return {
     success: true,
     language,
     symbols,
+    warnings: warnings.length > 0 ? warnings : undefined,
   };
 } catch (e) {
   return {
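With `warnings` threaded through, callers can report partial success instead of silently dropping query failures. A hypothetical consumer of the new optional field (the `summarize` helper and its trimmed `ParseResultLike` type are illustrative, not part of the package):

```typescript
// A parse can now succeed while still carrying non-fatal warnings;
// this sketch shows how a caller might surface both.
interface ParseResultLike {
  success: boolean;
  symbols?: { name: string }[];
  warnings?: string[];
}

function summarize(result: ParseResultLike): string {
  if (!result.success) return "parse failed";
  const symbolCount = result.symbols?.length ?? 0;
  const warningCount = result.warnings?.length ?? 0;
  return warningCount > 0
    ? `${symbolCount} symbols (${warningCount} warnings)`
    : `${symbolCount} symbols`;
}

console.log(summarize({ success: true, symbols: [{ name: "main" }] }));              // 1 symbols
console.log(summarize({ success: true, symbols: [], warnings: ["compile failed"] })); // 0 symbols (1 warnings)
```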
@@ -202,6 +220,11 @@ export async function symbolExists(
 // Cache compiled queries to avoid recompilation
 const queryCache = new Map<string, Query>();
 
+interface ExtractResult {
+  symbols: SymbolLocation[];
+  warnings: string[];
+}
+
 /**
  * Extract symbols from a parsed tree using tree-sitter's Query API.
  * This uses the declarative query patterns from ast-queries.ts.
@@ -211,8 +234,9 @@ function extractSymbols(
   queries: LanguageQueries,
   grammar: unknown,
   QueryClass: QueryConstructor
-):
+): ExtractResult {
   const symbols: SymbolLocation[] = [];
+  const warnings: string[] = [];
   const root = (tree as { rootNode: unknown }).rootNode;
 
   for (const [symbolType, queryDef] of Object.entries(queries)) {
|
|
|
224
248
|
query = new QueryClass(grammar, queryDef.query);
|
|
225
249
|
queryCache.set(cacheKey, query);
|
|
226
250
|
} catch (e) {
|
|
227
|
-
|
|
228
|
-
|
|
251
|
+
const msg = `Query compile failed for ${symbolType}: ${e instanceof Error ? e.message : String(e)}`;
|
|
252
|
+
warnings.push(msg);
|
|
229
253
|
continue;
|
|
230
254
|
}
|
|
231
255
|
}
|
|
@@ -234,23 +258,28 @@ function extractSymbols(
     const matches = query.matches(root);
     for (const match of matches) {
       const nameCapture = match.captures.find(c => c.name === queryDef.nameCapture);
-      if
+      // Use defCapture for range if available, otherwise fall back to nameCapture
+      const defCaptureName = queryDef.defCapture || queryDef.nameCapture;
+      const defCapture = match.captures.find(c => c.name === defCaptureName);
+      const rangeNode = defCapture?.node || nameCapture?.node;
+
+      if (nameCapture && rangeNode) {
         symbols.push({
           name: nameCapture.node.text,
-          startLine:
-          endLine:
+          startLine: rangeNode.startPosition.row + 1,
+          endLine: rangeNode.endPosition.row + 1,
           type: symbolType,
         });
       }
     }
   } catch (e) {
-
-
+    const msg = `Query exec failed for ${symbolType}: ${e instanceof Error ? e.message : String(e)}`;
+    warnings.push(msg);
     continue;
   }
 }
 
-  return symbols;
+  return { symbols, warnings };
 }
 
 /**
package/.claude/commands/docs-adjust.md
DELETED
@@ -1,214 +0,0 @@
----
-description: Update documentation incrementally based on code changes
-argument-hint: [--diff] [optional paths or context]
----
-
-<objective>
-Update documentation incrementally based on recent code changes or user-specified scope. Uses taxonomy-based approach for targeted documentation updates.
-</objective>
-
-<context>
-Current branch: !`git branch --show-current`
-Base branch: !`envoy git get-base-branch`
-</context>
-
-<main_agent_role>
-Main agent is ORCHESTRATOR ONLY. Do NOT perform any codebase discovery, file analysis, or documentation planning. All discovery work is delegated to the taxonomist agent.
-
-Main agent responsibilities:
-1. Parse arguments (paths, flags, context)
-2. Verify clean git state
-3. Delegate to taxonomist with raw inputs
-4. Orchestrate writers based on taxonomist output
-5. Handle merging, validation, and PR creation
-</main_agent_role>
-
-<process>
-<step name="parse_arguments">
-Parse $ARGUMENTS:
-- `--diff` flag: pass to taxonomist for git-based discovery
-- Paths: pass to taxonomist as scope
-- Context: pass to taxonomist as user guidance
-
-Do NOT run discovery commands - pass raw inputs to taxonomist.
-</step>
-
-<step name="ensure_committed_state">
-Before delegating to taxonomist, verify clean git state:
-
-1. Check for uncommitted changes:
-   ```bash
-   git status --porcelain
-   ```
-
-2. If changes exist:
-   - Use AskUserQuestion: "Uncommitted changes detected. Documentation requires committed state for valid reference hashes."
-   - Options:
-     - "Commit now" - propose message, gate for approval
-     - "Stash and continue" - `git stash`
-     - "Cancel" - abort workflow
-
-3. If "Commit now":
-   - Run `git diff --cached --stat` for context
-   - Propose commit message based on staged changes
-   - Gate for user approval
-   - Execute: `git add -A && git commit -m "<approved message>"`
-
-4. If "Stash and continue":
-   - Execute: `git stash push -m "pre-docs stash"`
-   - Note: remind user to `git stash pop` after docs complete
-
-5. Verify clean state before proceeding:
-   ```bash
-   git status --porcelain
-   ```
-   Must return empty.
-</step>
-
-<step name="delegate_to_taxonomist">
-Delegate to **documentation-taxonomist agent** with adjust-workflow.
-
-Taxonomist handles ALL discovery: analyzing codebase, checking existing docs, identifying affected domains, creating directory structure.
-
-**INPUTS:**
-```yaml
-mode: "adjust"
-use_diff: true | false  # from --diff flag
-scope_paths: [<paths from arguments, if any>]
-user_request: "<optional context from user>"
-feature_branch: "<current_branch>"
-```
-
-**OUTPUTS:**
-```yaml
-success: true
-segments:
-  - domain: "<domain-name>"
-    files: ["<glob-patterns>"]
-    output_path: "docs/<domain>/"
-    depth: "overview" | "detailed" | "comprehensive"
-    notes: "<guidance>"
-    action: "create" | "update"
-```
-</step>
-
-<step name="parallel_writers">
-If multiple segments, delegate to **documentation-writer agents** in parallel.
-
-If single segment, delegate to single writer.
-
-**INPUTS (per writer):**
-```yaml
-mode: "write"
-domain: "<segment.domain>"
-files: <segment.files>
-output_path: "<segment.output_path>"
-depth: "<segment.depth>"
-notes: "<segment.notes>"
-```
-
-**OUTPUTS:**
-```yaml
-success: true
-```
-
-Writers work directly on the branch. Taxonomist ensures non-overlapping output directories, so no conflicts occur.
-</step>
-
-<step name="validate_and_report">
-Run validation: `envoy docs validate`
-
-If stale/invalid refs found:
-- Present findings to user
-- Delegate single writer with fix-workflow if user approves
-</step>
-
-<step name="commit_documentation">
-Commit any uncommitted documentation changes (e.g., validation fixes):
-
-1. Check for uncommitted changes in docs/:
-   ```bash
-   git status --porcelain docs/
-   ```
-
-2. If changes exist:
-   ```bash
-   git add docs/
-   git commit -m "docs: update documentation"
-   ```
-
-3. Track documentation files for reindex:
-   - Get list of doc files created/modified since branch diverged from base:
-     ```bash
-     git diff --name-only $(git merge-base HEAD <base_branch>)..HEAD -- docs/
-     ```
-   - Store this list for the reindex step
-</step>
-
-<step name="reindex_knowledge">
-Update semantic search index with changed documentation:
-
-1. Build file changes JSON from tracked doc files:
-   ```json
-   [
-     {"path": "docs/domain/index.md", "added": true},
-     {"path": "docs/domain/subdomain/index.md", "modified": true}
-   ]
-   ```
-   - Use `added: true` for new files
-   - Use `modified: true` for updated files
-   - Use `deleted: true` for removed files
-
-2. Call reindex:
-   ```bash
-   envoy knowledge reindex-from-changes docs --files '<json_array>'
-   ```
-
-3. If reindex reports missing references:
-   - Log warning but continue (docs may reference code not yet indexed)
-</step>
-
-<step name="finalize">
-If in workflow context (called from /continue):
-- Return success without creating PR
-- Let parent workflow handle PR
-
-If standalone:
-- Create PR if changes made
-- Report completion
-</step>
-</process>
-
-<workflow_integration>
-When called from `/continue` or implementation workflow:
-- Skip PR creation
-- Return `{ success: true }` for workflow to continue
-- Validation warnings go to workflow orchestrator
-
-When called standalone:
-- Create PR with changes
-- Present validation results to user
-</workflow_integration>
-
-<success_criteria>
-- Changed files identified (if --diff)
-- Taxonomist created targeted segments with non-overlapping output directories
-- Writers updated relevant docs
-- Validation run
-- Documentation committed
-- Knowledge index updated
-- PR created (if standalone)
-</success_criteria>
-
-<constraints>
-- MUST NOT perform codebase discovery - delegate ALL discovery to taxonomist
-- MUST NOT run envoy docs tree, envoy docs complexity, or envoy knowledge search
-- MUST verify clean git state before documentation (ensure_committed_state step)
-- MUST delegate to taxonomist for all segmentation and discovery
-- MUST pass --diff flag to taxonomist (not process it directly)
-- MUST work both standalone and in workflow context
-- MUST validate after documentation
-- MUST commit documentation changes before reindex (reindex reads from disk)
-- MUST reindex knowledge base after documentation committed
-- All delegations MUST follow INPUTS/OUTPUTS format
-</constraints>