@esthernandez/vibe-doc 0.2.3 → 0.3.0

@@ -1,6 +1,6 @@
  {
  "name": "vibe-doc",
- "version": "0.2.3",
+ "version": "0.3.0",
  "description": "AI-powered documentation gap analyzer. Scans your codebase, classifies your project, identifies missing technical documentation, and generates professional docs from your existing artifacts.",
  "author": {
  "name": "626Labs LLC"
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@esthernandez/vibe-doc",
- "version": "0.2.3",
+ "version": "0.3.0",
  "description": "AI-powered documentation gap analyzer and generator for modern codebases. Scans your project, classifies your architecture, and generates professional docs from your existing artifacts.",
  "author": "626Labs LLC",
  "license": "MIT",
@@ -5,19 +5,35 @@ description: >
  "write my documentation", "fix my gaps", "create a runbook",
  "write the threat model", "generate missing docs", or wants to
  produce technical documentation from their project artifacts.
+ Runs an autonomous-first workflow: reads project files, synthesizes
+ as much as possible without asking, then interviews the user only
+ for the sections that genuinely need human judgment.
  ---
 
  # Vibe Doc Generate Skill
 
- Conversational pipeline to select documentation gaps and generate complete documents.
+ Autonomous-first pipeline: read the project, fill what you can, then ask the user only for the sections that need human judgment.
 
  **Shared behavior:** Read `skills/guide/SKILL.md` for state management, CLI patterns, checkpoints, and output formatting.
 
  ---
 
- ## Entry: Check for Existing Scan
+ ## Design Intent
 
- **First step: verify state exists**
+ The old model was **agent-interviewed, user-informed**: the agent asked 2-3 synthesis questions per doc type and the user answered them all. That's overkill for factual docs whose content lives in the codebase.
+
+ The new model is **autonomous-first**:
+
+ 1. **Read the project files directly** — README, CLAUDE.md, package.json, SKILL files, source entry points, git history, CI configs
+ 2. **Synthesize confidently** from what you read — fill in template sections where you have strong evidence
+ 3. **Interview only for the gaps** — ask targeted questions for the sections where code can't tell you the answer (security judgment, business intent, operational context the team knows but hasn't written down yet)
+ 4. **Present the result** — show the user what you filled in, what you left as NEEDS INPUT, and let them review
+
+ The CLI (`vibe-doc generate <doctype>`) still produces the deterministic scaffold. This skill layers intelligence on top: same scaffold, but the agent keeps going and fills it in from the codebase before handing off to the user.
+
+ ---
+
+ ## Entry: Verify Scan State
 
  ```bash
  if [ ! -f "<project-path>/.vibe-doc/state.json" ]; then
@@ -26,366 +42,318 @@ if [ ! -f "<project-path>/.vibe-doc/state.json" ]; then
  fi
  ```
 
- If state doesn't exist, redirect user to **Scan skill** and exit.
+ If state doesn't exist, redirect to the **Scan skill** and exit.
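
The entry gate above can be run as a standalone sketch (editor's illustration, not part of the package diff; `PROJECT_PATH` stands in for `<project-path>`, and the JSON shape of `state.json` is an assumption):

```shell
# Sketch of the entry check: generation refuses to run without scan state.
# PROJECT_PATH stands in for <project-path>; the state-file JSON shape is assumed.
PROJECT_PATH="$(mktemp -d)"
mkdir -p "$PROJECT_PATH/.vibe-doc"
printf '{"gaps":["readme","install-guide"]}' > "$PROJECT_PATH/.vibe-doc/state.json"

if [ ! -f "$PROJECT_PATH/.vibe-doc/state.json" ]; then
  # Here the skill would redirect to the Scan skill and stop.
  echo "No scan state - run the Scan skill first" >&2
  exit 1
fi
echo "scan-found"
```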
 
  ---
 
- ## Conversational Flow
+ ## Main Flow
 
- ### 1. Present Gap Summary & Offer Choices
+ ### 1. Present Gaps and Confirm Selection
 
- Read state and show gaps:
+ Read state and show gaps grouped by tier:
 
  ```
- Documentation Gaps
- ━━━━━━━━━━━━━━━━━━
+ Documentation Gaps — <Category>
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 
- Required (Deployment Blockers) — 3 missing:
- □ Threat Model
- □ Architecture Decision Records
- □ Runbook (Deployment & Operations)
+ Required (ship blockers) — N missing:
+ □ README
+ □ Install Guide
+ □ Skill/Command Reference
 
- Recommended (Should Do) — 4 missing:
- □ API Specification
- □ Deployment Procedure
- □ Data Model Documentation
- □ Security Hardening Guide
-
- Optional (Nice to Have) — 3 missing:
- □ Changelog
- □ Contributing Guide
- □ Performance Benchmarks
+ Recommended (should do) — M missing:
+ □ ADRs
+ □ Test Plan
+ □ Changelog / Contributing
 
  Which would you like to generate?
 
- [required] Start with all Required docs
- [pick] Let me choose specific gaps
- [single <name>] Generate one doc now
- [all] Generate everything
+ [required] Start with all Required docs (runs autonomous fill in parallel)
+ [pick] Let me choose specific docs
+ [<name>] Single doc by name
+ [all] Every missing doc, Required + Recommended + Optional
  ```
 
- **User chooses:**
- - `required` → Go to step 2a (Generate Required, one at a time)
- - `pick` → Go to step 2b (Selection menu)
- - `single Threat Model` → Go directly to step 3 for that doc
- - `all` → Go to step 2a with all gaps pre-selected
+ **Do not default to "all"** unless the user asks for it. More docs = slower, more tokens, more noise.
 
  ---
 
- ### 2a. Sequential Generation — One at a Time
-
- For each selected gap, execute the generation workflow:
-
- 1. **Ask synthesis questions** (2-3 targeted questions for this doc type)
- 2. **Capture answers** (save to temporary JSON)
- 3. **Run generation command**
- 4. **Present results** (file paths, confidence summary)
- 5. **Confirm before moving to next doc**
+ ### 2. Route by Count
 
- **For example, generating a Threat Model:**
+ - **Single doc selected** → go to **Section 3: Autonomous Fill (single doc)**
+ - **Multiple docs selected** → go to **Section 4: Parallel Dispatch (multiple docs)**
 
- ```
- Threat Model Synthesis
- ━━━━━━━━━━━━━━━━━━━━━━
-
- I found security discussion in your artifacts, but I need 2 more details:
+ ---
 
- 1. Beyond authentication and data encryption, are there other sensitive
-    operations? (payments, PII access, admin functions, integrations?)
+ ### 3. Autonomous Fill (Single Doc)
 
- 2. Are there known external dependencies or services your app relies on?
-    (Third-party APIs, databases, cache layers?)
+ Follow these steps, in order, for each doc to generate.
 
- [Capture answers and save to temp JSON]
+ #### 3a. Run the CLI for the scaffold
 
- Generating threat model...
+ ```bash
+ cd <project-path> && npx vibe-doc generate <docType> --format both
  ```
 
- Then run:
+ This produces `docs/generated/<docType>.md` with deterministic-extractor fields pre-filled and `NEEDS INPUT` comments marking the gaps.
 
- ```bash
- cd <project-path> && npx vibe-doc generate threat-model \
-   --format both \
-   --answers '{"externalDeps":["Firebase","Stripe"],"sensitiveOps":["payment","admin"]}'
- ```
+ Read the scaffold back so you can edit it in place.
 
- **If generation succeeds:**
+ #### 3b. Gather source material
 
- ```
- ✓ Threat Model generated
+ Read the files most relevant to this doc type. Use the hint table below; add files based on what the scan inventory shows.
 
- Files created:
- • docs/generated/threat-model.md (2,400 words)
- • docs/generated/threat-model.docx
+ | Doc Type | Read These Files |
+ |----------|------------------|
+ | **readme** | `package.json`, `CLAUDE.md`, any existing `README.md`, main source entry file (e.g., `src/index.ts`), `docs/` summaries |
+ | **install-guide** | `package.json` (engines, scripts, bin), any existing `INSTALL.md`, CI configs (`.github/workflows/*.yml`), install-related scripts |
+ | **skill-command-reference** | every `skills/*/SKILL.md`, every `commands/*.md`, `.claude-plugin/plugin.json` |
+ | **changelog-contributing** | `git log --oneline -100`, any existing `CHANGELOG.md`, any existing `CONTRIBUTING.md`, `package.json` version history |
+ | **adr** | `CLAUDE.md`, commit messages with "decision:" or "arch:" prefixes, any `docs/adr/` or `docs/decisions/` folder |
+ | **runbook** | `package.json` scripts, `Dockerfile`, `.github/workflows/*.yml`, any `scripts/` folder, any deploy config |
+ | **api-spec** | Route/controller source files, `openapi.yaml`, `swagger.json`, any existing API docs |
+ | **deployment-procedure** | `.github/workflows/*.yml`, `Dockerfile`, deploy scripts, cloud infra configs (terraform, cdk, pulumi) |
+ | **test-plan** | Test files, test runner configs (`jest.config.*`, `pytest.ini`), CI test stages |
+ | **data-model** | Schema/migration files, ORM model files, database config |
+ | **threat-model** | Auth code, permission logic, sensitive-data handling, external API clients, secrets config |
 
- Confidence summary:
- • Attack surface (High) — extracted from code + interview
- • Threat scenarios (Medium) — based on patterns, flagged for review
- • Mitigations (Medium) — industry standard, review for your stack
- • Compliance mapping (High) — HIPAA requirements auto-linked
+ For each file, extract what's relevant to the template's sections. Ignore irrelevant content.
 
- Next: Review the markdown file, then approve or regenerate.
+ #### 3c. Fill the template autonomously
 
- [approve] → Save and move to next doc
- [revise] → Ask different questions and regenerate
- [skip] → Move to next gap without saving
- ```
+ Open the scaffold at `docs/generated/<docType>.md`. For each `NEEDS INPUT` comment:
 
- **If user approves:**
- - Move to next selected doc (repeat step 2a)
+ 1. **Can you synthesize this section from what you read?** If yes, replace the empty block (or the `{{user.*}}` placeholder still sitting there) with real content. Remove the `NEEDS INPUT` comment to signal the section is filled.
+ 2. **Do you need human judgment?** If yes, leave the `NEEDS INPUT` comment in place. These will become the questions you ask the user in the next step.
 
- **If user revises:**
- - Ask follow-up questions
- - Re-run generation with new answers
+ **Rules for autonomous fills:**
 
- **If user skips:**
- - Mark gap as deferred
- - Move to next doc
+ - **Cite your sources inline** — at the end of a section you wrote, add a markdown comment: `<!-- Source: package.json, README.md -->`. This lets the user verify your work quickly.
+ - **Don't fabricate.** If a section would require making something up (an SLA target you don't see, a rollback procedure that isn't documented), leave it as NEEDS INPUT. Confident content only.
+ - **Prefer brevity over padding.** A 3-sentence section filled from real evidence beats a 3-paragraph section of boilerplate.
+ - **Match the existing doc's voice.** Read at least one existing doc in the repo (README is usually a good reference) to calibrate tone.
 
- ---
+ Write the filled-in doc back to `docs/generated/<docType>.md`.
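
The bookkeeping in step 3c can be sketched concretely (editor's illustration; the exact `NEEDS INPUT` and `Source:` comment shapes are assumptions, not confirmed by the package):

```shell
# Count the gaps left after the autonomous pass: filled sections carry a
# Source comment, unfilled ones still carry a NEEDS INPUT comment.
DOC="$(mktemp)"
cat > "$DOC" <<'EOF'
## Overview
Generates docs from project artifacts. <!-- Source: package.json -->
## Troubleshooting
<!-- NEEDS INPUT: no existing error documentation found -->
EOF
REMAINING=$(grep -c 'NEEDS INPUT' "$DOC")
echo "$REMAINING section(s) still need human input"
```

Each remaining marker becomes one interview question in step 3d.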
 
- ### 2b. Selection Menu (If User Picks)
+ #### 3d. Interview the user for remaining gaps
 
- Show all gaps as a checklist:
+ Present a summary:
 
  ```
- Which docs would you like to generate? (Mark with [x])
-
- Required:
- [x] Threat Model
- [ ] Architecture Decision Records
- [ ] Runbook
+ Autonomous pass complete — docs/generated/<docType>.md
 
- Recommended:
- [x] API Specification
- [ ] Deployment Procedure
- [ ] Data Model Documentation
+ Filled from codebase:
+ <section A> — from <source files>
+ <section B> — from <source files>
+ <section C> — from <source files>
 
- Optional:
- [ ] Changelog
- [ ] Contributing Guide
+ Still need your input:
+ <section X> — <why the agent couldn't fill it>
+ <section Y> — <why the agent couldn't fill it>
 
- [done] → Generate the 2 checked docs
- [add all required] → Select all Required docs
- [clear] → Start over
+ I'll ask about those two now. If you'd rather fill them yourself
+ later, say "defer" and I'll leave the NEEDS INPUT comments.
  ```
 
- After selection, confirm:
-
- ```
- You've selected 2 docs to generate. Estimated time: 10-15 minutes.
+ Then ask **one question at a time** for each remaining gap. Each question should be specific, reference the context, and accept short answers:
 
- Ready to start?
- [yes] → Begin generation
- [no] → Go back and adjust
  ```
+ Question 1 of 2: <section X>
 
- Then proceed to step 2a (Sequential Generation).
-
- ---
+ <one-sentence explanation of what this section is for>
 
- ### 3. Synthesis Questions — Doc Type Specifics
+ From what I read, you have <X, Y, Z>. What's the <specific thing>?
+ ```
 
- For each document type, ask targeted questions to fill extraction gaps.
+ Capture each answer and update the doc in place. When all questions are answered, remove the `NEEDS INPUT` comments for those sections.
 
- **Use breadcrumb heuristics from `skills/guide/references/breadcrumb-heuristics.md`.**
+ #### 3e. Present for review
 
- **Example questions per doc type:**
+ ```
+ ✓ <docType>.md is ready for review.
 
- | Doc Type | Sample Questions |
- |----------|------------------|
- | **Threat Model** | What sensitive operations exist? Known external dependencies? Attack vectors you're concerned about? |
- | **ADRs** | Major architecture decisions made? Tradeoffs considered (monolith vs. microservices)? Tech stack choices? |
- | **Runbook** | Deployment frequency? Health checks and alerts? Rollback procedure? On-call escalation? |
- | **API Spec** | Authentication method? Rate limiting? Pagination? Error codes? Versioning strategy? |
- | **Deployment Procedure** | CI/CD pipeline stages? Approval gates? Rollback trigger? Monitoring post-deploy? |
- | **Test Plan** | Coverage targets? Manual vs. automated split? Test environments? Performance benchmarks? |
- | **Data Model** | Data retention requirements? PII classification? Schema versioning? Backup/restore? |
+ Coverage:
+ • Sections filled autonomously: N
+ • Sections filled from your answers: M
+ • Sections still marked NEEDS INPUT: 0 (or K if deferred)
 
- **Question delivery format:**
+ Open: docs/generated/<docType>.md
 
+ [approve] Move to next doc (or finish if last)
+ [revise] Ask different questions / read more files / regenerate
+ [edit] I'll wait while you edit manually, then approve
+ [defer] Mark remaining gaps as NEEDS INPUT and move on
  ```
- Threat Model — Question 1 of 2
 
- [Question text]
+ ---
 
- Your answer: [capture user input]
- ```
+ ### 4. Parallel Dispatch (Multiple Docs)
 
- ---
+ When the user selects multiple docs, **dispatch one subagent per doc type in parallel** using the Task tool. This is the recommended path — it's faster and each agent gets a focused slice of the codebase to read.
 
- ### 4. Generation Command
+ #### 4a. Plan the dispatch
 
- After collecting answers, save to JSON and run CLI:
+ For each selected doc, build a subagent prompt that covers Sections 3a-c (scaffold + read sources + fill autonomously). Do **not** include the conversational interview (Section 3d) in the subagent prompt — that happens in the main agent after all subagents return, so questions don't interleave.
 
- ```bash
- ANSWERS_JSON='{"externalDeps":["..."],"sensitiveOps":["..."]}'
- cd <project-path> && npx vibe-doc generate threat-model \
-   --format both \
-   --answers "$ANSWERS_JSON"
+ Subagent prompt template:
+
+ ```
+ You are generating documentation for a <Category> project at <project-path>.
+
+ Task: Produce a fully-filled `docs/generated/<docType>.md` from the project's
+ existing artifacts. Do NOT ask the user questions — fill only what you can
+ confidently synthesize from source files, and leave NEEDS INPUT comments for
+ anything you can't.
+
+ Steps:
+ 1. Run: `cd <project-path> && npx vibe-doc generate <docType> --format both`
+ 2. Read the generated scaffold at docs/generated/<docType>.md
+ 3. Read these source files: <from the hint table, plus inventory-specific adds>
+ 4. For each NEEDS INPUT section in the scaffold:
+    - If you can fill it confidently from what you read, replace it with real
+      content and add an inline <!-- Source: ... --> comment
+    - If you can't, leave the NEEDS INPUT comment so the main agent can ask the user
+ 5. Write the updated doc back to docs/generated/<docType>.md
+ 6. Report back with: (a) which sections you filled, (b) which sections still
+    need human input, (c) anything suspicious you noticed in the artifacts
+
+ Do not dispatch further subagents. Do not run the interview. Return findings
+ to the main agent.
  ```
 
- **Parse output:**
- - Extract file paths (e.g., `docs/generated/threat-model.md`)
- - Extract confidence scores per section
- - Extract source attributions
+ #### 4b. Dispatch in parallel
 
- ---
+ Use the Task tool to fire all subagents in the same message. Each subagent runs independently and edits its own doc.
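
The fan-out/join shape of 4b can be pictured with plain background jobs (editor's sketch only — the real skill uses the Task tool, and `fill_doc` is a hypothetical stand-in for one subagent's autonomous pass):

```shell
# One worker per doc type, all started together, then a single join point
# before the sequential interview phase begins.
OUT_DIR="$(mktemp -d)"
fill_doc() {                        # hypothetical stand-in for one subagent
  echo "filled: $1" > "$OUT_DIR/$1.result"
}
for doc in readme install-guide skill-command-reference; do
  fill_doc "$doc" &                 # dispatch in parallel
done
wait                                # collect: no interviews until all return
cat "$OUT_DIR"/*.result
```

The `wait` is the important part: it mirrors the rule that the main agent aggregates all findings before asking the user anything.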
 
- ### 5. Present Results
+ #### 4c. Collect results
 
- Show what was created:
+ When all subagents return, aggregate their findings:
 
  ```
- ✓ Threat Model generated
-
- Files:
- • markdown: docs/generated/threat-model.md (2,400 words)
- • docx: docs/generated/threat-model.docx
-
- Content breakdown:
- • Executive summary
- • Attack surface inventory (extracted from code + your input)
- • Threat scenarios (attack trees, entry points)
- • Mitigations and controls
- • Compliance checklist (HIPAA §164.308 mapping)
-
- Confidence by section:
- ✓ Attack surface (94%) — High confidence
- ⚠ Mitigations (72%) — Medium, please review
- ✓ Compliance (88%) — High confidence
-
- Source attributions included in document.
-
- Next step: Review the markdown, then either:
- [approve] → Mark as complete and generate more docs
- [revise] → Ask different questions and regenerate
- [skip] → Move to next gap
- ```
+ Autonomous pass complete — <N> docs
 
- ---
+ docs/generated/readme.md
+   Filled: overview, install, usage, license
+   Needs input: configuration (no .env.example found)
 
- ### 6. Document Review Checkpoint
+ docs/generated/install-guide.md
+   Filled: prerequisites, install steps, verification
+   Needs input: troubleshooting (no existing error documentation)
 
- Ask user to review before moving on:
+ docs/generated/skill-command-reference.md
+   Filled: all sections (found 8 SKILL files and 4 command definitions)
+   Needs input: none — ready to ship
 
+ Total: <X> sections filled autonomously, <Y> need your input.
  ```
- Before we move to the next doc, take a moment to review what was generated.
 
- Open: docs/generated/threat-model.md
+ #### 4d. Sequential interview for gaps
 
- Things to check:
- • Does the attack surface match your app?
- • Are mitigations practical for your stack?
- • Any confidence flags (marked ⚠) that need manual review?
+ Now run the interview phase (Section 3d) **sequentially** across all docs — for each doc that has unfilled gaps, ask its questions one at a time, update the doc, move to the next. Don't interleave questions across docs; the user needs to stay focused on one doc at a time.
+
+ #### 4e. Present all docs for review
 
- [approve] → Document is good, move to next gap
- [revise] → I'll ask different questions and regenerate
- [edit] → I'll open the markdown so you can edit manually
- [skip] → Skip this doc for now, move to next
  ```
+ Generation complete ✓
 
- ---
+ Ready for review:
+ • docs/generated/readme.md (0 gaps remaining)
+ • docs/generated/install-guide.md (0 gaps remaining)
+ • docs/generated/skill-command-reference.md (0 gaps remaining)
 
- ### 7. Completion Summary
+ Coverage improved: <before>% → <after>% (<n> Required docs satisfied)
 
- After all selected docs are generated:
+ Open each file to review. When you're ready, you can promote them to the
+ repo root (README.md, INSTALL.md, etc.) or keep them in docs/generated/
+ as a staging area.
 
+ [approve-all] Done, docs are good
+ [revise <name>] Re-run autonomous fill on one doc with different focus
+ [promote] Move files from docs/generated/ to the repo root
  ```
- Generation Complete ✓
- ━━━━━━━━━━━━━━━━━━━━
 
- Generated: 3 documents
- ✓ Threat Model
- ✓ API Specification
- ✓ Runbook
+ ---
 
- Files saved to: docs/generated/
+ ## When to Fall Back to the Pure Interview Flow
 
- Coverage improved: 28% → 57% (4 of 7 Required docs)
+ The autonomous-first flow works well for docs whose content lives in the codebase. It works **less well** for docs where the substance is judgment, intent, or future plans — specifically:
 
- What's next?
+ - **Threat Model** — requires security reasoning the agent shouldn't invent
+ - **ADRs for decisions not yet documented** — the "why" is in someone's head
+ - **Deployment Procedure for an app that hasn't deployed yet** — no evidence exists
+ - **Data Model for a pre-alpha app** — no schema yet
 
- [more] → Generate more docs
- [check] → Run CI validation
- [done] → Finish (docs are ready to review)
- ```
+ For these, default to a **short autonomous pass** (fill only what's obviously there) and spend most of the time in the interview phase. Lean on the synthesis questions from `skills/guide/references/breadcrumb-heuristics.md`.
 
  ---
 
- ## Error Handling
+ ## Anti-Patterns
 
- ### No Scan Exists
+ - **Never fabricate.** If you don't have evidence, leave NEEDS INPUT. A scaffold with honest gaps is better than a polished doc that's half hallucination.
+ - **Never cite sources you didn't read.** Inline source comments must point to files the agent actually opened.
+ - **Don't auto-promote generated files.** `docs/generated/` is a staging area. Moving files to the repo root (README.md, INSTALL.md, CHANGELOG.md) is always an explicit user action.
+ - **Don't ask questions the code already answers.** Before asking a question, check whether a file you haven't read yet could answer it.
+ - **Don't interleave questions across docs** in the parallel path. One doc at a time for the interview phase, even if the autonomous passes ran in parallel.
 
- ```
- I don't see a project profile yet. Run the Scan skill first to:
- • Analyze your artifacts
- • Classify your app type
- • Identify documentation gaps
+ ---
 
- Then come back here to generate docs.
- ```
+ ## Error Handling
 
- ### Generation Command Fails
+ ### CLI scaffold generation fails
 
  ```
- Generation failed: [error message]
+ The scaffold step failed: <error>
 
- This could mean:
- • The answers you provided didn't match expected format
- • The doc template is missing or corrupted
- • A file system error occurred
+ This usually means:
+ • The doc type isn't registered (check `vibe-doc templates list`)
+ • The template file is missing from the install
+ • A filesystem error blocked writing to docs/generated/
 
- Options:
- [retry] → Try again with same questions
- [different] → Ask different synthesis questions
- [manual] → Skip this doc and move to next
+ [retry] Try again
+ [skip] Skip this doc and move to the next
  ```
 
- ### Low Confidence Sections
+ ### Autonomous pass runs out of context
 
- If a section has <70% confidence:
+ If reading too many source files would exceed a reasonable context budget, narrow the scope:
 
- ```
- Low Confidence Flag: "Mitigations" section (68% confidence)
+ - Read only the top 10-15 files most relevant to the doc type
+ - Prefer summary files (READMEs, CLAUDE.md, SKILL.md) over large source files
+ - Skim rather than read exhaustively — you're looking for evidence, not comprehension
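
That trimming can be sketched as a capped, priority-ordered read list (editor's illustration; the summary-files-first ordering and the cap of 15 come from the bullets above, and the file names are hypothetical):

```shell
# Build a capped read list: summary docs first, then everything else,
# truncated to at most 15 files.
REPO="$(mktemp -d)"
touch "$REPO/README.md" "$REPO/CLAUDE.md" "$REPO/a.ts" "$REPO/b.ts" "$REPO/c.ts"
READ_LIST=$( { printf '%s\n' "$REPO/README.md" "$REPO/CLAUDE.md"   # summaries first
               ls "$REPO"/*.ts 2>/dev/null                         # then source files
             } | head -15 )
COUNT=$(printf '%s\n' "$READ_LIST" | wc -l)
echo "will read $COUNT files"
```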
 
- This section was generated from limited artifact information and may need
- manual review or revision. I've marked it with flags in the document.
+ ### Subagent returns with everything marked NEEDS INPUT
 
- Suggestions:
- • Review "Mitigations" manually and adjust
- • Re-generate with more specific answers to synthesis questions
- • Leave as-is (it's a starting point, not final)
+ If a subagent couldn't fill any sections, it probably got the wrong doc type or the repo genuinely has no evidence. Options:
 
- [revise] → Ask different questions and regenerate
- [continue] → Keep this version, move to next doc
- ```
+ - Fall back to the pure interview flow for that doc
+ - Skip that doc (not everything should be generated for every project)
+ - Ask the user to point the agent at the right files manually
 
  ---
 
  ## State & Output
 
  **Read from `.vibe-doc/state.json`:**
- - Classification (to select appropriate doc types)
- - Gaps list (to show what's available to generate)
- - Generation history (to track what's been done)
+ - Classification (to pick the right doc types)
+ - Gaps list (to know what's missing)
+ - Artifact inventory (to know which files to read during the autonomous pass)
 
- **Write to `.vibe-doc/state.json`:**
- - Generated doc metadata (file paths, timestamps, confidence scores)
+ **Write to:**
+ - `docs/generated/<docType>.md` — the filled-in doc (autonomous + interview results)
+ - `docs/generated/<docType>.docx` — DOCX version from the CLI scaffold pass
+ - `.vibe-doc/state.json` — generation history (file paths, timestamps)
 
- **Files created in user's project:**
- - `docs/generated/<doc-type>.md` — markdown version
- - `docs/generated/<doc-type>.docx` — docx version
- - `docs/generated/.history/<doc-type>-<timestamp>.md` — version history
+ **Files the agent should NOT modify:**
+ - Repo-root docs (README.md, INSTALL.md, CHANGELOG.md) — promotion is an explicit user action
+ - Source code — docs generation is read-only on the codebase
+ - `.vibe-doc/state.json`'s `classification` or `gapReport` blocks — those are owned by the scan/check skills
 
  ---
 
  ## Synthesis Questions Reference
 
- Full question sets per doc type are in `skills/guide/references/breadcrumb-heuristics.md`.
-
- Each skill consults that reference to build context-appropriate questions for each gap type.
+ When the interview phase is needed, question sets per doc type live in `skills/guide/references/breadcrumb-heuristics.md`. Each breadcrumb's `gapQuestions` field is a pre-written list of targeted questions for that doc type — use them as a starting point and adapt to what you already filled in.
 
  ---
 
- **Last updated:** 2026-04-11 | **Version:** 1.0
+ **Last updated:** 2026-04-15 | **Version:** 2.0 (autonomous-first)