@sandrinio/vbounce 1.9.0 → 2.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -8,9 +8,162 @@ V-Bounce Engine turns AI assistants — Claude Code, Cursor, Gemini, Copilot, Co
 
 ---
 
- ## How It Works
+ ## The Problem
 
- V-Bounce Engine is built around a **Context Loop** — a closed feedback system that makes agents smarter with each sprint.
+ AI coding agents are powerful — but without structure, they create expensive chaos:
+
+ - **No accountability.** The agent writes code, but nobody reviews it against requirements before it ships. Bugs that a junior engineer would catch survive to production.
+ - **Invisible progress.** You ask "how's the feature going?" and the only answer is "the agent is still running." No milestones, no intermediate artifacts, no way to course-correct mid-sprint.
+ - **No institutional memory.** Every session starts from zero. The agent makes the same architectural mistake it made last week because nothing captures what went wrong.
+ - **Rework cycles.** Without quality gates, bad code compounds. A missed requirement discovered late costs 10x more to fix than one caught early.
+ - **Risk blindness.** There's no structured way to assess what could go wrong before the agent starts building.
+
+ V-Bounce Engine solves this by wrapping AI agents in the same discipline that makes human engineering teams reliable: planning documents, role-based reviews, automated gates, and a learning loop that compounds knowledge across sprints.
+
+ ---
+
+ ## Built-in Guardrails
+
+ Every risk that keeps you up at night has a specific mechanism that catches it:
+
+ | Risk | What catches it |
+ |------|----------------|
+ | Agent ships code that doesn't match requirements | **QA gate** — validates every story against acceptance criteria before merge |
+ | Architectural drift over time | **Architect gate** — audits against your ADRs on every story |
+ | One bad story breaks everything | **Git worktrees** — every story is isolated; failures can't contaminate other work |
+ | Agent gets stuck in a loop | **3-bounce escalation** — after 3 failed attempts, the story surfaces to a human |
+ | Scope creep on "quick fixes" | **Hotfix hard-stop** — Developer must stop if a fix touches more than 2 files |
+ | Same mistakes keep happening | **LESSONS.md** — agents read accumulated mistakes before writing future code |
+ | Silent regressions | **Root cause tagging** — every failure is tagged and tracked across sprints |
+ | Framework itself becomes stale | **Self-improvement skill** — analyzes friction patterns and proposes changes (with your approval) |
+
+ ---
+
+ ## Planning With V-Bounce
+
+ V-Bounce separates planning into two layers: **what to build** and **how to ship it**. The AI is your planning partner — not a tool you invoke with commands.
+
+ ### Product Planning — What to Build
+
+ Just talk to the AI. Say "plan a feature for X" or "create an epic for payments" and it handles the rest — reading upstream documents, researching your codebase, and drafting planning documents.
+
+ A document hierarchy that mirrors how product teams already think:
+
+ ```
+ Charter (WHY — vision, principles, constraints)
+ → Roadmap (WHAT/WHEN — releases, milestones, architecture decisions)
+ → Epic (scoped WHAT — a feature with clear boundaries)
+ → Story (HOW — implementation spec with acceptance criteria)
+ ```
+
+ **You write the top levels. The AI builds the bottom — informed by your actual codebase.**
+
+ When creating Epics, the AI researches your codebase to fill Technical Context with real file paths and verified dependencies — not guesses. When decomposing Epics into Stories, the AI reads affected files, explores architecture patterns, and creates small, focused stories by deliverable (vertical slices), not by layer.
+
+ Every document includes an **ambiguity score**:
+ - 🔴 High — requirements unclear, blocked from development
+ - 🟡 Medium — tech TBD but logic is clear, safe to plan
+ - 🟢 Low — fully specified, ready to build
+
+ No level can be skipped. This prevents the most common AI failure mode: building the wrong thing because requirements were vague.
+
+ ### Execution Planning — How to Ship It
+
+ Once you know *what* to build, three documents govern *how* it gets delivered:
+
+ | Document | Scope | Who uses it | What it tracks |
+ |----------|-------|-------------|----------------|
+ | **Delivery Plan** | A full release (multiple sprints) | PM | Which Epics are included, project window (start/end dates), high-level backlog prioritization, escalated/parked stories |
+ | **Sprint Plan** | One sprint (typically 1 week) | Team Lead + PM | Active story scope, context pack readiness checklists, execution strategy (parallel vs sequential phases), dependency chains, risk flags, and a live execution log |
+ | **Risk Registry** | Cross-cutting (all levels) | PM + Architect | Active risks with likelihood/impact scoring, phase-stamped analysis log, mitigations, and resolution history |
+
+ **How they connect:**
+
+ ```
+ Delivery Plan (the milestone — "we're shipping auth + payments by March 30")
+ → Sprint Plan (this week — "stories 01-03 in parallel, 04 depends on 01")
+
+ Risk Registry (cross-cutting — reviewed at every sprint boundary)
+ ```
+
+ The **Delivery Plan** is updated only at sprint boundaries. The **Sprint Plan** is the single source of truth during active execution — every story state transition is recorded there. At sprint end, the Sprint Plan's execution log automatically becomes the skeleton for the Sprint Report.
+
+ The **Sprint Plan** also includes a **Context Pack Readiness** checklist for each story — a preflight check ensuring the spec is complete, acceptance criteria are defined, and ambiguity is low before any code is written. If a story isn't ready, it stays in Refinement.
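The readiness preflight described above boils down to a predicate over the story's spec. A minimal sketch — the field names here are assumptions for illustration, not the actual Sprint Plan schema:

```javascript
// Hypothetical readiness check mirroring the Context Pack Readiness checklist.
function isStoryReady(story) {
  return Boolean(
    story.specComplete &&                    // spec is complete
    story.acceptanceCriteria?.length > 0 &&  // acceptance criteria are defined
    story.ambiguity === 'low'                // ambiguity is low (🟢)
  );
}

isStoryReady({ specComplete: true, acceptanceCriteria: ['user can log in'], ambiguity: 'low' });
// → true: ready for the sprint
isStoryReady({ specComplete: true, acceptanceCriteria: [], ambiguity: 'medium' });
// → false: stays in Refinement
```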
+
+ ---
+
+ ## Reports and Visibility
+
+ V-Bounce generates structured reports at every stage — designed to answer stakeholder questions without requiring anyone to read code:
+
+ | Report | When it's generated | What it answers |
+ |--------|-------------------|-----------------|
+ | **Implementation Report** | After each story is built | What was built? What decisions were made? What tests were added? |
+ | **QA Report** | After validation | Does the implementation match the acceptance criteria? What failed? |
+ | **Architect Report** | After audit | Does this align with our architecture? Any ADR violations? |
+ | **Sprint Report** | End of sprint | What shipped? What bounced? What's the correction tax? Lessons learned? |
+ | **Release Report** | After merge | What went to production? Environment changes? Post-merge validations? |
+ | **Scribe Report** | After documentation pass | What product docs were created, updated, or flagged as stale? |
+
+ **You don't need to read code to manage the sprint.** The reports surface exactly what a PM or PO needs to make decisions.
+
+ ---
+
+ ## What You Can Measure
+
+ V-Bounce tracks metrics that map directly to product and delivery health:
+
+ | Metric | What it tells you | Action when it's bad |
+ |--------|------------------|---------------------|
+ | **Bounce Rate (QA)** | How often code fails acceptance criteria | Stories may have vague requirements — tighten acceptance criteria |
+ | **Bounce Rate (Architect)** | How often code violates architecture rules | ADRs may be unclear, or the agent needs better context |
+ | **Correction Tax** | 0% = agent delivered autonomously, 100% = human rewrote everything | High tax means the agent needs better guidance (Charter, Roadmap, or Skills) |
+ | **Root Cause Distribution** | Why things fail — `missing_tests`, `adr_violation`, `spec_ambiguity`, etc. | Invest in the category that fails most often |
+ | **Escalation Rate** | How often stories hit the 3-bounce limit | Chronic escalation signals structural issues in planning docs |
+ | **Sprint Velocity** | Stories completed per sprint | Track trend over time — should improve as LESSONS.md grows |
+
+ Run `vbounce trends` to see cross-sprint analysis. Run `vbounce suggest` for AI-generated improvement recommendations.
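The two headline metrics are simple ratios over per-story data. A minimal sketch — the record shape below is illustrative, not V-Bounce's actual `.bounce/state.json` schema:

```javascript
// Illustrative per-story records (field names are assumptions).
const stories = [
  { id: 'S-01', bounces: 0, humanEditedLines: 0,  totalLines: 120 },
  { id: 'S-02', bounces: 2, humanEditedLines: 30, totalLines: 100 },
  { id: 'S-03', bounces: 1, humanEditedLines: 0,  totalLines: 80 },
];

// Bounce rate: share of stories that failed a gate at least once.
const bounceRate = stories.filter((s) => s.bounces > 0).length / stories.length;

// Correction tax: fraction of delivered lines a human had to rewrite.
const edited = stories.reduce((sum, s) => sum + s.humanEditedLines, 0);
const total = stories.reduce((sum, s) => sum + s.totalLines, 0);
const correctionTax = edited / total;

console.log(`Bounce rate: ${Math.round(bounceRate * 100)}%`);       // Bounce rate: 67%
console.log(`Correction tax: ${Math.round(correctionTax * 100)}%`); // Correction tax: 10%
```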
+
+ ---
+
+ ## How a Sprint Flows
+
+ Here's what a sprint looks like from the product side — no terminal commands, no code:
+
+ **Phase 1 — Planning**
+ You talk to the AI about what to build. The AI creates Epics and Stories by reading upstream documents and researching your codebase. Ambiguity, risks, and open questions are surfaced and discussed collaboratively.
+
+ **Phase 2 — Sprint Planning**
+ You and the AI decide together what goes into the sprint. The AI reads the backlog, proposes scope, and surfaces blockers — open questions, unresolved ambiguity, dependency risks, edge cases. You discuss, adjust, and confirm. **No sprint starts without your explicit confirmation.** The Sprint Plan is mandatory.
+
+ **Phase 3 — The Bounce**
+ The AI team works autonomously. For each Story:
+ 1. The **Developer** builds the feature in isolation (with E2E tests, not just unit tests)
+ 2. The **QA agent** checks: does the code meet the acceptance criteria?
+ 3. The **Architect agent** checks: does the code follow our architecture rules?
+ 4. If either check fails, the work "bounces" back to the Developer with a tagged reason
+ 5. After 3 bounces, the story escalates — the AI presents root causes and options (re-scope, split, spike, or remove), and you decide
+
+ Lessons are recorded **immediately** after each story merges, not deferred to sprint close.
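The bounce loop above can be sketched as a small control loop. This is a simplified illustration — the real orchestration lives in the Team Lead agent and the framework's scripts, and all names here are assumptions:

```javascript
// Sketch of the bounce loop: gates check the story, failures bounce back
// to the Developer with a tagged reason, and the story escalates after 3.
const MAX_BOUNCES = 3;

function runBounceLoop(story, gates, develop) {
  develop(story, null); // initial implementation
  for (let attempt = 1; attempt <= MAX_BOUNCES; attempt++) {
    const failed = gates.find((g) => !g.check(story)); // QA, then Architect
    if (!failed) return { status: 'merged', bounces: attempt - 1 };
    develop(story, failed.rootCauseTag); // bounce back with a tagged reason
  }
  // Human decides: re-scope, split, spike, or remove.
  return { status: 'escalated', bounces: MAX_BOUNCES };
}

// Example: the Developer forgets tests once, then fixes on the first bounce.
const result = runBounceLoop(
  { hasTests: false },
  [{ rootCauseTag: 'missing_tests', check: (s) => s.hasTests }],
  (story, feedback) => { if (feedback === 'missing_tests') story.hasTests = true; },
);
// result → { status: 'merged', bounces: 1 }
```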
+
+ **Phase 4 — Review**
+ The Sprint Report lands. It tells you:
+ - What shipped and what didn't
+ - How many bounces each story took (and why)
+ - The correction tax (how much human intervention was needed)
+ - Test counts per story
+ - Lessons already captured during the sprint
+ - Recommendations for process improvements
+
+ You review, approve the release, and the sprint archives itself. The next sprint starts smarter because the agents now carry forward everything they learned.
+
+ ---
+
+ ## Continuous Improvement
+
+ Most AI coding setups are stateless — every session starts from scratch. V-Bounce is the opposite.
+
+ The **Context Loop** is a closed feedback system that makes your AI team measurably better over time:
 
 ```
 Plan ──> Build ──> Bounce ──> Document ──> Learn
@@ -22,15 +175,121 @@ Plan ──> Build ──> Bounce ──> Document ──> Learn
 Next sprint reads it all
 ```
 
- **Plan.** The Team Lead writes requirements using structured templates (Charter, Epic, Story) before any code is written.
+ After each sprint:
+ - **LESSONS.md** captures every mistake — agents read this before writing future code
+ - **Trend analysis** spots recurring patterns (e.g., "auth-related stories bounce 3x more than average")
+ - **Self-improvement pipeline** analyzes friction and proposes concrete framework changes
+ - **Scribe** keeps product documentation in sync with actual code
+
+ Sprint 1 might have a 40% bounce rate. By Sprint 5, that number drops — because the agents have accumulated context about your codebase, your architecture decisions, and your team's standards.
+
+ ### The Self-Improvement Pipeline
+
+ When a sprint closes (`vbounce sprint close`), an automated pipeline analyzes what went wrong and proposes how to fix the framework itself:
+
+ ```
+ Sprint Close
+ │
+ ├── Trend Analysis → Cross-sprint bounce patterns
+ │
+ ├── Retro Parser → Reads §5 Framework Self-Assessment tables
+ │                  from the Sprint Report
+ │
+ ├── Lesson Analyzer → Classifies LESSONS.md rules by what they
+ │                     can become: gate checks, scripts, template
+ │                     fields, or permanent agent rules
+ │
+ ├── Recurrence Detector → Cross-references archived sprint reports
+ │                         to find findings that keep coming back
+ │
+ ├── Effectiveness Checker → Did last sprint's improvements actually
+ │                           resolve their target findings?
+ │
+ └── Improvement Suggestions → Human-readable proposals with impact levels
+ ```
+
+ Every proposal gets an **impact level** so you know what to fix first:
+
+ | Level | Label | Meaning | When to fix |
+ |-------|-------|---------|-------------|
+ | **P0** | Critical | Blocks agent work or causes incorrect output | Before next sprint |
+ | **P1** | High | Causes rework — bounces, wasted tokens, repeated manual steps | This improvement cycle |
+ | **P2** | Medium | Friction that slows agents but doesn't block | Within 2 sprints |
+ | **P3** | Low | Polish — nice-to-have | Batch when convenient |
+
+ ### Lessons Become Automation
 
- **Build.** The Developer agent implements each Story in an isolated git worktree and submits an Implementation Report.
+ The pipeline doesn't just track lessons — it classifies each one by what it can become:
 
- **Bounce.** The QA agent validates against acceptance criteria. The Architect agent audits against your architecture rules. If either fails, the work bounces back to the Developer — up to 3 times before escalating to you. Every failure is tagged with a root cause (`missing_tests`, `adr_violation`, `spec_ambiguity`, etc.) for trend analysis.
+ | Lesson pattern | Becomes | Example |
+ |---------------|---------|---------|
+ | "Always check X", "Never use Y" | **Gate check** — automated grep/lint rule | "Never import from internal modules" → pre-gate grep pattern |
+ | "Run X before Y" | **Script** — validation step | "Run type-check before QA" → added to pre_gate_runner.sh |
+ | "Include X in the story" | **Template field** — required section | "Include rollback plan" → new field in story template |
+ | General behavioral rules (3+ sprints old) | **Agent config** — permanent brain rule | "Always check for N+1 queries" → graduated to Architect config |
 
- **Document.** After merge, the Scribe agent maps the actual codebase into semantic product documentation using [vdoc](https://github.com/sandrinio/vdoc) (optional).
+ This means your framework evolves organically: agents report friction, the pipeline classifies it, you approve the fix, and the next sprint runs smoother. No manual analysis required.
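The classification table lends itself to a pattern match. A hypothetical sketch — these regexes and category names are assumptions for illustration, not the pipeline's actual implementation:

```javascript
// Hypothetical lesson classifier mirroring the table above.
function classifyLesson(text, ageInSprints = 0) {
  if (/\b(always check|never use|never import)\b/i.test(text)) return 'gate_check';
  if (/\brun\b.+\bbefore\b/i.test(text)) return 'script';
  if (/\binclude\b.+\bin the story\b/i.test(text)) return 'template_field';
  if (ageInSprints >= 3) return 'agent_config'; // old general rules graduate
  return 'unclassified';
}

classifyLesson('Never import from internal modules'); // 'gate_check'
classifyLesson('Run type-check before QA');           // 'script'
classifyLesson('Include rollback plan in the story'); // 'template_field'
```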
 
- **Learn.** Sprint mistakes are recorded in `LESSONS.md`. All agents read it before writing future code. The framework also tracks its own performance — bounce rates, correction tax, recurring failure patterns — and suggests improvements to its own templates and skills.
+ Run `vbounce improve S-XX` anytime to trigger the pipeline on demand.
+
+ ---
+
+ ## Is V-Bounce Right For You?
+
+ **Best fit:**
+ - Teams using AI agents for production code (not just prototypes)
+ - Projects with clear requirements that can be expressed as acceptance criteria
+ - Codebases where architectural consistency matters
+ - Teams that want to scale AI usage without losing quality control
+
+ **Less ideal for:**
+ - One-off scripts or throwaway prototypes (overkill)
+ - Exploratory research with no defined requirements
+ - Projects where the entire team is deeply embedded in every code change anyway
+
+ **Minimum setup:** One person who can run `npx` commands + one person who can write a Charter and Epics. That's it.
+
+ ---
+
+ ## Roles and Responsibilities
+
+ ### Human
+
+ You own the planning and the final say. The agents never ship without your approval.
+
+ | Responsibility | What it involves |
+ |---------------|-----------------|
+ | **Set vision and constraints** | Write the Charter and Roadmap — define what to build and what's off-limits |
+ | **Define requirements** | Break the Roadmap into Epics and Stories with acceptance criteria |
+ | **Review and approve** | Read sprint reports, approve releases, intervene on escalations |
+ | **Tune agent performance** | Adjust brain files, skills, and ADRs based on trend data and bounce patterns |
+ | **Install and configure** | Run the installer, verify setup with `vbounce doctor` |
+
+ ### Agent — Team Lead (Orchestrator)
+
+ The Team Lead reads your planning documents and coordinates the entire sprint. It never writes code — it delegates, tracks state, and generates reports.
+
+ | Responsibility | What it involves |
+ |---------------|-----------------|
+ | **Sprint orchestration** | Assigns stories, manages state transitions, enforces the bounce loop |
+ | **Agent delegation** | Spawns Developer, QA, Architect, DevOps, and Scribe agents as needed |
+ | **Report routing** | Reads each agent's output and decides the next step (pass, bounce, escalate) |
+ | **Escalation** | Surfaces stories to the human after 3 failed bounces |
+ | **Sprint reporting** | Consolidates execution data into Sprint Reports and Release Reports |
+
+ ### Agent — Specialists (Developer, QA, Architect, DevOps, Scribe)
+
+ Five specialist agents, each with a single job and strict boundaries:
+
+ | Agent | What it does | Constraints |
+ |-------|-------------|-------------|
+ | **Developer** | Implements stories in isolated worktrees, submits implementation reports | Works only in its assigned worktree |
+ | **QA** | Validates code against acceptance criteria | Read-only — cannot modify code |
+ | **Architect** | Audits against ADRs and architecture rules | Read-only — cannot modify code |
+ | **DevOps** | Merges passing stories into the sprint branch | Only acts after both gates pass |
+ | **Scribe** | Generates and maintains product documentation from the actual codebase | Only runs after merge |
+
+ One person can fill the entire human side. The framework scales to the team you have.
 
 ---
 
@@ -63,25 +322,30 @@ your-project/
 │ ├── architect.md
 │ ├── devops.md
 │ └── scribe.md
- ├── templates/ # 9 Markdown + YAML frontmatter templates
+ ├── templates/ # 12 Markdown + YAML frontmatter templates
 │ ├── charter.md
 │ ├── roadmap.md
 │ ├── epic.md
 │ ├── story.md
+ │ ├── spike.md
 │ ├── sprint.md
 │ ├── delivery_plan.md
 │ ├── sprint_report.md
 │ ├── hotfix.md
+ │ ├── bug.md
+ │ ├── change_request.md
 │ └── risk_registry.md
- ├── skills/ # 7 modular skill files (see Skills below)
+ ├── skills/ # 9 modular skill files (see Skills below)
 │ ├── agent-team/
 │ ├── doc-manager/
+ │ ├── product-graph/
 │ ├── lesson/
 │ ├── vibe-code-review/
 │ ├── write-skill/
 │ ├── improve/
+ │ ├── file-organization/
 │ └── react-best-practices/ # Example — customize for your stack
- ├── scripts/ # 23 automation scripts (validation, context prep, state)
+ ├── scripts/ # 26 automation scripts (validation, context prep, state, graph)
 └── package.json # 3 deps: js-yaml, marked, commander. Nothing else.
 ```
 
@@ -138,11 +402,13 @@ Skills are modular markdown instructions the Team Lead invokes automatically dur
 | Skill | Purpose |
 |-------|---------|
 | `agent-team` | Spawns temporary sub-agents (Dev, QA, Architect, DevOps, Scribe) to parallelize work |
- | `doc-manager` | Enforces the document hierarchy for Epics and Stories |
+ | `doc-manager` | Enforces the document hierarchy, cascade rules, and planning workflows |
+ | `product-graph` | Dependency-aware document intelligence — knows what's blocked, what's affected by changes |
 | `lesson` | Extracts mistakes from sprints into `LESSONS.md` |
 | `vibe-code-review` | Runs Quick Scan or Deep Audit against acceptance criteria and architecture rules |
 | `write-skill` | Allows the Team Lead to author new skills when the team encounters a recurring problem |
 | `improve` | Self-improvement loop — reads agent friction signals across sprints and proposes framework changes (with your approval) |
+ | `file-organization` | Verifies and manages the project's file structure and organization |
 | `react-best-practices` | Example tech-stack skill — customize this for your own stack |
 
 ---
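The `product-graph` skill's impact analysis amounts to a reachability walk over a document dependency graph. A minimal sketch — the graph shape and document IDs below are assumptions, not the actual `.bounce/product-graph.json` format:

```javascript
// Illustrative dependency graph: each document lists the documents
// derived from it (an Epic's Stories, a Story's dependents, etc.).
const graph = {
  'EPIC-002':  ['STORY-004', 'STORY-005'],
  'STORY-004': ['STORY-006'],
  'STORY-005': [],
  'STORY-006': [],
};

// Everything reachable from a changed document is potentially affected.
function impactOf(docId, seen = new Set()) {
  for (const child of graph[docId] ?? []) {
    if (!seen.has(child)) {
      seen.add(child);
      impactOf(child, seen);
    }
  }
  return [...seen];
}

impactOf('EPIC-002').sort(); // ['STORY-004', 'STORY-005', 'STORY-006']
```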
@@ -184,6 +450,10 @@ vbounce story complete STORY-ID # Mark story done, update state
 vbounce state show # Print current state
 vbounce state update STORY-ID STATE # Update story state
 
+ # Product graph
+ vbounce graph # Generate document dependency graph
+ vbounce graph impact EPIC-002 # Show what's affected by a change
+
 # Context preparation
 vbounce prep sprint S-01 # Sprint context pack
 vbounce prep qa STORY-ID # QA context pack
@@ -198,6 +468,7 @@ vbounce validate ready STORY-ID # Pre-bounce readiness gate
 # Self-improvement
 vbounce trends # Cross-sprint trend analysis
 vbounce suggest S-01 # Generate improvement suggestions
+ vbounce improve S-01 # Full self-improvement pipeline
 
 # Health check
 vbounce doctor # Verify setup
@@ -219,8 +490,11 @@ product_plans/ # Created when you start planning
 
 .bounce/ # Created on first sprint init
 state.json # Machine-readable sprint state (crash recovery)
+ product-graph.json # Document dependency graph (auto-generated)
 reports/ # QA and Architect bounce reports
- improvement-log.md # Tracked improvement suggestions
+ improvement-manifest.json # Machine-readable improvement proposals (auto-generated)
+ improvement-suggestions.md # Human-readable suggestions with impact levels (auto-generated)
+ improvement-log.md # Applied/rejected/deferred improvement tracking
 
 .worktrees/ # Git worktrees for isolated story branches
 
@@ -229,18 +503,28 @@ LESSONS.md # Accumulated mistakes — agents read this bef
 
 ---
 
- ## End-of-Sprint Reports
-
- When a sprint concludes, V-Bounce Engine generates three structured reports:
-
- - **Sprint Report** — what was delivered, execution metrics (tokens, cost, bounce rates), story results, lessons learned, and a retrospective.
- - **Release Report** — the DevOps agent's merge log, environment changes, and post-merge validations.
- - **Scribe Report** — which product documentation was created, updated, or flagged as stale.
+ ## Glossary
+
+ | Term | Definition |
+ |------|-----------|
+ | **Bounce** | When a story fails a quality gate (QA or Architect) and gets sent back to the Developer for fixes |
+ | **Bounce Rate** | Percentage of stories that fail a gate on the first attempt |
+ | **Context Loop** | The closed feedback cycle: Plan → Build → Bounce → Document → Learn → next sprint |
+ | **Correction Tax** | How much human intervention a story needed — 0% is fully autonomous, 100% means a human rewrote it |
+ | **Escalation** | When a story hits the 3-bounce limit and surfaces to a human for intervention |
+ | **Gate** | An automated quality checkpoint — QA validates requirements, Architect validates structure |
+ | **Hotfix Path** | A fast track for trivial (L1) changes: 1-2 files, no QA/Architect gates, human verifies directly |
+ | **L1–L4** | Complexity labels: L1 Trivial, L2 Standard, L3 Complex, L4 Uncertain |
+ | **Root Cause Tag** | A label on every bounce failure (e.g., `missing_tests`, `adr_violation`) used for trend analysis |
+ | **Scribe** | The documentation agent that maps code into semantic product docs |
+ | **Sprint Report** | End-of-sprint summary: what shipped, metrics, bounce analysis, lessons, retrospective |
+ | **Worktree** | An isolated git checkout where a single story is implemented — prevents cross-story interference |
 
 ---
 
 ## Documentation
 
+ - [System Overview with diagrams](OVERVIEW.md)
 - [Epic template and structure](templates/epic.md)
 - [Hotfix edge cases](docs/HOTFIX_EDGE_CASES.md)
 - [vdoc integration](https://github.com/sandrinio/vdoc)
package/bin/vbounce.mjs CHANGED
@@ -82,10 +82,13 @@ Usage:
 vbounce prep qa <storyId> Generate QA context pack
 vbounce prep arch <storyId> Generate Architect context pack
 vbounce prep sprint <sprintId> Generate Sprint context pack
+ vbounce graph [generate] Generate product document graph
+ vbounce graph impact <DOC-ID> Show what's affected by a document change
 vbounce docs match --story <ID> Match story scope against vdoc manifest
 vbounce docs check <sprintId> Detect stale vdocs and generate Scribe task
 vbounce trends Cross-sprint trend analysis
 vbounce suggest <sprintId> Generate improvement suggestions
+ vbounce improve <sprintId> Run full self-improvement pipeline
 vbounce doctor Validate all configs and state files
 
 Install Platforms:
@@ -195,6 +198,40 @@ if (command === 'suggest') {
 runScript('suggest_improvements.mjs', args.slice(1));
 }
 
+ // -- improve --
+ if (command === 'improve') {
+   rl.close();
+   // Full pipeline: analyze → trends → suggest
+   const sprintArg = args[1];
+   if (!sprintArg) {
+     console.error('Usage: vbounce improve S-XX');
+     process.exit(1);
+   }
+   // Run trends first
+   const trendsPath = path.join(pkgRoot, 'scripts', 'sprint_trends.mjs');
+   if (fs.existsSync(trendsPath)) {
+     console.log('Step 1/2: Running cross-sprint trend analysis...');
+     spawnSync(process.execPath, [trendsPath], { stdio: 'inherit', cwd: process.cwd() });
+   }
+   // Run suggest (which internally runs post_sprint_improve.mjs)
+   console.log('\nStep 2/2: Running improvement analyzer + suggestions...');
+   runScript('suggest_improvements.mjs', [sprintArg]);
+ }
+
+ // -- graph --
+ if (command === 'graph') {
+   rl.close();
+   if (sub === 'impact') {
+     runScript('product_impact.mjs', args.slice(2));
+   } else if (!sub || sub === 'generate') {
+     runScript('product_graph.mjs', args.slice(2));
+   } else {
+     console.error(`Unknown graph subcommand: ${sub}`);
+     console.error('Usage: vbounce graph [generate] | vbounce graph impact <DOC-ID>');
+     process.exit(1);
+   }
+ }
+
 // -- docs --
 if (command === 'docs') {
 rl.close();
@@ -540,6 +577,13 @@ if (command === 'install') {
 console.log(` \x1b[32m✓\x1b[0m ${rule.dest}`);
 }
 
+ // Create LESSONS.md if missing
+ const lessonsPath = path.join(CWD, 'LESSONS.md');
+ if (!fs.existsSync(lessonsPath)) {
+   fs.writeFileSync(lessonsPath, '# Lessons Learned\n\nProject-specific lessons recorded after each story merge. Read this before writing code.\n');
+   console.log(` \x1b[32m✓\x1b[0m LESSONS.md (created)`);
+ }
+
 // Write install metadata
 writeInstallMeta(pkgVersion, targetPlatform, installedFiles, hashes);
 console.log(` \x1b[32m✓\x1b[0m .bounce/install-meta.json`);