vbounce-engine 2.5.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +142 -0
- package/VBOUNCE_MANIFEST.md +404 -0
- package/bin/vbounce.mjs +882 -0
- package/brains/AGENTS.md +71 -0
- package/brains/CHANGELOG.md +398 -0
- package/brains/CLAUDE.md +90 -0
- package/brains/GEMINI.md +102 -0
- package/brains/SETUP.md +195 -0
- package/brains/claude-agents/architect.md +226 -0
- package/brains/claude-agents/developer.md +133 -0
- package/brains/claude-agents/devops.md +267 -0
- package/brains/claude-agents/explorer.md +157 -0
- package/brains/claude-agents/qa.md +225 -0
- package/brains/claude-agents/scribe.md +171 -0
- package/brains/copilot/copilot-instructions.md +54 -0
- package/brains/cursor-rules/vbounce-docs.mdc +45 -0
- package/brains/cursor-rules/vbounce-process.mdc +51 -0
- package/brains/cursor-rules/vbounce-rules.mdc +29 -0
- package/brains/windsurf/.windsurfrules +35 -0
- package/docs/HOTFIX_EDGE_CASES.md +37 -0
- package/docs/IMPROVEMENT.md +46 -0
- package/docs/agent-skill-profiles.docx +0 -0
- package/docs/icons/alert.svg +1 -0
- package/docs/icons/beaker.svg +1 -0
- package/docs/icons/book.svg +1 -0
- package/docs/icons/git-branch.svg +1 -0
- package/docs/icons/git-merge.svg +1 -0
- package/docs/icons/graph.svg +1 -0
- package/docs/icons/light-bulb.svg +1 -0
- package/docs/icons/logo.svg +9 -0
- package/docs/icons/pencil.svg +1 -0
- package/docs/icons/rocket.svg +1 -0
- package/docs/icons/shield.svg +1 -0
- package/docs/icons/sync.svg +1 -0
- package/docs/icons/terminal.svg +1 -0
- package/docs/icons/tools.svg +1 -0
- package/docs/icons/zap.svg +1 -0
- package/docs/images/bounce_loop_diagram.png +0 -0
- package/docs/vbounce-os-manual.docx +0 -0
- package/package.json +48 -0
- package/scripts/close_sprint.mjs +134 -0
- package/scripts/complete_story.mjs +121 -0
- package/scripts/count_tokens.mjs +494 -0
- package/scripts/doctor.mjs +144 -0
- package/scripts/hotfix_manager.sh +157 -0
- package/scripts/init_gate_config.sh +151 -0
- package/scripts/init_sprint.mjs +129 -0
- package/scripts/post_sprint_improve.mjs +486 -0
- package/scripts/pre_gate_common.sh +576 -0
- package/scripts/pre_gate_runner.sh +176 -0
- package/scripts/prep_arch_context.mjs +178 -0
- package/scripts/prep_qa_context.mjs +152 -0
- package/scripts/prep_sprint_context.mjs +141 -0
- package/scripts/prep_sprint_summary.mjs +154 -0
- package/scripts/product_graph.mjs +387 -0
- package/scripts/product_impact.mjs +167 -0
- package/scripts/sprint_trends.mjs +160 -0
- package/scripts/suggest_improvements.mjs +363 -0
- package/scripts/update_state.mjs +132 -0
- package/scripts/validate_bounce_readiness.mjs +152 -0
- package/scripts/validate_report.mjs +165 -0
- package/scripts/validate_sprint_plan.mjs +117 -0
- package/scripts/validate_state.mjs +99 -0
- package/scripts/vdoc_match.mjs +269 -0
- package/scripts/vdoc_staleness.mjs +199 -0
- package/scripts/verify_framework.mjs +122 -0
- package/scripts/verify_framework.sh +13 -0
- package/skills/agent-team/SKILL.md +579 -0
- package/skills/agent-team/references/cleanup.md +42 -0
- package/skills/agent-team/references/delivery-sync.md +43 -0
- package/skills/agent-team/references/discovery.md +97 -0
- package/skills/agent-team/references/git-strategy.md +52 -0
- package/skills/agent-team/references/mid-sprint-triage.md +85 -0
- package/skills/agent-team/references/report-naming.md +34 -0
- package/skills/doc-manager/SKILL.md +444 -0
- package/skills/file-organization/SKILL.md +146 -0
- package/skills/file-organization/TEST-RESULTS.md +193 -0
- package/skills/file-organization/evals/evals.json +41 -0
- package/skills/file-organization/references/gitignore-template.md +53 -0
- package/skills/file-organization/references/quick-checklist.md +48 -0
- package/skills/improve/SKILL.md +296 -0
- package/skills/lesson/SKILL.md +136 -0
- package/skills/product-graph/SKILL.md +102 -0
- package/skills/react-best-practices/SKILL.md +3014 -0
- package/skills/react-best-practices/rules/_sections.md +46 -0
- package/skills/react-best-practices/rules/_template.md +28 -0
- package/skills/react-best-practices/rules/advanced-event-handler-refs.md +55 -0
- package/skills/react-best-practices/rules/advanced-init-once.md +42 -0
- package/skills/react-best-practices/rules/advanced-use-latest.md +39 -0
- package/skills/react-best-practices/rules/async-api-routes.md +38 -0
- package/skills/react-best-practices/rules/async-defer-await.md +80 -0
- package/skills/react-best-practices/rules/async-dependencies.md +51 -0
- package/skills/react-best-practices/rules/async-parallel.md +28 -0
- package/skills/react-best-practices/rules/async-suspense-boundaries.md +99 -0
- package/skills/react-best-practices/rules/bundle-barrel-imports.md +59 -0
- package/skills/react-best-practices/rules/bundle-conditional.md +31 -0
- package/skills/react-best-practices/rules/bundle-defer-third-party.md +49 -0
- package/skills/react-best-practices/rules/bundle-dynamic-imports.md +35 -0
- package/skills/react-best-practices/rules/bundle-preload.md +50 -0
- package/skills/react-best-practices/rules/client-event-listeners.md +74 -0
- package/skills/react-best-practices/rules/client-localstorage-schema.md +71 -0
- package/skills/react-best-practices/rules/client-passive-event-listeners.md +48 -0
- package/skills/react-best-practices/rules/client-swr-dedup.md +56 -0
- package/skills/react-best-practices/rules/js-batch-dom-css.md +107 -0
- package/skills/react-best-practices/rules/js-cache-function-results.md +80 -0
- package/skills/react-best-practices/rules/js-cache-property-access.md +28 -0
- package/skills/react-best-practices/rules/js-cache-storage.md +70 -0
- package/skills/react-best-practices/rules/js-combine-iterations.md +32 -0
- package/skills/react-best-practices/rules/js-early-exit.md +50 -0
- package/skills/react-best-practices/rules/js-hoist-regexp.md +45 -0
- package/skills/react-best-practices/rules/js-index-maps.md +37 -0
- package/skills/react-best-practices/rules/js-length-check-first.md +49 -0
- package/skills/react-best-practices/rules/js-min-max-loop.md +82 -0
- package/skills/react-best-practices/rules/js-set-map-lookups.md +24 -0
- package/skills/react-best-practices/rules/js-tosorted-immutable.md +57 -0
- package/skills/react-best-practices/rules/rendering-activity.md +26 -0
- package/skills/react-best-practices/rules/rendering-animate-svg-wrapper.md +47 -0
- package/skills/react-best-practices/rules/rendering-conditional-render.md +40 -0
- package/skills/react-best-practices/rules/rendering-content-visibility.md +38 -0
- package/skills/react-best-practices/rules/rendering-hoist-jsx.md +46 -0
- package/skills/react-best-practices/rules/rendering-hydration-no-flicker.md +82 -0
- package/skills/react-best-practices/rules/rendering-hydration-suppress-warning.md +30 -0
- package/skills/react-best-practices/rules/rendering-svg-precision.md +28 -0
- package/skills/react-best-practices/rules/rendering-usetransition-loading.md +75 -0
- package/skills/react-best-practices/rules/rerender-defer-reads.md +39 -0
- package/skills/react-best-practices/rules/rerender-dependencies.md +45 -0
- package/skills/react-best-practices/rules/rerender-derived-state-no-effect.md +40 -0
- package/skills/react-best-practices/rules/rerender-derived-state.md +29 -0
- package/skills/react-best-practices/rules/rerender-functional-setstate.md +74 -0
- package/skills/react-best-practices/rules/rerender-lazy-state-init.md +58 -0
- package/skills/react-best-practices/rules/rerender-memo-with-default-value.md +38 -0
- package/skills/react-best-practices/rules/rerender-memo.md +44 -0
- package/skills/react-best-practices/rules/rerender-move-effect-to-event.md +45 -0
- package/skills/react-best-practices/rules/rerender-simple-expression-in-memo.md +35 -0
- package/skills/react-best-practices/rules/rerender-transitions.md +40 -0
- package/skills/react-best-practices/rules/rerender-use-ref-transient-values.md +73 -0
- package/skills/react-best-practices/rules/server-after-nonblocking.md +73 -0
- package/skills/react-best-practices/rules/server-auth-actions.md +96 -0
- package/skills/react-best-practices/rules/server-cache-lru.md +41 -0
- package/skills/react-best-practices/rules/server-cache-react.md +76 -0
- package/skills/react-best-practices/rules/server-dedup-props.md +65 -0
- package/skills/react-best-practices/rules/server-parallel-fetching.md +83 -0
- package/skills/react-best-practices/rules/server-serialization.md +38 -0
- package/skills/vibe-code-review/SKILL.md +70 -0
- package/skills/vibe-code-review/references/deep-audit.md +259 -0
- package/skills/vibe-code-review/references/pr-review.md +234 -0
- package/skills/vibe-code-review/references/quick-scan.md +178 -0
- package/skills/vibe-code-review/references/report-template.md +189 -0
- package/skills/vibe-code-review/references/trend-check.md +224 -0
- package/skills/vibe-code-review/scripts/generate-snapshot.sh +89 -0
- package/skills/vibe-code-review/scripts/pr-analyze.sh +180 -0
- package/skills/write-skill/SKILL.md +133 -0
- package/templates/bug.md +100 -0
- package/templates/change_request.md +105 -0
- package/templates/charter.md +144 -0
- package/templates/delivery_plan.md +44 -0
- package/templates/epic.md +203 -0
- package/templates/hotfix.md +58 -0
- package/templates/risk_registry.md +87 -0
- package/templates/roadmap.md +174 -0
- package/templates/spike.md +143 -0
- package/templates/sprint.md +134 -0
- package/templates/sprint_context.md +61 -0
- package/templates/sprint_report.md +215 -0
- package/templates/story.md +193 -0
package/skills/file-organization/TEST-RESULTS.md
@@ -0,0 +1,193 @@
# File Organization Skill — Eval Results

## Eval 1: Repro Script vs. Handler Fix

**Prompt:** "I need to fix a race condition in the websocket handler. I wrote a quick Python script to simulate concurrent connections and reproduce the bug. I also fixed the actual handler. Where does each file go?"

**Expected Output:** The Python repro script is a working artifact → /temporary/. The websocket handler fix is a deliverable → commit in place.

**Relevant Guidance:**
- "Script to reproduce a bug → debug-repro.py (working artifact)" (Line 33)
- "I'm creating this because the user asked for it / it solves the task" → Project tree (Line 11)
- "I'm creating this to help me work — debug, analyze, test an idea" → /temporary/ (Line 12)

**Analysis:**
The skill clearly distinguishes between debugging artifacts ("Script to reproduce a bug") and actual fixes. An agent following the core principle would recognize:
- The Python script's intent: "help me understand/debug" → /temporary/
- The handler fix's intent: "solves the task" → project tree

The guidance is unambiguous. The agent gets the correct answer.

**Rating: PASS**

---

## Eval 2: User-Requested Tests vs. Scratch File

**Prompt:** "User asked me to add unit tests for the payment module. I also created a scratch file to test some regex patterns I needed for the validation logic. Where does each go?"

**Expected Output:** The unit tests are deliverables (user asked for them) → project tree. The regex scratch file is a working artifact → /temporary/.

**Relevant Guidance:**
- "Write unit tests for auth" → auth.test.ts (deliverable) (Line 26)
- "Add user validation with tests" example shows validate.test.ts as deliverable because "User asked for tests" (Line 85)
- "Quick test to verify an assumption → check-behavior.js (working artifact)" (Line 35)

**Analysis:**
The skill explicitly handles this distinction in the "Add user validation with tests" example (Lines 76-85), which directly parallels Eval 2:
- User-requested tests (validate.test.ts) = deliverable
- Scratch working files (scratch-regex-test.js) = working artifact

The key insight is whether **the user asked for** the tests. The skill states this clearly. An agent would correctly identify:
- User explicitly asked for unit tests → deliverable
- Regex pattern scratch file is "to help me work" (testing an assumption) → working artifact

**Potential gap:** The skill doesn't address a borderline case where scratch tests could be mistaken for part of the test suite if the agent isn't careful about the "user asked for" criterion. However, the stated guidance is clear enough.

**Rating: PASS**

---

## Eval 3: Existing Tracked Tests vs. Debug Script

**Prompt:** "I see there's a tests/ directory with existing test files. I also see a file called check-api.sh in the root that I created yesterday to debug an endpoint. What should I do?"

**Expected Output:** Leave the tests/ directory alone — it's an existing tracked test suite. Move check-api.sh to /temporary/ since it's a debug working artifact.

**Relevant Guidance:**
- "Existing files you modified (they're already tracked in git)" — Never working artifacts (Line 108)
- "Test suites the project already has (`tests/`, `__tests__/`, `spec/`)" — Never working artifacts (Line 109)
- "If a file already exists in the git tree, it belongs there. Your job is only to route **new files you create** during your working process." (Line 116)

**Analysis:**
The skill explicitly states that existing tracked files are "NEVER working artifacts" and gives `tests/` as a direct example. For check-api.sh, the intent is clear: debug artifact, not user-requested deliverable.

An agent would correctly identify:
1. tests/ is already tracked → don't touch it
2. check-api.sh intent: "to help me debug" → /temporary/

The guidance is explicit and unambiguous. The agent would get the right answer.

**Rating: PASS**

---

## Eval 4: Generated-but-Committed Migration vs. Analysis Notes

**Prompt:** "I'm working on a database migration task. I generated a migration file using the ORM CLI, and I also wrote an analysis.md exploring different indexing strategies. Where do these go?"

**Expected Output:** The migration file is a deliverable (generated but committed as part of the project) → project tree. The analysis.md is a working artifact → /temporary/.

**Relevant Guidance:**
- "Database migrations are generated but absolutely committed" (Line 94)
- "Migration files (database schema changes)" — Never working artifacts (Line 112)
- "Markdown notes analyzing the codebase → analysis.md (working artifact)" (Line 34)

**Analysis:**
The skill handles this well. It explicitly recognizes that "generated" doesn't mean "working artifact" — migrations are generated by the ORM but belong in the project because they're **part of the deliverable** (schema changes that must be committed).

For the migration file: The skill states directly "Migration files (database schema changes)" as something that is never a working artifact.

For analysis.md: The skill lists "Markdown notes analyzing the codebase → analysis.md (working artifact)" — this directly matches the evaluation scenario.

An agent would correctly identify:
1. Migration file: "the project commits this" + "database schema changes" → project tree
2. analysis.md: "notes analyzing the codebase" + "to help me work" → /temporary/

The guidance is explicit and covers both cases directly.

**Rating: PASS**

---

## Eval 5: Requested Component vs. Debug Render vs. Existing Test Suite

**Prompt:** "I created a new React component as requested, plus a debug-render.jsx to test how it renders in isolation. The project already has a __tests__/ folder. Where does everything go?"

**Expected Output:** The React component is a deliverable → project tree. debug-render.jsx is a working artifact → /temporary/. The __tests__/ folder is existing tracked code — don't touch it.

**Relevant Guidance:**
- "The user asked for it / it solves the task" → Project tree (Line 11)
- "I need this to help me understand, debug, or explore" → /temporary/ (Line 31)
- "Test suites the project already has (`tests/`, `__tests__/`, `spec/`)" — Never working artifacts (Line 109)

**Analysis:**
This eval tests three things:
1. **Requested component:** Clear deliverable intent
2. **Debug render file:** Clearly a working artifact ("test how it renders in isolation" = debugging/exploring)
3. **Existing __tests__/ folder:** Explicitly listed as something to never move

The skill handles all three. The guidance is clear. An agent would get the right answer.

**Rating: PASS**

---

## Eval 6: Git Status Cleanup (Layer 2)

**Prompt:** "Before committing, I ran git status and see: modified src/api/users.ts, new file src/api/users.test.ts (user asked for tests), new file output.log, new file temp-check.py. How do I clean this up?"

**Expected Output:** Commit users.ts (modified existing) and users.test.ts (deliverable). Move output.log and temp-check.py to /temporary/ (working artifacts).

**Relevant Guidance:**
- Layer 2 reactive check (Lines 42-55)
- "Did the user's task require this file? If no → move to /temporary/" (Line 53)
- "Does this file exist in the project already? If yes, you're editing existing code — that's fine, leave it" (Line 54)
- "Is this a new file I created to help myself work? If yes → move to /temporary/" (Line 55)
- Example showing git status cleanup (Lines 57-74) with similar structure

**Analysis:**
The skill provides the Layer 2 reactive framework directly:
1. **modified users.ts:** Already tracked → commit
2. **new users.test.ts:** User asked for tests (stated in prompt) → commit
3. **new output.log:** Created during working process (debug output) → /temporary/
4. **new temp-check.py:** Name itself suggests "to help myself work" + temporary → /temporary/

The example (Lines 57-74) shows the exact scenario structure. The three questions in Layer 2 map directly:
- Q1 (did user ask?): No for output.log and temp-check.py → move
- Q2 (already exists?): No for new files, but users.ts exists → commit users.ts
- Q3 (new artifact?): Yes for output.log and temp-check.py → move

An agent would get the right answer following the Layer 2 framework.

**Rating: PASS**
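The Eval 6 routing can be sketched as shell commands. This is a minimal sandbox using the file names from the eval prompt, not part of the skill itself; the `demo/` directory is a placeholder:

```shell
set -e
# Sandbox mirroring the Eval 6 git status (file names from the prompt).
mkdir -p demo/src/api demo/temporary
touch demo/src/api/users.ts demo/src/api/users.test.ts \
      demo/output.log demo/temp-check.py

# Deliverables (users.ts, users.test.ts) stay in place for the commit;
# working artifacts (output.log, temp-check.py) move to /temporary/.
mv demo/output.log demo/temp-check.py demo/temporary/
ls demo/temporary/
```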
---

## Summary Assessment

| Eval | Result | Confidence | Notes |
|------|--------|------------|-------|
| 1 | PASS | High | Clear distinction between debug script and fix |
| 2 | PASS | High | Explicit example matches eval scenario |
| 3 | PASS | High | Existing files explicitly excluded from working artifacts |
| 4 | PASS | High | Migrations explicitly covered; analysis.md directly exemplified |
| 5 | PASS | High | All three elements (new component, debug file, existing suite) handled clearly |
| 6 | PASS | High | Layer 2 framework provides exact decision tree; example mirrors scenario |

## Critical Findings

**All evals achieve PASS.** The skill provides:

1. **Clear intent-based framework** that works across all scenarios
2. **Explicit examples** that map directly to evals 2, 4, 5, and 6
3. **Direct lists** of files that are "NEVER working artifacts," covering edge cases in evals 3 and 5
4. **Layer 2 reactive checks** that handle the git status scenario (eval 6) with a concrete decision tree
5. **Explicit handling of "generated but committed"** files like migrations (eval 4)

The skill successfully distinguishes user-requested deliverables from working artifacts across all cases. Agents following either Layer 1 (proactive) or Layer 2 (reactive) would arrive at correct answers for all six evals.

### Strengths of the Skill

- **Not file-type dependent:** The "intent" approach works for all scenarios without fragile extension-based rules
- **Handles edge cases explicitly:** Migrations, codegen, existing tracked files all explicitly addressed
- **Concrete examples:** Evals 2, 4, 5 are nearly identical to skill examples
- **Dual-layer approach:** Catches mistakes at creation time or before commit

### No Significant Gaps Identified

All three "focus areas" from the prompt are handled well:
- **Eval 2 (user-requested vs. scratch tests):** Clear distinction via "user asked for"
- **Eval 3 (existing tracked files):** Explicit list + general rule about existing files
- **Eval 4 (generated-but-committed):** Direct mention of migrations + intent-based reasoning
package/skills/file-organization/evals/evals.json
@@ -0,0 +1,41 @@
{
  "skill_name": "file-organization",
  "evals": [
    {
      "id": 1,
      "prompt": "I need to fix a race condition in the websocket handler. I wrote a quick Python script to simulate concurrent connections and reproduce the bug. I also fixed the actual handler. Where does each file go?",
      "expected_output": "The Python repro script is a working artifact → /temporary/. The websocket handler fix is a deliverable → commit in place.",
      "files": []
    },
    {
      "id": 2,
      "prompt": "User asked me to add unit tests for the payment module. I also created a scratch file to test some regex patterns I needed for the validation logic. Where does each go?",
      "expected_output": "The unit tests are deliverables (user asked for them) → project tree. The regex scratch file is a working artifact → /temporary/.",
      "files": []
    },
    {
      "id": 3,
      "prompt": "I see there's a tests/ directory with existing test files. I also see a file called check-api.sh in the root that I created yesterday to debug an endpoint. What should I do?",
      "expected_output": "Leave the tests/ directory alone — it's an existing tracked test suite. Move check-api.sh to /temporary/ since it's a debug working artifact.",
      "files": []
    },
    {
      "id": 4,
      "prompt": "I'm working on a database migration task. I generated a migration file using the ORM CLI, and I also wrote an analysis.md exploring different indexing strategies. Where do these go?",
      "expected_output": "The migration file is a deliverable (generated but committed as part of the project) → project tree. The analysis.md is a working artifact → /temporary/.",
      "files": []
    },
    {
      "id": 5,
      "prompt": "I created a new React component as requested, plus a debug-render.jsx to test how it renders in isolation. The project already has a __tests__/ folder. Where does everything go?",
      "expected_output": "The React component is a deliverable → project tree. debug-render.jsx is a working artifact → /temporary/. The __tests__/ folder is existing tracked code — don't touch it.",
      "files": []
    },
    {
      "id": 6,
      "prompt": "Before committing, I ran git status and see: modified src/api/users.ts, new file src/api/users.test.ts (user asked for tests), new file output.log, new file temp-check.py. How do I clean this up?",
      "expected_output": "Commit users.ts (modified existing) and users.test.ts (deliverable). Move output.log and temp-check.py to /temporary/ (working artifacts).",
      "files": []
    }
  ]
}
package/skills/file-organization/references/gitignore-template.md
@@ -0,0 +1,53 @@
# .gitignore Template for File Organization Standard

Add this to your `./.gitignore` file to ensure `/temporary/` never gets committed:

```gitignore
# ============================================
# Local temporary work (NEVER commit)
# ============================================
/temporary/
```

## Why This Matters

The `/temporary/` folder is where agents and developers place all working files that won't be part of the final codebase:
- Debug scripts
- Test experiments
- Analysis documents
- Exploration code
- Generated output

By adding `/temporary/` to `.gitignore`, you ensure:
1. ✅ No clutter in git history
2. ✅ Team members only see production code in the repository
3. ✅ Safe space for experimentation without affecting commits
4. ✅ Reduced cognitive load when browsing the codebase

## Installation

If you don't have a `.gitignore` file yet:
1. Create a new file called `.gitignore` in the root of your repository
2. Add the entry above
3. Commit it: `git add .gitignore && git commit -m "Add temporary folder to gitignore"`

If you already have a `.gitignore`:
1. Open it
2. Add the entry above (preferably in a section labeled "Local temporary work")
3. Commit the change
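The installation steps can be run end-to-end in a throwaway repository. This is a sketch; `demo-repo` and the commit identity are placeholders, not part of your real project:

```shell
set -e
# Throwaway repo demonstrating the installation steps.
mkdir -p demo-repo
git -C demo-repo init -q
printf '# Local temporary work (NEVER commit)\n/temporary/\n' >> demo-repo/.gitignore
git -C demo-repo add .gitignore
git -C demo-repo -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add temporary folder to gitignore"

# New files under /temporary/ no longer show up as untracked.
mkdir -p demo-repo/temporary
touch demo-repo/temporary/scratch.md
git -C demo-repo status --porcelain   # prints nothing
```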
## Verification

To verify the setup is correct:
```bash
# This should NOT list any files from /temporary/
git status

# This should show that /temporary/ is ignored
git check-ignore -v temporary/something.txt
```

If `/temporary/` files are still appearing in `git status`, double-check that:
- The `.gitignore` entry is spelled correctly (patterns are case-sensitive on Linux/Mac)
- The `.gitignore` change is actually saved to disk (and committed, so teammates get it too)
- The files weren't already tracked before the entry was added (`.gitignore` doesn't untrack files; use `git rm --cached <file>`)
- You haven't accidentally added `/temporary/` files with `git add -f`
package/skills/file-organization/references/quick-checklist.md
@@ -0,0 +1,48 @@
# File Organization Quick Checklist

## At File Creation Time

```
WHY am I creating this file?
│
├─ DELIVERABLE (serves the project / user asked for it)
│    → Create in project tree
│
└─ WORKING ARTIFACT (helps me debug / analyze / explore)
     → Create in /temporary/
```

## Before Committing

```bash
git diff --name-only
git status
```

For each file:

| Question | Answer | Action |
|----------|--------|--------|
| Did the user's task require this file? | Yes | Commit |
| Is this an existing file I modified? | Yes | Commit |
| Did I create this to help myself work? | Yes | Move to /temporary/ |
| Not sure? | — | Move to /temporary/ (safer) |
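The table amounts to a small decision function. As a sketch (a hypothetical helper, not shipped with the skill — the argument names are illustrative):

```shell
# route <required-by-task> <existing-file> — echoes where the file goes.
# Anything that is neither required by the task nor an existing tracked
# file falls through to the safer /temporary/ default.
route() {
  if [ "$1" = yes ] || [ "$2" = yes ]; then
    echo "commit"
  else
    echo "move to /temporary/"
  fi
}

route yes no   # commit (user's task required it)
route no yes   # commit (existing file I modified)
route no no    # move to /temporary/
```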
## Never Move These to /temporary/

- Existing tracked files you edited
- Project test suites (`tests/`, `__tests__/`, `spec/`)
- CI/CD configs (`.github/workflows/`, `Dockerfile`)
- Lock files (`package-lock.json`, `Cargo.lock`)
- Migration files
- Generated code the project commits (protobuf, codegen)
- Config files (`.eslintrc`, `tsconfig.json`, etc.)

## Common Working Artifacts (Always /temporary/)

- Debug/repro scripts you wrote to investigate
- Analysis or exploration markdown
- Scratch files testing an idea
- Console output or logs you captured
- Experimental code trying different approaches
- Notes and drafts that aren't official docs
package/skills/improve/SKILL.md
@@ -0,0 +1,296 @@
---
name: improve
description: "Use when the V-Bounce Engine framework needs to evolve based on accumulated agent feedback. Activates after sprint retros, when recurring friction patterns emerge, or when the user explicitly asks to improve the framework. Reads Process Feedback from sprint reports, analyzes LESSONS.md for automation candidates, identifies patterns, proposes specific changes to templates, skills, brain files, scripts, and agent configs with impact levels, and applies approved changes. This is the system's self-improvement loop."
---

# Framework Self-Improvement

## Purpose

V-Bounce Engine is not static. Every sprint generates friction signals from agents who work within the framework daily. This skill closes the feedback loop: it reads what agents struggled with, analyzes which lessons can be automated, identifies patterns, and proposes targeted improvements to the framework itself.

**Core principle:** No framework change happens without human approval. The system suggests — the human decides.

## Impact Levels

Every improvement proposal is classified by impact to help the human prioritize:

| Level | Label | Meaning | Timeline |
|-------|-------|---------|----------|
| **P0** | Critical | Blocks agent work or causes incorrect output | Fix before next sprint |
| **P1** | High | Causes rework — bounces, wasted tokens, repeated manual steps | Fix this improvement cycle |
| **P2** | Medium | Friction that slows agents but does not block | Fix within 2 sprints |
| **P3** | Low | Polish — nice-to-have, batch with other improvements | Batch when convenient |

### How Impact Is Determined

| Signal | Impact |
|--------|--------|
| Blocker finding + recurring across 2+ sprints | **P0** |
| Blocker finding (single sprint) | **P1** |
| Friction finding recurring across 2+ sprints | **P1** |
| Lesson with mechanical rule (can be a gate check or script) | **P1** |
| Previous improvement that didn't resolve its finding | **P1** |
| Low first-pass rate or high correction tax | **P1** |
| Friction finding (single sprint) | **P2** |
| Lesson graduation candidate (3+ sprints old) | **P2** |
| High bounce rate | **P2** |
| Framework health checks | **P3** |
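As a sketch, the signal-to-impact mapping above can be expressed as a function. This is illustrative only; the actual classification logic lives in the pipeline scripts, and the argument names here are assumptions:

```shell
# impact <severity> <recurring> — echoes the priority per the table above.
impact() {
  case "$1:$2" in
    blocker:yes)  echo P0 ;;  # blocker recurring across 2+ sprints
    blocker:no)   echo P1 ;;  # single-sprint blocker
    friction:yes) echo P1 ;;  # recurring friction
    friction:no)  echo P2 ;;  # single-sprint friction
    *)            echo P3 ;;  # framework health / polish
  esac
}

impact blocker yes   # P0
impact friction no   # P2
```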
|
|
40
|
+
## When to Use
|
|
41
|
+
|
|
42
|
+
- **Automatically** — `vbounce sprint close S-XX` runs the improvement pipeline and regenerates `.vbounce/improvement-suggestions.md` (overwrites previous — always reflects latest data)
|
|
43
|
+
- **On demand** — `vbounce improve S-XX` runs the full pipeline (trends + analyzer + suggestions)
|
|
44
|
+
- **Applying changes:** After every 1-3 sprints, human reviews suggestions and runs `/improve` to apply approved ones. The analysis runs every sprint; applying changes is the human's call.
|
|
45
|
+
- When the same Process Feedback appears across multiple sprint reports
|
|
46
|
+
- When the user explicitly asks to improve templates, skills, or process
|
|
47
|
+
- When a sprint's Framework Self-Assessment reveals Blocker-severity findings
|
|
48
|
+
- When LESSONS.md contains 3+ entries pointing to the same process gap
|
|
49
|
+
|
|
50
|
+
## Trigger
|
|
51
|
+
|
|
52
|
+
`/improve` OR `vbounce improve S-XX` OR when the Team Lead identifies recurring framework friction during Sprint Consolidation.
|
|
53
|
+
|
|
54
|
+
## Announcement
|
|
55
|
+
|
|
56
|
+
When using this skill, state: "Using improve skill to evaluate and propose framework changes."
## The Automated Pipeline

The self-improvement pipeline runs automatically on `vbounce sprint close` and can be triggered manually via `vbounce improve S-XX`:

```
vbounce sprint close S-XX
│
├── .vbounce/scripts/sprint_trends.mjs → .vbounce/trends.md
│
├── .vbounce/scripts/post_sprint_improve.mjs → .vbounce/improvement-manifest.json
│      ├── Parse Sprint Report §5 Framework Self-Assessment tables
│      ├── Parse LESSONS.md for automation candidates
│      ├── Cross-reference archived sprint reports for recurring patterns
│      └── Check if previous improvements resolved their findings
│
└── .vbounce/scripts/suggest_improvements.mjs → .vbounce/improvement-suggestions.md
       ├── Consume improvement-manifest.json
       ├── Add metric-driven suggestions (bounce rate, correction tax, first-pass rate)
       ├── Add lesson graduation candidates
       └── Format with impact levels for human review
```
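As a rough illustration of the final stage, here is a sketch of how suggestions might be grouped by impact level for the output file. The `proposals` field, its item shape, and the output layout are assumptions, not the real `suggest_improvements.mjs` implementation or manifest schema:

```javascript
// Hypothetical sketch: group manifest proposals by impact level and
// render a human-readable suggestions document. Field names are assumed.
function formatSuggestions(manifest) {
  const byImpact = { P0: [], P1: [], P2: [], P3: [] };
  for (const p of manifest.proposals) {
    // Unknown impact levels fall back to the lowest priority bucket.
    (byImpact[p.impact] ?? byImpact.P3).push(p);
  }
  const lines = ["# Improvement Suggestions", ""];
  for (const level of ["P0", "P1", "P2", "P3"]) {
    if (byImpact[level].length === 0) continue;
    lines.push(`## ${level}`);
    for (const p of byImpact[level]) lines.push(`- ${p.title}`);
    lines.push("");
  }
  return lines.join("\n");
}
```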
### Output Files

| File | Purpose |
|------|---------|
| `.vbounce/improvement-manifest.json` | Machine-readable proposals with metadata (consumed by this skill) |
| `.vbounce/improvement-suggestions.md` | Human-readable improvement suggestions with impact levels |
| `.vbounce/trends.md` | Cross-sprint trend data |

## Input Sources

The improve skill reads from multiple signals, in priority order:

### 1. Improvement Manifest (Primary — Machine-Generated)
Read `.vbounce/improvement-manifest.json` first. It contains pre-analyzed proposals with impact levels, automation classifications, recurrence data, and effectiveness checks. This is the richest, most structured input.

### 2. Sprint Report §5 — Framework Self-Assessment
The structured retro tables are the richest human-authored source. Each row has:
- Finding (what went wrong)
- Source Agent (who experienced it)
- Severity (Friction vs Blocker)
- Suggested Fix (agent's proposal)

### 3. LESSONS.md — Automation Candidates
Lessons are classified by automation potential:

| Automation Type | What to Look For | Target |
|----------------|-----------------|--------|
| **gate_check** | Rules with "Always check...", "Never use...", "Must have..." | `.vbounce/gate-checks.json` or `pre_gate_runner.sh` |
| **script** | Rules with "Run X before Y", "Use X instead of Y" | `.vbounce/scripts/` |
| **template_field** | Rules with "Include X in...", "Add X to the story/epic/template" | `.vbounce/templates/*.md` |
| **agent_config** | General behavioral rules proven over 3+ sprints | `.claude/agents/*.md` |

**Key insight:** Lessons tell you WHAT to enforce. Sprint retro tells you WHERE the framework is weak. Together they drive targeted improvements.
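The "What to Look For" column can be read as a phrase-matching heuristic. A sketch of that idea — the `classifyLesson` function is hypothetical, and the patterns simply mirror the table above:

```javascript
// Illustrative classifier: map a lesson's rule text to an automation
// type using the trigger phrases from the table. Hypothetical, not the
// actual post_sprint_improve.mjs heuristics.
function classifyLesson(rule) {
  if (/\b(always check|never use|must have)\b/i.test(rule)) return "gate_check";
  if (/\brun .+ before\b/i.test(rule) || /\buse .+ instead of\b/i.test(rule)) return "script";
  if (/\binclude .+ in\b/i.test(rule) || /\badd .+ to the (story|epic|template)\b/i.test(rule)) return "template_field";
  return "agent_config"; // behavioral rules with no mechanical trigger phrase
}
```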
### 4. Sprint Execution Metrics
Quantitative signals from Sprint Report §3:
- High bounce ratios → story templates may need better acceptance criteria guidance
- High correction tax → handoffs may be losing critical context
- Escalation patterns → complexity labels may need recalibration
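For illustration, a minimal sketch of metric flagging. The thresholds below are made up for the example — the actual cutoffs used by the pipeline are not specified in this document:

```javascript
// Hypothetical metric flagger; the 0.6 / 0.25 / 0.5 thresholds are
// invented for illustration and are NOT the pipeline's real cutoffs.
function flagMetrics(metrics) {
  const flags = [];
  if (metrics.firstPassRate < 0.6) flags.push({ metric: "first_pass_rate", impact: "P1" });
  if (metrics.correctionTax > 0.25) flags.push({ metric: "correction_tax", impact: "P1" });
  if (metrics.bounceRate > 0.5) flags.push({ metric: "bounce_rate", impact: "P2" });
  return flags;
}
```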
### 5. Improvement Effectiveness
The pipeline checks whether previously applied improvements resolved their target findings. Unresolved improvements are re-escalated at P1 priority.
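A minimal sketch of that re-escalation rule (the field names linking improvements to findings are assumptions for illustration):

```javascript
// Hypothetical effectiveness check: any applied improvement whose
// target finding reappears this sprint is re-escalated to P1.
function reEscalate(appliedImprovements, currentFindings) {
  const recurring = new Set(currentFindings.map((f) => f.id));
  return appliedImprovements
    .filter((imp) => recurring.has(imp.targetFindingId))
    .map((imp) => ({ ...imp, impact: "P1", reason: "finding recurred after fix" }));
}
```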
### 6. Agent Process Feedback (Raw)
If sprint reports aren't available, read individual agent reports from `.vbounce/archive/` and extract `## Process Feedback` sections directly.

## The Improvement Process

### Step 1: Read the Manifest
```
1. Read .vbounce/improvement-manifest.json (if it exists)
2. Read .vbounce/improvement-suggestions.md for human-readable context
3. If no manifest exists, run: vbounce improve S-XX to generate one
```

### Step 2: Supplement with Manual Analysis
The manifest handles mechanical detection. The /improve skill adds judgment:
- Are there patterns the scripts can't detect? (e.g., misaligned mental models between agents)
- Do the metric anomalies have root causes not captured in §5?
- Are there skill instructions that agents consistently misinterpret?

### Step 3: Prioritize Using Impact Levels
Rank all proposals (manifest + manual) by impact:

1. **P0 Critical** — Fix before next sprint. Non-negotiable.
2. **P1 High** — Fix in this improvement pass.
3. **P2 Medium** — Fix if bandwidth allows, otherwise defer.
4. **P3 Low** — Batch with other improvements when convenient.
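The ranking itself is a simple sort by impact level. A tiny sketch (the proposal shape is illustrative):

```javascript
// Rank proposals P0 first; the spread-copy keeps the input untouched,
// and Array.prototype.sort is stable, so equal-impact proposals keep
// their original order.
const impactRank = { P0: 0, P1: 1, P2: 2, P3: 3 };

function prioritize(proposals) {
  return [...proposals].sort(
    (a, b) => impactRank[a.impact] - impactRank[b.impact]
  );
}
```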
### Step 4: Propose Changes
For each finding, write a concrete proposal:

```markdown
### Proposal {N}: {Short title}

**Impact:** {P0/P1/P2/P3} — {reason}
**Finding:** {What went wrong — from the retro or lesson}
**Pattern:** {How many times / sprints this appeared}
**Root Cause:** {Why the framework allowed this to happen}
**Affected Files:**
- `{file_path}` — {what changes}

**Proposed Change:**
{Describe the specific edit. Include before/after for template changes.
For skill changes, describe the new instruction or step.
For script changes, describe the new behavior.}

**Risk:** {Low / Medium — what could break if this change is wrong}
**Reversibility:** {Easy — revert the edit / Medium — downstream docs may need updating}
```
#### Special Case: Lesson → Gate Check Proposals

When a lesson contains a mechanical rule (classified as `gate_check` in the manifest):

````markdown
### Proposal {N}: Add pre-gate check — {check name}

**Impact:** P1 — mechanical check currently performed manually by agents
**Lesson:** "{lesson title}" (active since {date})
**Rule:** {the lesson's rule}
**Gate:** qa / arch
**Check config to add to `.vbounce/gate-checks.json`:**
```json
{
  "id": "custom_grep",
  "gate": "arch",
  "enabled": true,
  "pattern": "{regex}",
  "glob": "{file pattern}",
  "should_find": false,
  "description": "{human-readable description}"
}
```
````
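For illustration, here is a sketch of how such a config entry could be evaluated at a gate. This is not the actual `pre_gate_runner.sh` logic — the `runGateCheck` function is hypothetical and its glob matching is deliberately crude (suffix match only):

```javascript
// Hypothetical gate-check evaluator. `files` maps path -> content.
// A check passes when the pattern's presence matches `should_find`.
function runGateCheck(check, files) {
  const re = new RegExp(check.pattern);
  const hits = Object.entries(files)
    // Crude glob: "*.js" is reduced to a ".js" suffix match.
    .filter(([path]) => path.endsWith(check.glob.replace(/^\*+/, "")))
    .filter(([, content]) => re.test(content))
    .map(([path]) => path);
  const pass = check.should_find ? hits.length > 0 : hits.length === 0;
  return { id: check.id, pass, hits };
}
```

With `should_find: false`, any hit fails the gate and `hits` lists the offending files for the agent to fix.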
#### Special Case: Lesson → Script Proposals

When a lesson describes a procedural check:

```markdown
### Proposal {N}: Automate — {check name}

**Impact:** P1 — repeated manual procedure
**Lesson:** "{lesson title}" (active since {date})
**Rule:** {the lesson's rule}
**Proposed script/enhancement:** {describe the new script or addition to existing script}
```

#### Special Case: Lesson Graduation

When a lesson has been active 3+ sprints and is classified as `agent_config`:

```markdown
### Proposal {N}: Graduate lesson — "{title}"

**Impact:** P2 — proven rule ready for permanent enforcement
**Active since:** {date} ({N} sprints)
**Rule:** {the lesson's rule}
**Target agent config:** `.claude/agents/{agent}.md`
**Action:** Add rule to agent's Critical Rules section. Archive lesson from LESSONS.md.
```
### Step 5: Present to Human
Present ALL proposals as a numbered list, grouped by impact level. The human can:
- **Approve** — apply the change
- **Reject** — skip it (optionally explain why)
- **Modify** — adjust the proposal before applying
- **Defer** — save for the next improvement pass

**Never apply changes without explicit approval.** The human owns the framework.

### Step 6: Apply Approved Changes
For each approved proposal:
1. Edit the affected file(s)
2. If brain files are affected, ensure ALL brain surfaces stay in sync (CLAUDE.md, GEMINI.md, AGENTS.md, cursor-rules/)
3. Log the change in `.vbounce/CHANGELOG.md`
4. If skills were modified, update skill descriptions in all brain files that reference them
5. Record in `.vbounce/improvement-log.md` under "Applied" with the impact level

### Step 7: Validate
After all changes are applied:
1. Run `vbounce doctor` to verify framework integrity
2. Verify no cross-references are broken (template paths, skill names, report field names)
3. Confirm brain file consistency — all surfaces should describe the same process
## Improvement Scope

### What CAN Be Improved

| Target | Examples | Typical Impact |
|--------|---------|----------------|
| **Gate Checks** | New grep/lint rules from lessons | P1 |
| **Scripts** | New validation, automate manual steps | P1-P2 |
| **Templates** | Add/remove/rename sections, improve instructions | P2 |
| **Agent Report Formats** | Add/remove YAML fields, improve handoff clarity | P1-P2 |
| **Skills** | Update instructions, add/remove steps, add new skills | P1-P2 |
| **Brain Files** | Graduate lessons to permanent rules, update skill refs | P2 |
| **Process Flow** | Reorder steps, add/remove gates, adjust thresholds | P1 |

### What CANNOT Be Changed Without Escalation
- **Adding a new agent role** — requires a human design decision + new brain config
- **Changing the V-Bounce state machine** — core process change, needs explicit human approval beyond the normal improvement flow
- **Removing a gate** (QA, Architect) — safety-critical, must be a deliberate human decision
- **Changing the git branching strategy** — affects all developers and CI/CD

## Output

The improve skill produces:
1. The list of proposals presented to the human (inline during the conversation)
2. The applied changes to framework files
3. The `.vbounce/CHANGELOG.md` entries documenting what changed and why
4. Updates to `.vbounce/improvement-log.md` tracking approved/rejected/deferred items
## Tracking Improvement Velocity

Over time, the Sprint Report §5 Framework Self-Assessment tables should shrink. If the same findings keep appearing after improvement passes, the fix didn't work — the pipeline will automatically detect this and re-escalate at P1 priority.

The Team Lead should note in the Sprint Report whether the previous improvement pass resolved the issues it targeted:
- "Improvement pass from S-03 resolved the Dev→QA handoff gap (0 handoff complaints this sprint)"
- "Improvement pass from S-03 did NOT resolve RAG relevance — same feedback from Developer"

## Critical Rules

- **Never change the framework without human approval.** Propose, don't impose.
- **Keep all brain surfaces in sync.** A change to CLAUDE.md must be reflected in GEMINI.md, AGENTS.md, and cursor-rules/.
- **Log everything.** Every change goes in `.vbounce/CHANGELOG.md` with the finding that motivated it.
- **Run `vbounce doctor` after changes.** Verify framework integrity after applying improvements.
- **Don't over-engineer.** Fix the actual problem reported by agents. Don't add speculative improvements.
- **Respect the hierarchy.** Template changes are low-risk. Process flow changes are high-risk. Scope accordingly.
- **Skills are living documents.** If a skill's instructions consistently confuse agents, rewrite the confusing section — don't add workarounds elsewhere.
- **Impact levels drive priority.** P0 and P1 items are addressed first. P3 items are batched.
- **Lessons are fuel.** Every lesson is a potential automation — classify and act on them.

## Keywords

improve, self-improvement, framework evolution, retro, retrospective, process feedback, friction, template improvement, skill improvement, brain sync, meta-process, self-aware, impact levels, lesson graduation, gate check, automation