@grainulation/silo 1.0.2 → 1.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@grainulation/silo",
- "version": "1.0.2",
+ "version": "1.0.3",
  "description": "Reusable knowledge for research sprints -- shared claim libraries, templates, and knowledge packs",
  "main": "lib/index.js",
  "exports": {
@@ -1,180 +1,217 @@
  {
- "meta": {
- "id": "coverage-ramp-playbook-v2-generalized",
- "name": "coverage-ramp-playbook-v2-generalized",
- "type": "claims",
- "claimCount": 14,
- "hash": "4288c54fcbe0d3ea836bea7dd17f660d9e747b57929726a333c0b46d9f535208",
- "storedAt": "2026-03-24T16:48:23.846Z"
- },
+ "name": "Coverage Ramp",
+ "description": "Technology-agnostic playbook for ramping test coverage from any baseline to 80%+ using parallel agents, coverage prescriptions, and churn-based prioritization. Works with any language, test runner, and framework.",
+ "version": "2.0.0",
  "claims": [
  {
- "id": "p001",
+ "id": "cr-001",
  "type": "recommendation",
- "topic": "phase-1-exclusions",
- "content": "PHASE 1: Audit jest.config.js exclusions BEFORE writing tests. Categories to exclude: (1) generated/codegen files, (2) canvas/d3/charting that requires real browser, (3) vendored third-party libs, (4) barrel/index re-exports if trivial, (5) platform-specific files (Cordova/Electron-only). Each exclusion must be justified. Run coverage to establish the MEASURED scope — this is your denominator. Removing unjustified exclusions (like bootstrap/) later can ADD to denominator, so only remove when you have tests ready.",
- "evidence": "Removing bootstrap/** exclusion without tests dropped coverage. Always test BEFORE removing exclusions.",
- "tags": [
- "phase-1",
- "exclusions",
- "denominator"
- ]
+ "topic": "onboarding preview",
+ "content": "Before writing any tests, run a codebase preview to understand what you're working with. Steps: (1) Detect the language, test runner, and framework from project config files (package.json, pyproject.toml, Cargo.toml, go.mod, pom.xml, etc.). (2) Count source files and existing test files. (3) Check if a test runner is already installed — if not, install the idiomatic one for the stack. (4) Run existing tests if any to get a baseline. (5) Run coverage to see the starting number. (6) Categorize all source files into testable vs excludable. Report this as a summary before proceeding.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "define",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["onboarding", "preview", "assessment"]
  },
  {
- "id": "p002",
+ "id": "cr-002",
  "type": "recommendation",
- "topic": "phase-2-zero-cov-grind",
- "content": "PHASE 2: Write tests for all zero-coverage files using parallel worktree agents. Key rules: (1) Verify files exist on disk with fs.existsSync before targeting (coverage reports go stale), (2) Each agent gets 15-20 files, (3) Agents ONLY create new .test.* files — never edit jest.config or package.json, (4) Run 4-5 agents at a time, (5) After each wave: copy test files from worktree, run prettier, audit against testing rules, verify tests pass. Sort files by churn score × testability for priority.",
- "evidence": "Coverage reports list files that may have been deleted. ~30% of files from stale reports don't exist. Verifying existence first prevents wasted agent time.",
- "tags": [
- "phase-2",
- "zero-coverage",
- "agents"
- ]
+ "topic": "test runner setup",
+ "content": "If no test runner exists, install the idiomatic one for the stack: Vitest for Vite projects, Jest for other JS/TS, pytest for Python, go test for Go, cargo test for Rust, JUnit for Java. Create a test-utils module that re-exports the testing library with project-specific wrappers (providers, mocks, fixtures). Register it as a path alias so all tests import from the same place. Add scripts for: run tests, run with watch, run with coverage.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "define",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["setup", "test-runner", "infrastructure"]
  },
  {
- "id": "p003",
+ "id": "cr-003",
  "type": "recommendation",
- "topic": "phase-3-deep-test-pattern",
- "content": "PHASE 3: Deepen partial coverage using .deep.test.js files (NEVER rewrite existing tests). Create new test files alongside existing ones named *.deep.test.js that target only uncovered paths. If a file already has .deep.test.js, use .deep2.test.js. This is critical — worktree agents start from git HEAD and will REWRITE existing test files with shallower versions if told to 'deepen'. The .deep.test.js pattern is purely additive.",
- "evidence": "Lost 1.5% coverage when agents 'deepened' files by rewriting them. The .deep.test.js pattern prevents this entirely.",
- "tags": [
- "phase-3",
- "deepening",
- "additive",
- "critical"
- ]
+ "topic": "testing rules",
+ "content": "Before writing tests, establish a testing rules document and embed it in every agent prompt. Rules should cover: (1) Import conventions (use the test-utils alias, not raw framework imports), (2) Interaction patterns (e.g., userEvent over fireEvent, or equivalent for your stack), (3) Query/assertion priorities (accessible selectors first, test IDs as last resort), (4) Mocking strategy (mock at boundaries — only external APIs, not internal modules), (5) Structure (max nesting depth, naming conventions, AAA pattern), (6) Forbidden patterns (inline requires, implementation-detail testing). Audit after each wave. Without embedded rules, violation rates are 10-20x higher.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "define",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["quality", "rules", "enforcement", "onboarding"]
  },
  {
- "id": "p004",
+ "id": "cr-004",
+ "type": "recommendation",
+ "topic": "phase-1 exclusion audit",
+ "content": "PHASE 1: Audit coverage exclusions BEFORE writing tests. Categorize every source file as testable or excludable. Common exclusion categories: (1) Generated/codegen files (protobuf, OpenAPI, GraphQL codegen, ORM migrations), (2) Pure type/interface definitions with no runtime code, (3) Config/bootstrap/entry points that only wire things together, (4) Thin wrappers with no logic, (5) Platform-specific files not runnable in the test environment, (6) Icons/assets/static content. Each exclusion must be justified. Configure exclusions in the test runner config. Run coverage to establish the measured scope — this is your denominator.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "define",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["phase-1", "exclusions", "denominator"]
+ },
+ {
+ "id": "cr-005",
  "type": "constraint",
- "topic": "never-rewrite-tests",
- "content": "NEVER let worktree agents modify existing test files. Worktrees start from git HEAD, not your working tree. An agent that 'deepens' a test file will read the HEAD version (which may be older/shallower) and write a 'new' version that loses coverage from your current working tree. Only create NEW test files from agents. If you need to modify existing tests, do it manually in the main tree.",
- "evidence": "Coverage dropped from 60.01% to 59.22% when deepening agents overwrote test files with shallower versions from git HEAD.",
- "tags": [
- "constraint",
- "critical",
- "worktree"
- ]
+ "topic": "exclusion integrity",
+ "content": "NEVER exclude files to game the coverage number — only exclude genuinely untestable code. Cross-reference with git churn: 0 commits in 3+ years = safe to exclude, 0 in 1 year = gray area, any recent churn = keep measured. Removing exclusions ADDS to the denominator — only remove when tests are ready. Track two numbers: measured scope coverage and total codebase coverage.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "define",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["exclusions", "integrity", "constraint"]
  },
  {
- "id": "p005",
+ "id": "cr-006",
  "type": "recommendation",
- "topic": "churn-based-prioritization",
- "content": "Prioritize files by git churn score: `git log --format=format: --name-only --since=12.month | sort | uniq -c | sort -nr`. High churn = actively developed = highest risk from low coverage = test first. Zero-churn files (0 commits in 12+ months) are candidates for exclusion since nobody is changing them. Use 3-year window for exclusion decisions (stricter), 1-year for prioritization (more selective).",
- "evidence": "Churn-based prioritization ensures the most actively developed files get tested first. Zero-churn exclusions are defensible in code review.",
- "tags": [
- "prioritization",
- "churn",
- "strategy"
- ]
+ "topic": "churn-based prioritization",
+ "content": "Prioritize files by git churn score: `git log --format=format: --name-only --since=12.month | sort | uniq -c | sort -nr`. High churn = actively developed = highest risk from low coverage = test first. Zero-churn files (0 commits in 12+ months) are candidates for exclusion. Use 3-year window for exclusion decisions, 1-year for prioritization. This ensures the most actively developed files get tested first and zero-churn exclusions are defensible in code review.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["prioritization", "churn", "strategy"]
  },
  {
- "id": "p006",
+ "id": "cr-007",
  "type": "recommendation",
- "topic": "wave-structure",
- "content": "Structure work in 4 phases with waves of 5 parallel agents each: (1) Zero-coverage grind — exhaust all untested files, sorted by churn × stmts. (2) Mixin/helper extraction — extract pure functions from untestable files into .helpers.js, test those. (3) Deep testing — .deep.test.js for 70-80% files (cheapest gains), then 50-70%. (4) Prescription mop-up — target specific uncovered lines from coverage-final.json. Each wave: launch agents → copy files → prettier → audit → verify → commit → coverage check.",
- "evidence": "This ordering maximizes ROI. Phase 1 gives ~60% of gains. Phase 3 targets cheapest per-statement gains. Phase 4 is surgical.",
- "tags": [
- "process",
- "waves",
- "structure"
- ]
+ "topic": "phase-2 zero-coverage grind",
+ "content": "PHASE 2: Write tests for all zero-coverage files using parallel worktree agents. Key rules: (1) Verify files exist on disk before targeting — coverage reports go stale, ~30% of files from old reports may not exist. (2) Each agent gets 15-20 files scoped to a directory. (3) Agents ONLY create new test files — never edit config files or existing tests. (4) Run 4-5 agents at a time. (5) After each wave: copy test files from worktree, run formatter, audit against testing rules, verify tests pass, commit. Sort files by churn score × testability for priority.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["phase-2", "zero-coverage", "agents", "parallel"]
  },
  {
- "id": "p007",
+ "id": "cr-008",
  "type": "recommendation",
- "topic": "module-resolution-fixes",
- "content": "Common jest module resolution blockers and fixes: (1) Missing bare-import aliases — add `'^@foo$': '<rootDir>/path/index.js'` to moduleNameMapper (not just `'^@foo/(.*)$'`). (2) Generated files (.gen.js) — create thin re-export stub: `export { default } from './File.gen'`. (3) Build-time-only modules — create stub files that re-export from canonical locations. (4) Files using `{ virtual: true }` in jest.mock don't work when moduleNameMapper already maps the path — use stubs instead.",
- "evidence": "5 bare-import aliases + 4 module stubs unblocked ~15 previously-untestable files worth 500+ statements.",
- "tags": [
- "infrastructure",
- "jest",
- "module-resolution"
- ]
+ "topic": "phase-3 coverage prescriptions",
+ "content": "PHASE 3: Generate coverage prescriptions from the coverage report (coverage-final.json or equivalent). For each under-covered file, extract exact uncovered lines, branches, and function names. Feed these to agents as targeted instructions: 'test lines 102-137, branches L105/L108, functions handleSubmit and validateForm'. This is 2-3x more effective than telling agents to 'deepen coverage' because they know exactly what paths to exercise. Create .deep.test files alongside existing tests — NEVER rewrite existing test files.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["phase-3", "prescriptions", "deep-testing"]
  },
  {
- "id": "p008",
- "type": "recommendation",
- "topic": "mixin-helper-extraction",
- "content": "For excluded createReactClass mixins: DON'T refactor the mixin itself. Instead, identify pure module-scope functions (no `this` binding, no side effects) and extract them into a .helpers.js file. Test the helpers. The new file is automatically in-scope for coverage. Pattern: read first 100 lines of mixin → find functions defined outside the mixin object → extract to FileName.helpers.js → write thorough tests. Most mixins have 0-6 extractable pure functions. Assess before extracting — many have none.",
- "evidence": "5 mixins assessed → 15 pure functions extracted → 55 tests. 6 other mixins had 0 extractable functions (all this-bound).",
- "tags": [
- "refactoring",
- "mixins",
- "helpers"
- ]
+ "id": "cr-009",
+ "type": "constraint",
+ "topic": "never rewrite tests from worktrees",
+ "content": "NEVER let worktree agents modify existing test files. Worktrees start from git HEAD, not your working tree. An agent that 'deepens' a test file will read the HEAD version (which may be older/shallower) and write a 'new' version that loses coverage from your current working tree. Only create NEW test files from agents (.deep.test, .deep2.test, etc.). If you need to modify existing tests, do it in the main tree. Coverage has been observed to drop ~1% when this rule is violated, as the HEAD version overwrites deeper working-tree tests.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["constraint", "critical", "worktree", "additive"]
  },
  {
- "id": "p009",
+ "id": "cr-010",
  "type": "recommendation",
- "topic": "diminishing-returns-strategy",
- "content": "Coverage gains follow a curve: 0-60% is fast (zero-cov grind), 60-75% is moderate (mix of new + deep), 75-80% is slow (deep testing of already-tested files). Strategy shifts: (1) Below 70%: batch zero-cov files, 15-20 per agent. (2) 70-80%: target files at 70-79% with smallest gaps (5-15 uncov stmts each — cheapest per-file). (3) Above 78%: test tiny 0% files (barrel re-exports, configs) that don't grow denominator. (4) The denominator grows ~5-10 stmts per new test file discovered, so the target keeps moving.",
- "evidence": "Each wave above 78% yielded ~0.1-0.3% gain vs ~2-5% below 60%. Targeting 75-80% files with 5-8 uncov stmts was the most efficient late-game strategy.",
- "tags": [
- "strategy",
- "efficiency",
- "late-game"
- ]
+ "topic": "wave structure",
+ "content": "Structure work in 4 phases: (1) Zero-coverage grind — exhaust all untested files, sorted by churn × size. (2) Helper extraction — identify pure functions buried in untestable files, extract to testable modules. (3) Deep testing — prescription-based .deep.test files for partially-covered files, cheapest gaps first. (4) Surgical mop-up — target specific uncovered branches and functions to cross the threshold. Each wave: launch agents → copy files → format → audit → verify → commit → coverage check. This ordering maximizes ROI — Phase 1 gives ~60% of total gains.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["process", "waves", "structure", "phases"]
  },
  {
- "id": "p010",
+ "id": "cr-011",
  "type": "recommendation",
- "topic": "testing-rules-enforcement",
- "content": "Embed testing rules in EVERY agent prompt. Key rules to enforce: (1) import from @test-utils not @testing-library/react, (2) userEvent.setup() not fireEvent, (3) query by role/label first, (4) mock at boundaries not local components, (5) top-level imports only (no inline require), (6) max 2 describe nesting, (7) no 'should' in test names, (8) AAA pattern, (9) beforeEach with jest.clearAllMocks. Run audit after each wave: grep for violations, fix before committing.",
- "evidence": "3 violations found in 1,700+ test files when rules were embedded in every prompt. Without embedding, early waves had 292 require() and 7 fireEvent violations.",
- "tags": [
- "quality",
- "rules",
- "enforcement"
- ]
+ "topic": "merge checklist",
+ "content": "After EVERY agent completes, run this checklist: (1) Copy ONLY test files and extracted helper files from worktree — never config files (test runner config, package manager config, lock files). (2) Run the project's formatter on all new files. (3) Audit against testing rules (grep for violations). (4) Verify config file thresholds and settings not stomped. (5) Run the test suite on new files to verify pass. (6) Run linter to check for unused imports/variables. (7) Stage and commit. (8) Run full coverage after each wave to track progress.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["checklist", "quality", "process"]
  },
  {
- "id": "p011",
+ "id": "cr-012",
  "type": "recommendation",
- "topic": "exclusion-analysis",
- "content": "When considering excluding files from coverage: (1) NEVER exclude to game the number — only exclude genuinely untestable code. (2) Cross-reference with churn: 0 commits in 3+ years = safe to exclude. 0 in 1 year = gray area. Any recent churn = keep measured. (3) Removing exclusions ADDS to denominator — only remove when tests are ready. (4) Categories that are permanently untestable in jsdom: canvas rendering, d3/charting, golden layout, CodeMirror/Monaco, PhoneGap/Cordova native APIs. (5) Track two numbers: measured scope coverage and original codebase coverage.",
- "evidence": "Removing bootstrap/** exclusion without ready tests dropped coverage by 0.8%. Churn-based exclusion analysis showed 0 files with 0 churn in 5 years — everything was touched at least once.",
- "tags": [
- "exclusions",
- "transparency",
- "strategy"
- ]
+ "topic": "diminishing returns strategy",
+ "content": "Coverage gains follow a curve: 0-60% is fast (zero-cov grind), 60-75% is moderate (mix of new + deep), 75-80% is slow (deep testing of already-tested files). Strategy shifts: (1) Below 70%: batch zero-cov files, 15-20 per agent. (2) 70-80%: target files with smallest gaps (5-15 uncovered statements — cheapest per-file). (3) Above 78%: target tiny 0% files that don't grow denominator. (4) The denominator grows slightly with each new file discovered, so the target keeps moving. Branches are always the hardest metric — they require path-specific tests for each conditional.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["strategy", "efficiency", "late-game", "branches"]
  },
  {
- "id": "p012",
+ "id": "cr-013",
  "type": "recommendation",
- "topic": "merge-checklist",
- "content": "After EVERY agent completes, run this checklist: (1) Copy ONLY .test.* and .helpers.* files from worktree (never jest.config.js or package.json). (2) npx prettier --write on all new files. (3) Audit against testing rules (grep for violations). (4) Verify jest.config.js thresholds not stomped. (5) npx jest --ci --no-coverage on new files to verify pass. (6) git add + commit. (7) Run full coverage after each wave to track progress. This checklist prevents the most common agent mistakes.",
- "evidence": "Agent overwrote jest.config.js once, resetting thresholds. Another agent created module stubs that shouldn't have been copied to main tree. Checklist catches these.",
- "tags": [
- "checklist",
- "quality",
- "process"
- ]
+ "topic": "branch coverage strategy",
+ "content": "Branches are the hardest metric to improve. Each if/else, ternary, switch, &&/|| has two paths. To specifically target branches: (1) Score files by uncovered-branches count, not statements. (2) Instruct agents explicitly to test every conditional path — both true and false. (3) Create .branch.test files for branch-specific testing. (4) Small zero-cov files with high branch counts give the best branch ROI. (5) Complex stateful code (state machines, routing conditions, auth flows) has the hardest branches — save for manual deepening or dedicated agents with full file context.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["branches", "strategy", "hard-metric"]
  },
  {
- "id": "p013",
+ "id": "cr-014",
  "type": "factual",
- "topic": "velocity-benchmarks",
- "content": "On a ~2,000-file / 300K-line React codebase: Phase 1 (zero-cov grind) covered ~350 files in ~8 waves, gaining ~25% coverage. Phase 2 (mixin extraction) added ~200 stmts from 5 mixins. Phase 3 (deep testing) pushed from 77% to 80% over ~6 waves. Total: 45% → 80% in one extended session, ~10,000 new tests, ~1,700 test files. Each 5-agent wave takes 5-15 minutes and yields 50-500 new covered stmts depending on phase.",
- "evidence": "NT-38884 session data, March 2026.",
- "tags": [
- "benchmarks",
- "velocity"
- ]
+ "topic": "velocity benchmarks",
+ "content": "Observed velocity across codebases of varying size: small projects (under 100 files) can go from 0% to 80%+ in a single session with 3-4 waves. Larger projects (1,000+ files) typically need 6-10 waves to go from low coverage to 80%. Each 5-agent wave takes 5-15 minutes and yields 50-500 new covered statements depending on phase. The prescription-based approach (Phase 3) is 2-3x more effective per wave than unprescribed deepening. The onboarding preview (Phase 0) prevents wasted effort by identifying the right test runner and exclusions upfront.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "research",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["benchmarks", "velocity", "factual"]
  },
  {
- "id": "p014",
+ "id": "cr-015",
  "type": "recommendation",
- "topic": "branch-coverage-strategy",
- "content": "Branches are the hardest metric to improve. Each if/else, ternary, switch, &&, || has two paths. To specifically target branches: (1) Score files by uncovered-branches count, not stmts. (2) Tell agents explicitly to test every conditional path. (3) Create .branch.test.js files for branch-specific testing. (4) Small zero-cov files with high branch counts (e.g., 10 branches in a 15-stmt utility) give the best branch ROI. (5) Branches in complex components (Recoil state, router conditions) are the hardest — save for manual deepening.",
- "evidence": "Branch-targeted waves moved branches from 58% to 66% — each wave adding ~1-2%. Statements and functions moved faster because they benefit from any test, while branches require path-specific tests.",
- "tags": [
- "branches",
- "strategy",
- "hard-metric"
- ]
+ "topic": "CI quality gate",
+ "content": "After achieving the target coverage, lock it in with a CI quality gate. Add a workflow that runs tests with coverage on every PR and fails if coverage drops below the threshold. This prevents coverage regression and makes the investment durable. Set thresholds at or slightly below current coverage (e.g., if you hit 81%, set gate at 80%) to allow normal variance without blocking PRs.",
+ "source": { "origin": "experience", "artifact": null, "connector": null },
+ "evidence": "tested",
+ "status": "active",
+ "phase_added": "deliver",
+ "timestamp": "2026-03-25T00:00:00.000Z",
+ "conflicts_with": [],
+ "resolved_by": null,
+ "tags": ["ci", "quality-gate", "regression", "deliver"]
  }
179
216
  ]
180
217
  }