@fro.bot/systematic 2.0.3 → 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/agents/research/learnings-researcher.md +27 -26
- package/agents/review/api-contract-reviewer.md +1 -1
- package/agents/review/correctness-reviewer.md +1 -1
- package/agents/review/data-migrations-reviewer.md +1 -1
- package/agents/review/dhh-rails-reviewer.md +31 -52
- package/agents/review/julik-frontend-races-reviewer.md +27 -200
- package/agents/review/kieran-python-reviewer.md +29 -116
- package/agents/review/kieran-rails-reviewer.md +29 -98
- package/agents/review/kieran-typescript-reviewer.md +29 -107
- package/agents/review/maintainability-reviewer.md +1 -1
- package/agents/review/performance-reviewer.md +1 -1
- package/agents/review/reliability-reviewer.md +1 -1
- package/agents/review/security-reviewer.md +1 -1
- package/agents/review/testing-reviewer.md +1 -1
- package/agents/workflow/pr-comment-resolver.md +99 -50
- package/dist/index.js +9 -0
- package/dist/lib/config-handler.d.ts +2 -0
- package/package.json +1 -1
- package/skills/ce-compound/SKILL.md +100 -27
- package/skills/ce-compound-refresh/SKILL.md +172 -74
- package/skills/ce-review/SKILL.md +379 -418
- package/skills/ce-work/SKILL.md +5 -4
- package/skills/ce-work-beta/SKILL.md +6 -5
- package/skills/claude-permissions-optimizer/scripts/extract-commands.mjs +9 -159
- package/skills/claude-permissions-optimizer/scripts/normalize.mjs +151 -0
- package/skills/git-worktree/scripts/worktree-manager.sh +163 -0
- package/skills/lfg/SKILL.md +2 -2
- package/skills/orchestrating-swarms/SKILL.md +1 -1
- package/skills/setup/SKILL.md +8 -137
- package/skills/slfg/SKILL.md +8 -4
- package/skills/test-browser/SKILL.md +2 -2
- package/skills/test-xcode/SKILL.md +2 -2
@@ -1,7 +1,7 @@
 ---
 name: ce:compound-refresh
-description: Refresh stale or drifting learnings and pattern docs in docs/solutions/ by reviewing, updating, replacing, or
-argument-hint: '[mode:
+description: Refresh stale or drifting learnings and pattern docs in docs/solutions/ by reviewing, updating, consolidating, replacing, or deleting them against the current codebase. Use after refactors, migrations, dependency upgrades, or when a retrieved learning feels outdated or wrong. Also use when reviewing docs/solutions/ for accuracy, when a recently solved problem contradicts an existing learning, when pattern docs no longer reflect current code, or when multiple docs seem to cover the same topic and might benefit from consolidation.
+argument-hint: '[mode:autofix] [optional: scope hint]'
 disable-model-invocation: true
 ---
 
@@ -11,25 +11,25 @@ Maintain the quality of `docs/solutions/` over time. This workflow reviews exist
 
 ## Mode Detection
 
-Check if `$ARGUMENTS` contains `mode:
+Check if `$ARGUMENTS` contains `mode:autofix`. If present, strip it from arguments (use the remainder as a scope hint) and run in **autofix mode**.
 
 | Mode | When | Behavior |
 |------|------|----------|
 | **Interactive** (default) | User is present and can answer questions | Ask for decisions on ambiguous cases, confirm actions |
-| **
+| **Autofix** | `mode:autofix` in arguments | No user interaction. Apply all unambiguous actions (Keep, Update, Consolidate, auto-Delete, Replace with sufficient evidence). Mark ambiguous cases as stale. Generate a summary report at the end. |
 
-###
+### Autofix mode rules
 
 - **Skip all user questions.** Never pause for input.
 - **Process all docs in scope.** No scope narrowing questions — if no scope hint was provided, process everything.
-- **Attempt all safe actions:** Keep (no-op), Update (fix references), auto-
-- **Mark as stale when uncertain.** If classification is genuinely ambiguous (Update vs Replace vs
-- **Use conservative confidence.** In interactive mode, borderline cases get a user question. In
+- **Attempt all safe actions:** Keep (no-op), Update (fix references), Consolidate (merge and delete subsumed doc), auto-Delete (unambiguous criteria met), Replace (when evidence is sufficient). If a write succeeds, record it as **applied**. If a write fails (e.g., permission denied), record the action as **recommended** in the report and continue — do not stop or ask for permissions.
+- **Mark as stale when uncertain.** If classification is genuinely ambiguous (Update vs Replace vs Consolidate vs Delete) or Replace evidence is insufficient, mark as stale with `status: stale`, `stale_reason`, and `stale_date` in the frontmatter. If even the stale-marking write fails, include it as a recommendation.
+- **Use conservative confidence.** In interactive mode, borderline cases get a user question. In autofix mode, borderline cases get marked stale. Err toward stale-marking over incorrect action.
 - **Always generate a report.** The report is the primary deliverable. It has two sections: **Applied** (actions that were successfully written) and **Recommended** (actions that could not be written, with full rationale so a human can apply them or run the skill interactively). The report structure is the same regardless of what permissions were granted — the only difference is which section each action lands in.
 
 ## Interaction Principles
 
-**These principles apply to interactive mode only. In
+**These principles apply to interactive mode only. In autofix mode, skip all user questions and apply the autofix mode rules above.**
 
 Follow the same interaction style as `ce:brainstorm`:
 
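For reference, the stale-marking frontmatter this hunk introduces might look like the following sketch. The field values are illustrative; only the `status: stale`, `stale_reason`, and `stale_date` keys come from the skill text itself:

```text
---
status: stale
stale_reason: classification ambiguous (Update vs Replace); Replace evidence insufficient
stale_date: 2025-01-15
---
```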
@@ -46,7 +46,7 @@ The goal is not to force the user through a checklist. The goal is to help them
 Refresh in this order:
 
 1. Review the relevant individual learning docs first
-2. Note which learnings stayed valid, were updated, were replaced, or were
+2. Note which learnings stayed valid, were updated, were consolidated, were replaced, or were deleted
 3. Then review any pattern docs that depend on those learnings
 
 Why this order:
@@ -59,21 +59,22 @@ If the user starts by naming a pattern doc, you may begin there to understand th
 
 ## Maintenance Model
 
-For each candidate artifact, classify it into one of
+For each candidate artifact, classify it into one of five outcomes:
 
 | Outcome | Meaning | Default action |
 |---------|---------|----------------|
 | **Keep** | Still accurate and still useful | No file edit by default; report that it was reviewed and remains trustworthy |
 | **Update** | Core solution is still correct, but references drifted | Apply evidence-backed in-place edits |
-| **
-| **
+| **Consolidate** | Two or more docs overlap heavily but are both correct | Merge unique content into the canonical doc, delete the subsumed doc |
+| **Replace** | The old artifact is now misleading, but there is a known better replacement | Create a trustworthy successor, then delete the old artifact |
+| **Delete** | No longer useful, applicable, or distinct | Delete the file — git history preserves it if anyone needs to recover it later |
 
 ## Core Rules
 
 1. **Evidence informs judgment.** The signals below are inputs, not a mechanical scorecard. Use engineering judgment to decide whether the artifact is still trustworthy.
 2. **Prefer no-write Keep.** Do not update a doc just to leave a review breadcrumb.
 3. **Match docs to reality, not the reverse.** When current code differs from a learning, update the learning to reflect the current code. The skill's job is doc accuracy, not code review — do not ask the user whether code changes were "intentional" or "a regression." If the code changed, the doc should match. If the user thinks the code is wrong, that is a separate concern outside this workflow.
-4. **Be decisive, minimize questions.** When evidence is clear (file renamed, class moved, reference broken), apply the update. In interactive mode, only ask the user when the right action is genuinely ambiguous. In
+4. **Be decisive, minimize questions.** When evidence is clear (file renamed, class moved, reference broken), apply the update. In interactive mode, only ask the user when the right action is genuinely ambiguous. In autofix mode, mark ambiguous cases as stale instead of asking. The goal is automated maintenance with human oversight on judgment calls, not a question for every finding.
 5. **Avoid low-value churn.** Do not edit a doc just to fix a typo, polish wording, or make cosmetic changes that do not materially improve accuracy or usability.
 6. **Use Update only for meaningful, evidence-backed drift.** Paths, module names, related links, category metadata, code snippets, and clearly stale wording are fair game when fixing them materially improves accuracy.
 7. **Use Replace only when there is a real replacement.** That means either:
@@ -81,7 +82,9 @@ For each candidate artifact, classify it into one of four outcomes:
 - the user has provided enough concrete replacement context to document the successor honestly, or
 - the codebase investigation found the current approach and can document it as the successor, or
 - newer docs, pattern docs, PRs, or issues provide strong successor evidence.
-8. **
+8. **Delete when the code is gone.** If the referenced code, controller, or workflow no longer exists in the codebase and no successor can be found, delete the file — don't default to Keep just because the general advice is still "sound." A learning about a deleted feature misleads readers into thinking that feature still exists. When in doubt between Keep and Delete, ask the user (in interactive mode) or mark as stale (in autofix mode). But missing referenced files with no matching code is **not** a doubt case — it is strong, unambiguous Delete evidence. Auto-delete it.
+9. **Evaluate document-set design, not just accuracy.** In addition to checking whether each doc is accurate, evaluate whether it is still the right unit of knowledge. If two or more docs overlap heavily, determine whether they should remain separate, be cross-scoped more clearly, or be consolidated into one canonical document. Redundant docs are dangerous because they drift silently — two docs saying the same thing will eventually say different things.
+10. **Delete, don't archive.** There is no `_archived/` directory. When a doc is no longer useful, delete it. Git history preserves every deleted file — that is the archive. A dedicated archive directory creates problems: archived docs accumulate, pollute search results, and nobody reads them. If someone needs a deleted doc, `git log --diff-filter=D -- docs/solutions/` will find it.
 
 ## Scope Selection
 
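The recovery flow behind the new rule 10 (`git log --diff-filter=D -- docs/solutions/`) can be sketched in a throwaway repo. This is a hedged illustration — the repo, file name, and commit messages are invented for the demo; only the `--diff-filter=D` invocation comes from the skill text:

```shell
set -eu
# Throwaway repo mimicking the docs/solutions/ layout
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
mkdir -p docs/solutions
echo "# Old learning" > docs/solutions/old-learning.md
git add -A && git commit -qm "add learning"
git rm -q docs/solutions/old-learning.md
git commit -qm "delete stale learning"

# Find commits that deleted files under docs/solutions/
git log --diff-filter=D --name-only --format="%h" -- docs/solutions/

# Restore the doc from the parent of the commit that deleted it
deleted_in=$(git log --diff-filter=D --format=%H -n1 -- docs/solutions/old-learning.md)
git checkout -q "${deleted_in}^" -- docs/solutions/old-learning.md
cat docs/solutions/old-learning.md   # → # Old learning
```

The point of the sketch: nothing is lost by deleting instead of archiving — the file is one `git checkout <sha>^ -- <path>` away.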
@@ -90,9 +93,9 @@ Start by discovering learnings and pattern docs under `docs/solutions/`.
 Exclude:
 
 - `README.md`
-- `docs/solutions/_archived/`
+- `docs/solutions/_archived/` (legacy — if this directory exists, flag it for cleanup in the report)
 
-Find all `.md` files under `docs/solutions/`, excluding `README.md` files and anything under `_archived/`.
+Find all `.md` files under `docs/solutions/`, excluding `README.md` files and anything under `_archived/`. If an `_archived/` directory exists, note it in the report as a legacy artifact that should be cleaned up (files either restored or deleted).
 
 If `$ARGUMENTS` is provided, use it to narrow scope before proceeding. Try these matching strategies in order, stopping at the first that produces results:
 
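The discovery step in this hunk ("all `.md` files under `docs/solutions/`, excluding `README.md` files and anything under `_archived/`") could be implemented with `find`. A minimal sketch — the sample tree is invented for the demo and is not part of the package:

```shell
set -eu
# Invented sample tree for illustration
cd "$(mktemp -d)"
mkdir -p docs/solutions/auth docs/solutions/_archived
touch docs/solutions/README.md \
      docs/solutions/auth/token-rotation.md \
      docs/solutions/_archived/old.md

# All .md files, excluding README.md and anything under _archived/
find docs/solutions -name '*.md' \
  ! -name 'README.md' \
  ! -path '*/_archived/*'
# → docs/solutions/auth/token-rotation.md
```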
@@ -101,7 +104,7 @@ If `$ARGUMENTS` is provided, use it to narrow scope before proceeding. Try these
 3. **Filename match** — match against filenames (partial matches are fine)
 4. **Content search** — search file contents for the argument as a keyword (useful for feature names or feature areas)
 
-If no matches are found, report that and ask the user to clarify. In
+If no matches are found, report that and ask the user to clarify. In autofix mode, report the miss and stop — do not guess at scope.
 
 If no candidate docs are found, report:
 
@@ -133,7 +136,7 @@ When scope is broad (9+ candidate docs), do a lightweight triage before deep inv
 1. **Inventory** — read frontmatter of all candidate docs, group by module/component/category
 2. **Impact clustering** — identify areas with the densest clusters of learnings + pattern docs. A cluster of 5 learnings and 2 patterns covering the same module is higher-impact than 5 isolated single-doc areas, because staleness in one doc is likely to affect the others.
 3. **Spot-check drift** — for each cluster, check whether the primary referenced files still exist. Missing references in a high-impact cluster = strongest signal for where to start.
-4. **Recommend a starting area** — present the highest-impact cluster with a brief rationale and ask the user to confirm or redirect. In
+4. **Recommend a starting area** — present the highest-impact cluster with a brief rationale and ask the user to confirm or redirect. In autofix mode, skip the question and process all clusters in impact order.
 
 Example:
 
@@ -162,6 +165,7 @@ A learning has several dimensions that can independently go stale. Surface-level
 - **Code examples** — if the learning includes code snippets, do they still reflect the current implementation?
 - **Related docs** — are cross-referenced learnings and patterns still present and consistent?
 - **Auto memory** — does the auto memory directory contain notes in the same problem domain? Read MEMORY.md from the auto memory directory (the path is known from the system prompt context). If it does not exist or is empty, skip this dimension. A memory note describing a different approach than what the learning recommends is a supplementary drift signal.
+- **Overlap** — while investigating, note when another doc in scope covers the same problem domain, references the same files, or recommends a similar solution. For each overlap, record: the two file paths, which dimensions overlap (problem, solution, root cause, files, prevention), and which doc appears broader or more current. These signals feed Phase 1.75 (Document-Set Analysis).
 
 Match investigation depth to the learning's specificity — a learning referencing exact file paths and code snippets needs more verification than one describing a general principle.
 
@@ -174,12 +178,12 @@ The critical distinction is whether the drift is **cosmetic** (references moved
 
 **The boundary:** if you find yourself rewriting the solution section or changing what the learning recommends, stop — that is Replace, not Update.
 
-**Memory-sourced drift signals** are supplementary, not primary. A memory note describing a different approach does not alone justify Replace or
+**Memory-sourced drift signals** are supplementary, not primary. A memory note describing a different approach does not alone justify Replace or Delete. Use memory signals to:
 - Corroborate codebase-sourced drift (strengthens the case for Replace)
 - Prompt deeper investigation when codebase evidence is borderline
 - Add context to the evidence report ("(auto memory [claude]) notes suggest approach X may have changed since this learning was written")
 
-In
+In autofix mode, memory-only drift (no codebase corroboration) should result in stale-marking, not action.
 
 ### Judgment Guidelines
 
@@ -187,7 +191,7 @@ Three guidelines that are easy to get wrong:
 
 1. **Contradiction = strong Replace signal.** If the learning's recommendation conflicts with current code patterns or a recently verified fix, that is not a minor drift — the learning is actively misleading. Classify as Replace.
 2. **Age alone is not a stale signal.** A 2-year-old learning that still matches current code is fine. Only use age as a prompt to inspect more carefully.
-3. **Check for successors before
+3. **Check for successors before deleting.** Before recommending Replace or Delete, look for newer learnings, pattern docs, PRs, or issues covering the same problem space. If successor evidence exists, prefer Replace over Delete so readers are directed to the newer guidance.
 
 ## Phase 1.5: Investigate Pattern Docs
 
|
@@ -197,6 +201,65 @@ Pattern docs are high-leverage — a stale pattern is more dangerous than a stal
|
|
|
197
201
|
|
|
198
202
|
A pattern doc with no clear supporting learnings is a stale signal — investigate carefully before keeping it unchanged.
|
|
199
203
|
|
|
204
|
+
## Phase 1.75: Document-Set Analysis
|
|
205
|
+
|
|
206
|
+
After investigating individual docs, step back and evaluate the document set as a whole. The goal is to catch problems that only become visible when comparing docs to each other — not just to reality.
|
|
207
|
+
|
|
208
|
+
### Overlap Detection
|
|
209
|
+
|
|
210
|
+
For docs that share the same module, component, tags, or problem domain, compare them across these dimensions:
|
|
211
|
+
|
|
212
|
+
- **Problem statement** — do they describe the same underlying problem?
|
|
213
|
+
- **Solution shape** — do they recommend the same approach, even if worded differently?
|
|
214
|
+
- **Referenced files** — do they point to the same code paths?
|
|
215
|
+
- **Prevention rules** — do they repeat the same prevention bullets?
|
|
216
|
+
- **Root cause** — do they identify the same root cause?
|
|
217
|
+
|
|
218
|
+
High overlap across 3+ dimensions is a strong Consolidate signal. The question to ask: "Would a future maintainer need to read both docs to get the current truth, or is one mostly repeating the other?"
|
|
219
|
+
|
|
220
|
+
### Supersession Signals
|
|
221
|
+
|
|
222
|
+
Detect "older narrow precursor, newer canonical doc" patterns:
|
|
223
|
+
|
|
224
|
+
- A newer doc covers the same files, same workflow, and broader runtime behavior than an older doc
|
|
225
|
+
- An older doc describes a specific incident that a newer doc generalizes into a pattern
|
|
226
|
+
- Two docs recommend the same fix but the newer one has better context, examples, or scope
|
|
227
|
+
|
|
228
|
+
When a newer doc clearly subsumes an older one, the older doc is a consolidation candidate — its unique content (if any) should be merged into the newer doc, and the older doc should be deleted.
|
|
229
|
+
|
|
230
|
+
### Canonical Doc Identification
|
|
231
|
+
|
|
232
|
+
For each topic cluster (docs sharing a problem domain), identify which doc is the **canonical source of truth**:
|
|
233
|
+
|
|
234
|
+
- Usually the most recent, broadest, most accurate doc in the cluster
|
|
235
|
+
- The one a maintainer should find first when searching for this topic
|
|
236
|
+
- The one that other docs should point to, not duplicate
|
|
237
|
+
|
|
238
|
+
All other docs in the cluster are either:
|
|
239
|
+
- **Distinct** — they cover a meaningfully different sub-problem and have independent retrieval value. Keep them separate.
|
|
240
|
+
- **Subsumed** — their unique content fits as a section in the canonical doc. Consolidate.
|
|
241
|
+
- **Redundant** — they add nothing the canonical doc doesn't already say. Delete.
|
|
242
|
+
|
|
243
|
+
### Retrieval-Value Test
|
|
244
|
+
|
|
245
|
+
Before recommending that two docs stay separate, apply this test: "If a maintainer searched for this topic six months from now, would having these as separate docs improve discoverability, or just create drift risk?"
|
|
246
|
+
|
|
247
|
+
Separate docs earn their keep only when:
|
|
248
|
+
- They cover genuinely different sub-problems that someone might search for independently
|
|
249
|
+
- They target different audiences or contexts (e.g., one is about debugging, another about prevention)
|
|
250
|
+
- Merging them would create an unwieldy doc that is harder to navigate than two focused ones
|
|
251
|
+
|
|
252
|
+
If none of these apply, prefer consolidation. Two docs covering the same ground will eventually drift apart and contradict each other — that is worse than a slightly longer single doc.
|
|
253
|
+
|
|
254
|
+
### Cross-Doc Conflict Check
|
|
255
|
+
|
|
256
|
+
Look for outright contradictions between docs in scope:
|
|
257
|
+
- Doc A says "always use approach X" while Doc B says "avoid approach X"
|
|
258
|
+
- Doc A references a file path that Doc B says was deprecated
|
|
259
|
+
- Doc A and Doc B describe different root causes for what appears to be the same problem
|
|
260
|
+
|
|
261
|
+
Contradictions between docs are more urgent than individual staleness — they actively confuse readers. Flag these for immediate resolution, either through Consolidate (if one is right and the other is a stale version of the same truth) or through targeted Update/Replace.
|
|
262
|
+
|
|
200
263
|
## Subagent Strategy
|
|
201
264
|
|
|
202
265
|
Use subagents for context isolation when investigating multiple artifacts — not just because the task sounds complex. Choose the lightest approach that fits:
|
|
@@ -216,10 +279,10 @@ Use subagents for context isolation when investigating multiple artifacts — no
 
 There are two subagent roles:
 
-1. **Investigation subagents** — read-only. They must not edit files, create successors, or
-2. **Replacement subagents** — write a single new learning to replace a stale one. These run **one at a time, sequentially** (each replacement subagent may need to read significant code, and running multiple in parallel risks context exhaustion). The orchestrator handles all
+1. **Investigation subagents** — read-only. They must not edit files, create successors, or delete anything. Each returns: file path, evidence, recommended action, confidence, and open questions. These can run in parallel when artifacts are independent.
+2. **Replacement subagents** — write a single new learning to replace a stale one. These run **one at a time, sequentially** (each replacement subagent may need to read significant code, and running multiple in parallel risks context exhaustion). The orchestrator handles all deletions and metadata updates after each replacement completes.
 
-The orchestrator merges investigation results, detects contradictions, coordinates replacement subagents, and performs all
+The orchestrator merges investigation results, detects contradictions, coordinates replacement subagents, and performs all deletions/metadata edits centrally. In interactive mode, it asks the user questions on ambiguous cases. In autofix mode, it marks ambiguous cases as stale instead. If two artifacts overlap or discuss the same root issue, investigate them together rather than parallelizing.
 
 ## Phase 2: Classify the Right Maintenance Action
 
@@ -233,6 +296,26 @@ The learning is still accurate and useful. Do not edit the file — report that
 
 The core solution is still valid but references have drifted (paths, class names, links, code snippets, metadata). Apply the fixes directly.
 
+### Consolidate
+
+Choose **Consolidate** when Phase 1.75 identified docs that overlap heavily but are both materially correct. This is different from Update (which fixes drift in a single doc) and Replace (which rewrites misleading guidance). Consolidate handles the "both right, one subsumes the other" case.
+
+**When to consolidate:**
+
+- Two docs describe the same problem and recommend the same (or compatible) solution
+- One doc is a narrow precursor and a newer doc covers the same ground more broadly
+- The unique content from the subsumed doc can fit as a section or addendum in the canonical doc
+- Keeping both creates drift risk without meaningful retrieval benefit
+
+**When NOT to consolidate** (apply the Retrieval-Value Test from Phase 1.75):
+
+- The docs cover genuinely different sub-problems that someone would search for independently
+- Merging would create an unwieldy doc that harms navigation more than drift risk harms accuracy
+
+**Consolidate vs Delete:** If the subsumed doc has unique content worth preserving (edge cases, alternative approaches, extra prevention rules), use Consolidate to merge that content first. If the subsumed doc adds nothing the canonical doc doesn't already say, skip straight to Delete.
+
+The Consolidate action is: merge unique content from the subsumed doc into the canonical doc, then delete the subsumed doc. Not archive — delete. Git history preserves it.
+
 ### Replace
 
 Choose **Replace** when the learning's core guidance is now misleading — the recommended fix changed materially, the root cause or architecture shifted, or the preferred pattern is different.
@@ -249,71 +332,64 @@ By the time you identify a Replace candidate, Phase 1 investigation has already
 - Report what evidence you found and what is missing
 - Recommend the user run `ce:compound` after their next encounter with that area, when they have fresh problem-solving context
 
-###
+### Delete
 
-Choose **
+Choose **Delete** when:
 
-- The code or workflow no longer exists
+- The code or workflow no longer exists and the problem domain is gone
 - The learning is obsolete and has no modern replacement worth documenting
-- The learning is redundant
+- The learning is fully redundant with another doc (use Consolidate if there is unique content to merge first)
 - There is no meaningful successor evidence suggesting it should be replaced instead
 
-Action:
-
-- Move the file to `docs/solutions/_archived/`, preserving directory structure when helpful
-- Add:
-  - `archived_date: YYYY-MM-DD`
-  - `archive_reason: [why it was archived]`
+Action: delete the file. No archival directory, no metadata — just delete it. Git history preserves every deleted file if recovery is ever needed.
 
-### Before
+### Before deleting: check if the problem domain is still active
 
-When a learning's referenced files are gone, that is strong evidence — but only that the **implementation** is gone. Before
+When a learning's referenced files are gone, that is strong evidence — but only that the **implementation** is gone. Before deleting, reason about whether the **problem the learning solves** is still a concern in the codebase:
 
-- A learning about session token storage where `auth_token.rb` is gone — does the application still handle session tokens? If so, the concept persists under a new implementation. That is Replace, not
-- A learning about a deprecated API endpoint where the entire feature was removed — the problem domain is gone. That is
+- A learning about session token storage where `auth_token.rb` is gone — does the application still handle session tokens? If so, the concept persists under a new implementation. That is Replace, not Delete.
+- A learning about a deprecated API endpoint where the entire feature was removed — the problem domain is gone. That is Delete.
 
 Do not search mechanically for keywords from the old learning. Instead, understand what problem the learning addresses, then investigate whether that problem domain still exists in the codebase. The agent understands concepts — use that understanding to look for where the problem lives now, not where the old code used to be.
 
-**Auto-
+**Auto-delete only when both the implementation AND the problem domain are gone:**
 
 - the referenced code is gone AND the application no longer deals with that problem domain
-- the learning is fully superseded by a clearly better successor
-- the document is plainly redundant and adds
+- the learning is fully superseded by a clearly better successor AND the old doc adds no distinct value
+- the document is plainly redundant and adds nothing the canonical doc doesn't already say
 
 If the implementation is gone but the problem domain persists (the app still does auth, still processes payments, still handles migrations), classify as **Replace** — the problem still matters and the current approach should be documented.
 
-Do not keep a learning just because its general advice is "still sound" — if the specific code it references is gone, the learning misleads readers. But do not
-
-If there is a clearly better successor, strongly consider **Replace** before **Archive** so the old artifact points readers toward the newer guidance.
+Do not keep a learning just because its general advice is "still sound" — if the specific code it references is gone, the learning misleads readers. But do not delete a learning whose problem domain is still active — that knowledge gap should be filled with a replacement.
 
 ## Pattern Guidance
 
-Apply the same
+Apply the same five outcomes (Keep, Update, Consolidate, Replace, Delete) to pattern docs, but evaluate them as **derived guidance** rather than incident-level learnings. Key differences:
 
 - **Keep**: the underlying learnings still support the generalized rule and examples remain representative
 - **Update**: the rule holds but examples, links, scope, or supporting references drifted
+- **Consolidate**: two pattern docs generalize the same set of learnings or cover the same design concern — merge into one canonical pattern
 - **Replace**: the generalized rule is now misleading, or the underlying learnings support a different synthesis. Base the replacement on the refreshed learning set — do not invent new rules from guesswork
-- **
-
-If "archive" feels too strong but the pattern should no longer be elevated, reduce its prominence in place if the docs structure supports that.
+- **Delete**: the pattern is no longer valid, no longer recurring, or fully subsumed by a stronger pattern doc with no unique content remaining
 
 ## Phase 3: Ask for Decisions
 
-###
+### Autofix mode
 
 **Skip this entire phase. Do not ask any questions. Do not present options. Do not wait for input.** Proceed directly to Phase 4 and execute all actions based on the classifications from Phase 2:
 
-- Unambiguous Keep, Update, auto-
+- Unambiguous Keep, Update, Consolidate, auto-Delete, and Replace (with sufficient evidence) → execute directly
 - Ambiguous cases → mark as stale
 - Then generate the report (see Output Format)
 
 ### Interactive mode
 
-Most Updates should be applied directly without asking. Only ask the user when:
+Most Updates and Consolidations should be applied directly without asking. Only ask the user when:
 
-- The right action is genuinely ambiguous (Update vs Replace vs
-- You are about to
-- You are about to
+- The right action is genuinely ambiguous (Update vs Replace vs Consolidate vs Delete)
+- You are about to Delete a document **and** the evidence is not unambiguous (see auto-delete criteria in Phase 2). When auto-delete criteria are met, proceed without asking.
+- You are about to Consolidate and the choice of canonical doc is not clear-cut
+- You are about to create a successor via Replace
 
 Do **not** ask questions about whether code changes were intentional, whether the user wants to fix bugs in the code, or other concerns outside doc maintenance. Stay in your lane — doc accuracy.
 
@@ -340,7 +416,7 @@ For a single artifact, present:
|
|
|
340
416
|
Then ask:
|
|
341
417
|
|
|
342
418
|
```text
|
|
343
|
-
This [learning/pattern] looks like a [Update/
|
|
419
|
+
This [learning/pattern] looks like a [Keep/Update/Consolidate/Replace/Delete].
|
|
344
420
|
|
|
345
421
|
Why: [one-sentence rationale based on the evidence]
|
|
346
422
|
|
|
@@ -351,7 +427,7 @@ What would you like to do?
|
|
|
351
427
|
3. Skip for now
|
|
352
428
|
```
|
|
353
429
|
|
|
354
|
-
Do not list all
|
|
430
|
+
Do not list all five actions unless all five are genuinely plausible.
|
|
355
431
|
|
|
356
432
|
#### Batch Scope
|
|
357
433
|
|
|
@@ -359,14 +435,16 @@ For several learnings:
|
|
|
359
435
|
|
|
360
436
|
1. Group obvious **Keep** cases together
|
|
361
437
|
2. Group obvious **Update** cases together when the fixes are straightforward
|
|
362
|
-
3. Present **
|
|
363
|
-
4. Present **
|
|
438
|
+
3. Present **Consolidate** cases together when the canonical doc is clear
|
|
439
|
+
4. Present **Replace** cases individually or in very small groups
|
|
440
|
+
5. Present **Delete** cases individually unless they are strong auto-delete candidates
|
|
364
441
|
|
|
365
442
|
Ask for confirmation in stages:
|
|
366
443
|
|
|
367
444
|
1. Confirm grouped Keep/Update recommendations
|
|
368
|
-
2. Then handle
|
|
369
|
-
3. Then handle
|
|
445
|
+
2. Then handle Consolidate groups (present the canonical doc and what gets merged)
|
|
446
|
+
3. Then handle Replace one at a time
|
|
447
|
+
4. Then handle Delete one at a time unless the deletion is unambiguous and safe to auto-apply
|
|
370
448
|
|
|
371
449
|
#### Broad Scope
|
|
372
450
|
|
|
@@ -407,6 +485,20 @@ Examples that should **not** be in-place updates:
|
|
|
407
485
|
|
|
408
486
|
Those cases require **Replace**, not Update.
|
|
409
487
|
|
|
488
|
+
### Consolidate Flow
|
|
489
|
+
|
|
490
|
+
The orchestrator handles consolidation directly (no subagent needed — the docs are already read and the merge is a focused edit). Process Consolidate candidates by topic cluster. For each cluster identified in Phase 1.75:
|
|
491
|
+
|
|
492
|
+
1. **Confirm the canonical doc** — the broader, more current, more accurate doc in the cluster.
|
|
493
|
+
2. **Extract unique content** from the subsumed doc(s) — anything the canonical doc does not already cover. This might be specific edge cases, additional prevention rules, or alternative debugging approaches.
|
|
494
|
+
3. **Merge unique content** into the canonical doc in a natural location. Do not just append — integrate it where it logically belongs. If the unique content is small (a bullet point, a sentence), inline it. If it is a substantial sub-topic, add it as a clearly labeled section.
|
|
495
|
+
4. **Update cross-references** — if any other docs reference the subsumed doc, update those references to point to the canonical doc.
|
|
496
|
+
5. **Delete the subsumed doc.** Do not archive it, do not add redirect metadata — just delete the file. Git history preserves it.
|
|
497
|
+
|
|
498
|
+
If a doc cluster has 3+ overlapping docs, process pairwise: consolidate the two most overlapping docs first, then evaluate whether the merged result should be consolidated with the next doc.
|
|
499
|
+
|
|
500
|
+
**Structural edits beyond merge:** Consolidate also covers the reverse case. If one doc has grown unwieldy and covers multiple distinct problems that would benefit from separate retrieval, it is valid to recommend splitting it. Only do this when the sub-topics are genuinely independent and a maintainer might search for one without needing the other.
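Steps 4 and 5 of the merge flow can be sketched in shell. This is a minimal illustration, not the skill's actual implementation — all doc paths and filenames are hypothetical, and the fixture setup exists only so the sketch runs as-is (GNU `sed`/`xargs` assumed):

```shell
# Hypothetical fixture: a canonical doc, a subsumed doc, and one doc
# that still cross-references the subsumed doc.
mkdir -p docs/solutions
printf '# Canonical learning\n' > docs/solutions/canonical-doc.md
printf '# Old learning\n' > docs/solutions/old-doc.md
printf 'See old-doc.md for the timeout fix.\n' > docs/solutions/other-doc.md

# Step 4: point remaining cross-references at the canonical doc.
grep -rl --include='*.md' 'old-doc\.md' docs/solutions/ \
  | grep -v 'docs/solutions/old-doc.md' \
  | xargs -r sed -i 's/old-doc\.md/canonical-doc.md/g'

# Step 5: delete the subsumed doc outright -- git history preserves it.
rm docs/solutions/old-doc.md
```

The point of the `grep -v` guard is that the subsumed doc itself should never be edited, only removed.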
### Replace Flow

Process Replace candidates **one at a time, sequentially**. Each replacement is written by a subagent to protect the main context window.

- A summary of the investigation evidence (what changed, what the current code does, why the old guidance is misleading)
- The target path and category (same category as the old learning unless the category itself changed)

2. The subagent writes the new learning following `ce:compound`'s document format: YAML frontmatter (title, category, date, module, component, tags), problem description, root cause, current solution with code examples, and prevention tips. It should use dedicated file search and read tools if it needs additional context beyond what was passed.
3. After the subagent completes, the orchestrator deletes the old learning file. The new learning's frontmatter may include `supersedes: [old learning filename]` for traceability, but this is optional — the git history and commit message provide the same information.
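A successor learning's frontmatter might look like the following sketch — every field value here is hypothetical, and `supersedes` is the optional traceability field described above:

```yaml
---
title: Handle webhook retries idempotently
category: reliability
date: 2026-01-15
module: billing
component: webhooks
tags: [idempotency, retries]
supersedes: 2024-03-02-webhook-double-charge.md  # optional; git history covers this too
---
```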
**When evidence is insufficient:**

2. Report what evidence was found and what is missing
3. Recommend the user run `ce:compound` after their next encounter with that area

### Delete Flow

Delete only when a learning is clearly obsolete, redundant (with no unique content to merge), or its problem domain is gone. Do not delete a document just because it is old — age alone is not a signal.

## Output Format

```text
Scanned: N learnings

Kept: X
Updated: Y
Consolidated: C
Replaced: Z
Deleted: W
Skipped: V
Marked stale: S
```

Then for EVERY file processed, list:

- The file path
- The classification (Keep/Update/Consolidate/Replace/Delete/Stale)
- What evidence was found -- tag any memory-sourced findings with "(auto memory [claude])" to distinguish them from codebase-sourced evidence
- What action was taken (or recommended)
- For Consolidate: which doc was canonical, what unique content was merged, what was deleted

For **Keep** outcomes, list them under a reviewed-without-edits section so the result is visible without creating git churn.

### Autofix mode report

In autofix mode, the report is the sole deliverable — there is no user present to ask follow-up questions, so the report must be self-contained and complete. **Print the full report. Do not abbreviate, summarize, or skip sections.**

Split actions into two sections:

**Applied** (writes that succeeded):

- For each **Updated** file: the file path, what references were fixed, and why
- For each **Consolidated** cluster: the canonical doc, what unique content was merged from each subsumed doc, and the subsumed docs that were deleted
- For each **Replaced** file: what the old learning recommended vs what the current code does, and the path to the new successor
- For each **Deleted** file: the file path and why it was removed (problem domain gone, fully redundant, etc.)
- For each **Marked stale** file: the file path, what evidence was found, and why it was ambiguous

**Recommended** (actions that could not be written — e.g., permission denied):

If all writes succeed, the Recommended section is empty. If no writes succeed (e.g., read-only invocation), all actions appear under Recommended — the report becomes a maintenance plan.

**Legacy cleanup** (if `docs/solutions/_archived/` exists):

- List archived files found and recommend disposition: restore (if still relevant), delete (if truly obsolete), or consolidate (if overlapping with active docs)

## Phase 5: Commit Changes

After all actions are executed and the report is generated, handle committing the changes. Skip this phase if no files were modified (all Keep, or all writes failed).

Before offering options, check:

2. Whether the working tree has other uncommitted changes beyond what compound-refresh modified
3. Recent commit messages to match the repo's commit style
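Checks 2 and 3 map to plain git commands, shown here alongside `git branch --show-current` (used later in this phase). A sketch run against a throwaway repo so it is self-contained — the seed commit is a hypothetical fixture:

```shell
# Throwaway repo so the sketch runs anywhere.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.name=refresh -c user.email=refresh@example.com \
  commit -q --allow-empty -m "docs: seed commit"

git -C "$repo" branch --show-current   # current branch
git -C "$repo" status --porcelain      # check 2: other uncommitted changes (empty here)
git -C "$repo" log --oneline -5        # check 3: recent commit style
```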
### Autofix mode

Use sensible defaults — no user to ask:

First, run `git branch --show-current` to determine the current branch.

### Commit message

Write a descriptive commit message that:

- Summarizes what was refreshed (e.g., "update 3 stale learnings, consolidate 2 overlapping docs, delete 1 obsolete doc")
- Follows the repo's existing commit conventions (check recent git log for style)
- Is succinct — the details are in the changed files themselves
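A hypothetical commit message that follows these rules — the counts and parenthetical reasons are illustrative only:

```text
docs: refresh solution learnings

- update 3 stale learnings (renamed modules, moved files)
- consolidate 2 overlapping webhook docs into one canonical doc
- delete 1 obsolete doc (feature removed)
```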
## Relationship to ce:compound

- `ce:compound` captures a newly solved, verified problem
- `ce:compound-refresh` maintains older learnings as the codebase evolves — both their individual accuracy and their collective design as a document set

Use **Replace** only when the refresh process has enough real evidence to write a trustworthy successor. When evidence is insufficient, mark as stale and recommend `ce:compound` for when the user next encounters that problem area.

Use **Consolidate** proactively when the document set has grown organically and redundancy has crept in. Every `ce:compound` invocation adds a new doc — over time, multiple docs may cover the same problem from slightly different angles. Periodic consolidation keeps the document set lean and authoritative.