cc-workspace 4.6.1 → 4.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -208,28 +208,6 @@ The orchestrator (Opus) never touches repo code. It clarifies the need,
 writes a plan in markdown, then sends teammates (Sonnet) to work in
 parallel in each repo via Agent Teams.
 
-### Architecture
-
-```
-                      orchestrator/
-                 ┌─────────────────────┐
- You ◄──────────►│  Team Lead (Opus)   │
- clarify, plan,  │  writes plans/*.md  │
- review          └────────┬────────────┘
-                          │ spawn
-          ┌───────────────┼───────────────┐
-          │               │               │
-          ▼               ▼               ▼
- ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
- │ Implementer  │ │ Implementer  │ │ QA Ruthless  │
- │  (Sonnet)    │ │  (Sonnet)    │ │  (Sonnet)    │
- └──────┬───────┘ └──────┬───────┘ └──────────────┘
-        │ commit         │ commit
-        ▼                ▼               Explorer
- /tmp/repo-api    /tmp/repo-front        (Haiku)
- session/feat     session/feat          read-only
-```
-
 ### Who does what
 
 | Role | Model | What it does |
@@ -237,7 +215,7 @@ parallel in each repo via Agent Teams.
 | **Orchestrator** | Opus 4.6 | Clarifies, plans, delegates, verifies. Writes in orchestrator/ only. |
 | **Init** | Sonnet 4.6 | Diagnostic + interactive workspace configuration. Run once. |
 | **Teammates** | Sonnet 4.6 | Implement in an isolated worktree, test, commit. |
-| **Explorers** | Haiku | Read-only. Scan, verify consistency. |
+| **Data extractors** | Haiku | Read-only. Collect raw data (types, configs, logs). Never judge or conclude. |
 | **QA** | Sonnet 4.6 | Hostile mode. Min 3 problems found per service. |
 | **E2E Validator** | Sonnet 4.6 | Containers + Chrome browser testing (beta). |
 
@@ -253,49 +231,15 @@ parallel in each repo via Agent Teams.
 ### The dispatch-feature workflow (Mode A)
 
 ```
-User describes feature
-
-
-┌─── Phase 0: CLARIFY ────┐
-│ Ask max 5 questions     │
-│ if ambiguity            │
-└────────┬────────────────┘
-
-┌─── Phase 1-2: PLAN ─────┐
-│ Load context            │
-│ Write plan in ./plans/  │
-│ Commit units + contract │
-│ Wait for approval ◄─┐   │
-│        │ No ──┘         │
-└────┼────────────────────┘
-     │ Yes
-
-┌─── Phase 2.5: SESSION ──┐
-│ git branch session/name │
-│ in each impacted repo   │
-└────────┬────────────────┘
-
-┌─── Phase 3: DISPATCH ──────────────────────────┐
-│                                                │
-│ Wave 1: Producers (API, data, auth)            │
-│ ├── Implementer → Commit 1/3                   │
-│ ├── Implementer → Commit 2/3                   │
-│ └── Implementer → Commit 3/3                   │
-│        │ contracts validated                   │
-│        ▼                                       │
-│ Wave 2: Consumers (frontend, integrations)     │
-│        │                                       │
-│        ▼                                       │
-│ Wave 3: Infra (gateway, config)                │
-│                                                │
-└────────┬───────────────────────────────────────┘
-
-┌─── Phase 4-5: VERIFY ───┐
-│ cross-service-check     │
-│ + qa-ruthless           │
-└────────┬────────────────┘
-
-Final summary + propose fixes
+CLARIFY -> ask max 5 questions if ambiguity
+PLAN    -> write the plan in ./plans/, wait for approval
+SESSION -> create session branches in impacted repos (Phase 2.5)
+SPAWN   -> Wave 1: API/data in parallel
+           Wave 2: frontend with validated API contract
+           Wave 3: infra/config if applicable
+COLLECT -> update the plan with results
+VERIFY  -> cross-service-check + qa-ruthless
+REPORT  -> final summary
 ```
 
 ### Security — path-aware writes
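The wave ordering in the new workflow text (producers first, then consumers, then infra) amounts to staged parallel dispatch: everything within a wave runs concurrently, but a wave only starts once the previous one finishes. A minimal sketch in Python; `implement` is a hypothetical stand-in for dispatching a Sonnet teammate, not the actual Teammate tool:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for dispatching one Sonnet teammate on a task.
def implement(task: str) -> str:
    return f"committed: {task}"

# Waves run sequentially; tasks within a wave run in parallel.
waves = [
    ["api", "data", "auth"],        # Wave 1: producers
    ["frontend", "integrations"],   # Wave 2: consumers, after contracts validate
    ["gateway", "config"],          # Wave 3: infra
]
results = []
for wave in waves:
    with ThreadPoolExecutor() as pool:
        # pool.map preserves input order, so results stay wave-ordered
        results.extend(pool.map(implement, wave))
```

The key design point is the barrier between waves: Wave 2 never starts until every Wave 1 commit has landed, which is what lets the frontend build against a validated API contract.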
@@ -321,7 +265,7 @@ Protection layers:
 | **incident-debug** | Multi-layer diagnostic | "Bug", "500", "not working" |
 | **plan-review** | Plan sanity check (Haiku) | "Review plan" |
 | **merge-prep** | Conflicts, PRs, merge order | "Merge", "PR" |
-| **cycle-retrospective** | Post-cycle learning (Haiku) | "Retro", "retrospective" |
+| **cycle-retrospective** | Post-cycle learning (Opus + Haiku gatherers) | "Retro", "retrospective" |
 | **refresh-profiles** | Re-scan repo CLAUDE.md files (Haiku) | "Refresh profiles" |
 | **bootstrap-repo** | Generate a CLAUDE.md (Haiku) | "Bootstrap", "init CLAUDE.md" |
 | **e2e-validator** | E2E validation: containers + Chrome (beta) | `claude --agent e2e-validator` |
@@ -574,6 +518,17 @@ With `--chrome`, the agent:
 
 ---
 
+## Changelog v4.6.2 -> v4.7.0
+
+| # | Feature | Detail |
+|---|---------|--------|
+| 1 | **Gather → Reason pattern** | `cross-service-check`, `incident-debug`, and `cycle-retrospective` now use a two-phase approach: Haiku subagents extract raw data (types, configs, logs, code snippets), then Opus performs all reasoning, comparison, and judgment. Previously Haiku did both, producing shallow analysis. |
+| 2 | **cycle-retrospective upgraded to Opus** | Was `model: haiku` (entire skill ran on Haiku). Now inherits session model (Opus). Haiku still gathers data, but pattern analysis and improvement suggestions are Opus-quality. |
+| 3 | **Data extractors replace investigators** | `incident-debug` no longer uses a full Sonnet teammate for API investigation. All layers use Haiku data collectors, and Opus correlates the evidence — better reasoning at lower cost. |
+| 4 | **Model routing docs updated** | `rules/model-routing.md` now documents the Gather → Reason pattern and when to apply it. |
+
+---
+
 ## Changelog v4.5.1 -> v4.6.0
 
 | # | Feature | Detail |
@@ -583,7 +538,6 @@ With `--chrome`, the agent:
 | 3 | **LSP fallback documented** | `qa-ruthless` and `incident-debug` now include explicit Grep+Glob fallback when LSP tool is unavailable. |
 | 4 | **`cc-workspace uninstall`** | New CLI command to cleanly remove all global components from `~/.claude/`. Interactive confirmation. Local orchestrator/ preserved. |
 | 5 | **workspace-init fixes** | Removed hardcoded version ("v4.0" → dynamic). Fixed skills count in diagnostic (9 → 13). |
-| 6 | **ASCII diagrams in README** | Architecture overview and dispatch workflow now have visual diagrams (ASCII art, compatible with GitHub and npm). |
 
 ---
 
@@ -26,29 +26,47 @@ Scope: ONLY inter-service alignment. Not code quality, not bugs.
 - Use `git -C ../[repo] show session/{name}:[file]` to read files from the
   session branch without checking it out
 
-## Checks (parallel Explore subagents via Task, Haiku)
+## Phase 1 — Gather (Haiku data extractors)
 
-Only run checks for services that exist in the workspace.
-Spawn lightweight Explore subagents (Task tool, model: haiku) in parallel.
-Use `background: true` so the orchestrator can continue interacting while scans run.
+Only run extractors for services that exist in the workspace.
+Spawn parallel Explore subagents (Task tool, model: haiku) using `background: true`.
+Each extractor returns RAW DATA ONLY: no judgment, no ✅/❌, no comparisons.
 
-### API ↔ Frontend contract
-Compare API Resource response shapes with TypeScript interfaces.
-Report ONLY mismatches: field names, types, missing fields, route names.
+Include this instruction in every extractor prompt: "Return RAW DATA ONLY. Do NOT judge, compare, or produce conclusions. No ✅/❌. Just structured lists of what you found." Format: structured markdown with clear headings and tables.
 
-### Environment variables
-Cross-check env vars between all repos. Grep for env access patterns.
-Compare with .env.example files. Report: used but not declared, declared but never used.
+### API extractor
+Extract all API endpoint response shapes from backend code.
+Return: route, method, response fields and types. One row per field.
 
-### Gateway ↔ API (if gateway exists)
-Compare gateway config routes with actual API routes.
-Report: dead gateway routes, missing routes for new endpoints.
+### Frontend extractor
+Extract all TypeScript interfaces/types that represent API responses.
+Return: interface name, fields and types, which endpoint they map to (if determinable).
 
-### Data layer (if data service exists)
-Compare data schemas with application code. Report: column/type mismatches, missing schema updates.
+### Env extractor
+Extract env var declarations (.env.example) and usages (grep for process.env / import.meta.env / getenv / os.Getenv / etc.) from ALL repos.
+Return: declared vars per repo (from .env.example), used vars per repo (from code grep).
 
-### Auth (if auth service was changed)
-Compare auth config (client IDs, redirect URIs, scopes) between services. Report inconsistencies.
+### Gateway extractor (if gateway exists)
+Extract all gateway route configs.
+Return: path, upstream, method for each route.
+
+### Data extractor (if data service exists)
+Extract schema definitions (table name, columns, types) and application model definitions.
+Return: raw schema table and raw model fields side by side.
+
+### Auth extractor (if auth service was changed)
+Extract auth configs (client IDs, redirect URIs, scopes) from each service.
+Return: raw config values per service.
+
+## Phase 2 — Reason (this skill, running as Opus)
+
+After all extractors return, YOU compare the datasets side by side and produce the final judgments:
+
+- **API shapes vs Frontend interfaces**: field mismatches, type mismatches, missing fields
+- **Env declarations vs env usages**: used-but-undeclared, declared-but-unused (per repo)
+- **Gateway routes vs API routes**: dead routes, missing routes for new endpoints
+- **Data schemas vs application models**: column/type drift
+- **Auth configs across services**: inconsistencies between client configs
 
 ## Output
 
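The Env extractor's declared-vs-used split reduces to two regex passes: keys in `.env.example` on one side, access patterns in code on the other. A minimal runnable sketch against a throwaway fixture; the file contents and variable names are invented for illustration:

```python
import pathlib
import re
import tempfile

# Hypothetical single-repo fixture; real extractors would walk each repo.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / ".env.example").write_text("API_URL=\nUNUSED_VAR=\n")
(tmp / "app.ts").write_text(
    "const u = process.env.API_URL;\nconst k = process.env.MISSING_KEY;\n"
)

# Declared vars: keys listed in .env.example
declared = set(
    re.findall(r"^([A-Z][A-Z0-9_]*)=", (tmp / ".env.example").read_text(), re.M)
)
# Used vars: process.env access patterns found in code
used = set()
for f in tmp.glob("**/*.ts"):
    used |= set(re.findall(r"process\.env\.([A-Z][A-Z0-9_]*)", f.read_text()))

# Raw data only, per the skill: the used-but-undeclared /
# declared-but-unused comparison belongs to the Opus reasoning phase.
print("declared:", sorted(declared))
print("used:", sorted(used))
```

Note that the sketch deliberately stops at printing the two sets: computing the difference would be a judgment, which Phase 2 reserves for the reasoning model.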
@@ -8,9 +8,6 @@ description: >
   "capitalize", "lessons learned", "what did we learn", "improve docs".
 argument-hint: "[feature-name]"
 context: fork
-agent: general-purpose
-disable-model-invocation: true
-model: haiku
 allowed-tools: Read, Write, Glob, Grep, Task
 ---
 
@@ -28,25 +25,29 @@ and propagate improvements to project documentation.
 - Cross-service check results
 - Session log (blockers, escalations, re-dispatches)
 
-## Phase 2: Analyze patterns
+## Phase 2: Extract data (Haiku collectors)
 
-Spawn parallel Explore subagents (Task, Haiku) to categorize findings:
+Spawn parallel Explore subagents (Task, model: haiku) to extract and structure raw data from the plan. Each collector is told: "Extract and structure the data. Do NOT analyze patterns or suggest improvements."
 
-### Recurring QA findings
-- Group QA findings by category (🔴 bugs, 🟡 smells, 🟠 dead code, 🔵 missing tests, 🟣 UX violations)
-- Identify patterns: same type of issue across multiple cycles?
-- Flag findings that could have been prevented by a rule or convention
+### QA data extractor
+Parse the QA report section from the plan.
+Return: raw list of findings with category (🔴/🟡/🟠/🔵/🟣), file reference, and description. No analysis.
 
-### Teammate friction
-- Parse session log for escalations — what decisions weren't covered by the plan?
-- Parse for re-dispatches — what went wrong on the first attempt?
-- Parse for idle time — were tasks too large or poorly scoped?
+### Session log extractor
+Parse the session log section from the plan.
+Return: raw list of escalations, re-dispatches, and blockers with timestamps or phase references. No analysis.
 
-### Cross-service gaps
-- Were there contract mismatches? Missing env vars? Schema drift?
-- Could these have been caught earlier by a convention?
+### Cross-service data extractor
+Parse the cross-service check results section from the plan.
+Return: raw list of checks with status and details. No analysis.
 
-## Phase 3: Generate improvements
+## Phase 3: Analyze patterns (Opus reasoning)
+
+After all collectors return, YOU (the skill) analyze the raw data. This is where the reasoning model (Opus) works:
+- Group findings by category and identify recurring patterns across cycles
+- Correlate escalations with plan quality — what was missing from the plan?
+- Identify what could have been prevented by a rule or convention
+- Generate concrete improvement suggestions for each finding category
 
 For each finding category, generate concrete improvement suggestions:
 
@@ -30,32 +30,38 @@ Parse the user's report for signals:
 
 If unclear, investigate all layers.
 
-## Phase 2: Investigate (parallel)
+## Phase 2: Collect evidence (parallel Haiku extractors)
 
-Spawn investigators via Agent Teams (Teammate tool):
-- **API/Backend**: full Sonnet teammate with write-capable investigation. Use the **LSP tool** (go-to-definition, find-references) to trace error call chains. If LSP is unavailable, fall back to Grep + Glob for tracing references manually.
-- **Frontend, Gateway, Infra, Auth**: lightweight Explore subagents (Task, Haiku) for read-only scan. Use LSP tool where available for tracing, or Grep + Glob as fallback.
+Spawn parallel Explore subagents (Task, model: haiku) per layer. Each one collects raw evidence — code snippets, log entries, config values, error messages — and returns them WITHOUT diagnosis.
 
-Multiple teammates can share findings and challenge each other's hypotheses.
-This adversarial pattern finds root causes faster than sequential investigation.
+Include this instruction in every collector prompt: "Collect RAW EVIDENCE only. Code snippets with file:line references. Do NOT diagnose or hypothesize. Do NOT say 'the problem is X'. Just return what you found."
 
-### LSP investigation patterns
+Use these patterns to guide collectors on WHERE to look (not to diagnose):
 
-Instruct investigators to use these specific LSP workflows:
+| Signal | Where to collect |
+|--------|-----------------|
+| HTTP 500 in controller | Use `go-to-definition` (or Grep+Glob fallback) to find controller method → service layer → repository/query. Return code snippets with file:line. |
+| Type mismatch frontend | Use `find-references` on the TypeScript interface → collect all usage sites. Return raw code. |
+| Auth loop / 401 | Find auth middleware and token validation logic. Return raw config and code snippets. |
+| N+1 query suspicion | Find the relationship method and all callers. Return raw code at each call site. |
+| Dead import / unused function | Use `find-references` → return reference count and locations. |
+| Unknown error class | Find the exception class definition and all catch blocks. Return raw code. |
 
-| Signal | LSP action |
-|--------|-----------|
-| HTTP 500 in controller | `go-to-definition` on the controller method → trace into service layer → trace into repository/query |
-| Type mismatch frontend | `find-references` on the TypeScript interface → verify all usages match the API shape |
-| Auth loop / 401 | `hover` on auth middleware → verify configuration → `find-references` on token validation |
-| N+1 query suspicion | `find-references` on the relationship method → check all callers for eager loading |
-| Dead import / unused function | `find-references` → if 0 references outside tests, flag as dead code |
-| Unknown error class | `go-to-definition` on the exception class → check parent hierarchy and catch blocks |
+### Per-layer collector prompts
+
+- **API/Backend collector**: Find error handlers, relevant controller/service code, recent log patterns, middleware chain. Use Grep+Glob to trace the call chain from the entry point. Return: relevant code snippets with file:line, error handling logic, middleware order.
+- **Frontend collector**: Find component rendering logic, API call code, error boundaries, console error patterns. Return: component tree around the error, API call implementations, error handling code.
+- **Gateway collector** (if applicable): Extract route config, upstream definitions, timeout settings. Return raw config.
+- **Auth collector** (if applicable): Extract token validation logic, middleware config, redirect URIs. Return raw code snippets.
+- **Infra collector** (if applicable): Extract container configs, health checks, resource limits, recent deployment changes. Return raw config.
 
-## Phase 3: Correlate
+## Phase 3: Diagnose (Opus reasoning)
 
-Build request flow timeline with ✅/❌ markers.
-Cross-reference findings between layers.
+After all collectors return, YOU (the skill) correlate the evidence:
+- Build the request flow timeline from the collected code and config
+- Cross-reference findings between layers to identify where the chain breaks
+- Identify the root cause based on the evidence — this is deep reasoning, not pattern matching
+- The model running this skill (Opus) is the one that diagnoses; collectors never diagnose
 
 ## Phase 4: Write diagnosis
 
@@ -21,8 +21,24 @@ If you write code for a repo (not a markdown plan), you have failed — delegate
 | Orchestrator | **Opus 4.6** | `claude --agent team-lead` (frontmatter `model: opus`) |
 | Implementation teammates | **Sonnet 4.6** | `CLAUDE_CODE_SUBAGENT_MODEL=sonnet` |
 | QA investigators | **Sonnet 4.6** | Same |
-| Explorers / cross-checks | **Haiku** | `model: haiku` in skill/agent frontmatter |
-| Plan review | **Haiku** | `model: haiku` in skill frontmatter |
+| Data extractors / explorers | **Haiku** | Task subagents with `model: haiku`; raw data only, no reasoning |
+| Gatherers (cross-check, debug, retro) | **Haiku** | Task subagents; raw data extraction only |
+
+## Gather → Reason pattern
+
+Skills that need both data collection and analysis use a two-phase approach:
+
+1. **Gather (Haiku)** — Spawn parallel Explore subagents (Task, model: haiku) that extract
+   raw data: code snippets, type definitions, config values, log entries. They return
+   structured facts. They do NOT judge, compare, or conclude.
+
+2. **Reason (Opus)** — The skill itself (running as Opus via `context: fork`) receives
+   the raw data and performs all analysis: comparison, correlation, judgment, diagnosis,
+   and report writing.
+
+This pattern applies to: `cross-service-check`, `incident-debug`, `cycle-retrospective`.
+It does NOT apply to: `qa-ruthless` (QA investigators are Sonnet — they need to run tests
+and reason about code quality) or `plan-review` (a structural checklist, where Haiku is sufficient).
 
 ## Custom agent `implementer`
 For Task subagents that need to write code in an isolated worktree,
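The Gather → Reason pattern documented in this hunk is essentially a parallel map followed by a single reduce. A minimal sketch; `spawn_haiku_extractor` and `opus_reason` are hypothetical stand-ins for the Task tool and the skill's own model, not real APIs:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a Task subagent running on Haiku.
def spawn_haiku_extractor(prompt: str) -> dict:
    # Phase 1: return structured raw facts, never judgments
    return {"extractor": prompt, "facts": ["fact-1", "fact-2"]}

# Hypothetical stand-in for the skill itself, running on Opus.
def opus_reason(datasets: list) -> str:
    # Phase 2: all comparison, correlation, and judgment happens here
    total = sum(len(d["facts"]) for d in datasets)
    return f"diagnosis from {total} raw facts"

prompts = ["API extractor", "Frontend extractor", "Env extractor"]
with ThreadPoolExecutor() as pool:  # extractors run in parallel
    datasets = list(pool.map(spawn_haiku_extractor, prompts))
report = opus_reason(datasets)
```

The split keeps the cheap model on embarrassingly parallel extraction while the expensive model sees every dataset at once, which is what enables cross-layer correlation.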
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "cc-workspace",
-  "version": "4.6.1",
+  "version": "4.7.0",
   "description": "Claude Code multi-workspace orchestrator — skills, hooks, agents, and templates for multi-service projects",
   "bin": {
     "cc-workspace": "./bin/cli.js"