@tgoodington/intuition 9.3.1 → 9.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@tgoodington/intuition",
-  "version": "9.3.1",
+  "version": "9.4.0",
   "description": "Domain-adaptive workflow system for Claude Code: prompt, outline, assemble specialist teams, detail with domain experts, build with format producers, test code output. Supports v8 compat (design, engineer, build) and v9 specialist workflows with 14 domain specialists and 6 format producers.",
   "keywords": [
     "claude-code",
@@ -48,7 +48,7 @@ Scan three tiers in priority order. Deduplicate by `name` — first found wins.
 2. Glob `~/.claude/specialists/*/*.specialist.md` (user-level, expand `~` via Bash)
 3. Determine the Intuition package root: run `node -e "console.log(require.resolve('@tgoodington/intuition/package.json'))"` via Bash, extract the directory. Glob `{package_root}/specialists/*/*.specialist.md`.
 
-For each profile found: read the YAML frontmatter
+For each profile found: read ONLY the YAML frontmatter using `Read` with `limit: 30` (frontmatter is typically under 25 lines). Extract `name` and `domain_tags`. Do NOT read the full profile body — the Stage 1/2 protocols are not needed for matching. Build a specialists list.
 
 If zero specialists found after all three tiers, HALT with this message:
 "No specialist profiles found. Install specialist profiles in one of these locations:
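The frontmatter-only read added in this hunk (a `Read` capped at 30 lines) amounts to a small parser. A minimal sketch, assuming profiles use standard `---`-delimited YAML frontmatter with flat `key: value` pairs; `read_frontmatter` is a hypothetical helper for illustration, not part of the package:

```python
def read_frontmatter(lines, limit=30):
    """Parse flat `key: value` pairs from a ----delimited frontmatter block,
    looking at no more than the first `limit` lines (mirrors Read limit: 30)."""
    fields = {}
    in_frontmatter = False
    for line in lines[:limit]:
        if line.strip() == "---":
            if in_frontmatter:
                break              # closing delimiter reached: never touch the body
            in_frontmatter = True
            continue
        if in_frontmatter and ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

Stopping at the closing `---` is what keeps the Stage 1/2 protocol text out of the matching step.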
@@ -58,7 +58,7 @@ If zero specialists found after all three tiers, HALT with this message:
 
 ### Step 3: Scan Producer Registry
 
-Same three-tier pattern using `producers/` directories and `*.producer.md` files. Extract `name` and `output_formats` from each. Deduplicate by name with same priority (first found wins).
+Same three-tier pattern using `producers/` directories and `*.producer.md` files. Read ONLY the YAML frontmatter using `Read` with `limit: 30`. Extract `name` and `output_formats` from each. Do NOT read the full profile body. Deduplicate by name with same priority (first found wins).
 
 If zero producers found, HALT with the same pattern message referencing producer directories.
 
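The first-found-wins deduplication that both registry scans rely on can be sketched as follows. `dedupe_by_name` is a hypothetical illustration, with tiers passed in the priority order the skill defines (project, then user, then framework):

```python
def dedupe_by_name(*tiers):
    """Merge profile lists tier by tier; the first occurrence of a `name` wins."""
    registry = {}
    for tier in tiers:  # callers pass tiers in priority order
        for profile in tier:
            registry.setdefault(profile["name"], profile)
    return list(registry.values())
```

Because `setdefault` never overwrites, a project-level profile shadows a framework-shipped one with the same name.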
@@ -128,7 +128,7 @@ If the outline has no format constraints and no Section 3 technology decisions a
 ### Step 5: Prerequisite Checking
 
 For each producer in `producer_assignments`:
-1. Read the
+1. Read the producer profile frontmatter from the registry (the `tooling` field is within the frontmatter, already read in Step 3)
 2. Check `tooling.{output_format}.required` array
 3. For each required tool, run Bash to verify availability (e.g., `python --version`, `which pandoc`)
 4. Record results in `prerequisite_check` (format: `"producer/format": "PASS — tool version found"` or `"FAIL — tool not found"`)
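The prerequisite check in this hunk (probe each required tool, record PASS or FAIL keyed by `producer/format`) could look like this in outline form. A sketch only: the skill itself runs Bash probes such as `which pandoc`, and `check_prerequisites` plus its input record shape are assumptions for illustration:

```python
import shutil

def check_prerequisites(assignments):
    """Return {'producer/format': 'PASS ...' or 'FAIL ...'} for each assignment.
    `required` stands in for the tooling.{output_format}.required array."""
    results = {}
    for a in assignments:
        key = f"{a['producer']}/{a['output_format']}"
        # shutil.which is the stdlib analogue of the `which <tool>` Bash probe
        missing = [t for t in a["required"] if shutil.which(t) is None]
        results[key] = ("PASS — all tools found" if not missing
                        else f"FAIL — tool not found: {', '.join(missing)}")
    return results
```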
@@ -149,7 +149,7 @@ For each task per `team_assignment.json` execution order (parallelize tasks with
 - Project: `.claude/producers/{producer-name}/{producer-name}.producer.md`
 - User: `~/.claude/producers/{producer-name}/{producer-name}.producer.md`
 - Framework-shipped: scan the `producers/` directory at the package root
-4. Construct the delegation prompt using the producer profile as system instructions
+4. Construct the delegation prompt using the producer profile as system instructions. Direct the subagent to READ the blueprint from disk (do NOT inject blueprint content into the prompt — this avoids duplicating large files in both parent and subagent contexts). Only include non-test output files in the delegation.
 5. Spawn the producer as a Task subagent using the model declared in the producer profile.
 
 **Producer delegation format:**
@@ -174,25 +174,27 @@ When building on a branch, add to subagent prompts:
 
 ## STEP 5: THREE-LAYER REVIEW CHAIN
 
-After
+After producers complete deliverables, execute all three review layers. **Batch deliverables from the same specialist** into a single review subagent (up to 3 deliverables per review — if a specialist has more than 3, split into multiple batches). This reduces subagent spawn overhead.
 
 ### Layer 1: Domain Specialist Review
 
 1. Identify the specialist that authored the blueprint (from blueprint YAML frontmatter `specialist` field).
-2.
-3.
-4. Spawn a review subagent with adversarial framing. Use the `reviewer_model` declared in the specialist profile's YAML frontmatter.
+2. Locate that specialist's profile path in the registry (same scan order as producers: project → user → framework).
+3. Spawn a review subagent with adversarial framing. Use the `reviewer_model` declared in the specialist profile's YAML frontmatter. If this specialist produced multiple deliverables, include ALL of them (up to 3) in a single review subagent.
 
 **Specialist review delegation format:**
 ```
-You are a [specialist display_name] reviewing
+You are a [specialist display_name] reviewing deliverables produced from your blueprint. Your job is to FIND PROBLEMS — not to approve.
 
-[
+Read your review protocol from: [specialist profile path] — find the ## Review Protocol section.
 
 Blueprint: Read {context_path}/blueprints/{specialist-name}.md
-
+Deliverables: Read each of these files:
+- [produced output file path 1]
+- [produced output file path 2]
+- ...
 
-
+For EACH deliverable: does it accurately capture what the blueprint specified? Are the domain-specific requirements met? Check every review criterion. Return per deliverable: PASS + summary OR FAIL + specific issues list with blueprint section references.
 ```
 
 - If FAIL → send feedback back to the producer (re-delegate with specific issues). Do NOT proceed to Layer 2.
@@ -222,14 +224,15 @@ Log all deviations (additions and omissions) in the build report's "Deviations f
 ### Layer 3: Mandatory Cross-Cutting Reviewers
 
 1. Check the specialist profile's `mandatory_reviewers` field in its YAML frontmatter.
-2. For EACH mandatory reviewer listed:
+2. For EACH mandatory reviewer listed: locate their specialist profile, spawn a review subagent using their `reviewer_model`.
 3. **Security Expert is ALWAYS mandatory** — even if `mandatory_reviewers` is empty. Spawn a Security Expert review for every deliverable that produces code, configuration, or scripts.
+4. **Batch cross-cutting reviews** the same way as Layer 1: include up to 3 deliverables per review subagent. If all code deliverables in the current execution phase share the same cross-cutting reviewer, batch them into one review call.
 
 **Cross-cutting review delegation format:**
 ```
 You are a [reviewer display_name] performing a cross-cutting review. Your job is to FIND PROBLEMS in your area of expertise.
 
-[
+Read your review protocol from: [reviewer profile path] — find the ## Review Protocol section.
 
 Deliverable: Read [produced output file paths]
 Blueprint: Read {context_path}/blueprints/{specialist-name}.md (for context only)
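The batching rule introduced in these review hunks (group by specialist, at most 3 deliverables per review subagent) has a precise shape. A hypothetical sketch; the `specialist`/`path` record layout is an assumption:

```python
def batch_deliverables(deliverables, batch_size=3):
    """Group deliverables by specialist, then split each group into
    batches of at most `batch_size`; one review subagent per batch."""
    by_specialist = {}
    for d in deliverables:
        by_specialist.setdefault(d["specialist"], []).append(d["path"])
    batches = []
    for specialist, paths in by_specialist.items():
        for i in range(0, len(paths), batch_size):
            batches.append((specialist, paths[i:i + batch_size]))
    return batches
```

Four deliverables from one specialist thus cost two review spawns (3 + 1) instead of four.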
@@ -348,7 +351,7 @@ Present a concise version: task count, pass/fail status, files produced count, r
 
 After reporting results:
 
-**8a. Extract to memory.**
+**8a. Extract to memory (inline).** Review the build report you just wrote. For any notable deviations or lessons learned, read `docs/project_notes/key_facts.md` and use Edit to append concise entries (2-3 lines each) if not already present. For any bugs found during review cycles, read `docs/project_notes/bugs.md` and append. Do NOT spawn a subagent — write directly.
 
 **8b. Determine next phase.** Read `{context_path}/team_assignment.json`. Check if any `producer_assignments` entry has `producer == "code-writer"`.
 
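The inline memory extraction added here (append an entry only "if not already present") is an idempotent append. A minimal sketch of that dedup rule, assuming entries are keyed by their first line; `append_if_absent` is illustrative, not part of the package:

```python
def append_if_absent(existing_text, entry):
    """Append `entry` to a notes file's text only when an equivalent entry
    is not already present (dedup keyed on the entry's first line)."""
    marker = entry.strip().splitlines()[0]
    if marker in existing_text:
        return existing_text          # already recorded, leave file untouched
    sep = "" if existing_text.endswith("\n") or not existing_text else "\n"
    return existing_text + sep + entry.rstrip() + "\n"
```

Running the same extraction twice (for example after a resumed handoff) then leaves the notes file unchanged.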
@@ -104,7 +104,7 @@ Ensure the `{context_path}/blueprints/` directory exists. After the subagent ret
 
 #### Stage 1a: Research Planning
 
-Spawn
+Spawn a sonnet Task subagent. The system prompt combines a research-planning framing (owned by this skill) with the specialist's domain expertise (from the profile):
 
 - **System prompt**: Construct by concatenating:
   1. **Framing (detail skill provides this):**
@@ -342,7 +342,9 @@ Spawn a FRESH opus Task subagent (do NOT resume Stage 1):
 - Full contents of `{context_path}/scratch/{specialist-name}-decisions.json`
 - Plan tasks with acceptance criteria
 - Prior blueprint contents (if any — read each path and include full text)
-- **Output instruction**: "Produce the complete blueprint in the universal envelope format (9 sections: Task Reference, Research Findings, Approach, Decisions Made, Deliverable Specification, Acceptance Mapping, Integration Points, Open Items, Producer Handoff). Write to `{context_path}/blueprints/{specialist-name}.md`. Every design choice must trace to Stage 1 research, a user decision from decisions.json, or a named domain standard. Ungrounded choices go in the Open Items section.
+- **Output instruction**: "Produce the complete blueprint in the universal envelope format (9 sections: Task Reference, Research Findings, Approach, Decisions Made, Deliverable Specification, Acceptance Mapping, Integration Points, Open Items, Producer Handoff). Write to `{context_path}/blueprints/{specialist-name}.md`. Every design choice must trace to Stage 1 research, a user decision from decisions.json, or a named domain standard. Ungrounded choices go in the Open Items section.
+
+  IMPORTANT — Testing boundary: Do NOT specify test files or test deliverables in Producer Handoff (Section 9). Testing is handled by a dedicated test phase, not by producers. If you have domain-specific testing knowledge (edge cases, critical paths, failure modes, boundary conditions), include it in the Approach section (Section 3) under a '### Testability Notes' subheading. This gives the test phase domain context without prescribing test files."
 
 Ensure the `{context_path}/blueprints/` directory exists (create via Bash `mkdir -p` if needed).
@@ -370,7 +372,9 @@ After a blueprint passes the traceability check:
 
 **8b. Update specialist state.** Read `.project-memory-state.json`. In `workflow.detail.specialists`, mark the completed specialist: `status → "completed"`, `stage → "done"`, `blueprint_path → "{context_path}/blueprints/{specialist-name}.md"`. Write back.
 
-**8c. Extract to memory.**
+**8c. Extract to memory (inline).** Read the just-written blueprint's Decisions Made section (Section 4). For each decision, read `docs/project_notes/decisions.md` and use Edit to append a new ADR entry if one doesn't already exist. For key domain facts from the blueprint's Research Findings (Section 2), read `docs/project_notes/key_facts.md` and append if not present. Keep entries concise (2-3 lines each). Do NOT spawn a subagent — write directly.
+
+**8c-ii. Extract testability notes.** If the blueprint's Approach section (Section 3) contains a `### Testability Notes` subheading, extract its contents and append to `{context_path}/test_advisory.md` (create if it doesn't exist). Format: `## {Specialist Display Name}\n{testability notes content}\n`. This gives the test phase a compact file instead of needing to read all blueprints.
 
 **8d. Check for next specialist.** Read `{context_path}/team_assignment.json`. Read current state.
 
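Step 8c-ii's extraction (pull the `### Testability Notes` subsection out of a blueprint's Approach section) could be sketched with a regex. This assumes standard Markdown headings in the blueprint and is not the package's actual implementation:

```python
import re

def extract_testability_notes(blueprint_text):
    """Return the body of a blueprint's `### Testability Notes` subsection
    (inside Section 3, Approach), or None if the blueprint has none."""
    match = re.search(
        r"^### Testability Notes\n(.*?)(?=^#{1,3} |\Z)",  # stop at next heading or EOF
        blueprint_text,
        re.MULTILINE | re.DOTALL,
    )
    return match.group(1).strip() if match else None
```

Each non-None result would be appended to `test_advisory.md` under a `## {Specialist Display Name}` header.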
@@ -97,6 +97,8 @@ From the prompt brief, extract: core problem, success criteria, stakeholders, co
 
 Create the directory `{context_path}/.outline_research/` if it does not exist.
 
+**Resume check:** If `{context_path}/.outline_research/orientation.md` already exists AND `{context_path}/.outline_research/decisions_log.md` exists with at least one entry, skip the research agents — read the existing orientation.md and proceed to Step 3. This avoids re-spending tokens on research that hasn't changed.
+
 Launch 2 sonnet research agents in parallel using the Task tool:
 
 **Agent 1 — Codebase Topology** (subagent_type: Explore, model: sonnet):
@@ -201,7 +203,7 @@ When actors are sufficiently mapped (user has confirmed or adjusted), transition
 Based on the scope revealed by the prompt brief and actors discussion, recommend a outline depth tier:
 
 - **Lightweight** (1-4 tasks): Focused scope, few unknowns. Outline includes: Objective, Discovery Summary, Task Sequence, Execution Notes.
-- **Standard** (5-10 tasks): Moderate complexity. Adds: Technology Decisions,
+- **Standard** (5-10 tasks): Moderate complexity. Adds: Technology Decisions, Risks & Mitigations.
 - **Comprehensive** (10+ tasks): Broad scope, multiple components. All sections including Component Architecture and Interface Contracts.
 
 Present your recommendation with reasoning via AskUserQuestion. Options: the three tiers (with your recommendation marked). The user may agree or pick a different tier.
@@ -354,7 +356,7 @@ After writing `outline.md`:
 
 **1. Update state:** Read `.project-memory-state.json`. Target the active context object (trunk or branch). Set: `status` → `"outline"`, `workflow.outline.completed` → `true`, `workflow.outline.completed_at` → current ISO timestamp, `workflow.outline.approved` → `true`. Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"outline_complete"`. Write back.
 
-**2. Extract to memory
+**2. Extract to memory (inline).** Read `{context_path}/.outline_research/decisions_log.md`. For each locked decision, read `docs/project_notes/decisions.md` and use Edit to append a new ADR entry if one doesn't already exist for that decision. For each risk identified during dialogue, read `docs/project_notes/issues.md` and use Edit to append if not already present. Keep entries concise (2-3 lines each). Do NOT spawn a subagent for this — write directly.
 
 **3. Fast Track Assessment (v9 only):**
 
@@ -405,8 +407,8 @@ If fast track declined OR conditions not met, continue to step 4.
 ## Scope Scaling
 
 - **Lightweight**: Sections 1, 2, 6, 6.5, 10
-- **Standard**: Sections 1, 2, 3, 6, 6.5,
-- **Comprehensive**: All sections (1-
+- **Standard**: Sections 1, 2, 3, 6, 6.5, 8, 10
+- **Comprehensive**: All sections (1-6.5, 8-10)
 
 Section 6.5 (Detail Assessment) is ALWAYS included regardless of tier.
 Section 2.5 is Parent Context — included for ALL tiers when on a branch.
@@ -482,8 +484,7 @@ Depth controls specialist invocation:
 
 **Acceptance criteria rule:** If a criterion can only be satisfied ONE way, it is over-specified. Criteria describe outcomes ("users can reset passwords via email"), not implementations ("add a resetPassword() method that calls sendEmail()"). The engineer and build phases decide the code-level HOW.
 
-
-Test types required. Which tasks need tests (reference task numbers). Critical test scenarios. Infrastructure needed.
+**No test tasks.** Do NOT create tasks for writing tests (e.g., "Write unit tests for the API layer"). Testing is a dedicated phase (`/intuition-test`), not a task. The test phase discovers infrastructure, designs strategy, and creates tests independently. Outline tasks describe what gets built — verification is the test phase's job.
 
 ### 8. Risks & Mitigations (Standard+)
 
@@ -24,6 +24,7 @@ These are non-negotiable. Violating any of these means the protocol has failed.
 8. You MUST write `{context_path}/test_report.md` before routing to handoff.
 9. You MUST run the Exit Protocol after writing the test report. NEVER route to `/intuition-handoff`.
 10. You MUST update `.project-memory-state.json` as part of the Exit Protocol.
+11. You MUST NOT use `run_in_background` for subagents in Steps 2 and 5. All research and test-creation agents MUST complete before their next step begins.
 
 ## CONTEXT PATH RESOLUTION
 
@@ -63,11 +64,11 @@ Check for existing artifacts before starting. Use `{context_path}/scratch/test_s
 Read these files:
 
 1. `{context_path}/build_report.md` — REQUIRED. Extract: files modified, task results, deviations from blueprints, decision compliance notes.
-
-
-
-
-
+2. `{context_path}/outline.md` — acceptance criteria per task.
+3. `{context_path}/test_advisory.md` — compact testability notes extracted by the detail phase (one section per specialist). Read this INSTEAD of all blueprints. If this file does not exist (older workflows), fall back to reading `{context_path}/blueprints/*.md` and extracting Testability Notes from each Approach section.
+4. `{context_path}/team_assignment.json` — producer assignments (identify code-writer tasks).
+5. ALL files matching `{context_path}/scratch/*-decisions.json` — decision tiers and chosen options per specialist.
+6. `docs/project_notes/decisions.md` — project-level ADRs.
 
 From build_report.md, extract:
 - **Files modified** — the scope boundary for testing and fixes
@@ -76,10 +77,9 @@ From build_report.md, extract:
 - **Decision compliance** — any flagged decision issues
 - **Test Deliverables Deferred** — test specs/files that specialists recommended but build skipped (if this section exists)
 
-From blueprints, extract
--
--
-- Test-related deliverables from Producer Handoff sections
+From test_advisory.md (or blueprints as fallback), extract domain test knowledge:
+- Edge cases, critical paths, failure modes, and boundary conditions flagged by specialists
+- Any test-relevant domain insights
 
 From decisions files, build a decision index:
 - Map each `[USER]` decision to its chosen option
@@ -88,7 +88,7 @@ From decisions files, build a decision index:
 
 ## STEP 2: RESEARCH (2 Parallel Haiku Explore Agents)
 
-Spawn two haiku Explore agents in parallel (both Task calls in a single response):
+Spawn two haiku Explore agents in parallel (both Task calls in a single response). Do NOT use `run_in_background` — you MUST wait for both agents to return before proceeding to Step 3:
 
 **Agent 1 — Test Infrastructure:**
 "Search the project for test infrastructure. Find: test framework and runner (jest, vitest, mocha, pytest, etc.), test configuration files, existing test directories and naming conventions, mock/fixture patterns, test utility helpers, CI test commands, coverage configuration and thresholds. Report exact paths and configuration values."
@@ -157,11 +157,11 @@ Tests that only exercise isolated helper functions satisfy unit coverage but do
 
 ### Specialist Test Recommendations
 
-Before finalizing the test plan, review specialist
-- **
-- **Deferred test deliverables**:
+Before finalizing the test plan, review specialist domain knowledge from blueprints:
+- **Testability Notes**: Edge cases, critical paths, failure modes, and boundary conditions from each blueprint's Approach section (Section 3, `### Testability Notes` subheading)
+- **Deferred test deliverables**: Any test specs from build_report.md's "Test Deliverables Deferred" section (legacy — older blueprints may still include test files in Producer Handoff)
 
-Specialists have domain expertise about what should be tested. Incorporate
+Specialists have domain expertise about what should be tested. Incorporate their testability insights into your test plan, but you own the test strategy — use specialist input as advisory, not prescriptive.
 
 ### Output
 
@@ -203,7 +203,7 @@ Options:
 
 ## STEP 5: CREATE TESTS
 
-Delegate test creation to sonnet Task subagents. Parallelize independent test files (multiple Task calls in a single response).
+Delegate test creation to sonnet Task subagents. Parallelize independent test files (multiple Task calls in a single response). Do NOT use `run_in_background` — you MUST wait for ALL subagents to return before proceeding to Step 6.
 
 For each test file, spawn a sonnet subagent:
 
@@ -224,7 +224,7 @@ You are a test writer. Create a test file following these specifications exactly
 Write the complete test file to the specified path. Follow the project's existing test style exactly. Do NOT add test infrastructure (no new packages, no config changes).
 ```
 
-After all subagents return, verify each test file
+SYNCHRONIZATION GATE: After all subagents return, verify each test file exists on disk using Glob. If any file is missing, retry that subagent once (foreground) with error context. Do NOT proceed to Step 6 until every planned test file is confirmed on disk.
 
 ## STEP 6: RUN TESTS + FIX CYCLE
 
@@ -327,7 +327,7 @@ Write `{context_path}/test_report.md`:
 
 ## STEP 8: EXIT PROTOCOL
 
-**8a. Extract to memory.**
+**8a. Extract to memory (inline).** Review the test report you just wrote. For test coverage insights, read `docs/project_notes/key_facts.md` and use Edit to append concise entries (2-3 lines each) if not already present. For implementation fixes applied, read `docs/project_notes/bugs.md` and append. For escalated issues, read `docs/project_notes/issues.md` and append. Do NOT spawn a subagent — write directly.
 
 **8b. Update state.** Read `.project-memory-state.json`. Target active context. Set: `status` → `"complete"`, `workflow.test.completed` → `true`, `workflow.test.completed_at` → current ISO timestamp, `workflow.build.completed` → `true`, `workflow.build.completed_at` → current ISO timestamp (if not already set). Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"test_to_complete"`. Write back.
 