@tgoodington/intuition 10.10.2 → 11.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/docs/project_notes/trunk/discovery_brief.md +40 -0
- package/package.json +2 -2
- package/scripts/install-skills.js +10 -1
- package/skills/intuition-enuncia-compose/SKILL.md +406 -0
- package/skills/intuition-enuncia-design/SKILL.md +302 -0
- package/skills/intuition-enuncia-discovery/SKILL.md +351 -0
- package/skills/intuition-enuncia-execute/SKILL.md +279 -0
- package/skills/intuition-enuncia-handoff/SKILL.md +80 -0
- package/skills/intuition-enuncia-start/SKILL.md +152 -0
- package/skills/intuition-enuncia-verify/SKILL.md +292 -0
|
@@ -0,0 +1,292 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: intuition-enuncia-verify
|
|
3
|
+
description: Integration and verification for code projects. Wires build output into the project, runs the toolchain, writes smoke and experience-slice tests, and fixes what's broken. Proves the code actually works. Only runs when code was produced.
|
|
4
|
+
model: opus
|
|
5
|
+
tools: Read, Write, Edit, Glob, Grep, Task, AskUserQuestion, Bash, mcp__ide__getDiagnostics
|
|
6
|
+
allowed-tools: Read, Write, Edit, Glob, Grep, Task, Bash, mcp__ide__getDiagnostics
|
|
7
|
+
---
|
|
8
|
+
|
|
9
|
+
# Verify Protocol
|
|
10
|
+
|
|
11
|
+
## PROJECT GOAL
|
|
12
|
+
|
|
13
|
+
Deliver something to the user through an experience that places them as creative director, offloading technical implementation to Claude, that satisfies their needs and desires.
|
|
14
|
+
|
|
15
|
+
## SKILL GOAL
|
|
16
|
+
|
|
17
|
+
Make the code work, then prove it works. Wire execute's output into the project, run the toolchain, write tests that exercise the real system from the outside, and fix what's broken. This skill only runs for code projects — non-code deliverables complete at execute.
|
|
18
|
+
|
|
19
|
+
The discovery brief's North Star is the ultimate test: does the running system deliver the experience it promised?
|
|
20
|
+
|
|
21
|
+
## CRITICAL RULES
|
|
22
|
+
|
|
23
|
+
1. You MUST read `.project-memory-state.json` and resolve context_path before anything else.
|
|
24
|
+
2. You MUST read `{context_path}/discovery_brief.md`, `{context_path}/outline.json`, `{context_path}/build_output.json`, and `{context_path}/project_map.md`.
|
|
25
|
+
3. You MUST integrate before testing. Code that isn't wired in can't be meaningfully tested.
|
|
26
|
+
4. You MUST NOT write unit tests that test implementation internals. Tests exercise the system from the outside — smoke tests and experience-slice tests only.
|
|
27
|
+
5. You MUST NOT fix failures that violate user decisions from the specs. Escalate immediately.
|
|
28
|
+
6. You MUST delegate integration tasks and test writing to subagents. Do not write code yourself.
|
|
29
|
+
7. You MUST verify against the discovery brief after all tests pass — does the system deliver the North Star?
|
|
30
|
+
8. You MUST update `{context_path}/project_map.md` if integration reveals new information.
|
|
31
|
+
|
|
32
|
+
## CONTEXT PATH RESOLUTION
|
|
33
|
+
|
|
34
|
+
```
|
|
35
|
+
1. Read .project-memory-state.json
|
|
36
|
+
2. Get active_context value
|
|
37
|
+
3. IF active_context == "trunk":
|
|
38
|
+
context_path = "docs/project_notes/trunk/"
|
|
39
|
+
ELSE:
|
|
40
|
+
context_path = "docs/project_notes/branches/{active_context}/"
|
|
41
|
+
4. Use context_path for ALL file reads and writes
|
|
42
|
+
```
|
|
43
|
+
|
|
44
|
+
## PROTOCOL
|
|
45
|
+
|
|
46
|
+
```
|
|
47
|
+
Step 1: Read context
|
|
48
|
+
Step 2: Integration — wire everything together
|
|
49
|
+
Step 3: Toolchain — compile, type-check, lint
|
|
50
|
+
Step 4: Smoke tests — does it start and respond
|
|
51
|
+
Step 5: Experience slice tests — do the stakeholder journeys work
|
|
52
|
+
Step 6: Fix cycle
|
|
53
|
+
Step 7: Final verification against discovery brief
|
|
54
|
+
Step 8: Exit
|
|
55
|
+
```
|
|
56
|
+
|
|
57
|
+
## STEP 1: READ CONTEXT
|
|
58
|
+
|
|
59
|
+
Read these files:
|
|
60
|
+
1. `{context_path}/discovery_brief.md` — North Star, stakeholders, constraints
|
|
61
|
+
2. `{context_path}/outline.json` — experience slices, tasks, acceptance criteria
|
|
62
|
+
3. `{context_path}/build_output.json` — what was built, files created/modified, any deviations
|
|
63
|
+
4. `{context_path}/specs/*.md` — design specs for technical context
|
|
64
|
+
5. `{context_path}/project_map.md` — component landscape, interactions
|
|
65
|
+
|
|
66
|
+
From build_output.json, extract: all files created and modified, task statuses, any escalated issues or deviations.
|
|
67
|
+
|
|
68
|
+
From outline.json, extract: experience slices (these become the basis for experience-slice tests).
|
|
69
|
+
|
|
70
|
+
### Gate Check
|
|
71
|
+
|
|
72
|
+
If build_output.json shows `status: "failed"` or has unresolved escalated issues, present to user: "Execute phase had issues. Proceed with integration anyway, or go back?" If they want to go back, route to `/intuition-enuncia-execute`.
|
|
73
|
+
|
|
74
|
+
## STEP 2: INTEGRATION
|
|
75
|
+
|
|
76
|
+
Wire the build output into the project so it actually runs.
|
|
77
|
+
|
|
78
|
+
### 2a. Research Integration Points
|
|
79
|
+
|
|
80
|
+
Spawn two `intuition-researcher` agents in parallel:
|
|
81
|
+
|
|
82
|
+
**Agent 1 — Toolchain Discovery:**
|
|
83
|
+
"Find the project's build and run infrastructure: package manager, build commands, dev server, type-checking, linting, full test suite command, CI config. Report exact commands and config paths."
|
|
84
|
+
|
|
85
|
+
**Agent 2 — Integration Gap Discovery:**
|
|
86
|
+
"Using the build output at `{context_path}/build_output.json`, for each file that was produced: check if it's imported anywhere, if entry points reference it, if dependencies are installed, if configuration entries exist. Report what's already wired and what's missing."
|
|
87
|
+
|
|
88
|
+
### 2b. Execute Integration
|
|
89
|
+
|
|
90
|
+
For each gap found, delegate to an `intuition-code-writer` subagent:
|
|
91
|
+
|
|
92
|
+
```
|
|
93
|
+
You are an integration specialist. Make the MINIMUM change needed to wire a new module into the project.
|
|
94
|
+
|
|
95
|
+
Task: [category — import wiring / dependency install / config entry / re-export / etc.]
|
|
96
|
+
File to modify: [path]
|
|
97
|
+
Change needed: [specific change]
|
|
98
|
+
Context: [what build deliverable this connects]
|
|
99
|
+
|
|
100
|
+
Rules:
|
|
101
|
+
- Smallest possible change
|
|
102
|
+
- Follow existing code style
|
|
103
|
+
- Do NOT modify build deliverables — only modify integration points
|
|
104
|
+
- If more complex than described, STOP and report back
|
|
105
|
+
```
|
|
106
|
+
|
|
107
|
+
### 2c. Install Dependencies
|
|
108
|
+
|
|
109
|
+
If specs reference new packages, install them via Bash. Verify manifest and lockfile are updated.
|
|
110
|
+
|
|
111
|
+
## STEP 3: TOOLCHAIN
|
|
112
|
+
|
|
113
|
+
Run the project's toolchain to verify basic health. Execute in order:
|
|
114
|
+
|
|
115
|
+
1. **Type check / lint** (if applicable): `[type check command]`, `[lint command]`
|
|
116
|
+
2. **Build / compile** (if applicable): `[build command]`
|
|
117
|
+
3. **Existing tests**: `[test command]` — run the FULL existing test suite to catch regressions
|
|
118
|
+
|
|
119
|
+
Also run `mcp__ide__getDiagnostics` to catch IDE-visible issues.
|
|
120
|
+
|
|
121
|
+
If any step fails, classify and fix (see STEP 6) before proceeding.
|
|
122
|
+
|
|
123
|
+
## STEP 4: SMOKE TESTS
|
|
124
|
+
|
|
125
|
+
Smoke tests verify the system actually runs. They exercise real code paths, not mocks.
|
|
126
|
+
|
|
127
|
+
### What Smoke Tests Cover
|
|
128
|
+
|
|
129
|
+
- **Startup**: Does the app/server/process start without errors?
|
|
130
|
+
- **Main entry points**: Do the primary routes/endpoints/commands respond?
|
|
131
|
+
- **Core dependencies**: Do external connections initialize? (Database connects, API keys validate, etc.)
|
|
132
|
+
- **Happy path**: One simple request through the main flow — does it complete?
|
|
133
|
+
|
|
134
|
+
### Writing Smoke Tests
|
|
135
|
+
|
|
136
|
+
Delegate to an `intuition-code-writer` subagent:
|
|
137
|
+
|
|
138
|
+
```
|
|
139
|
+
You are writing smoke tests. These tests verify the system ACTUALLY RUNS — not that individual functions return correct values.
|
|
140
|
+
|
|
141
|
+
Test framework: [detected framework from Step 2a]
|
|
142
|
+
Test conventions: [naming, directory from existing tests]
|
|
143
|
+
|
|
144
|
+
What to test:
|
|
145
|
+
- App startup (import the app, verify no crash)
|
|
146
|
+
- Main entry points respond (hit routes, verify non-error status codes)
|
|
147
|
+
- Core flow completes (one end-to-end request through the primary path)
|
|
148
|
+
|
|
149
|
+
Rules:
|
|
150
|
+
- Actually start the app/server in the test
|
|
151
|
+
- Make real HTTP requests or function calls — no mocking the system under test
|
|
152
|
+
- Mock ONLY external services (databases, third-party APIs) that aren't available in test
|
|
153
|
+
- Each test should take < 5 seconds
|
|
154
|
+
- If a test fails, it means the system is broken — not that a detail is wrong
|
|
155
|
+
```
|
|
156
|
+
|
|
157
|
+
Run the smoke tests. If they fail, fix (Step 6) before proceeding.
|
|
158
|
+
|
|
159
|
+
## STEP 5: EXPERIENCE SLICE TESTS
|
|
160
|
+
|
|
161
|
+
These are the highest-value tests in the system. They walk through each stakeholder's journey as defined in the compose phase and verify the end-to-end flow works.
|
|
162
|
+
|
|
163
|
+
### Deriving Tests from Experience Slices
|
|
164
|
+
|
|
165
|
+
Read `outline.json` and extract the experience slices. For each slice that involves code behavior:
|
|
166
|
+
|
|
167
|
+
- **What triggers it**: The test setup
|
|
168
|
+
- **What the stakeholder does**: The test actions
|
|
169
|
+
- **What should happen**: The test assertions (from acceptance criteria)
|
|
170
|
+
|
|
171
|
+
### Writing Experience Slice Tests
|
|
172
|
+
|
|
173
|
+
Delegate to an `intuition-code-writer` subagent:
|
|
174
|
+
|
|
175
|
+
```
|
|
176
|
+
You are writing experience-slice tests. These tests verify that stakeholder journeys work end-to-end. They are derived from the project's experience slices — NOT from the source code.
|
|
177
|
+
|
|
178
|
+
Test framework: [detected framework]
|
|
179
|
+
Test conventions: [from existing tests]
|
|
180
|
+
|
|
181
|
+
## Experience Slices to Test
|
|
182
|
+
|
|
183
|
+
[For each testable slice:]
|
|
184
|
+
|
|
185
|
+
### ES-[N]: [Title]
|
|
186
|
+
Stakeholder: [who]
|
|
187
|
+
Journey: [trigger → action → expected outcome]
|
|
188
|
+
Acceptance criteria: [from outline.json]
|
|
189
|
+
|
|
190
|
+
## Rules
|
|
191
|
+
- Test the journey from the stakeholder's perspective
|
|
192
|
+
- Use the same entry points a real user would (HTTP routes, CLI commands, public APIs)
|
|
193
|
+
- Mock ONLY external services not available in test — NOT internal modules
|
|
194
|
+
- Assert against acceptance criteria from the outline, not implementation details
|
|
195
|
+
- Each test should tell a story: "the admin does X, the system does Y, the result is Z"
|
|
196
|
+
- If a slice requires UI interaction you can't automate, test the API layer that backs it
|
|
197
|
+
- Do NOT read source code to determine expected behavior — the spec defines what should happen
|
|
198
|
+
|
|
199
|
+
## Spec Sources (read these for expected behavior)
|
|
200
|
+
- Discovery brief: {context_path}/discovery_brief.md
|
|
201
|
+
- Outline: {context_path}/outline.json
|
|
202
|
+
- Specs: {context_path}/specs/*.md
|
|
203
|
+
```
|
|
204
|
+
|
|
205
|
+
Run the experience slice tests. Classify and fix failures (Step 6).
|
|
206
|
+
|
|
207
|
+
## STEP 6: FIX CYCLE
|
|
208
|
+
|
|
209
|
+
For each failure, classify:
|
|
210
|
+
|
|
211
|
+
| Classification | Action |
|
|
212
|
+
|---|---|
|
|
213
|
+
| **Integration bug** (wrong import, missing config, typo in wiring) | Fix via `intuition-code-writer` |
|
|
214
|
+
| **Missing dependency** | Install via Bash |
|
|
215
|
+
| **Implementation bug, simple** (1-3 lines, spec is clear) | Fix via `intuition-code-writer` |
|
|
216
|
+
| **Implementation bug, complex** (multi-file, architectural) | Escalate to user |
|
|
217
|
+
| **Spec violation** (code disagrees with spec) | Escalate: "Spec says X, code does Y" |
|
|
218
|
+
| **Test regression** (existing test broke) | Diagnose: is the test outdated or the new code wrong? Escalate if ambiguous |
|
|
219
|
+
| **Violates user decision** | STOP — escalate immediately |
|
|
220
|
+
|
|
221
|
+
### Fix Process
|
|
222
|
+
|
|
223
|
+
1. Classify the failure
|
|
224
|
+
2. If fixable: delegate fix to `intuition-code-writer`
|
|
225
|
+
3. Re-run the failing test
|
|
226
|
+
4. Max 3 fix cycles per failure — then escalate
|
|
227
|
+
5. After all failures addressed, run FULL verification (toolchain + all tests) one final time
|
|
228
|
+
|
|
229
|
+
## STEP 7: FINAL VERIFICATION
|
|
230
|
+
|
|
231
|
+
After all tests pass, check the running system against the discovery brief:
|
|
232
|
+
|
|
233
|
+
**North Star check**: Does the system deliver the experience the brief describes? Walk through it mentally:
|
|
234
|
+
- [For each stakeholder]: Can they do what the brief says they should be able to do?
|
|
235
|
+
- Does the system honor the constraints?
|
|
236
|
+
- Would this satisfy the North Star as written?
|
|
237
|
+
|
|
238
|
+
If something drifts, flag it to the user: "Tests pass, but [specific concern about North Star alignment]."
|
|
239
|
+
|
|
240
|
+
**Update project map** if integration or testing revealed anything new about how components connect.
|
|
241
|
+
|
|
242
|
+
## STEP 8: EXIT
|
|
243
|
+
|
|
244
|
+
**Update state.** Read `.project-memory-state.json`. Target active context. Set: `status` → `"complete"`, `workflow.verify.completed` → `true`, `workflow.verify.completed_at` → current ISO timestamp. Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"verify_to_complete"`. Write back.
|
|
245
|
+
|
|
246
|
+
**Present results** via AskUserQuestion:
|
|
247
|
+
|
|
248
|
+
```
|
|
249
|
+
Question: "Verification complete.
|
|
250
|
+
|
|
251
|
+
**Integration**: [pass/issues]
|
|
252
|
+
**Toolchain**: [builds, type-checks, lints]
|
|
253
|
+
**Existing tests**: [N passed, N failed]
|
|
254
|
+
**Smoke tests**: [N passed, N failed]
|
|
255
|
+
**Experience slice tests**: [N passed, N failed]
|
|
256
|
+
**North Star alignment**: [met / concerns]
|
|
257
|
+
|
|
258
|
+
[If escalated issues exist, list them]
|
|
259
|
+
|
|
260
|
+
Ready to commit?"
|
|
261
|
+
|
|
262
|
+
Header: "Verify"
|
|
263
|
+
Options:
|
|
264
|
+
- "Commit and push"
|
|
265
|
+
- "Commit only"
|
|
266
|
+
- "Done — no commit"
|
|
267
|
+
```
|
|
268
|
+
|
|
269
|
+
If committing: stage files from build output + integration changes + tests, commit with descriptive message, optionally push.
|
|
270
|
+
|
|
271
|
+
**Route.** "Workflow complete. Run `/clear` then `/intuition-enuncia-start` to see project status."
|
|
272
|
+
|
|
273
|
+
## BRANCH MODE
|
|
274
|
+
|
|
275
|
+
When verifying on a branch:
|
|
276
|
+
- Run the FULL test suite (parent + branch tests) to catch compatibility issues
|
|
277
|
+
- Integration must be compatible with parent architecture
|
|
278
|
+
- Update the branch's project map, not the parent's
|
|
279
|
+
|
|
280
|
+
## RESUME LOGIC
|
|
281
|
+
|
|
282
|
+
1. If tests exist but no verification complete: "Found tests from a previous session. Re-running verification."
|
|
283
|
+
2. If integration was done but tests haven't run: skip to Step 4.
|
|
284
|
+
3. Otherwise fresh start from Step 1.
|
|
285
|
+
|
|
286
|
+
## VOICE
|
|
287
|
+
|
|
288
|
+
- **Pragmatic** — make it work, prove it works, report what happened
|
|
289
|
+
- **Evidence-driven** — every failure has a classification, every fix has a rationale
|
|
290
|
+
- **Honest** — if tests pass but something feels off against the North Star, say so
|
|
291
|
+
- **Concise** — status updates, not essays
|
|
292
|
+
- **Brief-anchored** — the discovery foundation is the ultimate measure of success
|