@tgoodington/intuition 11.2.0 → 11.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@tgoodington/intuition",
- "version": "11.2.0",
+ "version": "11.3.1",
  "description": "Domain-adaptive workflow system for Claude Code. Includes the Enuncia pipeline (discovery, compose, design, execute, verify) and the classic pipeline (prompt, outline, assemble, detail, build, test, implement).",
  "keywords": [
  "claude-code",
@@ -85,7 +85,8 @@ const producers = [
  'spreadsheet-builder',
  'presentation-creator',
  'form-filler',
- 'data-file-writer'
+ 'data-file-writer',
+ 'ui-writer'
  ];

  // Reusable agent definitions (v9.4) — scanned dynamically
@@ -24,7 +24,7 @@ The first cycle is the **trunk**. After trunk completes, create **branches** for
  | `/intuition-enuncia-compose` | Maps experience slices, decomposes into buildable tasks |
  | `/intuition-enuncia-design` | Technical design — enriches tasks with specs, updates project map |
  | `/intuition-enuncia-execute` | Delegates production to subagents, verifies outputs |
- | `/intuition-enuncia-verify` | Wires code into project, runs toolchain and tests |
+ | `/intuition-enuncia-verify` | Wires code into project, gets it running, tests the live system |
  | `/intuition-enuncia-handoff` | Branch creation and context management |
  | `/intuition-initialize` | Sets up project memory (you already ran this) |
  | `/intuition-meander` | Thought partner — reason through problems collaboratively |
@@ -40,7 +40,7 @@ The first cycle is the **trunk**. After trunk completes, create **branches** for
  3. `/intuition-enuncia-compose` — decompose into experience slices and tasks
  4. `/intuition-enuncia-design` — technical design for each task group
  5. `/intuition-enuncia-execute` — build from specs
- 6. `/intuition-enuncia-verify` — wire in, test, prove it works (code projects)
+ 6. `/intuition-enuncia-verify` — wire in, get it running, test the live system (code projects)

  Run `/clear` before each phase skill.

@@ -1,6 +1,6 @@
  ---
  name: intuition-enuncia-verify
- description: Integration and verification for code projects. Wires build output into the project, runs the toolchain, writes smoke and experience-slice tests, and fixes what's broken. Proves the code actually works. Only runs when code was produced.
+ description: Integration and verification for code projects. Wires build output into the project, walks the user through getting it running for real, then tests the live system. Proves the code actually works. Only runs when code was produced.
  model: opus
  tools: Read, Write, Edit, Glob, Grep, Task, AskUserQuestion, Bash, mcp__ide__getDiagnostics
  allowed-tools: Read, Write, Edit, Glob, Grep, Task, Bash, mcp__ide__getDiagnostics
@@ -14,20 +14,21 @@ Deliver something to the user through an experience that places them as creative

  ## SKILL GOAL

- Make the code work, then prove it works. Wire execute's output into the project, run the toolchain, write tests that exercise the real system from the outside, and fix what's broken. This skill only runs for code projects — non-code deliverables complete at execute.
+ Make the code work for real. Wire execute's output into the project, figure out everything the system needs to actually run — services, databases, environment, infrastructure — and walk the user through standing it up. Once they confirm it's live, test the running system against the discovery brief's North Star.

- The discovery brief's North Star is the ultimate test: does the running system deliver the experience it promised?
+ No mocks. No "verified against synthetic data." Either it works or it doesn't.

  ## CRITICAL RULES

  1. You MUST read `.project-memory-state.json` and resolve context_path before anything else.
  2. You MUST read `{context_path}/discovery_brief.md`, `{context_path}/tasks.json`, `{context_path}/build_output.json`, and `docs/project_notes/project_map.md`.
- 3. You MUST integrate before testing. Code that isn't wired in can't be meaningfully tested.
- 4. You MUST NOT write unit tests that test implementation internals. Tests exercise the system from the outside — smoke tests and experience-slice tests only.
- 5. You MUST NOT fix failures that violate user decisions from the specs. Escalate immediately.
- 6. You MUST delegate integration tasks and test writing to subagents. Do not write code yourself.
- 7. You MUST verify against the discovery brief after all tests pass — does the system deliver the North Star?
- 8. You MUST update `docs/project_notes/project_map.md` if integration reveals new information.
+ 3. You MUST integrate before anything else. Code that isn't wired in can't run.
+ 4. You MUST NOT write tests until the user confirms the system is running.
+ 5. You MUST NOT mock anything in tests. Tests hit the live system.
+ 6. You MUST NOT fix failures that violate user decisions from the specs. Escalate immediately.
+ 7. You MUST delegate integration tasks and test writing to subagents. Do not write code yourself.
+ 8. You MUST verify against the discovery brief after all tests pass — does the system deliver the North Star?
+ 9. You MUST update `docs/project_notes/project_map.md` if integration reveals new information.

  ## CONTEXT PATH RESOLUTION

@@ -44,17 +45,26 @@ The discovery brief's North Star is the ultimate test: does the running system d
  ## PROTOCOL

  ```
- Step 1: Read context
- Step 2: Integration — wire everything together
- Step 3: Toolchain — compile, type-check, lint
- Step 4: Smoke tests — does it start and respond
- Step 5: Experience slice tests — do the stakeholder journeys work
- Step 6: Fix cycle
- Step 7: Final verification against discovery brief
- Step 8: Exit
+ Phase 1: Get it running
+ Step 1: Read context
+ Step 2: Integration — wire everything together
+ Step 3: Toolchain — compile, type-check, lint
+ Step 4: Readiness checklist — what does the system need to actually start?
+ Step 5: Assisted setup — help the user stand it up
+
+ Phase 2: Prove it works
+ Step 6: Smoke tests against the live system
+ Step 7: Experience slice tests against the live system
+ Step 8: Fix cycle
+ Step 9: Final verification against discovery brief
+ Step 10: Exit
  ```

- ## STEP 1: READ CONTEXT
+ ---
+
+ ## PHASE 1: GET IT RUNNING
+
+ ### STEP 1: READ CONTEXT

  Read these files:
  1. `{context_path}/discovery_brief.md` — North Star, stakeholders, constraints
@@ -64,17 +74,17 @@ Read these files:

  From build_output.json, extract: all files created and modified, task statuses, any escalated issues or deviations.

- From tasks.json, extract: experience slices (these become the basis for experience-slice tests).
+ From tasks.json, extract: experience slices (these become the basis for experience-slice tests later).

- ### Gate Check
+ #### Gate Check

  If build_output.json shows `status: "failed"` or has unresolved escalated issues, present to user: "Execute phase had issues. Proceed with integration anyway, or go back?" If they want to go back, route to `/intuition-enuncia-execute`.
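
The gate is mechanical enough to sketch. A minimal illustration in Python, assuming build_output.json exposes the top-level `status` field named in the skill text and records escalations under an `escalated_issues` key, which is an assumed field name:

```python
import json
from pathlib import Path

def gate_check(context_path: str) -> bool:
    """Return True when it is safe to proceed to integration."""
    build = json.loads(Path(context_path, "build_output.json").read_text())
    # "status" comes from the skill text; "escalated_issues" is an assumed
    # name for however the file records unresolved escalations.
    return build.get("status") != "failed" and not build.get("escalated_issues")
```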

- ## STEP 2: INTEGRATION
+ ### STEP 2: INTEGRATION

- Wire the build output into the project so it actually runs.
+ Wire the build output into the project so it can run.

- ### 2a. Research Integration Points
+ #### 2a. Research Integration Points

  Spawn two `intuition-researcher` agents in parallel:

@@ -84,7 +94,7 @@ Spawn two `intuition-researcher` agents in parallel:
  **Agent 2 — Integration Gap Discovery:**
  "Using the build output at `{context_path}/build_output.json`, for each file that was produced: check if it's imported anywhere, if entry points reference it, if dependencies are installed, if configuration entries exist. Report what's already wired and what's missing."

- ### 2b. Execute Integration
+ #### 2b. Execute Integration

  For each gap found, delegate to an `intuition-code-writer` subagent:

@@ -103,13 +113,13 @@ Rules:
  - If more complex than described, STOP and report back
  ```

- ### 2c. Install Dependencies
+ #### 2c. Install Dependencies

  If specs reference new packages, install them via Bash. Verify manifest and lockfile are updated.

- ## STEP 3: TOOLCHAIN
+ ### STEP 3: TOOLCHAIN

- Run the project's toolchain to verify basic health. Execute in order:
+ Run the project's toolchain to verify basic code health. Execute in order:

  1. **Type check / lint** (if applicable): `[type check command]`, `[lint command]`
  2. **Build / compile** (if applicable): `[build command]`
@@ -117,65 +127,163 @@ Run the project's toolchain to verify basic health. Execute in order:

  Also run `mcp__ide__getDiagnostics` to catch IDE-visible issues.

- If any step fails, classify and fix (see STEP 6) before proceeding.
+ If any step fails, classify and fix before proceeding.
+
+ ### STEP 4: READINESS CHECKLIST
+
+ This is where you figure out everything the system needs to actually start and run — not just compile.
+
+ #### 4a. Research Prerequisites
+
+ Spawn an `intuition-researcher` agent:
+
+ "Analyze the full codebase to identify every external dependency the system needs at runtime. Look at:
+ - Database connections (connection strings, migrations, seed data)
+ - External API integrations (keys, endpoints, auth tokens, OAuth registrations)
+ - Environment variables (every env var referenced in the code)
+ - Infrastructure services (message queues, caches, file storage, etc.)
+ - Configuration files that need real values (not template/example values)
+ - Network requirements (ports, domains, certificates)
+ - Platform-specific setup (cloud permissions, service registrations, shared resources)
+ - Data requirements (initial data loads, imports, reference data)
+
+ For each dependency, report: what it is, where in the code it's referenced, whether it has a default/fallback or is required, and what happens if it's missing."
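
As a rough sketch of the env-var portion of that scan (patterns, file extensions, and output format are illustrative, not the skill's actual implementation):

```python
import re
from pathlib import Path

# Heuristic patterns for environment-variable references; extend per stack.
PATTERNS = [
    re.compile(r"""os\.environ(?:\.get\(|\[)\s*["'](\w+)["']"""),  # Python style
    re.compile(r"process\.env\.(\w+)"),                            # Node style
]

found: dict[str, set[str]] = {}
for path in Path(".").rglob("*"):
    if not path.is_file() or path.suffix not in {".py", ".js", ".ts"}:
        continue
    text = path.read_text(errors="ignore")
    for pattern in PATTERNS:
        for name in pattern.findall(text):
            found.setdefault(name, set()).add(str(path))

for name in sorted(found):
    print(f"{name}: referenced in {', '.join(sorted(found[name]))}")
```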
+
+ #### 4b. Build the Checklist
+
+ From the researcher's findings plus context from the discovery brief (which describes the deployment environment), build a concrete readiness checklist. Group items by category.
+
+ Format:
+
+ ```
+ ## Readiness Checklist
+
+ To get this system running, here's what needs to be set up:
+
+ ### [Category: e.g., Database]
+ - [ ] [Specific action — e.g., "Create PostgreSQL database 'staff_coverage'"]
+ - [ ] [Next action — e.g., "Run migrations: alembic upgrade head"]
+
+ ### [Category: e.g., External Services]
+ - [ ] [Specific action]
+   - I can help with: [what you can assist with — e.g., "generating the config file, writing the migration"]
+   - You'll need to: [what requires human action — e.g., "create the Azure AD app registration, grant admin consent"]
+
+ ### [Category: e.g., Environment]
+ - [ ] [Specific action]
+
+ ...
+ ```
+
+ For each item, be specific about:
+ - **What** needs to happen (exact commands, exact config values where known)
+ - **Where** it's referenced in the code (so the user can verify)
+ - **What you can help with** vs. **what requires their action** (admin portals, credentials, infrastructure access)
+
+ #### 4c. Present to User
+
+ Present the readiness checklist via AskUserQuestion:
+
+ ```
+ Question: "[The readiness checklist from 4b]
+
+ Let's work through these. Which would you like to tackle first, or is anything already set up?"
+ Header: "Getting It Running"
+ ```
+
+ ### STEP 5: ASSISTED SETUP
+
+ Work through the checklist with the user interactively. For each item:
+
+ - If you can do it (write config files, run migrations, generate boilerplate): offer to do it and execute when approved.
+ - If it requires their action (portal configuration, credential creation, infrastructure provisioning): give them exact instructions and wait for confirmation.
+ - If it requires both: do your part, then tell them what's left.
+
+ After each item is addressed, try to start the relevant component and verify it connects. For example:
+ - After database setup: try connecting and running a basic query
+ - After API credentials: try a test request to the service
+ - After environment config: try importing/starting the app
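
Those probes can be as small as the following sketch, which assumes a PostgreSQL database configured via a DATABASE_URL environment variable and an HTTP app with a /health endpoint at APP_URL; substitute the project's real services and driver:

```python
import os
import urllib.request

import psycopg2  # assumed driver; use whatever matches the project's database

def check_database() -> None:
    # Connecting and running a trivial query proves credentials and reachability.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
    conn.close()

def check_app() -> None:
    # A plain HTTP request proves the app process is up and listening.
    url = os.environ.get("APP_URL", "http://localhost:8000") + "/health"
    with urllib.request.urlopen(url, timeout=10) as resp:
        assert resp.status == 200

if __name__ == "__main__":
    check_database()
    check_app()
    print("database and app both reachable")
```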
+
+ When something fails, diagnose and help fix it before moving on.
+
+ #### Completion Gate
+
+ When the user confirms the system is running (or you've verified it starts and connects to all services), present:
+
+ ```
+ Question: "System is up. Ready to run tests against the live application?"
+ Header: "Ready for Testing"
+ Options:
+ - "Run tests"
+ - "Not yet — still setting up [specify]"
+ ```
+
+ Do NOT proceed to Phase 2 until the user confirms.
+
+ ---
+
+ ## PHASE 2: PROVE IT WORKS

- ## STEP 4: SMOKE TESTS
+ ### STEP 6: SMOKE TESTS

- Smoke tests verify the system actually runs. They exercise real code paths, not mocks.
+ Smoke tests verify the live system responds correctly. They hit the real running application — no test servers, no mocks, no in-memory substitutes.

- ### What Smoke Tests Cover
+ #### What Smoke Tests Cover

- - **Startup**: Does the app/server/process start without errors?
- - **Main entry points**: Do the primary routes/endpoints/commands respond?
- - **Core dependencies**: Do external connections initialize? (Database connects, API keys validate, etc.)
- - **Happy path**: One simple request through the main flow — does it complete?
+ - **Liveness**: Does the running app respond to requests?
+ - **Main entry points**: Do the primary routes/endpoints/commands return non-error responses?
+ - **Core dependencies**: Does the app actually talk to its database, APIs, etc.? (Verify with a request that exercises a real dependency path)
+ - **Happy path**: One simple request through the main flow — does it complete end-to-end?

- ### Writing Smoke Tests
+ #### Writing Smoke Tests

  Delegate to an `intuition-code-writer` subagent:

  ```
- You are writing smoke tests. These tests verify the system ACTUALLY RUNS — not that individual functions return correct values.
+ You are writing smoke tests against a LIVE, RUNNING system. The app is already up — you are testing it from the outside.

  Test framework: [detected framework from Step 2a]
  Test conventions: [naming, directory from existing tests]
+ App URL / entry point: [how to reach the running system]

  What to test:
- - App startup (import the app, verify no crash)
- - Main entry points respond (hit routes, verify non-error status codes)
- - Core flow completes (one end-to-end request through the primary path)
+ - App responds to health/root requests
+ - Main entry points return successful responses
+ - At least one request that touches the database returns real data
+ - One end-to-end request through the primary flow completes

  Rules:
- - Actually start the app/server in the test
- - Make real HTTP requests or function calls — no mocking the system under test
- - Mock ONLY external services (databases, third-party APIs) that aren't available in test
- - Each test should take < 5 seconds
- - If a test fails, it means the system is broken — not that a detail is wrong
+ - The system is ALREADY RUNNING. Tests make real requests to it.
+ - NO mocks. NO in-memory databases. NO test servers. You hit the live app.
+ - If a test needs data to exist, create it through the app's own API first (setup), then clean it up after (teardown).
+ - Each test should take < 10 seconds.
+ - If a test fails, it means the live system is broken — not that a mock is misconfigured.
  ```

- Run the smoke tests. If they fail, fix (Step 6) before proceeding.
+ Run the smoke tests. If they fail, fix (Step 8) before proceeding.
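
For shape, a minimal pytest smoke test against a live app might look like this; APP_URL and the /health and /api/items endpoints are hypothetical stand-ins for the real entry points:

```python
import os

import requests

BASE_URL = os.environ.get("APP_URL", "http://localhost:8000")

def test_app_responds():
    # Liveness: the already-running app answers at all.
    r = requests.get(f"{BASE_URL}/health", timeout=10)
    assert r.status_code == 200

def test_db_backed_endpoint_returns_real_data():
    # Exercises a real dependency path: this request must reach the live database.
    r = requests.get(f"{BASE_URL}/api/items", timeout=10)
    assert r.status_code == 200
    assert isinstance(r.json(), list)
```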

- ## STEP 5: EXPERIENCE SLICE TESTS
+ ### STEP 7: EXPERIENCE SLICE TESTS

- These are the highest-value tests in the system. They walk through each stakeholder's journey as defined in the compose phase and verify the end-to-end flow works.
+ These are the highest-value tests. They walk through each stakeholder's journey as defined in the compose phase and verify the live system delivers the experience end-to-end.

- ### Deriving Tests from Experience Slices
+ #### Deriving Tests from Experience Slices

  Read `tasks.json` and extract the experience slices. For each slice that involves code behavior:

  - **What triggers it**: The test setup
- - **What the stakeholder does**: The test actions
+ - **What the stakeholder does**: The test actions (real API calls to the live system)
  - **What should happen**: The test assertions (from acceptance criteria)

- ### Writing Experience Slice Tests
+ #### Writing Experience Slice Tests

  Delegate to an `intuition-code-writer` subagent:

  ```
- You are writing experience-slice tests. These tests verify that stakeholder journeys work end-to-end. They are derived from the project's experience slices — NOT from the source code.
+ You are writing experience-slice tests against a LIVE, RUNNING system. These tests verify that stakeholder journeys work end-to-end on the real application.

  Test framework: [detected framework]
  Test conventions: [from existing tests]
+ App URL / entry point: [how to reach the running system]

  ## Experience Slices to Test

@@ -187,22 +295,23 @@ Journey: [trigger → action → expected outcome]
  Acceptance criteria: [from tasks.json]

  ## Rules
- - Test the journey from the stakeholder's perspective
- - Use the same entry points a real user would (HTTP routes, CLI commands, public APIs)
- - Mock ONLY external services not available in test — NOT internal modules
- - Assert against acceptance criteria from the outline, not implementation details
+ - The system is ALREADY RUNNING. Tests make real requests to it.
+ - NO mocks of any kind. The app, database, and services are all live.
+ - Test the journey from the stakeholder's perspective using real entry points (HTTP routes, CLI commands, public APIs).
+ - If a test needs data, create it through the app's API first (setup), clean up after (teardown).
+ - Assert against acceptance criteria from the spec, not implementation details.
  - Each test should tell a story: "the admin does X, the system does Y, the result is Z"
- - If a slice requires UI interaction you can't automate, test the API layer that backs it
- - Do NOT read source code to determine expected behavior — the spec defines what should happen
+ - If a slice requires UI interaction you can't automate, test the API layer that backs it.
+ - Do NOT read source code to determine expected behavior — the spec defines what should happen.

  ## Spec Sources (read these for expected behavior)
  - Discovery brief: {context_path}/discovery_brief.md
  - Tasks: {context_path}/tasks.json
  ```

- Run the experience slice tests. Classify and fix failures (Step 6).
+ Run the experience slice tests. Classify and fix failures (Step 8).
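
For illustration, one slice rendered as a test. The journey, endpoints, and payloads here are hypothetical; the real ones come from tasks.json and its acceptance criteria:

```python
import os

import requests

BASE_URL = os.environ.get("APP_URL", "http://localhost:8000")

def test_admin_creates_and_reviews_report():
    # Setup: create data through the app's own API (no fixtures, no mocks).
    created = requests.post(f"{BASE_URL}/api/reports", json={"title": "Q1"}, timeout=10)
    assert created.status_code in (200, 201)
    report_id = created.json()["id"]
    try:
        # Action + assertion: the admin opens the report and sees what they created.
        r = requests.get(f"{BASE_URL}/api/reports/{report_id}", timeout=10)
        assert r.status_code == 200
        assert r.json()["title"] == "Q1"
    finally:
        # Teardown: clean up through the same API.
        requests.delete(f"{BASE_URL}/api/reports/{report_id}", timeout=10)
```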

- ## STEP 6: FIX CYCLE
+ ### STEP 8: FIX CYCLE

  For each failure, classify:

@@ -212,45 +321,47 @@ For each failure, classify:
  | **Missing dependency** | Install via Bash |
  | **Implementation bug, simple** (1-3 lines, spec is clear) | Fix via `intuition-code-writer` |
  | **Implementation bug, complex** (multi-file, architectural) | Escalate to user |
+ | **Environment/config issue** (service not reachable, credentials wrong) | Help user diagnose and fix |
  | **Spec violation** (code disagrees with spec) | Escalate: "Spec says X, code does Y" |
  | **Test regression** (existing test broke) | Diagnose: is the test outdated or the new code wrong? Escalate if ambiguous |
  | **Violates user decision** | STOP — escalate immediately |

- ### Fix Process
+ #### Fix Process

  1. Classify the failure
  2. If fixable: delegate fix to `intuition-code-writer`
- 3. Re-run the failing test
- 4. Max 3 fix cycles per failure — then escalate
- 5. After all failures addressed, run FULL verification (toolchain + all tests) one final time
+ 3. If environment/config: work with user to resolve
+ 4. Re-run the failing test against the live system
+ 5. Max 3 fix cycles per failure — then escalate
+ 6. After all failures addressed, run FULL test suite one final time

- ## STEP 7: FINAL VERIFICATION
+ ### STEP 9: FINAL VERIFICATION

- After all tests pass, check the running system against the discovery brief:
+ After all tests pass against the live system, check against the discovery brief:

- **North Star check**: Does the system deliver the experience the brief describes? Walk through it mentally:
- - [For each stakeholder]: Can they do what the brief says they should be able to do?
+ **North Star check**: Walk through the brief's North Star statement. For each stakeholder:
+ - Can they do what the brief says they should be able to do — on the live system?
  - Does the system honor the constraints?
  - Would this satisfy the North Star as written?

- If something drifts, flag it to the user: "Tests pass, but [specific concern about North Star alignment]."
+ If something drifts, flag it: "Tests pass, but [specific concern about North Star alignment]."

- **Update `docs/project_notes/project_map.md`** if integration or testing revealed anything new about how components connect.
+ **Update `docs/project_notes/project_map.md`** if integration or testing revealed anything new.

- ## STEP 8: EXIT
+ ### STEP 10: EXIT

- **Update state.** Read `.project-memory-state.json`. Target active context. Set: `status` → `"complete"`, `workflow.verify.completed` → `true`, `workflow.verify.completed_at` → current ISO timestamp. Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"verify_to_complete"`. Write back.
+ **Update state.** Read `.project-memory-state.json`. Target active context. Set: `status` → `"complete"`, `workflow.verify.completed` → `true`, `workflow.verify.completed_at` → current ISO timestamp. Set on root: `last_handoff` → current ISO timestamp, `last_handoff_transition` → `"verify_to_complete"`. Write back.
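
The state write above is concrete enough to sketch. The fields being set come straight from the skill text; the `contexts`/`active_context` layout used to reach the active context is an assumption about the file's structure:

```python
import json
from datetime import datetime, timezone

with open(".project-memory-state.json") as f:
    state = json.load(f)

now = datetime.now(timezone.utc).isoformat()
# "Target active context": the key names below are assumed, not documented here.
ctx = state["contexts"][state["active_context"]]
ctx["status"] = "complete"
ctx["workflow"]["verify"]["completed"] = True
ctx["workflow"]["verify"]["completed_at"] = now
state["last_handoff"] = now
state["last_handoff_transition"] = "verify_to_complete"

with open(".project-memory-state.json", "w") as f:
    json.dump(state, f, indent=2)
```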

  **Present results** via AskUserQuestion:

  ```
- Question: "Verification complete.
+ Question: "Verification complete — tested against the live system.

  **Integration**: [pass/issues]
  **Toolchain**: [builds, type-checks, lints]
  **Existing tests**: [N passed, N failed]
- **Smoke tests**: [N passed, N failed]
- **Experience slice tests**: [N passed, N failed]
+ **Smoke tests (live)**: [N passed, N failed]
+ **Experience slice tests (live)**: [N passed, N failed]
  **North Star alignment**: [met / concerns]

  [If escalated issues exist, list them]
@@ -277,13 +388,13 @@ When verifying on a branch:

  ## RESUME LOGIC

- 1. If tests exist but no verification complete: "Found tests from a previous session. Re-running verification."
- 2. If integration was done but tests haven't run: skip to Step 4.
+ 1. If Phase 1 completed (system running) but tests haven't run: skip to Step 6.
+ 2. If tests exist but verification not complete: "Found tests from a previous session. Re-running against live system."
  3. Otherwise fresh start from Step 1.

  ## VOICE

- - **Pragmatic** — make it work, prove it works, report what happened
+ - **Pragmatic** — make it work for real, prove it works for real, report what happened
  - **Evidence-driven** — every failure has a classification, every fix has a rationale
  - **Honest** — if tests pass but something feels off against the North Star, say so
  - **Concise** — status updates, not essays
@@ -92,7 +92,7 @@ If user selected "No, skip for now":
  IMPORTANT: Restart Claude Code for changes to take effect.

  Changes will apply to:
- - All 9 intuition skills
+ - All Intuition skills
  - New sessions only (current session uses old version)
  ```
98
98