@infinitedusky/indusk-mcp 1.15.1 → 1.16.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/skills/falsify.md +87 -0
- package/skills/planner.md +29 -8
- package/skills/retrospective.md +28 -0
- package/skills/work.md +2 -1
package/package.json
CHANGED
-  "version": "1.15.1",
+  "version": "1.16.0",

package/skills/falsify.md
ADDED
@@ -0,0 +1,87 @@
+---
+name: falsify
+description: Run the falsification ritual against a completed plan. Goal-flip from "prove it works" to "find a failing test" — investigate the code, form a specific hypothesis about what should be broken, write the test that confirms it, run it. Required between /work completion and /retrospective.
+argument-hint: "{plan-name}"
+---
+
+You are about to run the **falsification ritual** against a plan whose `/work` has completed. The plan has an attested state — the goal, the Trajectory rows (all in terminal state), the claims it makes about what is now true. Your job is **not** to confirm those claims. Your job is to **falsify them**.
+
+This is a goal-flip, not a persona switch. Same agent, different question. Instead of "does this work?" — "what specific thing, with what specific inputs, makes this fail?"
+
+## How to hunt
+
+This is bounty hunting, not candidate generation. **Do not write hopeful tests and see what fails.** Each iteration hunts a specific target:
+
+1. **Read the attested state.** Open the plan's `impl.md`. Read the Goal. Read every Trajectory row — what does each claim? Read the ADR if one exists — what invariants does the plan promise?
+2. **Investigate the code.** Read the actual implementation. Compare what the code does against what the attestation claims. Look for gaps.
+3. **Form a specific hypothesis.** Not "what could go wrong?" — "*this specific condition, with these specific inputs, will violate this specific invariant.*" Name the failure before writing any test.
+4. **Write the test that confirms the hypothesis.** If the hypothesis is right, the test fails. If the hypothesis is wrong, the test passes.
+5. **Run the test.**
+
+Prompts to ask yourself while investigating (use these as starting points, not a checklist):
+
+- **What's an edge case not covered by T1–Tn?** List every row. For each: what inputs did the author think of? What inputs did they miss?
+- **What's an implicit invariant the attestation makes that the Trajectory doesn't test?** "Recoverable from crash" implies "recoverable from partial write." Is there a test for partial writes?
+- **What about concurrent, partial, or malformed inputs?** Two callers at once. A half-written file. A valid-shape but semantically-wrong input.
+- **What would a malicious user try?** If this accepts input, what input breaks the parser, or traverses paths, or exhausts memory?
+- **What does the attestation assume about the environment?** Time monotonicity. Disk not full. Network present. Clock skew. Is any assumption documented vs. silently assumed?
+- **What's the first thing someone would try if they were paid $100 to find one failure here?** Specifically. Concretely.
+- **What invariants are only enforced in one direction?** (E.g., "create calls validate, but update bypasses validation.")
+- **What claim does the Goal make that's not expressed as a Trajectory row?** That's often the unguarded surface.
+
+**Anti-pattern — do NOT do this:** "I'll write several tests and see which ones fail." That's candidate generation. It's cheap and useless. Every candidate you write without a specific hypothesis is noise. Investigate first, hypothesize specifically, write the test that targets *that* hypothesis.
+
+## Three outcomes per failing test
+
+When a test fails (your hypothesis is confirmed), pick one of three outcomes — recommend one, but the user decides:
+
+1. **Fix in scope** — the gap is small and clearly in-scope for the plan's original goal. Add a new phase to the current `impl.md`, flip the impl status back to `in-progress`, return to `/work`. This is "build the plane while flying" — the plan grows during its own closure.
+2. **Spawn a new plan** — the gap is large, touches unrelated areas, or deserves its own planning lifecycle. Create `.indusk/planning/{new-slug}/brief.md` with the failing test as its core motivation. Link via `blocks:` in the current plan's brief.
+3. **Accept as finding** — rare. The gap is small, unambiguously out-of-scope, and the cost of a new plan isn't justified. Record it in the falsification log and note it in the retrospective. Use only when the other two genuinely don't fit.
+
+After choosing the outcome, record the hypothesis via `appendHypothesis(planRoot, { hypothesis, testPath, outcome, note? })` from `apps/indusk-mcp/src/lib/falsification/log.ts`. The log file at `.indusk/planning/{plan}/falsification.md` captures the session's history.
+
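+The logging contract above can be sketched in TypeScript. Everything beyond the two signatures quoted in this skill is an assumption: the `Outcome` labels, the entry shape, and the in-memory log are hypothetical stand-ins for the real module, which appends to `falsification.md` on disk.
+
+```typescript
+// Hypothetical stand-ins for appendHypothesis / markTerminated from
+// apps/indusk-mcp/src/lib/falsification/log.ts. The real module writes to
+// .indusk/planning/{plan}/falsification.md; this sketch appends to an
+// in-memory array so the calling convention is visible.
+type Outcome = "fix-in-scope" | "spawn-new-plan" | "accept-as-finding"; // assumed labels
+
+interface HypothesisEntry {
+  hypothesis: string;
+  testPath: string;
+  outcome: Outcome;
+  note?: string;
+}
+
+const log: string[] = [];
+
+function appendHypothesis(planRoot: string, entry: HypothesisEntry): void {
+  log.push(`[${planRoot}] ${entry.outcome}: ${entry.hypothesis} (${entry.testPath})`);
+}
+
+function markTerminated(planRoot: string, reason: string): void {
+  log.push(`[${planRoot}] TERMINATED: ${reason}`);
+}
+
+// Usage, mirroring the ritual: one recorded decision per confirmed failure,
+// then a single terminator entry when no in-scope hypothesis remains.
+appendHypothesis(".indusk/planning/my-plan", {
+  hypothesis: "update path bypasses the validation that create enforces",
+  testPath: "tests/update-validation.test.ts",
+  outcome: "fix-in-scope",
+  note: "new phase added to impl.md",
+});
+markTerminated(".indusk/planning/my-plan", "no concrete attack vector remains");
+
+console.log(log.length); // 2
+```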
+## Loop exit (hybrid)
+
+Continue hunting until you genuinely **cannot form a specific in-scope hypothesis** about what should be broken. Not "I've tried enough tests" — "I have investigated the code and cannot name a concrete attack vector remaining."
+
+When you reach that point, present the user with a summary:
+
+- Hypotheses investigated and their outcomes (confirmed → fix/spawn/accept; wrong → the hypothesis was rejected, note what held up)
+- Regions of code you searched without finding an attack vector
+- Any areas you did NOT investigate and why (e.g., "didn't investigate serialization because no serialization code was changed")
+
+The user confirms termination — or points at an area you didn't investigate. Not "write another test" — they should point at a *region* you missed. If that produces a new hypothesis, the loop continues. If nothing new surfaces, call `markTerminated(planRoot, reason)` to close the log and hand off to `/retrospective`.
+
+## When to skip the ritual entirely
+
+For genuinely trivial plans (two-line typo fix, changelog entry, variable rename with no behavioral change), the ritual's cost may exceed its discipline value. To skip, the plan's `impl.md` frontmatter must contain BOTH:
+
+```yaml
+falsification: skipped
+falsification_reason: "a non-empty reason, quoted as a YAML string"
+```
+
+The retrospective skill's Step 0 gate accepts either a completed falsification log OR the two-field skip frontmatter. Skipping is a confession, not a bypass — use sparingly.
+
+## Output
+
+By the time you hand off to `/retrospective`, one of these must be true:
+
+- `.indusk/planning/{plan}/falsification.md` exists with a terminator entry (log is closed cleanly), OR
+- The plan reopened (`impl` status flipped to `in-progress`) via a "fix in scope" outcome and `/work` is active again (deferring falsification until the fix lands)
+
+The `/retrospective` skill's Step 0 hard-blocks without this. Don't bypass it.
+
+## Why this exists
+
+See the [Falsification Ritual guide](apps/indusk-docs/src/guide/falsification-ritual.md) for the full motivation. Short version: the Test Trajectory made universal deferral structurally impossible, but authors only write the tests they can think of — and the author is the last person likely to notice the gaps in their own thinking. The ritual is a bullshit detector. Its purpose is rigor through self-examination.
+
+## Important
+
+- Same agent, flipped goal. No persona, no separate session. The same you that built the plan, asking a different question.
+- Bounty hunting, not candidate generation. Investigate first, hypothesize specifically, write the test that targets *that*.
+- Exit criterion is "can't form a specific in-scope hypothesis" — not "ran out of candidates" or "tried N things."
+- The log is append-only. Never edit `falsification.md` by hand. Write via `appendHypothesis` / `markTerminated` from the library.
+- If you find a gap, pick an outcome. Do not log a failing test and then continue looking for more failing tests as if the first didn't matter — each failure demands a decision before moving on.
+- The user's input is: $ARGUMENTS
package/skills/planner.md
CHANGED
@@ -92,7 +92,7 @@ Workflow templates are in `templates/workflows/` in the package. They describe w
 ```
 mcp__graphiti__add_memory({
   name: "adr-{plan-name}",
-  episode_body: "In
+  episode_body: "In context facing: {use case AND constraint}. We decided for: {chosen option}. And against: {rejected alternatives}. To achieve: {desired outcome}. Accepting: {tradeoff}. Because: {rationale}.",
   group_id: "{project-group}",
   source: "text",
   source_description: "ADR acceptance"
@@ -207,13 +207,34 @@ status: proposed | accepted | deprecated | superseded | abandoned
 # {Title}
 
 ## Y-Statement
-
-
-
-
-
-
-
+
+**In the context of:**
+{the use case — one paragraph, plain text, not bold}
+
+**Facing:**
+{the constraint or problem the use case presents — one paragraph}
+
+**We decided for:**
+{the chosen option — one paragraph}
+
+**And against:**
+{the rejected alternatives — one paragraph}
+
+**To achieve:**
+{the desired outcome — one paragraph}
+
+**Accepting:**
+{the tradeoff — one paragraph}
+
+**Because:**
+{the rationale — one paragraph}
+
+Format rules (the standard Y-statement format for every ADR in every project going forward):
+- Use all seven canonical clauses: In the context of, Facing, We decided for, And against, To achieve, Accepting, Because. These are the standard Y-statement fields — do not collapse, rename, or omit them.
+- Each clause is its own section. The clause label is bold and ends with a colon.
+- The paragraph body begins on the next line immediately after the bold label — no blank line between the label and the paragraph.
+- The paragraph body is plain text — not bold, no inline label.
+- A blank line separates each clause (between the end of one paragraph and the next bold label).
 
 ## Context
 {Situation and background. Reference research and brief.}
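To make the template's format rules concrete, here is a filled-in Y-statement. The content (a dashboard-latency decision) is entirely invented for illustration; only the clause structure follows the rules above.

```markdown
## Y-Statement

**In the context of:**
Serving per-user dashboards that aggregate events from several services.

**Facing:**
Fan-out queries at request time were exceeding the latency budget.

**We decided for:**
A per-user materialized view refreshed on event ingest.

**And against:**
On-demand aggregation with caching, and a nightly batch rollup.

**To achieve:**
Sub-100ms dashboard reads regardless of event volume.

**Accepting:**
Eventual consistency of up to a few seconds and extra storage per user.

**Because:**
Read latency is user-facing; ingest-time cost and storage are cheap by comparison.
```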
package/skills/retrospective.md
CHANGED
@@ -28,6 +28,34 @@ The retrospective skill replaces the freeform "write a retrospective" step with
 
 Work through these steps in order. Each step is blocking — do not skip ahead.
 
+### Step 0: Falsification Gate
+
+**This gate blocks everything below. Do not proceed to Step 1 until it passes.**
+
+Before writing a single word of the retrospective, confirm that the plan has completed the falsification ritual or has an explicit, recorded skip-reason.
+
+Check the gate by reading two sources:
+
+1. **Completion:** Does `.indusk/planning/{plan-name}/falsification.md` exist with a terminator entry? Use `isFalsificationComplete(planRoot)` from `apps/indusk-mcp/src/lib/falsification/log.js` (invoke via `tsx` or an MCP tool wrapper).
+2. **Skip:** Does the impl's frontmatter contain BOTH `falsification: skipped` AND `falsification_reason: "{non-empty text}"`? Use `isFalsificationSkipped(implContent)` from `apps/indusk-mcp/src/lib/falsification/skip.js`.
+
+The gate passes if either condition holds. If neither holds, refuse to run the retrospective and surface this message to the user:
+
+> **Retrospective blocked: falsification gate not satisfied for `{plan-name}`.**
+>
+> Before closing out a plan, run `/falsify {plan-name}` to exercise the bounty-hunting ritual — investigate the code, form a specific hypothesis about what should be broken, write the test that confirms it. The ritual may surface gaps worth addressing before archival (fix in scope, spawn a new plan, or accept as finding).
+>
+> To skip the ritual intentionally, add these two fields to the impl's frontmatter:
+>
+> ```yaml
+> falsification: skipped
+> falsification_reason: "why skipping is acceptable for this specific plan"
+> ```
+>
+> The skip-reason is recorded in the archive and surfaced in retrospectives. Use sparingly — typically only for trivial typo-fix plans where the ritual cost exceeds the discipline value.
+
+Do not proceed to Step 1 until the gate passes. This is structural enforcement of the discipline documented in the [Falsification Ritual guide](apps/indusk-docs/src/guide/falsification-ritual.md) — happy-path authoring produces happy-path tests, and the ritual is the mechanism for surfacing the gaps the author couldn't think of.
+
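As a sketch of what the skip-side check might look like, here is a minimal line-based `isFalsificationSkipped`. This is an assumption for illustration only; the real implementation in `apps/indusk-mcp/src/lib/falsification/skip.js` may parse the YAML frontmatter properly rather than scanning lines.

```typescript
// Hypothetical sketch of the two-field skip check described above:
// the gate requires BOTH `falsification: skipped` AND a non-empty,
// quoted `falsification_reason` in the impl's frontmatter.
function isFalsificationSkipped(implContent: string): boolean {
  const skipped = /^falsification:\s*skipped\s*$/m.test(implContent);
  const reason = /^falsification_reason:\s*"(.+)"\s*$/m.exec(implContent);
  return skipped && reason !== null && reason[1].trim().length > 0;
}

const impl = `---
status: completed
falsification: skipped
falsification_reason: "two-line typo fix; no behavioral change"
---
# impl`;

console.log(isFalsificationSkipped(impl)); // true
console.log(isFalsificationSkipped("---\nfalsification: skipped\n---")); // false: reason missing
```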
 ### Step 1: Write the Retrospective Document
 
 Create `.indusk/planning/{plan-name}/retrospective.md` using the template from the plan skill. This is the reflective writing — what we set out to do, what actually happened, what we learned.
package/skills/work.md
CHANGED
@@ -204,7 +204,8 @@ The hook validates that both `asked:` and `user:` are present with non-empty quo
 - Update impl status to `completed`
 - Summarize what was done
 - If this plan included an ADR, confirm CLAUDE.md's Key Decisions was updated
--
+- **Run `/falsify {plan}` next, before `/retrospective`.** The falsification ritual is the bridge between "impl done" and "plan archived." It drives the same working agent through a goal-flipped bounty hunt — investigate the code, form a specific hypothesis about what should be broken, write the test that confirms it. The ritual may surface gaps worth addressing, which can reopen the impl (status flips back to `in-progress`) for a fix-in-scope phase, or spawn a new plan, or be recorded as a finding. Only after `/falsify` terminates cleanly — or has been explicitly skipped via `falsification: skipped` + `falsification_reason: "..."` in the impl frontmatter — is the plan ready for `/retrospective`. See the [Falsification Ritual guide](apps/indusk-docs/src/guide/falsification-ritual.md) and `.indusk/planning/archive/falsification-ritual/adr.md`.
+- Let the user know: "Impl complete. Run `/falsify {plan}` next. If it terminates cleanly, then `/retrospective {plan}` will close out the plan."
 
 ## Teach Mode
 