@curdx/flow 2.0.0-beta.3 → 2.0.0-beta.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +1 -1
- package/.claude-plugin/plugin.json +1 -1
- package/agent-preamble/preamble.md +8 -0
- package/agents/flow-architect.md +12 -0
- package/agents/flow-planner.md +18 -4
- package/agents/flow-product-designer.md +12 -0
- package/agents/flow-researcher.md +16 -0
- package/agents/flow-verifier.md +41 -14
- package/package.json +1 -1
package/.claude-plugin/marketplace.json
CHANGED

@@ -6,7 +6,7 @@
   },
   "metadata": {
     "description": "Claude Code Discipline Layer — spec-driven workflow + goal-backward verification + Karpathy 4 principles enforced via gates. Stops Claude from faking \"done\" on non-trivial features.",
-    "version": "2.0.0-beta.3"
+    "version": "2.0.0-beta.4"
   },
   "plugins": [
     {
package/.claude-plugin/plugin.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "curdx-flow",
-  "version": "2.0.0-beta.3",
+  "version": "2.0.0-beta.4",
   "description": "Claude Code Discipline Layer — spec-driven workflow + goal-backward verification + Karpathy 4 principles enforced via gates. Stops Claude from faking \"done\" on non-trivial features.",
   "author": {
     "name": "wdx",
package/agent-preamble/preamble.md
CHANGED

@@ -30,6 +30,14 @@
 - Do not say done/fixed/working without evidence
 - Tests first, goals first

+### 5. Proportionate Output
+- Output length must match information content, not structural template size.
+- Do not pad. If 30 lines of markdown fully answer the question, do not produce 300.
+- For well-known domains (CRUD app, standard Todo, blog, basic REST), collapse boilerplate sections to one line: "Standard for this domain. No novelty." Do not fill sections for the sake of filling them.
+- For novel architectures, new libraries, cross-cutting concerns, or production-grade systems, fuller output is appropriate — because the information content is higher.
+- Thoroughness ≠ length. Thoroughness = answering the actual questions the reader will ask. A reader opening a Todo research.md asks three questions, not thirty.
+- Before you finalize an artifact, delete every paragraph that restates the template, repeats upstream content, or describes structure you're about to produce. Those tokens earn nothing.
+
 ---

 ## L2: Mandatory Tool Rules (enforced)
package/agents/flow-architect.md
CHANGED
@@ -188,3 +188,15 @@ Next:
 - Review the design (especially AD-01/02/03)
 - /curdx-flow:spec --phase=tasks — break down tasks
 ```
+
+## Length discipline (see preamble L1 #5 — Proportionate Output)
+
+`design.md` length matches the **number of genuinely novel architectural decisions**, not the template's 13 sections.
+
+- **Well-known stack assembly** (Vue + Hono + SQLite Todo): **~150–300 lines**. Most sections collapse. Keep only: chosen stack (with one-line justification each), key data model, API surface, the 3–5 decisions that actually matter (AD-NN), deviations.
+- **Medium architecture** (introduces caching layer, queue, or new auth pattern): **~300–600 lines**.
+- **Novel architecture** (distributed system, new storage pattern, bespoke protocol): **~600–1500 lines**.
+
+Decisions (AD-NN) should earn their space. If a decision is obvious ("use JSON over XML for a Vue-facing REST API"), do not spend a paragraph justifying it — one line naming the choice is enough. Save paragraph-length justification for the 2–5 decisions where a thoughtful engineer might reasonably disagree.
+
+`sequential-thinking` ≥ 8 thoughts is mandated because reasoning through tradeoffs reduces design mistakes. It is NOT a mandate to emit 8 paragraphs. After thinking, the written `design.md` should contain only the conclusions, not the reasoning chain.
package/agents/flow-planner.md
CHANGED
@@ -167,12 +167,26 @@ Then emit the 5-line summary (see "Output to User" below). No inline task listing
 - ✗ Skipping the coverage audit
 - ✗ Proactively skipping some FRs in requirements for the sake of "simplification" (overreach)

-## Task
+## Task count proportional to feature complexity (adaptive, no config)

-
-- **coarse**: 15-60 minutes per task. Total 10-20
+Match task count to the **actual work**, not to a fixed target. Read the requirements and design, estimate scope, then decompose accordingly:

-
+| Feature scope | Typical task count | Examples |
+|---|---|---|
+| Well-known CRUD feature | **5–10 tasks** | Todo app, blog, basic form, simple REST endpoint set |
+| Medium feature | **10–20 tasks** | auth flow, settings dashboard, small integration |
+| Large feature | **20–30 tasks** | new subsystem, multi-service integration, data migration |
+| Epic-scale | **30–50 tasks** | consider splitting into sub-specs via the `epic` skill first |
+
+### Hard rule
+
+If you produce **more than 30 tasks for a feature that is not Epic-scale**, you are over-decomposing. Stop. Re-read the requirements. Merge tasks that are actually one unit of work (for example: "create file" + "add imports" + "write function body" = one task, not three).
+
+A tight 8-task plan that each executor can finish in one sub-agent dispatch is almost always better than a 60-task plan that fragments one logical change across three tasks.
+
+### Why this matters
+
+Token cost scales with task count × per-task sub-agent overhead. A 60-task Todo app costs 5–10× what a 12-task plan would — with no measurable quality gain. Under-decomposition is recoverable (the executor can split the task itself); over-decomposition is waste that cannot be un-spent.

 ## Output to User (5 lines max, after Write succeeds)

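The planner's task-count bands above can be expressed as a simple guard. This is a hypothetical sketch for illustration only — the function and band names are not part of the `@curdx/flow` codebase; the thresholds are taken directly from the table:

```python
# Hypothetical sketch of the planner's task-count bands.
# Band names and thresholds mirror the table above; nothing here
# is the plugin's actual implementation.
BANDS = [
    ("well-known CRUD", 5, 10),
    ("medium", 10, 20),
    ("large", 20, 30),
    ("epic", 30, 50),
]

def check_task_count(scope: str, n_tasks: int) -> str:
    """Return 'ok', or a warning when the plan violates a band."""
    low, high = next((lo, hi) for name, lo, hi in BANDS if name == scope)
    # Hard rule: >30 tasks outside Epic-scale means over-decomposition.
    if scope != "epic" and n_tasks > 30:
        return "over-decomposed: merge tasks"
    if n_tasks > high:
        return f"above the {scope} band ({low}-{high}): re-check decomposition"
    return "ok"
```

A 35-task plan for a "large" feature would trip the hard rule, while 8 tasks for a CRUD feature passes.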
package/agents/flow-product-designer.md
CHANGED

@@ -144,3 +144,15 @@ Out of Scope: K items explicitly excluded

 Next step: /curdx-flow:spec --phase=design
 ```
+
+## Length discipline (see preamble L1 #5 — Proportionate Output)
+
+`requirements.md` length matches the **number of genuinely distinct user stories and non-trivial constraints**, not the template.
+
+- **Simple feature** (Todo, CRUD form, 3–7 user stories): **~80–200 lines**. One US block per story, AC list, minimal NFR.
+- **Medium feature** (auth flow, dashboard with filters): **~200–400 lines**.
+- **Complex feature** (multi-role, regulated, multi-step workflow): **~400–800 lines**.
+
+Every AC must be **observable and testable**. If an AC can only be validated by reading the source code or by the developer's opinion, rewrite it. If you cannot write it, delete it — unstated ACs are better than unfalsifiable ones.
+
+Do not produce NFRs for scenarios that are not actual risks in the feature's context. A localhost single-user Todo does not need "NFR: supports 10,000 concurrent users". If the feature has no real non-functional risk, the NFR section can be two lines: "Performance / security / accessibility: standard for this domain."
package/agents/flow-researcher.md
CHANGED

@@ -153,3 +153,19 @@ Open questions (please answer before entering requirements phase):

 Next step: /curdx-flow:spec --phase=requirements
 ```
+
+## Length discipline (see preamble L1 #5 — Proportionate Output)
+
+`research.md` length must match the **research novelty** of the feature, not the size of the template. Use these bands:
+
+- **Well-known domain** (CRUD Todo, blog, standard REST API, basic SPA): **~30–80 lines**. Most sections collapse to "Standard stack: `<tech choices>`. No domain novelty. No library risks."
+- **Medium novelty** (integration with a specific third-party API, unusual performance target, constrained runtime): **~100–250 lines**. Expand only the sections with real findings.
+- **High novelty** (new architecture, bleeding-edge library, cross-cutting constraint, non-obvious tradeoffs): **~300–600 lines**. Fuller treatment is warranted.
+
+**Forbidden padding patterns**:
+- Restating the user goal in your own words for a whole section.
+- Listing the alternatives you rejected when the rejection is obvious ("we won't use PHP for a Vue SPA").
+- Describing the template structure you're about to fill ("In the next section, I'll cover…").
+- Copying upstream content (the goal from `.state.json`) into multiple sections.
+
+Before you `Write` research.md, delete every paragraph that would not change a reader's decision. That is the test.
package/agents/flow-verifier.md
CHANGED
@@ -85,34 +85,61 @@ for comp in design.components:
     assertions.append(("Comp", comp.name, f"{comp.name} must exist"))
 ```

-### Step 3:
+### Step 3: Classify every AC — does it describe user-visible behavior?
+
+**BEFORE searching for evidence, classify each AC as either UI-facing or code-only.**
+
+An AC is **UI-facing** if any of these is true:
+- Contains words: "user sees", "displays", "renders", "shown", "visible", "click", "type into", "press", "hover", "select"
+- Names a UI element: "button", "input", "checkbox", "link", "list", "form", "label", "modal", "banner"
+- Describes a user flow: "the user can do X", "after X the user sees Y"
+- References a visual state: "strikethrough", "highlighted", "disabled", "focus ring"
+
+An AC is **code-only** if it describes internal behavior:
+- Schema shape, API response structure, data transformations
+- Performance ("p95 < 50ms"), reliability, security properties
+- Error-envelope shapes, database constraints
+
+### Step 3a: Find evidence for code-only ACs

 ```python
-for source, id, text in assertions:
+for source, id, text in code_only_assertions:
     evidence = []
-
-    # Evidence 1: code implementation
     relevant_files = grep_codebase(extract_keywords(text))
     if relevant_files:
         evidence.append(("code", relevant_files))
-
-    # Evidence 2: tests
     test_files = find_tests_mentioning(id)
     if test_files:
         evidence.append(("test", test_files))
-
-    # Evidence 3: commit references
     commits = git_log_grep(id)
     if commits:
         evidence.append(("commit", commits))
-
-    # Verdict
-    if evidence:
-        status = "verified" if all_evidence_strong(evidence) else "partial"
-    else:
-        status = "missing"
+    status = "verified" if evidence and all_evidence_strong(evidence) else ("partial" if evidence else "missing")
 ```

+### Step 3b: UI-facing ACs REQUIRE browser verification (hard rule)
+
+Code inspection + unit tests are **insufficient** evidence for a UI-facing AC. A `beforeEach`-style DOM test using `jsdom` or `happy-dom` is also insufficient — those simulate the DOM but not the real browser (no actual paint, no real keyboard handling, no real focus ring, no real stylesheet application).
+
+For every UI-facing AC:
+
+```
+1. Check chrome-devtools MCP availability (mcp__chrome-devtools__*).
+2. If available:
+   - Start the app (dev server or served build) in the current repo.
+   - Drive the flow described in the AC: click / type / navigate.
+   - Capture screenshot + list_console_messages + list_network_requests.
+   - Compare observed behavior against the AC text.
+   - Verdict: verified | partial | failed, with the screenshot as evidence.
+3. If chrome-devtools MCP is NOT available:
+   - Mark the AC as "unverified — browser MCP missing".
+   - Add a CRITICAL section in verification-report.md listing the UI-facing ACs that could not be verified.
+   - Do NOT silently pass the AC based on code reading.
+   - Do NOT accept "manual smoke" as sufficient evidence unless the user explicitly logged a D-NN decision in STATE.md waiving automated browser verification.
+```
+
+Manual-smoke evidence (comments in tasks.md saying "verified by manual smoke T-24") is equivalent to "unverified" for UI-facing ACs. Flag it. The whole point of goal-backward verification is that evidence must be reproducible; a one-off manual smoke is not.
+
 ### Step 4: Run Actual Tests (Decisive)

 For each FR / AC, attempt to **run the tests** to confirm:
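The verifier's Step 3 classification amounts to a keyword scan over the AC text. A minimal sketch, assuming illustrative names — this is not the plugin's implementation, and the marker list is a crude substring heuristic abbreviated from the criteria in the diff above:

```python
# Hypothetical sketch of Step 3 (AC classification). Marker list is a
# subset of the UI-facing criteria; substring matching is deliberately
# naive, so short markers like "list" are omitted to limit false hits.
UI_MARKERS = [
    "user sees", "displays", "renders", "shown", "visible",
    "click", "type into", "press", "hover", "select",
    "button", "input", "checkbox", "link", "form", "label", "modal",
    "strikethrough", "highlighted", "disabled", "focus ring",
]

def classify_ac(ac_text: str) -> str:
    """Return 'ui-facing' when any UI marker appears, else 'code-only'."""
    text = ac_text.lower()
    return "ui-facing" if any(m in text for m in UI_MARKERS) else "code-only"
```

Only ACs classified "code-only" go through the Step 3a evidence loop; "ui-facing" ACs are routed to the Step 3b browser check.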
|