deepflow 0.1.87 → 0.1.88

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -5,80 +5,41 @@ description: Set up and launch fully autonomous execution with plan generation a
 
  # /df:auto — Autonomous Mode Setup
 
- Set up and launch fully autonomous execution. Runs `/df:plan` if no PLAN.md exists, takes a ratchet snapshot, then starts `/loop 1m /df:auto-cycle`.
+ Set up and launch fully autonomous execution. Run `/df:plan` if no PLAN.md, take ratchet snapshot, start `/loop 1m /df:auto-cycle`.
 
  **NEVER:** use EnterPlanMode, use ExitPlanMode
 
- ## Usage
- ```
- /df:auto # Set up and start autonomous loop
- ```
-
  ## Behavior
 
  ### 1. RUN PLAN IF NEEDED
 
- ```
- If PLAN.md does not exist:
- → Run /df:plan via Skill tool (skill: "df:plan", no args)
- → Wait for plan to complete before continuing
- If PLAN.md exists:
- → Skip planning, proceed to step 2
- ```
+ If PLAN.md missing → run `/df:plan` via Skill tool, wait for completion. If exists → skip.
 
  ### 2. RATCHET SNAPSHOT
 
- Before starting the loop, snapshot pre-existing test files so the ratchet has a stable baseline:
+ Snapshot pre-existing test files for stable ratchet baseline:
 
  ```bash
- # Snapshot pre-existing test files (only these count for ratchet)
  git ls-files | grep -E '\.(test|spec)\.[^/]+$|^test_|_test\.[^/]+$|^tests/|__tests__/' \
  > .deepflow/auto-snapshot.txt
-
  echo "Ratchet snapshot: $(wc -l < .deepflow/auto-snapshot.txt) pre-existing test files"
  ```
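The grep filter above can be spot-checked in isolation. The pattern is copied from the snapshot command; the file list below is hypothetical, chosen to cover each alternative in the pattern:

```shell
# Pattern copied from the snapshot command; file names are illustrative
pattern='\.(test|spec)\.[^/]+$|^test_|_test\.[^/]+$|^tests/|__tests__/'
printf '%s\n' src/app.ts src/app.test.ts src/util_test.go tests/helpers.py test_main.py \
  | grep -E "$pattern"
```

Only the four test-style paths survive; `src/app.ts` is filtered out.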
 
- **Only pre-existing test files are used for ratchet evaluation.** New test files created by agents during implementation do not influence pass/fail decisions. This prevents agents from gaming the ratchet by writing tests that pass trivially.
+ Only pre-existing tests count for ratchet. New agent-created tests are excluded to prevent gaming.
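A minimal sketch of what that exclusion could look like at evaluation time, intersecting the current test list with the snapshot. The filenames here are hypothetical; only the snapshot-file idea is from the docs:

```shell
# Hypothetical current test list vs. the pre-loop snapshot (both sorted for comm)
printf '%s\n' tests/old_feature.py tests/agent_added.py | sort > current-tests.txt
printf '%s\n' tests/old_feature.py | sort > auto-snapshot.txt

# comm -12 keeps only lines present in both files: agent-added tests drop out
comm -12 current-tests.txt auto-snapshot.txt
```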
 
  ### 3. START LOOP
 
- Launch the autonomous cycle loop:
-
  ```
  /loop 1m /df:auto-cycle
  ```
 
- This starts `/df:auto-cycle` on a 1-minute recurring interval. Each invocation runs with fresh context — no coordination overhead, zero LLM tokens on loop management.
+ Each invocation gets fresh context — zero LLM tokens on loop management.
 
  ## Rules
 
  | Rule | Detail |
  |------|--------|
- | Plan once | Only runs `/df:plan` if PLAN.md is absent |
- | Snapshot before loop | Ratchet baseline is set before any agents run |
- | No lead agent | No custom orchestrator — `/loop` is a native Claude Code feature |
- | Zero loop overhead | Loop coordination uses zero LLM tokens |
- | Cycle logic lives in `/df:auto-cycle` | This command is setup only |
-
- ## Example
-
- ```
- /df:auto
-
- No PLAN.md found — running /df:plan...
- ✓ Plan generated — 1 spec, 5 tasks.
-
- Ratchet snapshot: 12 pre-existing test files
-
- Starting loop: /loop 1m /df:auto-cycle
- ```
-
- ```
- /df:auto
-
- PLAN.md exists — skipping plan.
-
- Ratchet snapshot: 12 pre-existing test files
-
- Starting loop: /loop 1m /df:auto-cycle
- ```
+ | Plan once | Only runs `/df:plan` if PLAN.md absent |
+ | Snapshot before loop | Ratchet baseline set before any agents run |
+ | No lead agent | `/loop` is a native Claude Code feature, no custom orchestrator |
+ | Cycle logic in `/df:auto-cycle` | This command is setup only |
@@ -5,62 +5,38 @@ description: Remove duplicates and superseded entries from decisions file, promo
 
  # /df:consolidate — Consolidate Decisions
 
- ## Purpose
  Remove duplicates, superseded entries, and promote stale provisionals. Keep decisions.md dense and useful.
 
- **NEVER:** use EnterPlanMode, use ExitPlanMode
-
- ## Usage
- ```
- /df:consolidate # Consolidate decisions.md
- ```
+ **NEVER:** use EnterPlanMode, ExitPlanMode
 
  ## Behavior
 
  ### 1. LOAD
- Read `.deepflow/decisions.md`. If missing or empty, report and exit.
-
- Shell injection (use output directly — no manual file reads needed):
- - `` !`cat .deepflow/decisions.md 2>/dev/null || echo 'NOT_FOUND'` ``
+ Read `.deepflow/decisions.md` via `` !`cat .deepflow/decisions.md 2>/dev/null || echo 'NOT_FOUND'` ``. If missing/empty, report and exit.
 
- ### 2. ANALYZE
- Model-driven analysis (not regex):
- - Identify duplicate decisions (same meaning, different wording)
- - Identify superseded decisions (later entry contradicts earlier)
+ ### 2. ANALYZE (model-driven, not regex)
+ - Identify duplicates (same meaning, different wording)
+ - Identify superseded entries (later contradicts earlier)
  - Identify stale `[PROVISIONAL]` entries (>30 days old, no resolution)
 
  ### 3. CONSOLIDATE
- - Remove duplicates (keep the more precise wording)
- - Silently remove superseded entries (the later decision wins)
- - Promote stale `[PROVISIONAL]` to `[DEBT]` (needs revisiting)
- - Preserve all `[APPROACH]` entries unless superseded
- - Preserve all `[ASSUMPTION]` entries unless invalidated
- - Target: 200-500 lines (if currently longer)
+ - Remove duplicates (keep more precise wording)
+ - Remove superseded entries (later decision wins)
+ - Promote stale `[PROVISIONAL]` → `[DEBT]`
+ - Preserve `[APPROACH]` unless superseded, `[ASSUMPTION]` unless invalidated
+ - Target: 200-500 lines if currently longer
  - When in doubt, keep both entries (conservative)
 
  ### 4. WRITE
  - Rewrite `.deepflow/decisions.md` with consolidated content
- - Write timestamp to `.deepflow/last-consolidated.json`:
- ```json
- { "last_consolidated": "{ISO-8601 timestamp}" }
- ```
+ - Write `{ "last_consolidated": "{ISO-8601}" }` to `.deepflow/last-consolidated.json`
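One way to produce that file from the shell (the path and key are from the docs; the exact command an agent uses may differ):

```shell
# Write the consolidation timestamp in ISO-8601 (UTC)
mkdir -p .deepflow
printf '{ "last_consolidated": "%s" }\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  > .deepflow/last-consolidated.json
cat .deepflow/last-consolidated.json
```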
 
  ### 5. REPORT
- ```
- ✓ Consolidated: {before} → {after} lines, {n} removed, {n} promoted to [DEBT]
- ```
-
- ## Tags
- | Tag | Meaning | Source |
- |-----|---------|--------|
- | `[APPROACH]` | Firm decision | Auto-extraction, /df:note |
- | `[PROVISIONAL]` | Revisit later | Auto-extraction, /df:note |
- | `[ASSUMPTION]` | Unverified | Auto-extraction, /df:note |
- | `[DEBT]` | Needs revisiting | Consolidation only |
+ `✓ Consolidated: {before} → {after} lines, {n} removed, {n} promoted to [DEBT]`
 
  ## Rules
+
  - Conservative: when in doubt, keep both entries
  - Never add new decisions — only remove, merge, or re-tag
- - [DEBT] is never manually assigned — only produced by consolidation
+ - `[DEBT]` is only produced by consolidation, never manually assigned
  - Preserve chronological ordering within sections
- decisions.md stays a single flat file, human-readable
@@ -8,54 +8,35 @@ allowed-tools: [Read, Grep, Glob, Agent]
 
  ## Orchestrator Role
 
- You coordinate reasoner agents to debate a problem from multiple perspectives, then synthesize their arguments into a structured document.
+ Coordinate reasoner agents to debate a problem from multiple perspectives, then synthesize into a structured document.
 
- **NEVER:** use TaskOutput, use `run_in_background`, use Explore agents, use EnterPlanMode, use ExitPlanMode
+ **NEVER:** use TaskOutput, `run_in_background`, Explore agents, EnterPlanMode, ExitPlanMode
 
  **ONLY:** Gather codebase context (Glob/Grep/Read), spawn reasoner agents (non-background), write debate file, respond conversationally
 
- ---
-
- ## Purpose
- Generate a multi-perspective analysis of a problem before formalizing into a spec. Surfaces tensions, trade-offs, and blind spots that a single perspective would miss.
-
- ## Usage
- ```
- /df:debate <name>
- ```
-
- ## Skills & Agents
+ ## Agents
 
- **Use Task tool to spawn agents:**
- | Agent | subagent_type | model | Purpose |
- |-------|---------------|-------|---------|
+ | Agent | subagent_type | model | Focus |
+ |-------|---------------|-------|-------|
  | User Advocate | `reasoner` | `opus` | UX, simplicity, real user needs |
  | Tech Skeptic | `reasoner` | `opus` | Technical risks, hidden complexity, feasibility |
  | Systems Thinker | `reasoner` | `opus` | Integration, scalability, long-term effects |
  | LLM Efficiency | `reasoner` | `opus` | Token density, minimal scaffolding, navigable structure |
  | Synthesizer | `reasoner` | `opus` | Merge perspectives into consensus + tensions |
 
- ---
-
  ## Behavior
 
  ### 1. SUMMARIZE
-
- Summarize conversation context in ~200 words: core problem, key requirements, constraints, user priorities. Passed to each perspective agent.
+ Summarize conversation context in ~200 words: core problem, requirements, constraints, user priorities. Passed to each perspective agent.
 
  ### 2. GATHER CODEBASE CONTEXT
-
- Ground the debate in what actually exists. Glob/Grep/Read relevant files (up to 5-6, focus on core logic).
-
- Produce a ~300 word codebase summary: what exists, key interfaces/contracts, current limitations, dependencies. Passed to every perspective agent so they argue from facts, not assumptions.
+ Glob/Grep/Read relevant files (up to 5-6, focus on core logic). Produce ~300 word codebase summary: what exists, key interfaces, current limitations, dependencies. Passed to every agent.
 
  ### 3. SPAWN PERSPECTIVES
 
- **Spawn ALL 4 perspective agents in ONE message (non-background, parallel):**
-
- Each agent receives the same preamble + codebase context but a different role lens.
+ **Spawn ALL 4 perspective agents in ONE message (parallel, non-background).** Each receives the shared preamble + a role-specific lens.
 
- **Shared preamble for all perspectives:**
+ **Shared preamble (included in every agent prompt):**
  ```
  ## Context
  {summary}
@@ -71,146 +52,36 @@ Provide:
  Keep response under 400 words.
  ```
 
- **Perspective-specific role lenses (append to preamble):**
-
- ```python
- # All 4 in a single message — parallel, non-background:
-
- Task(subagent_type="reasoner", model="opus", prompt="""
- {shared_preamble}
-
- ## Your Role: USER ADVOCATE
- Argue from the perspective of the end user. Focus on:
- - Simplicity and ease of use
- - Real user needs vs assumed needs
- - Friction points and cognitive load
- - Whether the solution matches how users actually think
- """)
-
- Task(subagent_type="reasoner", model="opus", prompt="""
- {shared_preamble}
-
- ## Your Role: TECH SKEPTIC
- Challenge technical assumptions and surface hidden complexity. Focus on:
- - What could go wrong technically
- - Hidden dependencies or coupling
- - Complexity that seems simple but isn't
- - Maintenance burden over time
- """)
-
- Task(subagent_type="reasoner", model="opus", prompt="""
- {shared_preamble}
-
- ## Your Role: SYSTEMS THINKER
- Analyze how this fits into the broader system. Focus on:
- - Integration with existing components
- - Scalability implications
- - Second-order effects and unintended consequences
- - Long-term evolution and extensibility
- """)
-
- Task(subagent_type="reasoner", model="opus", prompt="""
- {shared_preamble}
-
- ## Your Role: LLM EFFICIENCY
- Evaluate from the perspective of LLM consumption and interaction. Focus on:
- - Token density: can the output be consumed efficiently by LLMs?
- - Minimal scaffolding: avoid ceremony that adds tokens without information
- - Navigable structure: can an LLM quickly find what it needs?
- - Attention budget: does the design respect limited context windows?
- """)
- ```
-
- ### 4. SYNTHESIZE
-
- After all 4 perspectives return, spawn 1 additional reasoner to synthesize:
-
- ```python
- Task(subagent_type="reasoner", model="opus", prompt="""
- You are the SYNTHESIZER. Four perspectives have debated a design problem.
-
- ## Context
- {summary}
-
- ## User Advocate's Arguments
- {user_advocate_response}
-
- ## Tech Skeptic's Arguments
- {tech_skeptic_response}
-
- ## Systems Thinker's Arguments
- {systems_thinker_response}
+ **Role lenses (append one per agent):**
 
- ## LLM Efficiency's Arguments
- {llm_efficiency_response}
+ | Role | Focus areas |
+ |------|------------|
+ | USER ADVOCATE | Simplicity, real vs assumed needs, friction, cognitive load, user mental model |
+ | TECH SKEPTIC | What could go wrong, hidden dependencies/coupling, deceptive simplicity, maintenance burden |
+ | SYSTEMS THINKER | Integration with existing components, scalability, second-order effects, extensibility |
+ | LLM EFFICIENCY | Token density, minimal ceremony, navigable structure, attention budget |
 
- ## Your Task
- Synthesize these perspectives into:
+ ### 4. SYNTHESIZE
 
- 1. **Consensus** — Points where all or most perspectives agree
+ After all 4 return, spawn 1 synthesizer agent. Pass context summary + all 4 responses. Synthesizer produces (under 500 words):
+ 1. **Consensus** — Points where perspectives agree
  2. **Tensions** — Unresolved disagreements and genuine trade-offs
- 3. **Open Decisions** — Questions that need human judgment to resolve
- 4. **Recommendation** — Your balanced recommendation considering all perspectives
-
- Be specific. Name the tensions, don't smooth them over.
+ 3. **Open Decisions** — Questions needing human judgment
+ 4. **Recommendation** — Balanced recommendation considering all perspectives
 
- Keep response under 500 words.
- """)
- ```
+ Instruction: "Be specific. Name the tensions, don't smooth them over."
 
  ### 5. WRITE DEBATE FILE
 
- Create `specs/.debate-{name}.md` with sections: Context · Codebase Context · Perspectives (User Advocate / Tech Skeptic / Systems Thinker / LLM Efficiency) · Synthesis (Consensus / Tensions / Open Decisions / Recommendation).
+ Create `specs/.debate-{name}.md` with sections: Context, Codebase Context, Perspectives (User Advocate / Tech Skeptic / Systems Thinker / LLM Efficiency), Synthesis (Consensus / Tensions / Open Decisions / Recommendation).
 
  ### 6. CONFIRM
 
  Present key tensions and open decisions, then: `Next: Run /df:spec {name} to formalize into a specification`
 
- ---
-
  ## Rules
 
- - **All 4 perspective agents MUST be spawned in ONE message** (parallel, non-background)
- - **Codebase context is gathered by the orchestrator** (step 2) and passed to agents via prompt
- - Reasoner agents receive context through their prompt, not by reading files themselves
- - The debate file goes in `specs/` so `/df:spec` can reference it
- - File name MUST be `.debate-{name}.md` (dot prefix = auxiliary file)
- - Keep each perspective under 400 words, synthesis under 500 words
-
- ## Example
-
- ```
- USER: /df:debate auth
-
- CLAUDE: Let me summarize what we've discussed and understand the current
- codebase before getting multiple perspectives on the authentication design.
-
- [Summarizes: ~200 words about auth requirements from conversation]
-
- [Globs/Greps/Reads relevant auth files — middleware, routes, config]
-
- [Produces ~300 word codebase summary of what exists]
-
- [Spawns 4 reasoner agents in parallel — each receives both summaries]
-
- [All 4 return their arguments]
-
- [Spawns synthesizer agent with all 4 perspectives]
-
- [Synthesizer returns consensus, tensions, open decisions, recommendation]
-
- [Writes specs/.debate-auth.md]
-
- ✓ Created specs/.debate-auth.md
-
- Key tensions:
- - OAuth complexity vs simpler API key approach
- - User convenience (social login) vs privacy concerns
- - Centralized auth service vs per-route middleware
-
- Open decisions:
- - Session storage strategy (JWT vs server-side)
- - Token expiration policy
-
- Next: Run /df:spec auth to formalize into a specification
- ```
+ - ALL 4 perspective agents MUST be spawned in ONE message (parallel, non-background)
+ - Orchestrator gathers codebase context (step 2), passes to agents via prompt — agents never read files
+ - File name MUST be `.debate-{name}.md` (dot prefix = auxiliary file, lives in `specs/`)
+ - Word limits: each perspective <400 words, synthesis <500 words
@@ -6,18 +6,11 @@ allowed-tools: [AskUserQuestion, Read, Agent]
 
  # /df:discover — Deep Problem Exploration
 
- ## Orchestrator Role
-
  You are a Socratic questioner. Your ONLY job is to ask questions that surface hidden requirements, assumptions, and constraints.
 
  **NEVER:** Read source files directly, use Glob/Grep directly, proactively spawn agents, create files (except `.deepflow/decisions.md`), run git, use TaskOutput, use Task tool, use EnterPlanMode, use ExitPlanMode
 
- **ONLY:** Ask questions using `AskUserQuestion` tool, respond conversationally, and spawn context-fetch agents **when the user explicitly requests it**
-
- ---
-
- ## Purpose
- Explore a problem space deeply before formalizing into specs. Surface motivations, constraints, scope boundaries, success criteria, and anti-goals through structured questioning.
+ **ONLY:** Ask questions via `AskUserQuestion`, respond conversationally, spawn context-fetch agents **only when the user explicitly requests it**.
 
  ## Usage
  ```
@@ -26,198 +19,59 @@ Explore a problem space deeply before formalizing into specs. Surface motivation
 
  ## Behavior
 
- Work through these phases organically. You don't need to announce phases — let the conversation flow naturally. Move to the next phase when the current one feels sufficiently explored.
-
- ### Phase 1: MOTIVATION
- Why does this need to exist? What problem does it solve? Who suffers without it?
-
- Example questions:
- - What triggered the need for this?
- - Who will use this and what's their current workaround?
- - What happens if we don't build this?
-
- ### Phase 2: CONTEXT
- What already exists? What has been tried? What's the current state?
-
- Example questions:
- - Is there existing code or infrastructure that relates to this?
- - Have you tried solving this before? What worked/didn't?
- - Are there external systems or APIs involved?
-
- ### Phase 3: SCOPE
- What's in? What's out? What's the minimum viable version?
-
- Example questions:
- - What's the smallest version that would be useful?
- - What features feel essential vs nice-to-have?
- - Are there parts you explicitly want to exclude?
-
- ### Phase 4: CONSTRAINTS
- Technical limits, time pressure, resource boundaries?
+ Work through phases organically. Don't announce phases — let conversation flow naturally. Move on when a phase feels sufficiently explored.
 
- Example questions:
- - Are there performance requirements or SLAs?
- - What technologies are non-negotiable?
- - Is there a deadline or timeline pressure?
+ | Phase | Purpose |
+ |-------|---------|
+ | 1. MOTIVATION | Why does this need to exist? What problem? Who suffers without it? |
+ | 2. CONTEXT | What exists? What's been tried? Current state? External systems? |
+ | 3. SCOPE | What's in/out? Minimum viable version? Essential vs nice-to-have? |
+ | 4. CONSTRAINTS | Performance requirements? Non-negotiable tech? Timeline pressure? |
+ | 5. SUCCESS | How to verify it works? What metrics? What makes you confident to ship? |
+ | 6. ANTI-GOALS | What to explicitly NOT build? Common over-engineering traps? Failed approaches elsewhere? |
 
- ### Phase 5: SUCCESS
- How do we know it worked? What does "done" look like?
+ ## Questioning Rules
 
- Example questions:
- - How will you verify this works correctly?
- - What metrics would indicate success?
- - What would make you confident enough to ship?
+ - Use `AskUserQuestion` for structured questions with options. Max **4 questions per call** (tool limit). Headers **≤12 chars**.
+ - Mix structured questions with conversational follow-ups.
+ - Follow up on surprising/unclear answers — don't march through phases mechanically.
+ - **Never re-ask answered questions.** Review prior answers before composing each call. If a topic was settled, reference the prior answer and move forward.
+ - Keep responses short between questions — don't lecture. Acknowledge answers briefly.
 
- ### Phase 6: ANTI-GOALS
- What should we explicitly NOT do? What traps to avoid?
+ ## On-Demand Context Fetching
 
- Example questions:
- - What's the most common way this kind of feature gets over-engineered?
- - Are there approaches you've seen fail elsewhere?
- - What should we explicitly avoid building?
-
- ---
-
- ## Rules
-
- ### Questioning Rules
- - Use `AskUserQuestion` tool for structured questions with options
- - Maximum **4 questions per `AskUserQuestion` call** (tool limit)
- - Headers must be **≤12 characters**
- - Mix structured questions (AskUserQuestion) with conversational follow-ups
- - Ask follow-up questions based on answers — don't just march through phases mechanically
- - Go deeper on surprising or unclear answers
- ### Behavioral Rules
- - Keep your responses short between questions — don't lecture
- - Acknowledge answers briefly before asking the next question
-
- ### On-Demand Context Fetching
-
- When the user explicitly asks you to look at code or a URL (e.g., "look at the code", "check that link", "look at src/auth/", "check https://docs.example.com"), fetch context using a sub-agent.
-
- **Trigger:** Intent-based detection — the user must explicitly request it. NEVER proactively fetch context.
+ **Trigger:** User explicitly asks to look at code or a URL (e.g., "look at src/auth/", "check this link"). NEVER proactively fetch.
 
  **For codebase context:**
  ```
- Agent(subagent_type="Explore", model="haiku", prompt="""
- Read and summarize the following: {what the user asked to see}
-
- Rules:
- - Return ONLY factual observations: what files exist, what functions/types are defined, what patterns are used
- - Do NOT suggest solutions, improvements, or architectural changes
- - Do NOT give opinions on code quality
- - Keep response under 4000 tokens
- - Format: bullet points of facts
- """)
+ Agent(subagent_type="Explore", model="haiku", prompt="Read and summarize: {target}. Rules: factual observations only (files, functions, types, patterns). No solutions/improvements/opinions. Under 4000 tokens. Bullet points.")
  ```
 
  **For URL context:**
  ```
- Agent(subagent_type="Explore", model="haiku", prompt="""
- Use the browse-fetch skill to fetch this URL: {url}
-
- Then summarize what the page contains.
-
- Rules:
- - Return ONLY factual observations: what the documentation says, what APIs are described, what patterns are shown
- - Do NOT suggest how to use this in the project
- - Do NOT give opinions or recommendations
- - Keep response under 4000 tokens
- - Format: bullet points of facts
- """)
+ Agent(subagent_type="Explore", model="haiku", prompt="Use browse-fetch skill to fetch: {url}. Summarize contents. Rules: factual observations only. No recommendations. Under 4000 tokens. Bullet points.")
  ```
 
- **After receiving context:** Briefly share the factual summary with the user, then **resume Socratic questioning** incorporating the new facts. Do NOT shift to suggesting solutions.
-
- **Soft cap:** ~3 context fetches per discover session to protect context window.
+ After receiving context: share factual summary, then **resume Socratic questioning** incorporating new facts. Do NOT shift to suggesting solutions. Soft cap: ~3 context fetches per session.
 
- ### When the User Wants to Move On
- When the user signals they want to advance (e.g., "I think that's enough", "let's move on", "ready for next step"):
+ ## When the User Wants to Move On
 
- ```
- Great, we've covered a lot of ground. To analyze this from multiple perspectives, run:
+ Assess spec layer reached:
 
- /df:debate {name}
+ | Layer | Criteria |
+ |-------|----------|
+ | L0 | Objective clear |
+ | L1 | Requirements enumerated |
+ | L2 | Testable ACs defined |
+ | L3 | Constraints + scope + tech context |
 
- Or if you want to go straight to a spec:
-
- /df:spec {name}
  ```
+ Great, we've covered enough for an L{N} spec ({label}).
 
- ## Example
+ /df:spec {name} — generate spec at current layer
+ /df:debate {name} — analyze from multiple perspectives first
 
- ```
- USER: /df:discover auth
-
- CLAUDE: Let's explore the authentication problem space.
-
- [Uses AskUserQuestion]
- {
- "questions": [
- {
- "question": "What's driving the need for auth right now?",
- "header": "Motivation",
- "multiSelect": false,
- "options": [
- {"label": "Security gap", "description": "Current system has no auth or weak auth"},
- {"label": "New users", "description": "Opening up to users who need access control"},
- {"label": "Compliance", "description": "Regulatory or policy requirement"}
- ]
- },
- {
- "question": "Who are the primary users that will authenticate?",
- "header": "Users",
- "multiSelect": true,
- "options": [
- {"label": "End users", "description": "Public-facing users of the product"},
- {"label": "Internal team", "description": "Your team or organization members"},
- {"label": "API clients", "description": "Other services or integrations"}
- ]
- }
- ]
- }
-
- USER: [Selects: Security gap, End users + API clients]
-
- CLAUDE: So there's a real security gap and you need to cover both human users
- and machine clients. That's two different auth flows to think about.
-
- [Uses AskUserQuestion]
- {
- "questions": [
- {
- "question": "Do end users currently have any way to identify themselves?",
- "header": "Current state",
- "multiSelect": false,
- "options": [
- {"label": "Nothing", "description": "Completely open, no identification"},
- {"label": "Basic auth", "description": "Simple username/password exists"},
- {"label": "Third-party", "description": "Using external auth provider already"}
- ]
- },
- {
- "question": "For API clients, what authentication exists today?",
- "header": "API auth",
- "multiSelect": false,
- "options": [
- {"label": "None", "description": "APIs are open"},
- {"label": "API keys", "description": "Static keys in headers"},
- {"label": "OAuth/JWT", "description": "Token-based auth already in place"}
- ]
- }
- ]
- }
-
- [... continues through phases based on answers ...]
-
- USER: I think we've covered everything, let's move on.
-
- CLAUDE: Great, we've covered a lot of ground. To analyze this from multiple
- perspectives, run:
-
- /df:debate auth
-
- Or if you want to go straight to a spec:
-
- /df:spec auth
+ {If L0-L1:}
+ At L{N}, /df:plan will generate spikes to discover what's missing.
+ Deepen the spec later with /df:spec {name} after spikes run.
  ```