@leeovery/claude-technical-workflows 2.1.6 → 2.1.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/agents/implementation-polish.md +3 -8
- package/agents/implementation-task-executor.md +5 -14
- package/agents/implementation-task-reviewer.md +2 -16
- package/package.json +1 -1
- package/skills/start-research/SKILL.md +1 -1
- package/skills/technical-implementation/SKILL.md +1 -3
- package/skills/technical-implementation/references/code-quality.md +0 -10
- package/skills/technical-implementation/references/steps/invoke-executor.md +2 -6
- package/skills/technical-implementation/references/steps/invoke-polish.md +2 -3
- package/skills/technical-implementation/references/steps/invoke-reviewer.md +1 -5
- package/skills/technical-implementation/references/steps/task-loop.md +1 -15
- package/skills/technical-research/SKILL.md +51 -0
package/agents/implementation-polish.md CHANGED

@@ -20,12 +20,11 @@ You receive file paths and context via the orchestrator's prompt:
 3. **Specification path** — What was intended — design decisions and rationale
 4. **Plan file path** — What was built — the full task landscape
 5. **Plan format reading.md path** — How to read tasks from the plan (format-specific adapter)
-6. **
-7. **Project skill paths** — Framework conventions
+6. **Project skill paths** — Framework conventions
 
 On **re-invocation after user feedback**, additionally include:
 
-
+7. **User feedback** — the user's comments on what to change or focus on
 
 ## Hard Rules
 
@@ -50,7 +49,6 @@ Read and absorb the following. Do not write any code or dispatch any agents duri
 3. **Read project skills** — absorb framework conventions
 4. **Read the plan format's reading.md** — understand how to retrieve tasks from the plan
 5. **Read the plan** — follow the reading adapter's instructions to retrieve all completed tasks. Understand the full scope: phases, tasks, acceptance criteria, what was built
-6. **Read the integration context file** — understand patterns, helpers, and conventions from all tasks
 
 → Proceed to **Step 2**.
 
@@ -58,7 +56,7 @@ Read and absorb the following. Do not write any code or dispatch any agents duri
 
 ## Step 2: Identify Implementation Scope
 
-Find all files changed during implementation. Use git history
+Find all files changed during implementation. Use git history and the plan's task list to build a complete picture of what was touched. Read and understand the full implemented codebase.
 
 Build a definitive list of implementation files. This list is passed to analysis sub-agents in subsequent steps.
 
@@ -133,8 +131,6 @@ Invoke the `implementation-task-executor` agent (`implementation-task-executor.m
 - code-quality.md path
 - Specification path (if available)
 - Project skill paths
-- Plan file path
-- Integration context file path
 
 **STOP.** Do not proceed until the executor has returned its result.
 
@@ -148,7 +144,6 @@ Invoke the `implementation-task-reviewer` agent (`implementation-task-reviewer.m
 - Specification path
 - The same task description used for the executor (including test rules)
 - Project skill paths
-- Integration context file path
 
 **STOP.** Do not proceed until the reviewer has returned its result.
 
package/agents/implementation-task-executor.md CHANGED

@@ -18,12 +18,10 @@ You receive file paths and context via the orchestrator's prompt:
 3. **Specification path** — For context when rationale is unclear
 4. **Project skill paths** — Relevant `.claude/skills/` paths for framework conventions
 5. **Task content** — Task ID, phase, and all instructional content: goal, implementation steps, acceptance criteria, tests, edge cases, context, notes. This is your scope.
-6. **Plan file path** — The implementation plan, for understanding the overall task landscape and where this task fits
-7. **Integration context file path** (if exists) — Accumulated notes from prior tasks about established patterns, conventions, and decisions
 
 On **re-invocation after review feedback**, you receive all of the above, plus:
-
-
+6. **User-approved review notes** — may be the reviewer's original notes, modified by user, or user's own notes
+7. **Specific issues to address**
 
 You are stateless — each invocation starts fresh. The full task content is always provided so you can see what was asked, what was done, and what needs fixing.
 
@@ -33,14 +31,10 @@ You are stateless — each invocation starts fresh. The full task content is alw
 2. **Read code-quality.md** — absorb quality standards
 3. **Read project skills** — absorb framework conventions, testing patterns, architecture patterns
 4. **Read specification** (if provided) — understand broader context for this task
-5. **Explore codebase** —
-   - If an integration context file was provided, read it first — identify helpers, patterns, and conventions you must reuse before writing anything new
-   - Skim the plan file to understand the task landscape — what's been built, what's coming, where your task fits. Use this for awareness, not to build ahead (YAGNI still applies)
+5. **Explore codebase** — understand what exists before writing anything:
    - Read files and tests related to the task's domain
-   -
-   -
-   - Match conventions established in the codebase: error message style, naming patterns, file organisation, type placement
-   - Your code should read as if the same developer wrote the entire codebase
+   - Identify patterns, conventions, and structures you'll need to follow or extend
+   - Check for existing code that the task builds on or integrates with
 6. **Execute TDD cycle** — follow the process in tdd-workflow.md for each acceptance criterion and test case.
 7. **Verify all acceptance criteria met** — every criterion from the task must be satisfied
 8. **Return structured result**
@@ -81,10 +75,7 @@ FILES_CHANGED: {list of files created/modified}
 TESTS_WRITTEN: {list of test files/methods}
 TEST_RESULTS: {all passing | failures — details}
 ISSUES: {any concerns, blockers, or deviations discovered}
-INTEGRATION_NOTES:
-- {3-5 concise bullet points: key patterns, helpers, conventions, interface decisions established by this task. Anchor to concrete file paths where applicable (e.g., "Created `ValidationHelper` in `src/helpers/validation.ts` — use for all input validation"), but high-level observations without a specific file reference are also valuable}
 ```
 
 - If STATUS is `blocked` or `failed`, ISSUES **must** explain why and what decision is needed.
 - If STATUS is `complete`, all acceptance criteria must be met and all tests passing.
-- After completing the task, document what you created or established that future tasks should be aware of in INTEGRATION_NOTES. This is factual — what exists and why — not evaluative.
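The executor's result block is designed to be mechanically extractable. As a minimal sketch (hypothetical, not part of the package — the docs only specify the format itself), an orchestrator could pull out the single-line fields like this:

```python
import re

# Hypothetical helper: extracts the single-line "KEY: value" fields from an
# executor result block. Multi-line fields like SUMMARY would need more care.
def parse_executor_result(text: str) -> dict:
    fields = {}
    for key in ("STATUS", "TASK", "TEST_RESULTS", "ISSUES"):
        m = re.search(rf"^{key}:\s*(.*)$", text, re.MULTILINE)
        if m:
            fields[key] = m.group(1).strip()
    return fields

result = parse_executor_result(
    "STATUS: complete\n"
    "TASK: T3 — example task\n"
    "TEST_RESULTS: all passing\n"
)
assert result["STATUS"] == "complete"
```

The orchestrator would then route on `STATUS` (`complete` commits, `blocked`/`failed` surface ISSUES to the user), which is why the format insists those fields stay machine-readable.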
package/agents/implementation-task-reviewer.md CHANGED

@@ -18,7 +18,6 @@ You receive via the orchestrator's prompt:
 1. **Specification path** — The validated specification for design decision context
 2. **Task content** — Same task content the executor received: task ID, phase, and all instructional content
 3. **Project skill paths** — Relevant `.claude/skills/` paths for checking framework convention adherence
-4. **Integration context file path** (if exists) — Accumulated notes from prior tasks, for evaluating cohesion with established patterns
 
 ## Your Process
 
@@ -26,7 +25,7 @@ You receive via the orchestrator's prompt:
 2. **Check unstaged changes** — use `git diff` and `git status` to identify files changed by the executor
 3. **Read all changed files** — implementation code and test code
 4. **Read project skills** — understand framework conventions, testing patterns, architecture patterns
-5. **Evaluate all
+5. **Evaluate all five review dimensions** (see below)
 
 ## Review Dimensions
 
@@ -63,15 +62,6 @@ Is this a sound design decision? Will it compose well with future tasks?
 - Will this cause problems for subsequent tasks in the phase?
 - Are there structural concerns that should be raised now rather than compounding?
 
-### 6. Codebase Cohesion
-Does the new code integrate well with the existing codebase? If integration context exists from prior tasks, check against established patterns.
-- Is there duplicated logic that should be extracted into a shared helper?
-- Are existing helpers and patterns being reused where applicable?
-- Are naming conventions consistent with existing code?
-- Are error message conventions consistent (casing, wrapping style, prefixes)?
-- Do interfaces use concrete types rather than generic/any types where possible?
-- Are related types co-located with the interfaces or functions they serve?
-
 ## Fix Recommendations (needs-changes only)
 
 When your verdict is `needs-changes`, you must also recommend how to fix each issue. You have full context — the spec, the task, the conventions, and the code — so use it.
@@ -96,7 +86,7 @@ When alternatives exist, explain the tradeoff briefly — don't just list option
 2. **No git writes** — Do not commit or stage. Reading git history and diffs is fine. The orchestrator handles all git writes.
 3. **One task only** — You review exactly one plan task per invocation.
 4. **Independent judgement** — Evaluate the code yourself. Do not trust the executor's self-assessment.
-5. **All
+5. **All five dimensions** — Evaluate spec conformance, acceptance criteria, test adequacy, convention adherence, and architectural quality.
 6. **Be specific** — Include file paths and line numbers for every issue. Vague findings are not actionable.
 7. **Proportional** — Prioritize by impact. Don't nitpick style when the architecture is wrong.
 8. **Task scope only** — Only review what's in the task. Don't flag issues outside the task's scope.
@@ -113,7 +103,6 @@ ACCEPTANCE_CRITERIA: {all met | gaps — list}
 TEST_COVERAGE: {adequate | gaps — list}
 CONVENTIONS: {followed | violations — list}
 ARCHITECTURE: {sound | concerns — details}
-CODEBASE_COHESION: {cohesive | concerns — details}
 ISSUES:
 - {specific issue with file:line reference}
   FIX: {recommended approach}
@@ -121,12 +110,9 @@ ISSUES:
 CONFIDENCE: {high | medium | low}
 NOTES:
 - {non-blocking observations}
-COHESION_NOTES:
-- {2-4 concise bullet points: patterns to maintain, conventions confirmed, architectural integration observations}
 ```
 
 - If VERDICT is `approved`, omit ISSUES entirely (or leave empty)
 - If VERDICT is `needs-changes`, ISSUES must contain specific, actionable items with file:line references AND fix recommendations
 - Each issue must include FIX and CONFIDENCE. ALTERNATIVE is optional — include only when genuinely multiple valid approaches exist
 - NOTES are for non-blocking observations — things worth noting but not requiring changes
-- COHESION_NOTES are always included — they capture patterns and conventions observed for future task context
package/package.json CHANGED
package/skills/start-research/SKILL.md CHANGED

@@ -19,7 +19,7 @@ This is **Phase 1** of the six-phase workflow:
 | 5. Implementation | DOING - tests first, then code |
 | 6. Review | VALIDATING - check work against artifacts |
 
-**Stay in your lane**: Explore freely. This is the time for broad thinking, feasibility checks, and learning.
+**Stay in your lane**: Explore freely. This is the time for broad thinking, feasibility checks, and learning. Surface options and tradeoffs — don't make decisions. When a topic converges toward a conclusion, that's a signal it's ready for discussion phase, not a cue to start deciding. Park it and move on.
 
 ---
 
package/skills/technical-implementation/SKILL.md CHANGED

@@ -196,8 +196,6 @@ After creating the file, populate `project_skills` with the paths confirmed in S
 
 Commit: `impl({topic}): start implementation`
 
-Integration context will accumulate in `docs/workflow/implementation/{topic}-context.md` as tasks complete. This file is created automatically during the task loop — no initialisation needed here.
-
 → Proceed to **Step 5**.
 
 ---
@@ -305,4 +303,4 @@ Commit: `impl({topic}): complete implementation`
 - **[task-normalisation.md](references/task-normalisation.md)** — Normalised task shape for agent invocation
 - **[tdd-workflow.md](references/tdd-workflow.md)** — TDD cycle (passed to executor agent)
 - **[code-quality.md](references/code-quality.md)** — Quality standards (passed to executor agent)
-
+
package/skills/technical-implementation/references/code-quality.md CHANGED

@@ -37,15 +37,5 @@ Only implement what's in the plan. Ask: "Is this in the plan?"
 - Long parameter lists (4+)
 - Boolean parameters
 
-## Convention Consistency
-
-When adding to an existing codebase, match what's already there:
-- **Error messages**: Match the casing, wrapping style, and prefix patterns in existing code
-- **Naming**: Follow the project's conventions for files, functions, types, and variables
-- **File organisation**: Follow the project's pattern for splitting concerns across files
-- **Helpers**: Search for existing helpers before creating new ones. After creating one, check if existing code could use it too
-- **Types**: Prefer concrete types over generic/any types when the set of possibilities is known. Use structured return types over multiple bare return values for extensibility
-- **Co-location**: Keep related types near the interfaces or functions they serve
-
 ## Project Standards
 Check `.claude/skills/` for project-specific patterns.
package/skills/technical-implementation/references/steps/invoke-executor.md CHANGED

@@ -17,12 +17,10 @@ This step invokes the `implementation-task-executor` agent (`../../../../agents/
 3. **Specification path**: from the plan's frontmatter (if available)
 4. **Project skill paths**: from `project_skills` in the implementation tracking file
 5. **Task content**: normalised task content (see [task-normalisation.md](../task-normalisation.md))
-6. **Plan file path**: the implementation plan (same path used to read tasks)
-7. **Integration context file** (if exists): `docs/workflow/implementation/{topic}-context.md`
 
 **Re-attempts after review feedback** additionally include:
-
-
+6. **User-approved review notes**: verbatim or as modified by the user
+7. **Specific issues to address**: the ISSUES from the review
 
 The executor is stateless — each invocation starts fresh with no memory of previous attempts. Always pass the full task content so the executor can see what was asked, what was done, and what needs fixing.
 
@@ -38,8 +36,6 @@ TASK: {task name}
 SUMMARY: {2-5 lines — commentary, decisions made, anything off-script}
 TEST_RESULTS: {all passing | failures — details only if failures}
 ISSUES: {blockers or deviations — omit if none}
-INTEGRATION_NOTES:
-- {3-5 bullets: patterns, helpers, conventions established — anchor to file paths where applicable}
 ```
 
 - `complete`: all acceptance criteria met, tests passing
package/skills/technical-implementation/references/steps/invoke-polish.md CHANGED

@@ -17,12 +17,11 @@ This step invokes the `implementation-polish` agent (`../../../../agents/impleme
 3. **Specification path**: from the plan's frontmatter (if available)
 4. **Plan file path**: the implementation plan
 5. **Plan format reading.md**: `../../../technical-planning/references/output-formats/{format}/reading.md` (format from plan frontmatter)
-6. **
-7. **Project skill paths**: from `project_skills` in the implementation tracking file
+6. **Project skill paths**: from `project_skills` in the implementation tracking file
 
 **Re-invocation after user feedback** additionally includes:
 
-
+7. **User feedback**: the user's comments on what to change or focus on
 
 The polish agent is stateless — each invocation starts fresh. Always pass all inputs.
 
package/skills/technical-implementation/references/steps/invoke-reviewer.md CHANGED

@@ -15,7 +15,6 @@ Invoke `implementation-task-reviewer` with:
 1. **Specification path**: same path given to the executor
 2. **Task content**: same normalised task content the executor received
 3. **Project skill paths**: from `project_skills` in the implementation tracking file
-4. **Integration context file** (if exists): `docs/workflow/implementation/{topic}-context.md` — for checking cohesion with established patterns
 
 ---
 
@@ -31,7 +30,6 @@ ACCEPTANCE_CRITERIA: {all met | gaps — list}
 TEST_COVERAGE: {adequate | gaps — list}
 CONVENTIONS: {followed | violations — list}
 ARCHITECTURE: {sound | concerns — details}
-CODEBASE_COHESION: {cohesive | concerns — details}
 ISSUES:
 - {specific issue with file:line reference}
   FIX: {recommended approach}
@@ -39,9 +37,7 @@ ISSUES:
 CONFIDENCE: {high | medium | low}
 NOTES:
 - {non-blocking observations}
-COHESION_NOTES:
-- {2-4 bullets: patterns to maintain, conventions confirmed, integration quality}
 ```
 
-- `approved`: task passes all
+- `approved`: task passes all five review dimensions
 - `needs-changes`: ISSUES contains specific, actionable items with fix recommendations and confidence levels
package/skills/technical-implementation/references/steps/task-loop.md CHANGED

@@ -169,27 +169,13 @@ Announce the result (one line, no stop):
 
 The tracking file is a derived view for discovery scripts and cross-topic dependency resolution — not a decision-making input during implementation (except `task_gate_mode` and `fix_gate_mode`).
 
-**Append integration context** — extract INTEGRATION_NOTES from the executor's final output and COHESION_NOTES from the reviewer's output. Append both to `docs/workflow/implementation/{topic}-context.md`:
-
-```
-## T{task-id}: {task name}
-
-### Integration (executor)
-{INTEGRATION_NOTES content}
-
-### Cohesion (reviewer)
-{COHESION_NOTES content}
-```
-
-Create the file if it doesn't exist. This is mechanical text extraction and file append — the same as extracting STATUS to route on. Use outputs from the final approved iteration only.
-
 **Commit all changes** in a single commit:
 
 ```
 impl({topic}): T{task-id} — {brief description}
 ```
 
-Code, tests, plan progress, tracking file
+Code, tests, plan progress, and tracking file — one commit per approved task.
 
 This is the end of this iteration.
 
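The commit message convention `impl({topic}): T{task-id} — {brief description}` is a simple template. A minimal sketch (hypothetical helper; the topic and task values are made up for illustration):

```python
# Hypothetical helper mirroring the documented commit message convention.
def impl_commit_message(topic: str, task_id: str, description: str) -> str:
    return f"impl({topic}): T{task_id} — {description}"

msg = impl_commit_message("auth", "4", "add session refresh endpoint")
assert msg == "impl(auth): T4 — add session refresh endpoint"
```

Keeping the prefix machine-parseable means one commit per approved task can later be mapped back to its plan task by ID.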
package/skills/technical-research/SKILL.md CHANGED

@@ -65,6 +65,57 @@ Don't constrain yourself. Research goes wherever it needs to go.
 
 **Be honest**: If something seems flawed or risky, say so. Challenge assumptions.
 
+**Explore, don't decide**: Your job is to surface options, tradeoffs, and understanding — not to pick winners. Synthesis is welcome ("the tradeoffs are X, Y, Z"), conclusions are not ("therefore we should do Y"). Decisions belong in the discussion phase.
+
+## Convergence Awareness
+
+Research threads naturally converge. As you explore a topic, options narrow, tradeoffs clarify, and opinions start forming. This is healthy — but it's also a signal.
+
+### Recognizing convergence
+
+Watch for these signs that a thread is moving from exploration toward decision-making:
+
+- "We should..." or "The best approach is..." language (from you or the user)
+- Options narrowing to a clear frontrunner with well-understood tradeoffs
+- The same conclusion being reached from multiple angles
+- Discussion shifting from "what are the options?" to "which option?"
+- You or the user starting to advocate for a particular approach
+
+### What to do
+
+When you notice convergence, **flag it and give the user options**:
+
+> "This thread seems to be converging — we've explored {topic} enough that the tradeoffs are clear and it's approaching decision territory.
+>
+> - **`p`/`park`** — Mark as discussion-ready and move to another topic
+> - **`k`/`keep`** — Keep digging, there's more to understand
+> - **`s`/`something else`** — Your call"
+
+**Never decide for the user.** Even if the answer seems obvious, flag it and ask.
+
+### If the user parks it
+
+Document the convergence point in the research file using this marker:
+
+```markdown
+> **Discussion-ready**: {Brief summary of what was explored and why it's ready for decision-making. Key tradeoffs or options identified.}
+```
+
+Then continue with whatever's next — another topic, a different angle, or wrapping up the session.
+
+### If the user keeps digging
+
+Continue exploring. The convergence signal isn't a stop sign — it's an awareness check. The user might want to stress-test the emerging conclusion, explore edge cases, or understand the problem more deeply before moving on. That's valid research work.
+
+### Synthesis vs decision
+
+This distinction matters:
+
+- **Synthesis** (research): "There are three viable approaches. A is simplest but limited. B scales better but costs more. C is future-proof but complex."
+- **Decision** (discussion): "We should go with B because scaling matters more than simplicity for this project."
+
+Synthesis is your job. Decisions are not. Present the landscape, don't pick the destination.
+
 ## Questioning
 
 For structured questioning, use the interview reference (`references/interview.md`). Good research questions:
|