@leeovery/claude-technical-workflows 2.1.15 → 2.1.17

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -18,6 +18,7 @@ You receive via the orchestrator's prompt:
  3. **Project skill paths** — relevant `.claude/skills/` paths for framework conventions
  4. **code-quality.md path** — quality standards
  5. **Topic name** — the implementation topic
+ 6. **Cycle number** — which analysis cycle this is (used in output file naming)
 
  ## Your Focus
 
@@ -26,6 +27,8 @@ You receive via the orchestrator's prompt:
  - Integration test gaps — are cross-task workflows tested end-to-end?
  - Seam quality between task boundaries — do the pieces fit together cleanly?
  - Over/under-engineering — are abstractions justified by usage? Is raw code crying out for structure?
+ - Missed composition opportunities — are new abstractions independently implemented when they could be derived from existing ones? If two queries are logical inverses, one should be defined in terms of the other.
+ - Type safety at boundaries — are interfaces or function signatures using untyped parameters when the concrete types are known? Runtime type checks inside implementations signal the signature should be more specific.
 
  ## Your Process
 
@@ -34,7 +37,7 @@ You receive via the orchestrator's prompt:
  3. **Read specification** — understand design intent and boundaries
  4. **Read all implementation files** — understand the full picture
  5. **Analyze architecture** — evaluate how the pieces compose as a whole
- 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-architecture.md`
+ 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-architecture-c{cycle-number}.md`
 
  ## Hard Rules
 
@@ -48,7 +51,7 @@ You receive via the orchestrator's prompt:
 
  ## Output File Format
 
- Write to `docs/workflow/implementation/{topic}/analysis-architecture.md`:
+ Write to `docs/workflow/implementation/{topic}/analysis-architecture-c{cycle-number}.md`:
 
  ```
  AGENT: architecture
@@ -18,6 +18,7 @@ You receive via the orchestrator's prompt:
  3. **Project skill paths** — relevant `.claude/skills/` paths for framework conventions
  4. **code-quality.md path** — quality standards
  5. **Topic name** — the implementation topic
+ 6. **Cycle number** — which analysis cycle this is (used in output file naming)
 
  ## Your Focus
 
@@ -33,7 +34,7 @@ You receive via the orchestrator's prompt:
  3. **Read specification** — understand design intent
  4. **Read all implementation files** — build a mental map of the full codebase
  5. **Analyze for duplication** — compare patterns across files, identify extraction candidates
- 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-duplication.md`
+ 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-duplication-c{cycle-number}.md`
 
  ## Hard Rules
 
@@ -47,7 +48,7 @@ You receive via the orchestrator's prompt:
 
  ## Output File Format
 
- Write to `docs/workflow/implementation/{topic}/analysis-duplication.md`:
+ Write to `docs/workflow/implementation/{topic}/analysis-duplication-c{cycle-number}.md`:
 
  ```
  AGENT: duplication
@@ -18,6 +18,7 @@ You receive via the orchestrator's prompt:
  3. **Project skill paths** — relevant `.claude/skills/` paths for framework conventions
  4. **code-quality.md path** — quality standards
  5. **Topic name** — the implementation topic
+ 6. **Cycle number** — which analysis cycle this is (used in output file naming)
 
  ## Your Focus
 
@@ -33,7 +34,7 @@ You receive via the orchestrator's prompt:
  3. **Read code-quality.md** — understand quality standards
  4. **Read all implementation files** — map each file back to its spec requirements
  5. **Compare implementation against spec** — check every decision point
- 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-standards.md`
+ 6. **Write findings** to `docs/workflow/implementation/{topic}/analysis-standards-c{cycle-number}.md`
 
  ## Hard Rules
 
@@ -47,7 +48,7 @@ You receive via the orchestrator's prompt:
 
  ## Output File Format
 
- Write to `docs/workflow/implementation/{topic}/analysis-standards.md`:
+ Write to `docs/workflow/implementation/{topic}/analysis-standards-c{cycle-number}.md`:
 
  ```
  AGENT: standards
@@ -20,13 +20,13 @@ You receive via the orchestrator's prompt:
 
  ## Your Process
 
- 1. **Read all findings files** from `docs/workflow/implementation/{topic}/` — look for `analysis-duplication.md`, `analysis-standards.md`, and `analysis-architecture.md`
+ 1. **Read all findings files** from `docs/workflow/implementation/{topic}/` — look for `analysis-duplication-c{cycle-number}.md`, `analysis-standards-c{cycle-number}.md`, and `analysis-architecture-c{cycle-number}.md`
  2. **Deduplicate** — same issue found by multiple agents → one finding, note all sources
  3. **Group related findings** — multiple findings about the same pattern become one task (e.g., 3 duplication findings about the same helper pattern = 1 "extract helper" task)
  4. **Filter** — discard low-severity findings unless they cluster into a pattern. Never discard high-severity.
  5. **Normalize** — convert each group into a task using the canonical task template (Problem / Solution / Outcome / Do / Acceptance Criteria / Tests)
- 6. **Write report** — output to `docs/workflow/implementation/{topic}/analysis-report.md`
- 7. **Write staging file** — if actionable tasks exist, write to `docs/workflow/implementation/{topic}/analysis-tasks.md` with `status: pending` for each task
+ 6. **Write report** — output to `docs/workflow/implementation/{topic}/analysis-report-c{cycle-number}.md`
+ 7. **Write staging file** — if actionable tasks exist, write to `docs/workflow/implementation/{topic}/analysis-tasks-c{cycle-number}.md` with `status: pending` for each task
 
  ## Report Format
 
@@ -42,10 +42,12 @@ Are all criteria genuinely met — not just self-reported?
  - Check for criteria that are technically met but miss the intent
 
  ### 3. Test Adequacy
- Do tests actually verify the criteria? Are edge cases covered?
+ Do tests actually verify the criteria? Are assertions precise? Are edge cases covered?
  - Is there a test for each acceptance criterion?
  - Would the tests fail if the feature broke?
  - Are edge cases from the task's test cases covered?
+ - **Assertion depth**: For mutation operations, do tests verify observable side effects — not just that the operation returned success? State changes should be asserted independently.
+ - **Assertion precision**: When expected output is deterministic, do tests use exact comparison? Substring or partial matching masks formatting regressions and missing/extra content.
  - Flag both under-testing AND over-testing
 
  ### 4. Convention Adherence
@@ -61,6 +63,7 @@ Is this a sound design decision? Will it compose well with future tasks?
  - Are there coupling or abstraction concerns?
  - Will this cause problems for subsequent tasks in the phase?
  - Are there structural concerns that should be raised now rather than compounding?
+ - Are concrete types used where data structures are known? Flag untyped escape hatches used where concrete types would be clearer and safer.
 
  ## Fix Recommendations (needs-changes only)
 
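The assertion-depth check described above — verifying observable side effects of mutations, not just the returned status — can be sketched briefly. This is a hypothetical `TaskStore` in Python, illustrative only and not part of the package:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical store — names are illustrative, not from the package.
@dataclass
class Task:
    title: str
    done: bool = False
    updated_at: Optional[datetime] = None

class TaskStore:
    def __init__(self) -> None:
        self.tasks: dict[int, Task] = {}
        self._next_id = 1

    def add(self, title: str) -> int:
        task_id = self._next_id
        self._next_id += 1
        self.tasks[task_id] = Task(title)
        return task_id

    def complete(self, task_id: int) -> bool:
        task = self.tasks[task_id]
        task.done = True
        task.updated_at = datetime.now()
        return True

store = TaskStore()
tid = store.add("write tests")

# Shallow: only checks that the operation reported success.
assert store.complete(tid) is True

# Assertion depth: verify the observable state changes independently.
assert store.tasks[tid].done is True
assert store.tasks[tid].updated_at is not None
```

A test with only the first assertion would keep passing even if `complete` stopped setting `done` or `updated_at` — exactly the regression class the reviewer check is meant to catch.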
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@leeovery/claude-technical-workflows",
- "version": "2.1.15",
+ "version": "2.1.17",
  "description": "Technical workflow skills & commands for Claude Code",
  "license": "MIT",
  "author": "Lee Overy <me@leeovery.com>",
@@ -49,7 +49,7 @@ Context refresh (compaction) summarizes the conversation, losing procedural deta
 
  1. **Re-read this skill file completely.** Do not rely on your summary of it. The full process, steps, and rules must be reloaded.
  2. **Check task progress in the plan** — use the plan adapter's instructions to read the plan's current state. Also read the implementation tracking file and any other working documents for additional context.
- 3. **Check `task_gate_mode`, `fix_gate_mode`, `fix_attempts`, and `analysis_cycle`** in the tracking file — if gates are `auto`, the user previously opted out. If `fix_attempts` > 0, you're mid-fix-loop for the current task. If `analysis_cycle` > 0, you've completed analysis cycles — check for findings files on disk (`analysis-*.md` in `{topic}/`) to determine mid-analysis state.
+ 3. **Check `task_gate_mode`, `fix_gate_mode`, `fix_attempts`, and `analysis_cycle`** in the tracking file — if gates are `auto`, the user previously opted out. If `fix_attempts` > 0, you're mid-fix-loop for the current task. If `analysis_cycle` > 0, you've completed analysis cycles — check for findings files on disk (`analysis-*-c{cycle-number}.md` in `{topic}/`) to determine mid-analysis state.
  4. **Check git state.** Run `git status` and `git log --oneline -10` to see recent commits. Commit messages follow a conventional pattern that reveals what was completed.
  5. **Announce your position** to the user before continuing: what step you believe you're at, what's been completed, and what comes next. Wait for confirmation.
 
@@ -12,8 +12,11 @@ Apply standard quality principles. Defer to project-specific skills for framewor
  - Extract repeated logic after three instances (Rule of Three)
  - Avoid premature abstraction for code used once or twice
 
+ ### Compose, Don't Duplicate
+ When new behavior is the logical inverse or subset of existing behavior, derive it from the existing abstraction rather than implementing independently. If you have a query for "ready items," the query for "blocked items" should be "open AND NOT ready" — not an independently authored query that could drift. Prefer mathematical relationships (derived = total - computed) over parallel computations that must be kept in sync.
+
  ### SOLID
- - **Single Responsibility**: Each class/function does one thing
+ - **Single Responsibility**: Each class/function does one thing. Multi-step logic should decompose into named helper functions — each step a function, each name documents intent.
  - **Open/Closed**: Extend behavior without modifying existing code
  - **Liskov Substitution**: Subtypes must be substitutable for base types
  - **Interface Segregation**: Don't force classes to implement unused methods
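The "Compose, Don't Duplicate" rule added in the hunk above can be made concrete with a small sketch. `Item`, `is_ready`, and `is_blocked` are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass

# Hypothetical model — illustrative only.
@dataclass(frozen=True)
class Item:
    name: str
    open: bool
    deps_met: bool

def is_ready(item: Item) -> bool:
    """Ready: open with all dependencies met."""
    return item.open and item.deps_met

def is_blocked(item: Item) -> bool:
    """Derived from is_ready: blocked = open AND NOT ready.
    There is no parallel predicate that could drift out of sync."""
    return item.open and not is_ready(item)

items = [
    Item("a", open=True, deps_met=True),    # ready
    Item("b", open=True, deps_met=False),   # blocked
    Item("c", open=False, deps_met=False),  # closed: neither
]
open_items = [i for i in items if i.open]
ready = [i for i in open_items if is_ready(i)]
blocked = [i for i in open_items if is_blocked(i)]

# The mathematical relationship holds by construction, not by discipline:
assert len(blocked) == len(open_items) - len(ready)
```

Had `is_blocked` been written as an independent query (say, re-checking `deps_met` directly), a later change to the definition of "ready" would silently desynchronize the two.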
@@ -25,6 +28,9 @@ Keep low. Fix with early returns and method extraction.
  ### YAGNI
  Only implement what's in the plan. Ask: "Is this in the plan?"
 
+ ### Concrete Over Abstract
+ Prefer concrete types over language-level escape hatches that bypass the type system. Use specific types for data passing between layers, not untyped containers. If you need polymorphism, define a named interface/protocol with specific methods — don't pass untyped values. If you find yourself writing runtime type checks or casts inside a function, the signature is too abstract.
+
  ## Testability
  - Inject dependencies
  - Prefer pure functions
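The "Concrete Over Abstract" section added in the hunk above can be read as the following Python sketch; `Finding`, `render`, and `Renderable` are hypothetical names used only to illustrate the smell and its fix:

```python
from dataclasses import dataclass
from typing import Any, Protocol

# Too abstract: the signature accepts anything, so the body must
# re-discover the type at runtime — the smell named above.
def render_loose(item: Any) -> str:
    if isinstance(item, dict):
        return f"{item['title']} ({item['status']})"
    raise TypeError("expected a dict with title/status keys")

# Concrete: the shape is known at design time, so the signature says so.
@dataclass(frozen=True)
class Finding:
    title: str
    status: str

def render(finding: Finding) -> str:
    return f"{finding.title} ({finding.status})"

# If polymorphism is genuinely needed, name the capability
# instead of falling back to Any.
class Renderable(Protocol):
    def render(self) -> str: ...
```

With the concrete signature, a type checker catches a malformed caller before runtime; the `isinstance` branch in `render_loose` is the signal that its signature was too abstract.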
@@ -36,6 +42,8 @@ Only implement what's in the plan. Ask: "Is this in the plan?"
  - Deep nesting (3+)
  - Long parameter lists (4+)
  - Boolean parameters
+ - Untyped parameters when concrete types are known at design time
+ - Substring assertions in tests when exact output is deterministic
 
  ## Project Standards
  Check `.claude/skills/` for project-specific patterns.
@@ -79,6 +79,12 @@ Load **[invoke-analysis.md](invoke-analysis.md)** and follow its instructions.
 
  **STOP.** Do not proceed until all agents have returned.
 
+ Commit the analysis findings:
+
+ ```
+ impl({topic}): analysis cycle {N} — findings
+ ```
+
  → Proceed to **D. Dispatch Synthesis Agent**.
 
  ---
@@ -89,6 +95,12 @@ Load **[invoke-synthesizer.md](invoke-synthesizer.md)** and follow its instructi
 
  **STOP.** Do not proceed until the synthesizer has returned.
 
+ Commit the synthesis output:
+
+ ```
+ impl({topic}): analysis cycle {N} — synthesis
+ ```
+
  → If `STATUS: clean`, return to the skill for **Step 8**.
 
  → If `STATUS: tasks_proposed`, proceed to **E. Approval Gate**.
@@ -97,7 +109,7 @@ Load **[invoke-synthesizer.md](invoke-synthesizer.md)** and follow its instructi
 
  ## E. Approval Gate
 
- Read the staging file from `docs/workflow/implementation/{topic}/analysis-tasks.md`.
+ Read the staging file from `docs/workflow/implementation/{topic}/analysis-tasks-c{cycle-number}.md`.
 
  Present an overview:
 
@@ -155,7 +167,15 @@ After all tasks processed:
 
  → If any tasks have `status: approved`, proceed to **F. Create Tasks in Plan**.
 
- → If all tasks were skipped, return to the skill for **Step 8**.
+ → If all tasks were skipped:
+
+ Commit the staging file updates:
+
+ ```
+ impl({topic}): analysis cycle {N} — tasks skipped
+ ```
+
+ Return to the skill for **Step 8**.
 
  ---
 
@@ -165,7 +185,7 @@ Load **[invoke-task-writer.md](invoke-task-writer.md)** and follow its instructi
 
  **STOP.** Do not proceed until the task writer has returned.
 
- Commit:
+ Commit all analysis and plan changes (staging file, plan tasks, Plan Index File):
 
  ```
  impl({topic}): add analysis phase {N} ({K} tasks)
@@ -29,6 +29,7 @@ Dispatch **all three in parallel** via the Task tool. Each agent receives the sa
  3. **Project skill paths** — from `project_skills` in the implementation tracking file
  4. **code-quality.md path** — `../code-quality.md`
  5. **Topic name** — the implementation topic
+ 6. **Cycle number** — the current analysis cycle number (from `analysis_cycle` in the tracking file)
 
  Each agent knows its own output path convention and writes findings independently.
 
@@ -15,7 +15,7 @@ This step invokes the task writer agent to create plan tasks from approved analy
  Pass via the orchestrator's prompt:
 
  1. **Topic name** — the implementation topic
- 2. **Staging file path** — `docs/workflow/implementation/{topic}/analysis-tasks.md`
+ 2. **Staging file path** — `docs/workflow/implementation/{topic}/analysis-tasks-c{cycle-number}.md`
  3. **Plan path** — the implementation plan path
  4. **Plan format reading adapter path** — `../../../technical-planning/references/output-formats/{format}/reading.md`
  5. **Plan format authoring adapter path** — `../../../technical-planning/references/output-formats/{format}/authoring.md`
@@ -42,7 +42,9 @@ But write **complete, functional implementations** - don't artificially minimize
 
  **Derive tests from plan**: Task's micro acceptance becomes your first test. Edge cases become additional tests.
 
- **Write test names first**: List all test names before writing bodies. Confirm coverage matches acceptance criteria.
+ **Assert precisely**: For mutation operations (create, update, delete, state transitions), don't just assert the return value — verify observable side effects independently. A test that checks "operation succeeded" but ignores whether timestamps updated, related records changed, or output structure is correct will miss regressions that only affect side effects. Assert structured output by parsing and checking fields/values, not by string matching the serialized form.
+
+ **Write test names first**: List all test names before writing bodies. Confirm coverage matches acceptance criteria. Then consider: invalid input types, boundary values, zero/nil/empty inputs, single vs multiple items, and success alongside failure paths — add tests for any that are relevant and non-trivial.
 
  **No implementation code exists yet.** If you're tempted to "just sketch out the class first" - don't. The test comes first. Always.
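The "assert structured output by parsing" advice in the last hunk can be sketched as follows; `summarize` is a hypothetical function under test, not part of the package:

```python
import json

def summarize(tasks: list[dict]) -> str:
    """Hypothetical function under test: returns a JSON summary."""
    done = sum(1 for t in tasks if t["done"])
    return json.dumps({"total": len(tasks), "done": done})

out = summarize([{"done": True}, {"done": False}])

# Fragile: a substring check passes even if fields are missing,
# extra, or renamed elsewhere in the output.
assert '"done": 1' in out

# Precise: parse the serialized form and compare the full structure,
# so any missing or extra field fails the test.
assert json.loads(out) == {"total": 2, "done": 1}
```

The parsed comparison is also immune to incidental formatting changes (key order, whitespace) while still failing on any real content change.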