@leeovery/claude-technical-workflows 2.1.16 → 2.1.17
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -27,6 +27,8 @@ You receive via the orchestrator's prompt:
 - Integration test gaps — are cross-task workflows tested end-to-end?
 - Seam quality between task boundaries — do the pieces fit together cleanly?
 - Over/under-engineering — are abstractions justified by usage? Is raw code crying out for structure?
+- Missed composition opportunities — are new abstractions independently implemented when they could be derived from existing ones? If two queries are logical inverses, one should be defined in terms of the other.
+- Type safety at boundaries — are interfaces or function signatures using untyped parameters when the concrete types are known? Runtime type checks inside implementations signal the signature should be more specific.
 
 ## Your Process
 
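The "logical inverses" point in the added bullet can be sketched in a few lines. This is an illustrative example, not code from the package; `Item`, `is_ready`, and `is_blocked` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Item:
    open: bool
    deps_done: bool

def is_ready(item: Item) -> bool:
    # An item is ready when it is open and its dependencies are complete.
    return item.open and item.deps_done

def is_blocked(item: Item) -> bool:
    # Derived from is_ready rather than re-encoding the dependency logic:
    # blocked = open AND NOT ready. If is_ready changes, this stays correct.
    return item.open and not is_ready(item)

items = [Item(True, True), Item(True, False), Item(False, False)]
print(sum(is_ready(i) for i in items), sum(is_blocked(i) for i in items))  # → 1 1
```

An independently authored `is_blocked` that re-checked dependencies itself could silently drift from `is_ready` when the readiness rule changes; the derived form cannot.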
@@ -42,10 +42,12 @@ Are all criteria genuinely met — not just self-reported?
 - Check for criteria that are technically met but miss the intent
 
 ### 3. Test Adequacy
-Do tests actually verify the criteria? Are edge cases covered?
+Do tests actually verify the criteria? Are assertions precise? Are edge cases covered?
 - Is there a test for each acceptance criterion?
 - Would the tests fail if the feature broke?
 - Are edge cases from the task's test cases covered?
+- **Assertion depth**: For mutation operations, do tests verify observable side effects — not just that the operation returned success? State changes should be asserted independently.
+- **Assertion precision**: When expected output is deterministic, do tests use exact comparison? Substring or partial matching masks formatting regressions and missing/extra content.
 - Flag both under-testing AND over-testing
 
 ### 4. Convention Adherence
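The assertion-depth bullet can be made concrete with a minimal sketch. The `Store` class and its `update` method are invented here purely to contrast a shallow assertion with deep ones; they are not part of the package.

```python
import datetime

class Store:
    """Minimal in-memory store, used only to illustrate assertion depth."""
    def __init__(self):
        self.rows = {}

    def update(self, key, value):
        self.rows[key] = {
            "value": value,
            "updated_at": datetime.datetime.now(datetime.timezone.utc),
        }
        return True  # a bare success flag is a weak thing to assert on

store = Store()
before = datetime.datetime.now(datetime.timezone.utc)

assert store.update("a", 42)                     # shallow: only the return value
assert store.rows["a"]["value"] == 42            # deep: the state change happened
assert store.rows["a"]["updated_at"] >= before   # deep: the timestamp side effect happened
```

The shallow assertion alone would keep passing even if `update` stopped writing the row at all, which is exactly the regression the bullet warns about.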
@@ -61,6 +63,7 @@ Is this a sound design decision? Will it compose well with future tasks?
 - Are there coupling or abstraction concerns?
 - Will this cause problems for subsequent tasks in the phase?
 - Are there structural concerns that should be raised now rather than compounding?
+- Are concrete types used where data structures are known? Flag untyped escape hatches used where concrete types would be clearer and safer.
 
 ## Fix Recommendations (needs-changes only)
 
package/package.json (CHANGED)
@@ -12,8 +12,11 @@ Apply standard quality principles. Defer to project-specific skills for framewor
 - Extract repeated logic after three instances (Rule of Three)
 - Avoid premature abstraction for code used once or twice
 
+### Compose, Don't Duplicate
+When new behavior is the logical inverse or subset of existing behavior, derive it from the existing abstraction rather than implementing independently. If you have a query for "ready items," the query for "blocked items" should be "open AND NOT ready" — not an independently authored query that could drift. Prefer mathematical relationships (derived = total - computed) over parallel computations that must be kept in sync.
+
 ### SOLID
-- **Single Responsibility**: Each class/function does one thing
+- **Single Responsibility**: Each class/function does one thing. Multi-step logic should decompose into named helper functions — each step a function, each name documents intent.
 - **Open/Closed**: Extend behavior without modifying existing code
 - **Liskov Substitution**: Subtypes must be substitutable for base types
 - **Interface Segregation**: Don't force classes to implement unused methods
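The expanded Single Responsibility wording ("each step a function, each name documents intent") might look like this in practice. A hypothetical sketch with invented names, not code from the package:

```python
def normalize(raw: str) -> str:
    # Step 1: trim surrounding whitespace and lowercase the input.
    return raw.strip().lower()

def tokenize(text: str) -> list[str]:
    # Step 2: split the normalized text into words.
    return text.split()

def count_unique(tokens: list[str]) -> int:
    # Step 3: count distinct tokens.
    return len(set(tokens))

def unique_word_count(raw: str) -> int:
    # The top-level function reads as the list of steps;
    # each helper's name documents its intent.
    return count_unique(tokenize(normalize(raw)))

print(unique_word_count("  The the quick Fox "))  # → 3
```

Each helper can be tested and reused on its own, which is the payoff the guideline is pointing at.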
@@ -25,6 +28,9 @@ Keep low. Fix with early returns and method extraction.
 ### YAGNI
 Only implement what's in the plan. Ask: "Is this in the plan?"
 
+### Concrete Over Abstract
+Prefer concrete types over language-level escape hatches that bypass the type system. Use specific types for data passing between layers, not untyped containers. If you need polymorphism, define a named interface/protocol with specific methods — don't pass untyped values. If you find yourself writing runtime type checks or casts inside a function, the signature is too abstract.
+
 ## Testability
 - Inject dependencies
 - Prefer pure functions
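The "named interface/protocol with specific methods" advice could be sketched as follows; `Priced`, `Book`, and `total` are illustrative names chosen for this example, not part of the package.

```python
from dataclasses import dataclass
from typing import Protocol

class Priced(Protocol):
    # A named protocol with one specific method, instead of
    # accepting untyped values and probing them at runtime.
    def price(self) -> float: ...

@dataclass
class Book:
    cost: float

    def price(self) -> float:
        return self.cost

def total(items: list[Priced]) -> float:
    # No isinstance checks or casts inside the body:
    # the signature states the whole contract.
    return sum(item.price() for item in items)

print(total([Book(10.0), Book(5.5)]))  # → 15.5
```

Had `total` accepted an untyped list and branched on `isinstance` internally, that runtime check would be the signal, per the section above, that the signature is too abstract.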
@@ -36,6 +42,8 @@ Only implement what's in the plan. Ask: "Is this in the plan?"
 - Deep nesting (3+)
 - Long parameter lists (4+)
 - Boolean parameters
+- Untyped parameters when concrete types are known at design time
+- Substring assertions in tests when exact output is deterministic
 
 ## Project Standards
 Check `.claude/skills/` for project-specific patterns.
@@ -42,7 +42,9 @@ But write **complete, functional implementations** - don't artificially minimize
 
 **Derive tests from plan**: Task's micro acceptance becomes your first test. Edge cases become additional tests.
 
-**
+**Assert precisely**: For mutation operations (create, update, delete, state transitions), don't just assert the return value — verify observable side effects independently. A test that checks "operation succeeded" but ignores whether timestamps updated, related records changed, or output structure is correct will miss regressions that only affect side effects. Assert structured output by parsing and checking fields/values, not by string matching the serialized form.
+
+**Write test names first**: List all test names before writing bodies. Confirm coverage matches acceptance criteria. Then consider: invalid input types, boundary values, zero/nil/empty inputs, single vs multiple items, and success alongside failure paths — add tests for any that are relevant and non-trivial.
 
 **No implementation code exists yet.** If you're tempted to "just sketch out the class first" - don't. The test comes first. Always.
 
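The "parse and check fields, not string-match the serialized form" rule from the added **Assert precisely** paragraph can be demonstrated in a few lines. `render_user` is a hypothetical function invented for this sketch.

```python
import json

def render_user(name: str, age: int) -> str:
    # Hypothetical function that returns serialized structured output.
    return json.dumps({"name": name, "age": age})

out = render_user("Ada", 36)

# Fragile: a substring check still passes if "age" is wrong or missing.
assert '"name": "Ada"' in out

# Robust: parse the output, then assert exact fields and values.
data = json.loads(out)
assert data == {"name": "Ada", "age": 36}
```

The parsed comparison fails the moment a field is dropped, renamed, or given a wrong value, while the substring check only guards the one fragment it quotes.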