@anionzo/skill 1.4.0 → 1.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (57)
  1. package/CONTRIBUTING.md +2 -1
  2. package/README.md +82 -24
  3. package/docs/design-brief.md +17 -13
  4. package/docs/knowledge-spec.md +1 -0
  5. package/i18n/CONTRIBUTING.vi.md +2 -1
  6. package/i18n/README.vi.md +82 -24
  7. package/i18n/design-brief.vi.md +17 -13
  8. package/i18n/knowledge-spec.vi.md +1 -0
  9. package/knowledge/global/skill-triggering-rules.md +3 -2
  10. package/package.json +1 -1
  11. package/scripts/install-opencode-skills +197 -35
  12. package/skills/brainstorming/SKILL.md +176 -13
  13. package/skills/brainstorming/meta.yaml +18 -10
  14. package/skills/code-review/SKILL.md +214 -19
  15. package/skills/code-review/meta.yaml +21 -9
  16. package/skills/commit/SKILL.md +187 -0
  17. package/skills/commit/examples.md +62 -0
  18. package/skills/commit/meta.yaml +29 -0
  19. package/skills/commit/references/output-template.md +14 -0
  20. package/skills/debug/SKILL.md +252 -0
  21. package/skills/debug/examples.md +83 -0
  22. package/skills/debug/meta.yaml +39 -0
  23. package/skills/debug/references/output-template.md +16 -0
  24. package/skills/docs-writer/SKILL.md +85 -10
  25. package/skills/docs-writer/meta.yaml +18 -13
  26. package/skills/extract/SKILL.md +201 -0
  27. package/skills/extract/examples.md +47 -0
  28. package/skills/extract/meta.yaml +33 -0
  29. package/skills/extract/references/output-template.md +24 -0
  30. package/skills/feature-delivery/SKILL.md +12 -5
  31. package/skills/feature-delivery/meta.yaml +6 -1
  32. package/skills/planning/SKILL.md +146 -17
  33. package/skills/planning/meta.yaml +19 -7
  34. package/skills/refactor-safe/SKILL.md +10 -7
  35. package/skills/research/SKILL.md +130 -0
  36. package/skills/research/examples.md +79 -0
  37. package/skills/research/meta.yaml +31 -0
  38. package/skills/research/references/output-template.md +23 -0
  39. package/skills/test-driven-development/SKILL.md +194 -0
  40. package/skills/test-driven-development/examples.md +77 -0
  41. package/skills/test-driven-development/meta.yaml +31 -0
  42. package/skills/test-driven-development/references/.gitkeep +0 -0
  43. package/skills/test-driven-development/references/output-template.md +31 -0
  44. package/skills/using-skills/SKILL.md +33 -17
  45. package/skills/using-skills/examples.md +20 -5
  46. package/skills/using-skills/meta.yaml +7 -4
  47. package/skills/verification-before-completion/SKILL.md +127 -13
  48. package/skills/verification-before-completion/meta.yaml +23 -14
  49. package/templates/SKILL.md +8 -1
  50. package/skills/bug-triage/SKILL.md +0 -47
  51. package/skills/bug-triage/examples.md +0 -68
  52. package/skills/bug-triage/meta.yaml +0 -25
  53. package/skills/bug-triage/references/output-template.md +0 -26
  54. package/skills/repo-onboarding/SKILL.md +0 -52
  55. package/skills/repo-onboarding/examples.md +0 -115
  56. package/skills/repo-onboarding/meta.yaml +0 -23
  57. package/skills/repo-onboarding/references/output-template.md +0 -24
@@ -0,0 +1,130 @@
+ # Research
+
+ ## Purpose
+
+ Understand existing code, patterns, decisions, and repository structure before writing new code.
+
+ This skill exists to prevent implementing from scratch what already exists, and to surface constraints that would otherwise be discovered mid-implementation.
+
+ It also covers repo onboarding: mapping a repository quickly enough to act safely when the task starts from little context.
+
+ ## When To Use
+
+ Load this skill when:
+
+ - exploring a codebase before starting a task
+ - entering a repo for the first time
+ - looking for existing patterns, utilities, or conventions to follow
+ - trying to understand how a feature or subsystem works
+ - the implementation approach depends on what already exists
+ - the user says "research this", "look into", or "what do we have for X"
+ - the user asks "explain this repo" or "understand this codebase before we change it"
+
+ Skip this skill when you already have clear context and the task is straightforward.
+
+ ## Workflow
+
+ 1. State what is being researched and why.
+ 2. Search in this order:
+    - project documentation (READMEs, docs/, wikis)
+    - existing code paths and implementations
+    - adjacent tests, validation, and configuration
+    - related issues, PRs, or commit history
+ 3. For each finding, note:
+    - what exists and where
+    - whether it is reusable, needs adaptation, or is irrelevant
+    - any conventions or constraints it reveals
+ 4. Identify gaps — what does NOT exist that the task needs.
+ 5. If this is repo onboarding, also identify:
+    - project purpose
+    - major components and responsibilities
+    - runtime model and key integrations
+    - important development commands
+    - notable conventions and open questions
+ 6. Summarize findings with concrete recommendations.
+
+ ## Repo Onboarding Mode
+
+ When the user is entering an unfamiliar repo, use research in repo-map mode:
+
+ 1. Read the top-level operating docs first, especially `AGENTS.md` and `README.md` when present.
+ 2. Inspect the most informative files next:
+    - package manifests or build files
+    - app entrypoints and framework bootstraps
+    - core config files
+    - representative tests
+ 3. Summarize:
+    - project purpose
+    - architecture summary
+    - major components
+    - important commands
+    - notable conventions or constraints
+    - open questions
+ 4. Recommend the next files or directories to inspect for the user's likely goal.
+
+ ## Search Techniques
+
+ Use the most efficient search method for the situation:
+
+ ```bash
+ # Find files by name pattern
+ find . -name "*<pattern>*" -type f | grep -v node_modules | head -20
+
+ # Search code for patterns
+ grep -r "<pattern>" --include="*.ts" -l | head -20
+
+ # Check recent changes to a file
+ git log --oneline -10 -- <file>
+
+ # Find related tests
+ find . -name "*<topic>*test*" -o -name "*test*<topic>*" | head -10
+ ```
+
+ Read the actual files — do not guess from filenames alone.
+
+ ## Output Format
+
+ Present findings using the Shared Output Contract:
+
+ 1. **Goal/Result** — what was researched and the key conclusion
+ 2. **Key Details:**
+    - concrete files or docs found
+    - what is reusable vs what is missing
+    - architecture or convention constraints discovered
+    - patterns that new code should follow
+ 3. **Next Action** — recommend a follow-up only when findings clearly lead somewhere:
+    - research for an active task → `planning`
+    - research revealed a gap → `brainstorming`
+    - research produced enough repo context → `planning`, `docs-writer`, or direct execution
+    - no clear handoff → stop after findings
+
+ ## Research Rules
+
+ - Search before assuming nothing exists.
+ - Read the actual code, not just the file tree.
+ - State explicitly when no existing pattern is found rather than implying one exists.
+ - If docs and code disagree, call out the mismatch.
+ - If the research surface is too large, focus on the most relevant subset and note what was not covered.
+ - Do not expand the original question into unrelated exploration.
+
+ ## Red Flags
+
+ - skipping the search and going straight to implementation
+ - reading filenames without opening the files
+ - reporting vague findings like "there seems to be something related"
+ - expanding research scope without noting the expansion
+ - treating absence of evidence as evidence of absence
+
+ ## Checklist
+
+ - [ ] Research topic stated
+ - [ ] Documentation searched
+ - [ ] Existing code patterns found (or explicitly noted as absent)
+ - [ ] Related tests and config checked
+ - [ ] Findings include concrete file paths
+ - [ ] Reusable vs missing clearly separated
+ - [ ] Recommendations are actionable
+
+ ## Done Criteria
+
+ This skill is complete when the findings are concrete enough to inform the next step — whether that is planning, implementation, docs work, or a decision that more information is needed. Every finding should include a file path or explicit "not found" statement.
@@ -0,0 +1,79 @@
+ # Examples
+
+ ## Example 1
+
+ User request:
+
+ ```text
+ How does our auth system work? I need to add a new OAuth provider.
+ ```
+
+ Expected result:
+
+ - search for auth-related files, patterns, and documentation
+ - identify existing OAuth implementations and patterns
+ - note what is reusable and what needs to be built
+
+ Sample completed output:
+
+ ```
+ Goal/Result: Researched the auth system to understand how to add a new OAuth provider.
+
+ Key Details:
+ - Existing implementations:
+   - src/auth/providers/google.ts — Google OAuth using passport-google-oauth20
+   - src/auth/providers/github.ts — GitHub OAuth using passport-github2
+   - src/auth/strategy.ts — Base strategy wrapper, all providers follow this pattern
+   - src/auth/callback.ts — Shared callback handler for all OAuth flows
+ - Patterns found:
+   - Each provider is a separate file in src/auth/providers/
+   - All providers implement the AuthProvider interface from src/auth/types.ts
+   - Callback URL pattern: /auth/{provider}/callback
+   - Config loaded from environment variables: AUTH_{PROVIDER}_CLIENT_ID, AUTH_{PROVIDER}_SECRET
+ - Gaps:
+   - No generic OAuth2 provider — each is implemented individually
+   - No tests for the OAuth callback flow
+ - Constraints:
+   - Must use passport.js (project convention)
+   - Must register in src/auth/index.ts provider registry
+
+ Next Action: planning — plan the new OAuth provider implementation following the existing pattern.
+ ```
+
+ ## Example 2
+
+ User request:
+
+ ```text
+ What caching do we use? I need to cache API responses.
+ ```
+
+ Expected result:
+
+ - search for caching patterns in the codebase
+ - identify existing cache implementations
+ - note configuration and conventions
+
+ Sample completed output:
+
+ ```
+ Goal/Result: Researched caching patterns. Found Redis-based caching in use for sessions but no API response caching.
+
+ Key Details:
+ - Existing implementations:
+   - src/cache/redis.ts — Redis client wrapper (ioredis)
+   - src/middleware/session.ts — Uses Redis for session storage
+   - No API response caching found anywhere
+ - Patterns found:
+   - Redis connection config in src/config/redis.ts
+   - TTL conventions: sessions use 24h, no other TTL patterns
+   - Cache keys use prefix pattern: "app:{type}:{id}"
+ - Gaps:
+   - No response caching middleware exists
+   - No cache invalidation strategy documented
+ - Constraints:
+   - Redis is already a dependency — reuse existing connection
+   - Key prefix pattern must be followed
+
+ Next Action: planning — design a response caching middleware using the existing Redis setup.
+ ```
@@ -0,0 +1,31 @@
+ name: research
+ version: 0.2.0
+ category: discovery
+ summary: Understand repository structure, existing code, patterns, and decisions before acting so new work follows what already exists.
+ summary_vi: "Tìm hiểu cấu trúc repo, code, pattern, và quyết định hiện có trước khi hành động để công việc mới đi đúng theo cái đã có."
+ triggers:
+   - research this codebase
+   - what do we already have for X
+   - look into how this works
+   - explore before implementing
+   - explain this repo
+   - onboard into a codebase
+   - understand architecture before changing code
+ inputs:
+   - topic, feature, or repository area to research
+   - optional file paths or package names
+ outputs:
+   - existing implementations found
+   - patterns and conventions discovered
+   - gaps identified
+   - actionable recommendations
+   - repo map and important commands when onboarding
+ constraints:
+   - search before assuming nothing exists
+   - read actual code not just filenames
+ related_skills:
+   - using-skills
+   - planning
+   - brainstorming
+   - extract
+   - docs-writer
@@ -0,0 +1,23 @@
+ ## Goal/Result
+
+ - What was researched:
+ - Key conclusion:
+
+ ## Key Details
+
+ ### Existing Implementations
+ - `path/to/file`: Does X
+
+ ### Patterns Found
+ - Pattern 1: Used for...
+
+ ### Gaps Identified
+ - What is missing:
+
+ ### Constraints Discovered
+ - Convention 1:
+ - Architecture constraint 1:
+
+ ## Next Action
+
+ - Recommended follow-up:
@@ -0,0 +1,194 @@
+ # Test-Driven Development
+
+ ## Purpose
+
+ Enforce the discipline of writing a failing test before writing production code. This skill exists because tests written after implementation pass immediately, proving nothing — they verify what you built, not what was required.
+
+ ## When To Use
+
+ Load this skill when:
+
+ - implementing any new feature or behavior
+ - fixing a bug (write a test that reproduces the bug first)
+ - refactoring code that lacks test coverage
+ - the user says "use TDD", "test first", or "red-green-refactor"
+
+ Exceptions (confirm with the user first):
+
+ - throwaway prototypes or spikes
+ - generated code (codegen, scaffolding)
+ - pure configuration files
+
+ ## The Iron Law
+
+ ```
+ NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
+ ```
+
+ Wrote code before the test? Delete it. Start over. Implement fresh from the test.
+
+ - Do not keep it as "reference"
+ - Do not "adapt" it while writing tests
+ - Do not look at it while writing the test
+ - Delete means delete
+
+ ## Workflow: Red-Green-Refactor
+
+ ### 1. RED — Write a Failing Test
+
+ Write one minimal test that describes the behavior you want.
+
+ Requirements:
+
+ - Tests one behavior (if the test name contains "and", split it)
+ - Clear name that describes expected behavior
+ - Uses real code, not mocks (unless external dependency makes this impossible)
+ - Asserts observable outcomes, not implementation details
+
+ ### 2. Verify RED — Watch It Fail
+
+ Run the test. This step is mandatory — never skip it.
+
+ ```bash
+ <test-command> <path-to-test-file>
+ ```
+
+ Confirm:
+
+ - The test fails (not errors due to syntax or import issues)
+ - The failure message matches what you expect
+ - It fails because the feature is missing, not because of a typo
+
+ If the test passes immediately, you are testing existing behavior. Rewrite the test.
+
+ If the test errors instead of failing, fix the error first, then re-run until it fails correctly.
+
+ ### 3. GREEN — Write Minimal Code to Pass
+
+ Write the simplest code that makes the test pass. Nothing more.
+
+ - Do not add features the test does not require
+ - Do not refactor other code
+ - Do not "improve" beyond what the test demands
+ - YAGNI — You Aren't Gonna Need It
+
+ ### 4. Verify GREEN — Watch It Pass
+
+ Run the test again. This step is mandatory.
+
+ ```bash
+ <test-command> <path-to-test-file>
+ ```
+
+ Confirm:
+
+ - The new test passes
+ - All other tests still pass
+ - No warnings or errors in the output
+
+ If the test still fails, fix the code — not the test.
+
+ If other tests broke, fix them now before moving on.
+
+ ### 5. REFACTOR — Clean Up (Tests Must Stay Green)
+
+ After green only:
+
+ - Remove duplication
+ - Improve names
+ - Extract helpers or shared utilities
+
+ Run tests after every refactor change. If any test fails, undo the refactor and try again.
+
+ Do not add new behavior during refactor. Refactor changes structure, not behavior.
+
+ ### 6. Repeat
+
+ Next failing test for the next behavior. One cycle at a time.
+
+ ## Test Quality Checklist
+
+ | Quality | Good | Bad |
+ |---------|------|-----|
+ | **Minimal** | Tests one thing | "validates email and domain and whitespace" |
+ | **Clear name** | Describes expected behavior | "test1", "it works" |
+ | **Shows intent** | Demonstrates desired API usage | Tests internal implementation details |
+ | **Real code** | Calls actual functions | Mocks everything, tests mock behavior |
+ | **Observable** | Asserts return values or side effects | Asserts internal state or call counts |
+
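The "Observable" row can be made concrete. A minimal sketch, assuming no particular test framework; the function and test names are hypothetical:

```typescript
// Hypothetical example: contrast an observable-outcome assertion with an
// implementation-detail assertion for a small formatting function.
function formatUsername(raw: string): string {
  return raw.trim().toLowerCase();
}

// Good: assert the observable outcome, i.e. the return value.
const result = formatUsername("  Alice ");
if (result !== "alice") {
  throw new Error(`expected "alice", got "${result}"`);
}

// Bad (avoid): spying on how many times trim() or toLowerCase() was called
// couples the test to the implementation; reordering those calls would
// break the test even though behavior is unchanged.
```

The good assertion survives any refactor that preserves behavior, which is exactly what the REFACTOR step relies on.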
+ ## When Tests Are Hard to Write
+
+ | Problem | What It Means | Action |
+ |---------|---------------|--------|
+ | Cannot figure out how to test | Design is unclear | Write the API you wish existed first, then assert on it |
+ | Test is too complicated | Code design is too complicated | Simplify the interface |
+ | Must mock everything | Code is too tightly coupled | Use dependency injection, reduce coupling |
+ | Test setup is enormous | Too many dependencies | Extract helpers — if still complex, simplify the design |
+
+ Hard-to-test code is hard-to-use code. Listen to what the test is telling you about the design.
+
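The "must mock everything" row can be sketched with dependency injection. A minimal illustration with hypothetical names, assuming no mocking library:

```typescript
// Instead of importing a module-level HTTP client, the function takes its
// dependency as a parameter. Production passes the real client; tests pass
// a plain stub function.
type FetchUser = (id: string) => { name: string };

function greetUser(id: string, fetchUser: FetchUser): string {
  const user = fetchUser(id);
  return `Hello, ${user.name}`;
}

// In a test, a one-line stub replaces the real client.
const stubFetch: FetchUser = () => ({ name: "Ada" });
const greeting = greetUser("42", stubFetch);
```

The test then asserts on `greeting` directly rather than on mock call counts, keeping the assertion on observable behavior.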
+ ## Bug Fix Protocol
+
+ Every bug fix follows TDD:
+
+ 1. **RED** — write a test that reproduces the bug
+ 2. **Verify RED** — confirm the test fails with the bug present
+ 3. **GREEN** — implement the fix
+ 4. **Verify GREEN** — confirm the test passes and the bug is gone
+
+ Never fix a bug without a test. The test proves the fix works and prevents regression.
+
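As a sketch of steps 1 and 2, a bug-reproducing test might look like this. The `validateEmail` function and the empty-string bug are hypothetical, and no specific framework is assumed:

```typescript
// Post-fix implementation, shown for completeness. Hypothetically, before
// the fix an empty string was accepted.
function validateEmail(email: string): boolean {
  return email.length > 0 && email.includes("@");
}

// RED step: this test encodes the required behavior. With the bug present
// it fails; after the GREEN step it passes and guards against regression.
if (validateEmail("") !== false) {
  throw new Error("empty email should be rejected");
}
if (validateEmail("user@example.com") !== true) {
  throw new Error("valid email should be accepted");
}
```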
+ ## Common Rationalizations
+
+ | Excuse | Reality |
+ |--------|---------|
+ | "Too simple to test" | Simple code breaks. The test takes 30 seconds. |
+ | "I'll write tests after" | Tests written after pass immediately and prove nothing. |
+ | "Tests after achieve the same goals" | Tests-after verify "what does this do?" Tests-first define "what should this do?" |
+ | "Already manually tested" | Manual testing is ad-hoc: no record, cannot re-run, easy to miss cases. |
+ | "Deleting X hours of work is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
+ | "Keep as reference, write tests first" | You will adapt it instead of writing fresh. That is testing after. |
+ | "Need to explore first" | Fine. Throw away the exploration. Start fresh with TDD. |
+ | "TDD will slow me down" | TDD is faster than debugging after the fact. |
+ | "This is different because..." | It is not. Delete the code. Start over with TDD. |
+
+ ## Output Format
+
+ Present results using the Shared Output Contract:
+
+ 1. **Goal/Result** — what was implemented using TDD and the current cycle state
+ 2. **Key Details:**
+    - tests written (names and what they verify)
+    - RED/GREEN/REFACTOR status for each cycle
+    - any test that could not be written and why
+    - verification output (pass/fail counts)
+ 3. **Next Action** — continue with next RED cycle, or hand off:
+    - all tests green and feature complete → `verification-before-completion`
+    - needs broader review → `code-review`
+    - complex feature needs planning first → `planning`
+
+ ## Red Flags
+
+ - writing production code before a failing test exists
+ - test passes immediately on first run (testing existing behavior, not new behavior)
+ - cannot explain why the test failed (do not proceed to GREEN)
+ - adding features the test does not require during GREEN
+ - adding behavior during REFACTOR
+ - rationalizing "just this once" to skip the failing test step
+ - mocking so heavily that the test verifies mock behavior, not real behavior
+ - keeping pre-TDD code as "reference" instead of deleting it
+
+ ## Checklist
+
+ - [ ] Every new function/method has a test that was written first
+ - [ ] Watched each test fail before writing implementation
+ - [ ] Each test failed for the expected reason (feature missing, not typo)
+ - [ ] Wrote minimal code to pass each test (YAGNI)
+ - [ ] All tests pass after each GREEN step
+ - [ ] Refactoring did not break any tests
+ - [ ] Tests use real code (mocks only when unavoidable)
+ - [ ] Edge cases and error paths are covered
+
+ ## Done Criteria
+
+ This skill is complete when all required behaviors have passing tests that were written before the implementation, the full test suite is green, and no production code exists without a corresponding test that was seen to fail first. If any rationalization was used to skip TDD, the skill is not complete — delete the code and start over.
@@ -0,0 +1,77 @@
+ # Test-Driven Development — Examples
+
+ ## Example 1
+
+ **User:** "Add email validation to the signup form"
+
+ Expected routing:
+
+ - task type: new feature with TDD
+ - chosen skill: `test-driven-development`
+ - planning required: yes, if multi-file
+ - next step: write a failing test for email validation before any implementation
+
+ ## Example 2
+
+ **User:** "Fix bug: empty email accepted by the form"
+
+ Expected routing:
+
+ - task type: bug fix with TDD
+ - chosen skill: `test-driven-development` (via `debug`)
+ - next step: write a test that reproduces the bug (empty email accepted), confirm it fails, then fix
+
+ ## Example 3
+
+ **User:** "Refactor the auth module — add tests first since it has none"
+
+ Expected routing:
+
+ - task type: refactor with TDD safety net
+ - chosen skill: `test-driven-development` + `refactor-safe`
+ - next step: write characterization tests for existing behavior before refactoring
+
+ ## Red-Green-Refactor Cycle Example
+
+ **Feature:** retry failed HTTP requests 3 times
+
+ ### RED
+
+ ```typescript
+ test('retries failed operations 3 times', async () => {
+   let attempts = 0;
+   const operation = () => {
+     attempts++;
+     if (attempts < 3) throw new Error('fail');
+     return 'success';
+   };
+
+   const result = await retryOperation(operation);
+
+   expect(result).toBe('success');
+   expect(attempts).toBe(3);
+ });
+ ```
+
+ Run test → FAIL: `retryOperation is not defined`
+
+ ### GREEN
+
+ ```typescript
+ async function retryOperation<T>(fn: () => T | Promise<T>): Promise<T> {
+   for (let i = 0; i < 3; i++) {
+     try {
+       return await fn();
+     } catch (e) {
+       if (i === 2) throw e;
+     }
+   }
+   throw new Error('unreachable');
+ }
+ ```
+
+ Run test → PASS
+
+ ### REFACTOR
+
+ Extract magic number 3 into a constant if needed. Run tests → still PASS.
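That refactor could look like the following sketch. `MAX_ATTEMPTS` is the extracted constant; behavior is unchanged, so the test from the RED step stays green:

```typescript
// REFACTOR step: the magic number 3 becomes a named constant. Structure
// changes, behavior does not, so existing tests pass unchanged.
const MAX_ATTEMPTS = 3;

async function retryOperation<T>(fn: () => T | Promise<T>): Promise<T> {
  for (let i = 0; i < MAX_ATTEMPTS; i++) {
    try {
      return await fn();
    } catch (e) {
      // Re-throw only after the final attempt.
      if (i === MAX_ATTEMPTS - 1) throw e;
    }
  }
  throw new Error("unreachable");
}
```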
@@ -0,0 +1,31 @@
+ name: test-driven-development
+ version: 0.1.0
+ category: quality
+ summary: Enforce test-first discipline with red-green-refactor cycles, preventing production code without a failing test.
+ summary_vi: "Thực thi kỷ luật test-first với chu trình red-green-refactor, không cho phép code production khi chưa có test fail."
+ triggers:
+   - implement with TDD
+   - test first
+   - red-green-refactor
+   - write a failing test first
+   - use TDD
+ inputs:
+   - feature or behavior to implement
+   - existing test framework and conventions
+ outputs:
+   - failing tests written first
+   - minimal implementation passing tests
+   - refactored code with green tests
+   - verification output
+ constraints:
+   - no production code without a failing test
+   - delete code written before its test
+   - one behavior per test
+   - YAGNI during GREEN phase
+ related_skills:
+   - using-skills
+   - planning
+   - feature-delivery
+   - debug
+   - verification-before-completion
+   - code-review
@@ -0,0 +1,31 @@
+ # TDD Cycle Output Template
+
+ Use this template when reporting TDD progress so each cycle is explicit and verifiable.
+
+ ```markdown
+ ## Goal/Result
+
+ [What behavior was implemented or fixed using TDD]
+
+ ## Key Details
+
+ - Test name: `[exact test name]`
+ - RED status: PASS / FAIL
+ - RED evidence: `[command and failure reason]`
+ - GREEN status: PASS / FAIL
+ - GREEN evidence: `[command and pass/fail result]`
+ - Refactor performed: yes / no
+ - Notes: `[edge cases, blockers, or why a test could not be written]`
+
+ ## Next Action
+
+ - Continue with next RED cycle
+ - Or hand off to `verification-before-completion`
+ ```
+
+ Checklist:
+
+ - The test was written before production code
+ - The RED failure was observed for the expected reason
+ - The GREEN pass was observed with fresh output
+ - Refactor did not add new behavior