ace-test 0.6.0

This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (67)
  1. checksums.yaml +7 -0
  2. data/.ace-defaults/nav/protocols/agent-sources/ace-test.yml +19 -0
  3. data/.ace-defaults/nav/protocols/guide-sources/ace-test.yml +19 -0
  4. data/.ace-defaults/nav/protocols/tmpl-sources/ace-test.yml +11 -0
  5. data/.ace-defaults/nav/protocols/wfi-sources/ace-test.yml +19 -0
  6. data/CHANGELOG.md +169 -0
  7. data/LICENSE +21 -0
  8. data/README.md +40 -0
  9. data/Rakefile +12 -0
  10. data/handbook/agents/mock.ag.md +164 -0
  11. data/handbook/agents/profile-tests.ag.md +132 -0
  12. data/handbook/agents/test.ag.md +99 -0
  13. data/handbook/guides/SUMMARY.md +95 -0
  14. data/handbook/guides/embedded-testing-guide.g.md +261 -0
  15. data/handbook/guides/mocking-patterns.g.md +464 -0
  16. data/handbook/guides/quick-reference.g.md +46 -0
  17. data/handbook/guides/test-driven-development-cycle/meta-documentation.md +26 -0
  18. data/handbook/guides/test-driven-development-cycle/ruby-application.md +18 -0
  19. data/handbook/guides/test-driven-development-cycle/ruby-gem.md +19 -0
  20. data/handbook/guides/test-driven-development-cycle/rust-cli.md +18 -0
  21. data/handbook/guides/test-driven-development-cycle/rust-wasm-zed.md +19 -0
  22. data/handbook/guides/test-driven-development-cycle/typescript-nuxt.md +18 -0
  23. data/handbook/guides/test-driven-development-cycle/typescript-vue.md +19 -0
  24. data/handbook/guides/test-layer-decision.g.md +261 -0
  25. data/handbook/guides/test-mocking-patterns.g.md +414 -0
  26. data/handbook/guides/test-organization.g.md +140 -0
  27. data/handbook/guides/test-performance.g.md +353 -0
  28. data/handbook/guides/test-responsibility-map.g.md +220 -0
  29. data/handbook/guides/test-review-checklist.g.md +231 -0
  30. data/handbook/guides/test-suite-health.g.md +337 -0
  31. data/handbook/guides/testable-code-patterns.g.md +315 -0
  32. data/handbook/guides/testing/ruby-rspec-config-examples.md +120 -0
  33. data/handbook/guides/testing/ruby-rspec.md +87 -0
  34. data/handbook/guides/testing/rust.md +52 -0
  35. data/handbook/guides/testing/test-maintenance.md +364 -0
  36. data/handbook/guides/testing/typescript-bun.md +47 -0
  37. data/handbook/guides/testing/vue-firebase-auth.md +546 -0
  38. data/handbook/guides/testing/vue-vitest.md +236 -0
  39. data/handbook/guides/testing-philosophy.g.md +82 -0
  40. data/handbook/guides/testing-strategy.g.md +151 -0
  41. data/handbook/guides/testing-tdd-cycle.g.md +146 -0
  42. data/handbook/guides/testing.g.md +170 -0
  43. data/handbook/skills/as-test-create-cases/SKILL.md +24 -0
  44. data/handbook/skills/as-test-fix/SKILL.md +26 -0
  45. data/handbook/skills/as-test-improve-coverage/SKILL.md +22 -0
  46. data/handbook/skills/as-test-optimize/SKILL.md +34 -0
  47. data/handbook/skills/as-test-performance-audit/SKILL.md +34 -0
  48. data/handbook/skills/as-test-plan/SKILL.md +34 -0
  49. data/handbook/skills/as-test-review/SKILL.md +34 -0
  50. data/handbook/skills/as-test-verify-suite/SKILL.md +45 -0
  51. data/handbook/templates/e2e-sandbox-checklist.template.md +289 -0
  52. data/handbook/templates/test-case.template.md +56 -0
  53. data/handbook/templates/test-performance-audit.template.md +132 -0
  54. data/handbook/templates/test-responsibility-map.template.md +92 -0
  55. data/handbook/templates/test-review-checklist.template.md +163 -0
  56. data/handbook/workflow-instructions/test/analyze-failures.wf.md +120 -0
  57. data/handbook/workflow-instructions/test/create-cases.wf.md +675 -0
  58. data/handbook/workflow-instructions/test/fix.wf.md +120 -0
  59. data/handbook/workflow-instructions/test/improve-coverage.wf.md +370 -0
  60. data/handbook/workflow-instructions/test/optimize.wf.md +368 -0
  61. data/handbook/workflow-instructions/test/performance-audit.wf.md +17 -0
  62. data/handbook/workflow-instructions/test/plan.wf.md +323 -0
  63. data/handbook/workflow-instructions/test/review.wf.md +16 -0
  64. data/handbook/workflow-instructions/test/verify-suite.wf.md +343 -0
  65. data/lib/ace/test/version.rb +7 -0
  66. data/lib/ace/test.rb +10 -0
  67. metadata +152 -0
data/handbook/guides/testing/vue-vitest.md
@@ -0,0 +1,236 @@
---
doc-type: guide
title: Vue + Vitest Testing Guide
purpose: Vue Vitest testing reference
ace-docs:
  last-updated: 2026-01-23
  last-checked: 2026-03-21
---

# Vue + Vitest Testing Guide

Quick reference for Vue component testing with Vitest, focusing on single-run execution for coding agents and CI environments.

## Single-Run Commands

```bash
# Basic single run (exits immediately)
vitest run

# Run with output to file for parsing
vitest run --reporter=json --outputFile=reports/vitest.json

# Fail-fast with minimal output
vitest run --bail=1 --silent

# Run only changed files
vitest run --changed

# Run specific test files
vitest run src/components/__tests__/LoginForm.test.js
```

## Component Testing Patterns

### Basic Component Test Structure

```javascript
import { mount } from '@vue/test-utils'
import { describe, it, expect, beforeEach } from 'vitest'
import LoginForm from '../LoginForm.vue'

describe('LoginForm', () => {
  let wrapper

  beforeEach(() => {
    wrapper = mount(LoginForm)
  })

  it('renders login form elements', () => {
    expect(wrapper.find('input[type="email"]').exists()).toBe(true)
    expect(wrapper.find('input[type="password"]').exists()).toBe(true)
    expect(wrapper.find('button[type="submit"]').exists()).toBe(true)
  })

  it('emits login event with form data', async () => {
    await wrapper.find('input[type="email"]').setValue('test@example.com')
    await wrapper.find('input[type="password"]').setValue('password123')
    await wrapper.find('form').trigger('submit.prevent')

    expect(wrapper.emitted('login')).toHaveLength(1)
    expect(wrapper.emitted('login')[0][0]).toEqual({
      email: 'test@example.com',
      password: 'password123'
    })
  })
})
```

### Testing with Composables

```javascript
import { mount } from '@vue/test-utils'
import { describe, it, expect, vi } from 'vitest'
import ProfileView from '../ProfileView.vue'

// Mock the composable
vi.mock('@/composables/useAuth', () => ({
  useAuth: () => ({
    user: { value: { email: 'test@example.com', name: 'Test User' } },
    loading: { value: false },
    logout: vi.fn()
  })
}))

describe('ProfileView', () => {
  it('displays user information', () => {
    const wrapper = mount(ProfileView)
    expect(wrapper.text()).toContain('test@example.com')
    expect(wrapper.text()).toContain('Test User')
  })
})
```

### Testing with Router

```javascript
import { mount } from '@vue/test-utils'
import { describe, it, expect } from 'vitest'
import { createRouter, createWebHistory } from 'vue-router'
import App from '../App.vue'

const router = createRouter({
  history: createWebHistory(),
  routes: [
    { path: '/', component: { template: '<div>Home</div>' } },
    { path: '/login', component: { template: '<div>Login</div>' } }
  ]
})

describe('App with Router', () => {
  it('navigates to login page', async () => {
    const wrapper = mount(App, {
      global: {
        plugins: [router]
      }
    })

    await router.push('/login')
    await wrapper.vm.$nextTick()

    expect(wrapper.text()).toContain('Login')
  })
})
```

## Configuration for Single-Run

### vitest.config.js

```javascript
import { defineConfig } from 'vitest/config'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  test: {
    environment: 'happy-dom',
    // For CI/automated testing
    watch: false, // Force single-run mode
    reporters: ['default', 'json'],
    outputFile: {
      json: './reports/vitest.json'
    },
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html', 'lcov']
    }
  }
})
```

### package.json Scripts

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:ui": "vitest --ui",
    "test:coverage": "vitest run --coverage"
  }
}
```

## Coding Agent Integration

### Exit Code Handling

```bash
# Test success/failure based on exit code
vitest run && echo "✅ Tests passed" || echo "❌ Tests failed"

# Store exit code for processing
vitest run
TEST_EXIT_CODE=$?
if [ $TEST_EXIT_CODE -eq 0 ]; then
  echo "All tests passed"
else
  echo "Tests failed with code $TEST_EXIT_CODE"
fi
```

### JSON Output Parsing

```javascript
// Parse test results from JSON output (Node script)
import fs from 'node:fs'

const results = JSON.parse(fs.readFileSync('reports/vitest.json', 'utf8'))

console.log(`Tests: ${results.testResults.length}`)
console.log(`Passed: ${results.numPassedTests}`)
console.log(`Failed: ${results.numFailedTests}`)

// Get failed test details
const failedTests = results.testResults
  .filter(test => test.status === 'failed')
  .map(test => ({ name: test.name, error: test.message }))
```

## Common Pitfalls & Solutions

| Issue | Solution |
|-------|----------|
| Tests hang in watch mode | Always use `vitest run` for automated testing |
| Mock not applying | `vi.mock()` calls are hoisted above imports; keep the factory self-contained (no references to variables declared later in the file) |
| Async test failures | Use proper `await` and `nextTick()` |
| Component not rendering | Check if all required props are provided |
| Router tests failing | Mock router or provide test router instance |

## Best Practices for Automation

1. **Always use `vitest run`** - Never leave tests in watch mode for CI/agents
2. **Set explicit timeouts** - Prevent hanging tests with `--testTimeout=10000`
3. **Use JSON reporter** - Parse structured output instead of console logs
4. **Fail fast** - Use `--bail=1` to stop on first failure
5. **Clean output** - Use `--silent` or `--reporter=dot` for minimal noise
6. **Exit code validation** - Check the process exit code for pass/fail status

## Quick Commands Reference

```bash
# Single run with coverage
vitest run --coverage

# Run specific test pattern
vitest run --testNamePattern="LoginForm"

# Run tests for specific files
vitest run src/components/**/*.test.js

# Generate machine-readable report
vitest run --reporter=junit --outputFile=reports/junit.xml

# Minimal output for CI
vitest run --reporter=dot --silent
```

This guide ensures your Vue + Vitest tests run deterministically and provide clear success/failure signals for coding agents and CI environments.
data/handbook/guides/testing-philosophy.g.md
@@ -0,0 +1,82 @@
---
doc-type: guide
title: Testing Philosophy
purpose: Testing philosophy and pyramid structure
ace-docs:
  last-updated: 2026-02-22
  last-checked: 2026-03-21
---

# Testing Philosophy

## The Testing Pyramid

ACE follows a strict testing pyramid with clear IO boundaries:

| Layer | Location | IO Policy | Purpose |
|-------|----------|-----------|---------|
| **Unit (atoms)** | `test/atoms/` | **No IO** | Test pure logic in isolation |
| **Unit (molecules)** | `test/molecules/` | **No IO** | Test component composition |
| **Unit (organisms)** | `test/organisms/` | **Mocked IO** | Test business logic with stubbed boundaries |
| **Integration** | `test/integration/` | **Mocked IO** | Test CLI/API surface with stubbed externals |
| **E2E** | `test/e2e/TS-*/scenario.yml` | **Real IO** | Validate the real system works |

## IO Isolation Principle

**Default: No IO in unit tests.** This means:

- **No file system**: Use `MockGitRepo` or inline strings, not `File.read`
- **No network**: Use `WebMock` stubs, not real HTTP calls
- **No subprocesses**: Use method stubs, not `Open3.capture3`
- **No sleep**: Stub `Kernel.sleep` in retry logic

**Why?**

- Tests run in parallel safely
- Tests are fast (<10ms for atoms)
- Tests are deterministic (no flaky failures)
- CI doesn't need special setup

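The "no sleep" rule can be sketched with Minitest's built-in stubbing. The `Retrier` class below is a hypothetical example invented for illustration; the point is that the retry logic is fully exercised without any real waiting.

```ruby
require "minitest/mock" # provides Object#stub

# Hypothetical class whose retry loop sleeps between attempts.
class Retrier
  def initialize(attempts: 3)
    @attempts = attempts
  end

  def call
    tries = 0
    begin
      tries += 1
      yield
    rescue RuntimeError
      raise if tries >= @attempts
      Kernel.sleep(0.5) # the IO boundary to stub in unit tests
      retry
    end
  end
end

# Stub Kernel.sleep so the test records the waits instead of performing them.
slept = []
result = nil
Kernel.stub(:sleep, ->(seconds) { slept << seconds }) do
  calls = 0
  result = Retrier.new.call do
    calls += 1
    raise "flaky" if calls < 3
    :ok
  end
end
```

The assertions can then check both the outcome (`:ok`) and the retry behavior (two recorded sleeps), while the test still completes in microseconds.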
## When Real IO is Allowed

Real IO belongs in **E2E tests only** (`test/e2e/TS-*/`):

- They are executed by an agent, not the test runner
- They verify the full system works end-to-end
- They run infrequently (on-demand, not every commit)
- They document real tool requirements (standardrb, rubocop, etc.)

See `/ace-e2e-run` workflow for execution.

## Core Principles

### 1. Test-Driven Development

Writing tests first drives design and ensures testability. Follow the Red-Green-Refactor cycle:

1. **Red**: Write a failing test that defines the desired outcome
2. **Green**: Write the minimum code to make the test pass
3. **Refactor**: Improve code design while keeping tests green

### 2. Isolation

- Unit tests should mock dependencies to test the unit in isolation
- Ensure tests clean up after themselves (reset state, delete created files/records)
- Tests must be independent and runnable in any order

### 3. Determinism

- Avoid flaky tests (tests that pass sometimes and fail others without code changes)
- Address sources of flakiness (timing issues, race conditions, external dependencies)
- Tests should produce the same result every run

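Timing is the most common flakiness source, and pinning the clock removes it. A minimal sketch, assuming a hypothetical `Token` with a time-based expiry and using Minitest's `Object#stub`:

```ruby
require "minitest/mock" # provides Object#stub

# Hypothetical token that expires TTL seconds after issuance.
class Token
  TTL = 60

  def initialize(issued_at)
    @issued_at = issued_at
  end

  def expired?
    Time.now - @issued_at > TTL
  end
end

# Pin Time.now so the result never depends on when the test runs.
frozen = Time.at(1_000_000)
expired = nil
fresh = nil
Time.stub(:now, frozen) do
  expired = Token.new(frozen - 61).expired? # deterministically true
  fresh   = Token.new(frozen - 59).expired? # deterministically false
end
```

Without the stub, a token issued exactly `TTL` seconds ago would flip between passing and failing depending on test scheduling.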
### 4. Clarity

- Use descriptive names for test files, contexts, and individual tests
- Follow the Arrange-Act-Assert pattern
- Keep tests focused on a single behavior or requirement

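The Arrange-Act-Assert pattern looks like this in practice (the `PriceCalculator` here is a hypothetical example):

```ruby
# Hypothetical object under test.
class PriceCalculator
  def total(items, discount: 0)
    items.sum * (1 - discount)
  end
end

# Arrange: build the object under test and its inputs
calc = PriceCalculator.new
items = [10, 20, 30]

# Act: perform exactly one behavior
total = calc.total(items, discount: 0.1)

# Assert: verify the single outcome this test is about
raise "expected 54.0, got #{total}" unless total == 54.0
```

Keeping the three phases visually separated makes it obvious which behavior a test covers and why it fails when it does.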
## Related Guides

- [Test Organization](guide://test-organization) - Directory structure and naming
- [Mocking Patterns](guide://mocking-patterns) - How to isolate tests
- [Testing TDD Cycle](guide://testing-tdd-cycle) - Implementing the task cycle
data/handbook/guides/testing-strategy.g.md
@@ -0,0 +1,151 @@
---
doc-type: guide
title: "Testing Strategy: The Fast and Slow Loops"
purpose: Define the ACE testing strategy with fast loop (unit/integration) and slow loop (E2E)
ace-docs:
  last-updated: 2026-02-19
  last-checked: 2026-03-21
---

# Testing Strategy: The Fast and Slow Loops

This guide defines the ACE strategy for maintaining a high-performance, high-confidence test suite. We divide testing into two distinct loops: the **Fast Loop** (immediate feedback) and the **Slow Loop** (comprehensive validation).

## Core Philosophy

1. **Fast Loop must be FAST (< 10s total)**: If unit tests are slow, developers stop running them.
2. **Slow Loop must be SAFE**: E2E tests run real commands and modify state; they must be sandboxed.
3. **Stub the Boundary**: Never let a unit test leak a subprocess call or file I/O.
4. **Test Behavior, Not Mocks**: Verify the *outcome* of logic, not just that a method was called.

## The Test Pyramid

| Layer | Scope | Target Speed | I/O | Strategy |
|-------|-------|--------------|-----|----------|
| **Unit** (Atoms) | Single Method/Class | < 10ms / test | **Forbidden** | Pure functions. Mock *all* collaborators. |
| **Integration** (Molecules) | Component Interaction | < 100ms / test | **Stubbed** | Test wiring. Stub APIs/Shell/FS. |
| **E2E** (Systems) | Full Workflow | Seconds/Minutes | **Real** | Real CLI execution. Sandboxed FS. |

> **Note**: Target speeds are ideal goals. The `verify-test-suite` workflow uses relaxed thresholds for warnings (atoms >50ms, molecules >100ms) to catch tests drifting toward I/O leaks before they become critical.

---

## 1. The Fast Loop (Unit & Integration)

**Goal**: Validate logic correctness instantly.

### Rules of Engagement

- **No Subprocesses**: Never call `system`, `Open3`, or backticks.
- **No Network**: Never make HTTP requests.
- **In-Memory FS**: Use `FakeFS` or temporary directories only if absolutely necessary (prefer mocking `File`).

### Effective Mocking: "Stub the Boundary"

Don't just stub the inner implementation; stub the *check* that leads to it.

**Bad** (still triggers a subprocess):

```ruby
# The code checks `if runner.available?` before calling `run`
Runner.stub(:run, result) do
  # Code calls `available?` -> triggers `system("cmd --version")` -> SLOW!
  subject.process
end
```

**Good** (fast):

```ruby
Runner.stub(:available?, true) do # Bypass the check
  Runner.stub(:run, result) do    # Return canned result
    subject.process
  end
end
```

### Avoiding "Testing the Mock"

- **Stub for Data**: When you need a value to proceed (e.g., configuration, file content), use stubs.
- **Mock for Side Effects**: Only use `Minitest::Mock` (expectations) when the *purpose* of the method is the side effect (e.g., `Git.commit`).
- **Don't over-specify**: If your test breaks every time you rename a private helper, you are testing implementation, not behavior.

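As an illustration of "mock for side effects", here is a sketch with a hypothetical `ReleasePublisher`: the method's entire purpose is calling `commit` on its git collaborator, so an expectation-style `Minitest::Mock` is the right tool.

```ruby
require "minitest/mock"

# Hypothetical class whose purpose is a side effect on a collaborator.
class ReleasePublisher
  def initialize(git)
    @git = git
  end

  def publish(version)
    @git.commit("release: v#{version}")
  end
end

git = Minitest::Mock.new
git.expect(:commit, true, ["release: v1.2.0"])

ReleasePublisher.new(git).publish("1.2.0")

git.verify # raises MockExpectationError unless commit was called as expected
```

For pure data lookups, by contrast, a plain stub with no expectation keeps the test from over-specifying the implementation.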
### Maintainable Stubbing (Composite Helpers)

Avoid deeply nested stub blocks. Use composite helpers to wrap common environmental setups.

**Bad (deep nesting):**

```ruby
def test_workflow
  mock_config do
    mock_git do
      mock_llm do
        # Actual test code is buried
      end
    end
  end
end
```

**Good (composite helper):**

```ruby
def test_workflow
  with_mock_environment(git: true, llm: true) do
    # Clear test focus
  end
end
```

### Mock Hygiene (Avoiding Drift)

Mocks simulate reality, but reality changes.

- **Rule**: Whenever you change behavior covered by an E2E test (reality), you MUST verify and update the corresponding unit test mocks (simulation).
- **Risk**: "Mock Drift" leads to green unit tests that fail in production.

---

## 2. The Slow Loop (E2E)

**Goal**: Validate system coherence and real-world functionality.

### Rules of Engagement

- **Real Binaries**: Execute the actual `ace-*` CLI tools.
- **Sandboxing**: Run in a temporary directory. Clean up after yourself.
- **Critical Paths**: Focus on happy paths and critical error cases. Don't test every edge case (leave that to unit tests).

### E2E Test Structure (TS-format)

We use directory-based test scenarios with `scenario.yml` and `TC-*.tc.md` files for E2E to ensure they double as documentation.

```
TS-FEATURE-001-task-creation/
  scenario.yml         # Metadata + setup
  TC-001-create.tc.md  # Test case with steps + assertions
  fixtures/            # Shared test data
```

---

## 3. The "Test Planner" & "Test Writer" Roles

When creating tests, separate the concerns:

### 🎩 The Test Planner

Decides **WHAT** and **WHERE** to test.

- *"This logic handles a git conflict. It's complex logic -> **Unit Test** with mocked git output."*
- *"This command wires up the search tool to the LLM. -> **Integration Test** with stubbed LLM."*
- *"This workflow creates a PR and comments on it. -> **E2E Test**."*

### ✍️ The Test Writer

Implements the test efficiently.

- Uses `ace-test --profile` to ensure speed.
- Writes proper setup/teardown.
- Ensures assertions are meaningful.

---

## 4. Maintenance & Profiling

### The 100ms Rule

Any unit/integration test taking > 100ms is a bug.

- **Cause**: Likely a hidden I/O call (subprocess, file system).
- **Fix**: Profile, find the leak, and **Stub the Boundary**.

### Periodic Verification

Run the suite with profiling regularly:

```bash
ace-test --profile 10
```

If "fast" tests appear in the top 10 slow list, investigate immediately.
data/handbook/guides/testing-tdd-cycle.g.md
@@ -0,0 +1,146 @@
---
doc-type: guide
title: Implementing the Task Cycle
purpose: Task cycle implementation
ace-docs:
  last-updated: 2026-02-22
  last-checked: 2026-03-21
---

# Implementing the Task Cycle

## 1. Introduction

This guide outlines the standard development cycle used for implementing tasks within this project. Following this cycle ensures consistency, promotes quality through testing, and facilitates effective collaboration, especially when working with AI agents. It integrates principles of Test-Driven Development (TDD) and emphasizes continuous reflection.

## 2. The Core Cycle Overview

The typical task implementation follows these high-level steps:

1. **Start:** Understand the task and plan the approach.
2. **Test (Red):** Write a failing test that defines the desired outcome.
3. **Code (Green):** Write the minimum code required to make the test pass.
4. **Refactor:** Improve the code's design while ensuring tests still pass.
5. **Verify:** Run all checks (linters, formatters, full test suite).
6. **Commit:** Save the changes with a clear, conventional commit message.
7. **Reflect:** Analyze the process and capture learnings.
8. **Update Status:** Mark the task as complete or note progress.

This cycle (steps 2-4) may be repeated multiple times for a single task as functionality is built incrementally.

## 3. Detailed Steps

Here's a more detailed breakdown of each step, referencing relevant workflow instructions:

### Step 1: Start Task (Understand & Plan)

* **Goal:** Fully understand the task requirements and plan the implementation approach.
* **Actions:**
  * Carefully review the task description (`.md` file) including objectives, scope, and acceptance criteria.
  * Identify relevant existing code, patterns, or documentation.
  * Break down the task into smaller, manageable implementation steps.
  * Outline the required tests based on acceptance criteria.
* **Workflow:** See [`work-on-task.wf.md`](wfi://task/work)

### Step 2: Write Tests (TDD - Red)

* **Goal:** Define the desired behavior or functionality by writing an automated test *before* writing the implementation code.
* **Actions:**
  * Create a new test file or add to an existing one.
  * Write a specific test case that captures one aspect of the requirement.
  * Ensure the test clearly describes the expected outcome.
  * Run the test and confirm that it **fails** (this is the "Red" phase).
* **Workflow:** See the testing section in [`work-on-task.wf.md`](wfi://task/work)

### Step 3: Implement Code (TDD - Green)

* **Goal:** Write the simplest, minimum amount of code necessary to make the failing test pass.
* **Actions:**
  * Focus *only* on satisfying the requirements of the current failing test.
  * Avoid adding extra functionality or premature optimizations.
  * Run the test(s) frequently until the target test passes (this is the "Green" phase).

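Steps 2 and 3 can be sketched end to end. The `Slug` helper and the inline assertions below are hypothetical, but they show the failing-first discipline:

```ruby
# Red: the test exists before the implementation, so it fails --
# and it fails for the right reason (Slug is undefined).
begin
  raise "test failed" unless Slug.generate("Hello World") == "hello-world"
rescue NameError
  puts "RED: Slug is not implemented yet"
end

# Green: the minimum code that makes the test pass.
class Slug
  def self.generate(text)
    text.downcase.strip.gsub(/\s+/, "-")
  end
end

raise "test failed" unless Slug.generate("Hello World") == "hello-world"
puts "GREEN: the test passes"
```

Confirming the failure before implementing guards against tests that would pass vacuously no matter what the code does.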
### Step 4: Refactor (TDD - Refactor)

* **Goal:** Improve the design, clarity, and structure of the code *now that it works* (tests are passing).
* **Actions:**
  * Look for opportunities to remove duplication, improve variable names, simplify logic, or adhere better to coding standards.
  * Run tests after each small refactoring step to ensure no behavior was broken.
* **Reference:** [Coding Standards](guide://coding-standards.g)

### Step 5: Verify Locally

* **Goal:** Ensure the changes integrate well and meet overall quality standards before committing.
* **Actions:**
  * Run the full test suite (not just the tests for the current change).
  * Run linters and code formatters.
  * Check test coverage if applicable.

### Step 6: Commit Changes

* **Goal:** Save the completed, tested, and verified changes to version control with a meaningful message.
* **Actions:**
  * Stage only the files related to the logical change being committed (atomic commits).
  * Review staged changes (`git diff --staged`).
  * **Critically review any AI-generated code before committing.**
  * Write a commit message following the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) standard.
* **Workflow:** See [`commit.wf.md`](wfi://git/commit)
* **Reference:** [Version Control Git Guide](guide://version-control-system-git)

### Step 7: Self-Reflection

* **Goal:** Analyze the completed work to capture learnings and identify potential improvements.
* **Actions:**
  * Review the implementation process, challenges, and successes.
  * Update documentation (guides, ADRs, comments) if necessary.
  * Identify any follow-up actions (e.g., refactoring needs, process improvements) and create backlog tasks if needed.
  * Log the reflection summary using the [`create-reflection-note.wf.md`](wfi://create-reflection-note) workflow.

### Step 8: Update Task Status

* **Goal:** Keep project tracking up-to-date.
* **Actions:**
  * Update the status field (e.g., to `done`) in the task's `.md` file.
  * Move the task file to the appropriate `done` directory if applicable (refer to project management specifics).
* **Reference:** [Project Management Guide](guide://project-management.g)

## 4. Key Principles & Best Practices

* **Test-Driven Development:** Writing tests first drives design and ensures testability.
* **Atomic Commits:** Each commit should represent a single, logical change.
* **Review AI Contributions:** Treat AI-generated code with the same rigor as code from any other source. Verify its correctness and adherence to standards.
* **Incremental Progress:** Build functionality in small, testable steps.
* **Continuous Improvement:** Use the self-reflection step to actively improve code and processes.

## 5. Technology-Specific Variations

While the core cycle remains the same, specific commands and tools vary by technology stack. Refer to the relevant sub-guide for details:

* [Ruby Application](./test-driven-development-cycle/ruby-application.md)
* [Ruby Gem](./test-driven-development-cycle/ruby-gem.md)
* [Rust CLI](./test-driven-development-cycle/rust-cli.md)
* [Rust→Wasm Zed Extension](./test-driven-development-cycle/rust-wasm-zed.md)
* [TypeScript + Vue](./test-driven-development-cycle/typescript-vue.md)
* [TypeScript + Nuxt](./test-driven-development-cycle/typescript-nuxt.md)
* [Meta (Documentation)](./test-driven-development-cycle/meta-documentation.md)

## 6. Related Documentation

* **Workflow Instructions:**
  * [`work-on-task.wf.md`](wfi://task/work) (includes testing guidance)
  * [`commit.wf.md`](wfi://git/commit)
  * [`save-session-context.wf.md`](wfi://save-session-context) (for saving session context)
* **Core Guides:**
  * [Testing Guide](./testing.g.md)
  * [Version Control Git Guide](guide://version-control-system-git)
  * [Coding Standards](guide://coding-standards.g)
  * [Project Management Guide](guide://project-management.g)