ace-test 0.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.ace-defaults/nav/protocols/agent-sources/ace-test.yml +19 -0
- data/.ace-defaults/nav/protocols/guide-sources/ace-test.yml +19 -0
- data/.ace-defaults/nav/protocols/tmpl-sources/ace-test.yml +11 -0
- data/.ace-defaults/nav/protocols/wfi-sources/ace-test.yml +19 -0
- data/CHANGELOG.md +169 -0
- data/LICENSE +21 -0
- data/README.md +40 -0
- data/Rakefile +12 -0
- data/handbook/agents/mock.ag.md +164 -0
- data/handbook/agents/profile-tests.ag.md +132 -0
- data/handbook/agents/test.ag.md +99 -0
- data/handbook/guides/SUMMARY.md +95 -0
- data/handbook/guides/embedded-testing-guide.g.md +261 -0
- data/handbook/guides/mocking-patterns.g.md +464 -0
- data/handbook/guides/quick-reference.g.md +46 -0
- data/handbook/guides/test-driven-development-cycle/meta-documentation.md +26 -0
- data/handbook/guides/test-driven-development-cycle/ruby-application.md +18 -0
- data/handbook/guides/test-driven-development-cycle/ruby-gem.md +19 -0
- data/handbook/guides/test-driven-development-cycle/rust-cli.md +18 -0
- data/handbook/guides/test-driven-development-cycle/rust-wasm-zed.md +19 -0
- data/handbook/guides/test-driven-development-cycle/typescript-nuxt.md +18 -0
- data/handbook/guides/test-driven-development-cycle/typescript-vue.md +19 -0
- data/handbook/guides/test-layer-decision.g.md +261 -0
- data/handbook/guides/test-mocking-patterns.g.md +414 -0
- data/handbook/guides/test-organization.g.md +140 -0
- data/handbook/guides/test-performance.g.md +353 -0
- data/handbook/guides/test-responsibility-map.g.md +220 -0
- data/handbook/guides/test-review-checklist.g.md +231 -0
- data/handbook/guides/test-suite-health.g.md +337 -0
- data/handbook/guides/testable-code-patterns.g.md +315 -0
- data/handbook/guides/testing/ruby-rspec-config-examples.md +120 -0
- data/handbook/guides/testing/ruby-rspec.md +87 -0
- data/handbook/guides/testing/rust.md +52 -0
- data/handbook/guides/testing/test-maintenance.md +364 -0
- data/handbook/guides/testing/typescript-bun.md +47 -0
- data/handbook/guides/testing/vue-firebase-auth.md +546 -0
- data/handbook/guides/testing/vue-vitest.md +236 -0
- data/handbook/guides/testing-philosophy.g.md +82 -0
- data/handbook/guides/testing-strategy.g.md +151 -0
- data/handbook/guides/testing-tdd-cycle.g.md +146 -0
- data/handbook/guides/testing.g.md +170 -0
- data/handbook/skills/as-test-create-cases/SKILL.md +24 -0
- data/handbook/skills/as-test-fix/SKILL.md +26 -0
- data/handbook/skills/as-test-improve-coverage/SKILL.md +22 -0
- data/handbook/skills/as-test-optimize/SKILL.md +34 -0
- data/handbook/skills/as-test-performance-audit/SKILL.md +34 -0
- data/handbook/skills/as-test-plan/SKILL.md +34 -0
- data/handbook/skills/as-test-review/SKILL.md +34 -0
- data/handbook/skills/as-test-verify-suite/SKILL.md +45 -0
- data/handbook/templates/e2e-sandbox-checklist.template.md +289 -0
- data/handbook/templates/test-case.template.md +56 -0
- data/handbook/templates/test-performance-audit.template.md +132 -0
- data/handbook/templates/test-responsibility-map.template.md +92 -0
- data/handbook/templates/test-review-checklist.template.md +163 -0
- data/handbook/workflow-instructions/test/analyze-failures.wf.md +120 -0
- data/handbook/workflow-instructions/test/create-cases.wf.md +675 -0
- data/handbook/workflow-instructions/test/fix.wf.md +120 -0
- data/handbook/workflow-instructions/test/improve-coverage.wf.md +370 -0
- data/handbook/workflow-instructions/test/optimize.wf.md +368 -0
- data/handbook/workflow-instructions/test/performance-audit.wf.md +17 -0
- data/handbook/workflow-instructions/test/plan.wf.md +323 -0
- data/handbook/workflow-instructions/test/review.wf.md +16 -0
- data/handbook/workflow-instructions/test/verify-suite.wf.md +343 -0
- data/lib/ace/test/version.rb +7 -0
- data/lib/ace/test.rb +10 -0
- metadata +152 -0
@@ -0,0 +1,120 @@
---
doc-type: workflow
title: Fix Tests Workflow
purpose: fix-tests workflow instruction
ace-docs:
  last-updated: 2026-02-24
  last-checked: 2026-03-21
---

# Fix Tests Workflow

## Goal

Apply targeted fixes for failing automated tests based on an existing failure analysis report.

This workflow is execution-only. Root-cause classification is handled by `wfi://test/analyze-failures`.

## Hard Gate (Required Before Any Fix)

Do not apply any fix until an analysis report exists with:

- failure identifier
- category (`implementation-bug`, `test-defect`, or `test-infrastructure`)
- evidence
- fix target
- fix target layer
- primary candidate files
- do-not-touch boundaries
- confidence

If the analysis is missing or incomplete, stop and run:

```bash
ace-bundle wfi://test/analyze-failures
```
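The hard gate above can be sketched as a small completeness check. This is a hypothetical sketch: the report format is not specified in this workflow, so the dictionary keys below are illustrative names mirroring the checklist, not a real schema.

```python
# Hypothetical sketch of the hard gate: refuse to proceed unless every
# required analysis field is present and the category is recognized.
# Key names are illustrative; the real report schema may differ.
REQUIRED_FIELDS = (
    "failure_id", "category", "evidence", "fix_target",
    "fix_target_layer", "candidate_files", "do_not_touch", "confidence",
)
VALID_CATEGORIES = {"implementation-bug", "test-defect", "test-infrastructure"}

def analysis_is_complete(report: dict) -> bool:
    """True only when the report satisfies the hard gate."""
    if any(not report.get(field) for field in REQUIRED_FIELDS):
        return False
    return report["category"] in VALID_CATEGORIES
```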
## Required Input

Use these output sections from `test/analyze-failures`:

- `## Failure Analysis Report`
- `## Fix Decisions`
- `### Execution Plan Input`

## Autonomy Rule

- Do not ask the user to choose the fix target, category, or rerun scope.
- If the analysis is incomplete, auto-complete the missing decision fields from local evidence (logs, tests, source/test files), then proceed.
- Stop only for hard blockers (missing files, tools, or permissions).

## Fix Procedure

1. Pick the first prioritized failure from the analysis
   - Use the "Primary failure to fix first" item
   - Confirm its category and fix target
   - Apply the "Chosen fix decision" and primary candidate files directly

2. Apply the category-specific fix

### Category: implementation-bug
- Fix the application/implementation code
- Update or add tests only as needed to capture intended behavior

### Category: test-defect
- Fix assertions, fixtures, setup, or test expectations
- Keep product code unchanged unless new contradictory evidence appears

### Category: test-infrastructure
- Fix setup, isolation, tooling, or configuration issues
- Keep behavior/spec expectations unchanged unless the analysis is revised
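The three categories map to disjoint change scopes, which can be made explicit in a small dispatch table. A minimal sketch, with hypothetical names; the scope strings paraphrase the category rules above:

```python
# Illustrative mapping from failure category to what a fix may touch,
# following the three category sections above. Names are hypothetical.
FIX_SCOPE = {
    "implementation-bug": "implementation code (tests updated only to capture intent)",
    "test-defect": "test code only",
    "test-infrastructure": "test setup, tooling, and configuration only",
}

def allowed_scope(category: str) -> str:
    """Return the change scope for a classified failure, or fail loudly."""
    if category not in FIX_SCOPE:
        raise ValueError(f"unknown category: {category}")
    return FIX_SCOPE[category]
```

Failing loudly on an unknown category enforces the hard gate: an unclassified failure goes back to analysis instead of being fixed speculatively.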
3. Verify the specific fix
   - Run the failing test(s) first
   - Run related tests second

4. Re-check classification if verification contradicts the analysis
   - If new evidence invalidates the original category, return to `test/analyze-failures`
   - Update the analysis report and re-select a chosen fix decision autonomously before continuing

5. Iterate until all failures are resolved
   - Fix one prioritized failure at a time
   - Keep changes scoped to the active failure

## Verification Sequence

```bash
# targeted failure
# Run project-specific test command path/to/failing_test

# related tests
# Run project-specific test command --related path/to/failing_test

# full suite final check
# Run project-specific test command
```

## Required Output

```markdown
## Fix Execution Summary

| Failure | Category | Change Applied | Verification | Result |
|---|---|---|---|---|
| ... | ... | ... | command + output summary | pass/fail |
```
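The summary table can also be emitted programmatically once each fix is recorded. A small sketch, assuming one record per fix; the field names are illustrative:

```python
# Hypothetical sketch: render one row of the Fix Execution Summary table
# from a per-fix record. Field names are illustrative, not a fixed schema.
def summary_row(failure: str, category: str, change: str,
                verification: str, result: str) -> str:
    """Build a markdown table row matching the Required Output format."""
    cells = (failure, category, change, verification, result)
    return "| " + " | ".join(cells) + " |"
```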
If unresolved:

```markdown
## Blockers
- Failure: ...
- Why unresolved: ...
- New evidence: ...
- Re-analysis required: yes/no
```

## Success Criteria

- Fixes are traceable to analyzed failures
- Verification commands and outcomes are documented
- No speculative fixes outside the analyzed scope
- No user clarification was required for fix targeting or scope in the normal flow
- The full test suite passes (or unresolved blockers are explicitly documented)
@@ -0,0 +1,370 @@
---
doc-type: workflow
title: Improve Code Coverage
purpose: improve-code-coverage workflow instruction
ace-docs:
  last-updated: 2026-03-21
  last-checked: 2026-03-21
---

# Improve Code Coverage

## Goal

Systematically analyze code coverage reports and create targeted test tasks that improve overall coverage by identifying untested code paths, edge cases, and missing test scenarios, using a quality-focused testing approach.
## Prerequisites

* Coverage report available (SimpleCov `.resultset.json`, Jest coverage, pytest coverage, Go coverage)
* Access to coverage analysis tools
* Understanding of testing patterns and the project architecture
* Access to task creation workflows
* Source code access for uncovered-line analysis

## Project Context Loading

- Read and follow: `ace-bundle wfi://bundle`

## Framework Detection

**Auto-detect the testing framework and coverage tools:**

**Ruby:**
- Check `Gemfile` for `simplecov`
- Coverage file: `coverage/.resultset.json`
- Tool: SimpleCov

**JavaScript:**
- Check `package.json` for `jest` and coverage scripts
- Coverage file: `coverage/coverage-final.json`
- Tool: Jest coverage, nyc, c8

**Python:**
- Check `requirements.txt` for `pytest-cov`, `coverage`
- Coverage file: `.coverage`, `coverage.xml`
- Tool: pytest-cov, coverage.py

**Go:**
- Coverage file: `coverage.out`
- Tool: `go test -cover`
## Process Steps

1. **Generate Coverage Analysis Report**
   * Ensure tests have been run to generate coverage data:
     ```bash
     # Ruby/RSpec
     bundle exec rspec

     # JavaScript/Jest
     npm test -- --coverage

     # Python/pytest
     pytest --cov=.

     # Go
     go test -coverprofile=coverage.out ./...
     ```
   * Verify coverage data exists:
     ```bash
     # Check for coverage files
     ls -la coverage/ .coverage coverage.out
     ```

2. **Load and Parse Coverage Data**
   * Load the generated coverage report
   * Identify files with low coverage or significant uncovered method groups
   * Focus on files whose coverage percentage falls below an adaptive threshold
   * Prioritize files based on:
     - Architecture importance (critical components first)
     - Business logic components
     - Error handling and edge-case pathways
     - Public API methods and CLI entry points
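The "below an adaptive threshold" filter in step 2 can be sketched as follows. The sketch assumes a SimpleCov-style map of file path to per-line hit counts, with `None` marking non-executable lines; real report formats vary by tool:

```python
# Sketch of the low-coverage filter, assuming a SimpleCov-style map of
# file path -> per-line hit counts (None marks non-executable lines).
# The data shape is an assumption; real report formats vary by tool.
def file_coverage(line_hits) -> float:
    """Percent of relevant (executable) lines with at least one hit."""
    relevant = [h for h in line_hits if h is not None]
    if not relevant:
        return 100.0
    return 100.0 * sum(1 for h in relevant if h > 0) / len(relevant)

def files_below(threshold: float, coverage_map: dict) -> list:
    """Paths under the threshold, worst coverage first."""
    low = [path for path, hits in coverage_map.items()
           if file_coverage(hits) < threshold]
    return sorted(low, key=lambda path: file_coverage(coverage_map[path]))
```

The returned order feeds the prioritization list above: the worst-covered files surface first, and architecture importance then reorders within that set.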
3. **Iterative File Analysis Process**
   For each file identified in the coverage report (process 3-5 files per iteration):

   **3.1 Source Code Analysis**
   * Load the source file and examine the uncovered line ranges
   * For each uncovered method, analyze:
     - Method signature and parameters
     - Expected inputs and outputs
     - Error conditions and edge cases
     - Dependencies on external systems (file system, network, etc.)
     - Security considerations (path validation, sanitization)
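The "uncovered line ranges" that step 3.1 examines can be derived from the same per-line hit data. A sketch, again assuming a per-line hit array with `None` for non-executable lines:

```python
# Sketch: collapse per-line hit counts into 1-based uncovered line
# ranges, giving "examine uncovered line ranges" a concrete input.
# Treats None (non-executable) as ending a range; the data shape is
# an assumption about the coverage tool's output.
def uncovered_ranges(line_hits):
    ranges, start = [], None
    for lineno, hits in enumerate(line_hits, start=1):
        if hits == 0:
            if start is None:
                start = lineno
        elif start is not None:
            ranges.append((start, lineno - 1))
            start = None
    if start is not None:
        ranges.append((start, len(line_hits)))
    return ranges
```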
   **3.2 Test Gap Assessment**
   * Review the existing test files for the component
   * Identify missing test scenarios:
     - Happy-path tests for normal operation
     - Edge cases (empty inputs, boundary conditions)
     - Error conditions (permission errors, invalid paths)
     - Integration scenarios with dependent components
     - Security scenarios (path traversal, injection attempts)

   **3.3 Test Quality Evaluation**
   * Assess current test quality, not just the coverage percentage:
     - Are tests testing behavior, or just exercising code?
     - Do tests cover meaningful business scenarios?
     - Are error conditions properly tested?
     - Do tests verify edge cases and boundary conditions?
     - Are integration points properly tested?

4. **Test Strategy Design**
   For each file requiring improved coverage:

   **4.1 Edge Case Identification**
   * Identify specific edge cases based on the method analysis:
     - Boundary-value testing (min/max inputs, empty collections)
     - Error-condition testing (network failures, permission errors)
     - State-transition testing (object lifecycle scenarios)
     - Concurrency scenarios (if applicable)
     - Resource-limitation scenarios (memory, disk space)

   **4.2 Test Scenario Planning**
   * Design comprehensive test scenarios following framework patterns:
     - Logical grouping of related tests
     - Different contexts for different scenarios
     - Mocking/stubbing for external API interactions
     - Shared examples for common behaviors
     - Custom matchers/assertions for domain-specific validation
5. **Task Creation for Test Improvements**
   For each file requiring test improvements:

   * **Create a focused test improvement task** using the embedded template
   * **The task should include:**
     - Specific uncovered methods and line ranges
     - Detailed test scenarios to implement
     - Edge cases and error conditions to cover
     - Expected test file structure and organization
     - References to architecture testing patterns
     - Integration requirements with the existing test suite

6. **Quality Guidelines and Validation**

   **6.1 Coverage as an Attention Indicator**
   * Use coverage data to identify areas needing attention, not as a percentage target
   * Focus on meaningful test scenarios that validate business logic
   * Prioritize quality tests over coverage-percentage metrics
   * Ensure tests provide value beyond just exercising code

   **6.2 Test Implementation Standards**
   * Follow framework best practices and project conventions
   * Use appropriate mocking/stubbing for external interactions
   * Implement proper test isolation and cleanup
   * Use factory patterns or fixtures for test data setup
   * Follow the project's architecture testing patterns for each layer

   **6.3 Continuous Improvement**
   * Re-run the coverage analysis after test implementation
   * Validate that new tests provide meaningful scenario coverage
   * Review test execution time and optimize if necessary
   * Update test documentation and examples
## Error Handling

### Common Issues

**Missing Coverage Data:**
* Symptom: No coverage file found
* Solution: Run the test suite first to generate coverage data
* Command: Run the project-specific test command with coverage enabled

**Coverage Tool Errors:**
* Symptom: The coverage analysis command fails
* Solution: Check tool availability and file permissions
* Verify the coverage tool is installed and configured

**Unclear Test Requirements:**
* Symptom: Difficulty determining what tests to write
* Solution: Focus on error conditions and edge cases first
* Approach: Start with simple scenarios, then add complexity

### Recovery Procedures

If analysis fails or produces unclear results:
1. Verify the coverage data is current and complete
2. Start with the highest-impact files (low coverage + high importance)
3. Focus on one component/file at a time
4. Use an incremental approach with regular validation
5. Consult existing test patterns in the codebase
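"Highest-impact files (low coverage + high importance)" in step 2 can be made concrete with a simple score. The formula below is a hypothetical choice, not prescribed by this workflow; importance is assumed to be a weight in [0, 1]:

```python
# Sketch of "highest-impact files (low coverage + high importance)":
# rank candidates by uncovered share weighted by an importance factor
# in [0, 1]. The scoring formula itself is a hypothetical choice.
def impact_score(coverage_pct: float, importance: float) -> float:
    return (100.0 - coverage_pct) * importance

def rank_files(files):
    """files: iterable of (path, coverage_pct, importance); best target first."""
    return sorted(files, key=lambda f: impact_score(f[1], f[2]), reverse=True)
```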
## Success Criteria

* Coverage analysis report generated successfully
* Uncovered code sections identified and analyzed
* Test improvement tasks created for priority components
* Each task includes specific test scenarios and edge cases
* Tasks follow project standards and architecture patterns
* The quality-focused approach prioritizes meaningful tests over coverage percentages
* Integration with the existing testing infrastructure

## Usage Example

```bash
# Ruby/SimpleCov
bundle exec rspec
coverage-analyze coverage/.resultset.json

# JavaScript/Jest
npm test -- --coverage
cat coverage/coverage-summary.json

# Python/pytest
pytest --cov=. --cov-report=json
cat coverage.json

# Go
go test -coverprofile=coverage.out ./...
go tool cover -func=coverage.out
```
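For the pytest example, the overall percentage can be pulled out of the JSON report rather than eyeballed with `cat`. The `totals.percent_covered` key layout follows coverage.py's JSON report format; treat it as an assumption if your tool emits a different schema:

```python
import json

# Small helper for the pytest example above: read the overall percentage
# from coverage.py's JSON report. The totals.percent_covered layout is
# coverage.py's documented JSON shape; other tools use different schemas.
def total_percent(path: str = "coverage.json") -> float:
    with open(path) as f:
        report = json.load(f)
    return report["totals"]["percent_covered"]
```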
## Framework-Specific Coverage Analysis

### Ruby/SimpleCov

```bash
# Run tests with coverage
bundle exec rspec

# View coverage report
open coverage/index.html

# Narrow coverage scope via SimpleCov filters/groups in spec_helper.rb,
# then re-run the suite (RSpec itself has no coverage-scoping flag)
bundle exec rspec
```

### JavaScript/Jest

```bash
# Run tests with coverage
npm test -- --coverage

# View coverage report
open coverage/lcov-report/index.html

# Coverage for specific files
npm test -- --coverage --collectCoverageFrom='src/**/*.js'
```

### Python/pytest

```bash
# Run tests with coverage
pytest --cov=. --cov-report=html

# View coverage report
open htmlcov/index.html

# Coverage for specific modules
pytest --cov=mymodule --cov-report=term-missing
```

### Go

```bash
# Run tests with coverage
go test -coverprofile=coverage.out ./...

# View coverage report
go tool cover -html=coverage.out

# Function-level coverage
go tool cover -func=coverage.out
```
<documents>
<template path="dev-handbook/templates/release-testing/task-test-improvement.template.md">---
id: [AUTO-GENERATED]
status: pending
priority: medium
estimate: 3h
dependencies: []
---

# Improve Test Coverage for [ComponentName] - [FocusArea]

## Objective

Implement comprehensive test coverage for [ComponentName], focusing on [FocusArea], including edge cases, error conditions, and integration scenarios. Address the uncovered line ranges [LineRanges] identified in the coverage analysis.

## Prerequisites

* Understanding of the project architecture and testing patterns
* Familiarity with the testing framework (RSpec, Jest, pytest, Go testing)
* Access to coverage analysis reports
* Knowledge of mocking/stubbing strategies

## Scope of Work

- Add missing test scenarios for uncovered methods
- Implement edge-case testing for boundary conditions
- Add error-condition testing for failure scenarios
- Follow testing standards and architecture patterns
- Ensure meaningful test coverage beyond just exercising code

### Deliverables

#### Create
- [test_file_path] (if it does not exist)

#### Modify
- [test_file_path] (add new test scenarios)

#### Delete
- None

## Implementation Plan

### Planning Steps
* [ ] Analyze the source code for the [ComponentName] component
* [ ] Review existing test coverage and identify gaps
* [ ] Design test scenarios for uncovered methods: [MethodList]
* [ ] Plan edge-case scenarios and error conditions

### Execution Steps
- [ ] Implement happy-path tests for uncovered methods
- [ ] Add edge-case tests for boundary conditions
- [ ] Implement error-condition tests (invalid inputs, system failures)
- [ ] Add integration tests for component interactions
- [ ] Verify test isolation and cleanup procedures
- [ ] Run the full test suite to ensure no regressions

## Acceptance Criteria
- [ ] All uncovered methods have meaningful test scenarios
- [ ] Edge cases and error conditions are properly tested
- [ ] Tests follow framework best practices and project conventions
- [ ] Appropriate mocking/stubbing for external interactions
- [ ] Test execution completes without errors
- [ ] Coverage analysis shows improved meaningful coverage

## Test Scenarios

### Uncovered Methods
[List specific methods and line ranges from the coverage analysis]

### Edge Cases to Test
- [ ] Boundary-value testing (empty/nil inputs, limits)
- [ ] Error-condition testing (exceptions, failures)
- [ ] State-transition testing (object lifecycle)
- [ ] Resource-limitation scenarios
- [ ] Security scenarios (if applicable)

### Integration Scenarios
- [ ] Component interaction testing
- [ ] External dependency mocking/stubbing
- [ ] Cross-layer communication testing

## References
- Coverage analysis report
- Testing standards documentation
- Architecture documentation
- Source file: [SourceFilePath]
</template>
</documents>

---

*This workflow provides a systematic approach to improving test coverage through quality-focused testing strategies that prioritize meaningful test scenarios over coverage-percentage metrics.*