@aborruso/ckan-mcp-server 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. package/.claude/commands/openspec/apply.md +23 -0
  2. package/.claude/commands/openspec/archive.md +27 -0
  3. package/.claude/commands/openspec/proposal.md +28 -0
  4. package/.claude/settings.local.json +31 -0
  5. package/.gemini/commands/openspec/apply.toml +21 -0
  6. package/.gemini/commands/openspec/archive.toml +25 -0
  7. package/.gemini/commands/openspec/proposal.toml +26 -0
  8. package/.mcp.json +12 -0
  9. package/.opencode/command/openspec-apply.md +24 -0
  10. package/.opencode/command/openspec-archive.md +27 -0
  11. package/.opencode/command/openspec-proposal.md +29 -0
  12. package/AGENTS.md +18 -0
  13. package/CLAUDE.md +320 -0
  14. package/EXAMPLES.md +707 -0
  15. package/LICENSE.txt +21 -0
  16. package/LOG.md +154 -0
  17. package/PRD.md +912 -0
  18. package/README.md +468 -0
  19. package/REFACTORING.md +237 -0
  20. package/dist/index.js +1277 -0
  21. package/openspec/AGENTS.md +456 -0
  22. package/openspec/changes/archive/2026-01-08-add-mcp-resources/design.md +115 -0
  23. package/openspec/changes/archive/2026-01-08-add-mcp-resources/proposal.md +52 -0
  24. package/openspec/changes/archive/2026-01-08-add-mcp-resources/specs/mcp-resources/spec.md +92 -0
  25. package/openspec/changes/archive/2026-01-08-add-mcp-resources/tasks.md +56 -0
  26. package/openspec/changes/archive/2026-01-08-expand-test-coverage-specs/design.md +355 -0
  27. package/openspec/changes/archive/2026-01-08-expand-test-coverage-specs/proposal.md +161 -0
  28. package/openspec/changes/archive/2026-01-08-expand-test-coverage-specs/tasks.md +162 -0
  29. package/openspec/changes/archive/2026-01-08-translate-project-to-english/proposal.md +115 -0
  30. package/openspec/changes/archive/2026-01-08-translate-project-to-english/specs/documentation-language/spec.md +32 -0
  31. package/openspec/changes/archive/2026-01-08-translate-project-to-english/tasks.md +115 -0
  32. package/openspec/changes/archive/add-automated-tests/design.md +324 -0
  33. package/openspec/changes/archive/add-automated-tests/proposal.md +167 -0
  34. package/openspec/changes/archive/add-automated-tests/specs/automated-testing/spec.md +143 -0
  35. package/openspec/changes/archive/add-automated-tests/tasks.md +132 -0
  36. package/openspec/project.md +113 -0
  37. package/openspec/specs/documentation-language/spec.md +32 -0
  38. package/openspec/specs/mcp-resources/spec.md +94 -0
  39. package/package.json +46 -0
  40. package/spunti.md +19 -0
  41. package/tasks/todo.md +124 -0
  42. package/test-urls.js +18 -0
  43. package/tmp/test-org-search.js +55 -0
package/openspec/changes/archive/add-automated-tests/design.md
@@ -0,0 +1,324 @@
+ # Design: Automated Testing Strategy
+
+ This document explains the testing architecture, patterns, and trade-offs for the CKAN MCP Server automated tests.
+
+ ## Architecture Overview
+
+ ```
+ tests/
+ ├── unit/                     # Test individual functions in isolation
+ │   ├── formatting.test.ts
+ │   └── http.test.ts
+ ├── integration/              # Test tool behavior with mocked API
+ │   ├── status.test.ts
+ │   ├── package.test.ts
+ │   ├── organization.test.ts
+ │   └── datastore.test.ts
+ ├── fixtures/                 # Mock CKAN API responses
+ │   ├── responses/            # Success scenarios
+ │   │   ├── status-success.json
+ │   │   ├── package-search-success.json
+ │   │   └── ...
+ │   └── errors/               # Error scenarios
+ │       ├── timeout.json
+ │       ├── not-found.json
+ │       └── server-error.json
+ └── README.md                 # Test writing guide
+ ```
+
+ ## Testing Pyramid
+
+ ```
+        E2E Tests
+     (0% - excluded)
+
+    Integration Tests
+      (30% - tools)
+
+       Unit Tests
+      (70% - utils)
+ ```
+
+ ### Rationale
+
+ - **Unit tests (70%)**: Fast, stable, cover utility functions completely
+ - **Integration tests (30%)**: Medium speed, test tool behavior with a mocked API
+ - **E2E tests (0%)**: Excluded: too slow, depends on external servers
+
+ ## Mock Strategy
+
+ ### Why Mock the CKAN API?
+
+ **Benefits:**
+ - Fast execution (no network I/O)
+ - Deterministic results (same every time)
+ - Test error scenarios easily (404, timeout, 500)
+ - No dependency on external servers
+ - Offline development possible
+
+ **Drawbacks:**
+ - Mocks may diverge from the real API
+ - Doesn't catch API integration bugs
+ - Maintenance overhead when the API changes
+
+ **Mitigation:**
+ - Use real CKAN API documentation for fixtures
+ - Periodically validate fixtures against demo.ckan.org
+ - Keep mocks focused on the API contract, not the implementation
+
+ ### Mock Implementation
+
+ Use Vitest's built-in mocking (`vi.mock()` with the `Mock` type) to stub axios:
+
+ ```typescript
+ import { vi, type Mock } from 'vitest';
+ import axios from 'axios';
+
+ // Mock axios at module level
+ vi.mock('axios');
+
+ // In a test, set the return value from a fixture
+ (axios.get as Mock).mockResolvedValue(fixtureResponse);
+ ```
+
+ ### Fixture Structure
+
+ Fixtures represent realistic CKAN API v3 responses:
+
+ **Success Response Structure:**
+ ```json
+ {
+   "success": true,
+   "result": { ... }
+ }
+ ```
+
+ **Error Response Structure:**
+ ```json
+ {
+   "success": false,
+   "error": { ... }
+ }
+ ```
+
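+ For illustration, a success fixture such as `fixtures/responses/package-search-success.json` might look like the sketch below; field values are hypothetical, and real CKAN responses also carry a `help` URL:
+
+ ```json
+ {
+   "help": "https://demo.ckan.org/api/3/action/help_show?name=package_search",
+   "success": true,
+   "result": {
+     "count": 1,
+     "results": [
+       {
+         "id": "example-dataset",
+         "title": "Example Dataset",
+         "notes": "A minimal dataset record used only in tests.",
+         "metadata_modified": "2024-01-15T10:30:00Z"
+       }
+     ]
+   }
+ }
+ ```
+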
+ ## Test Categories
+
+ ### Unit Tests
+
+ **Target:** Utility functions in `src/utils/`
+
+ **Characteristics:**
+ - Test a single function in isolation
+ - Mock external dependencies (axios, formatting libs)
+ - Focus on inputs/outputs
+ - Fast execution (< 1ms each)
+
+ **Example:**
+ ```typescript
+ import { test, expect } from 'vitest';
+ import { formatDate } from '../../src/utils/formatting';
+
+ test('formatDate formats ISO date correctly', () => {
+   const result = formatDate('2024-01-15T10:30:00Z');
+   expect(result).toBe('15/01/2024');
+ });
+ ```
+
+ ### Integration Tests
+
+ **Target:** MCP tools in `src/tools/`
+
+ **Characteristics:**
+ - Test the tool end-to-end with a mocked API
+ - Mock CKAN API responses
+ - Validate output format (markdown/json)
+ - Test error handling
+ - Medium execution speed (~10-50ms each)
+
+ **Example:**
+ ```typescript
+ test('ckan_package_search returns markdown format', async () => {
+   vi.mocked(axios.get).mockResolvedValue(fixture);
+   const result = await ckan_package_search({...});
+   expect(result.content[0].type).toBe('text');
+   expect(result.content[0].text).toContain('# Search Results');
+ });
+ ```
+
+ ## Coverage Strategy
+
+ ### Target: 80%
+
+ **Breakdown:**
+ - Utility functions: 100% (essential, easy to test)
+ - Tool implementations: 75%+ (focus on critical paths)
+ - Edge cases: 60% (lower priority, add incrementally)
+
+ ### What NOT to Test
+
+ - External library code (axios, express, zod)
+ - MCP SDK internals
+ - Simple getters/setters
+ - Type definitions
+ - Configuration constants
+
+ ### When to Stop Testing
+
+ Stop when tests provide diminishing returns:
+ - Error handling code already tested elsewhere
+ - Simple boolean logic
+ - One-line functions with clear behavior
+ - Delegation to already-tested functions
+
+ ## Test Writing Guidelines
+
+ ### Naming Convention
+
+ ```typescript
+ // Good: describes what is tested and the expected outcome
+ test('truncateText limits text to 50000 characters', () => { ... });
+
+ // Bad: vague
+ test('truncateText works', () => { ... });
+ ```
+
+ ### AAA Pattern
+
+ Arrange, Act, Assert:
+
+ ```typescript
+ test('formatBytes converts bytes to KB', () => {
+   // Arrange
+   const bytes = 1500;
+
+   // Act
+   const result = formatBytes(bytes);
+
+   // Assert
+   expect(result).toBe('1.46 KB');
+ });
+ ```
+
+ ### One Assertion Per Test
+
+ ```typescript
+ // Good: one test, one assertion
+ test('formatDate returns formatted date', () => {
+   const result = formatDate('2024-01-15');
+   expect(result).toMatch(/\d{2}\/\d{2}\/\d{4}/);
+ });
+
+ // Bad: multiple assertions, harder to debug
+ test('formatDate works', () => {
+   const result = formatDate('2024-01-15');
+   expect(result).toBeDefined();
+   expect(typeof result).toBe('string');
+   expect(result).toMatch(/\d{2}\/\d{2}\/\d{4}/);
+ });
+ ```
+
+ ### Testing Error Scenarios
+
+ ```typescript
+ test('ckan_package_show handles 404 error', async () => {
+   vi.mocked(axios.get).mockRejectedValue(new Error('404 Not Found'));
+
+   const result = await ckan_package_show({...});
+
+   expect(result.isError).toBe(true);
+   expect(result.content[0].text).toContain('Not found');
+ });
+ ```
+
+ ## CI/CD Integration
+
+ If CI/CD exists (GitHub Actions, GitLab CI, etc.):
+
+ ```yaml
+ - name: Run tests
+   run: npm test
+
+ - name: Generate coverage
+   run: npm run test:coverage
+
+ # Optional: upload coverage to an external service
+ # - name: Upload coverage
+ #   run: ...
+ ```
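+
+ For reference, a minimal complete GitHub Actions workflow wrapping these steps might look like the following; the file name and Node version are assumptions:
+
+ ```yaml
+ # .github/workflows/test.yml (hypothetical file name)
+ name: Tests
+ on: [push, pull_request]
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: actions/setup-node@v4
+         with:
+           node-version: 20
+       - run: npm ci
+       - run: npm test
+       - run: npm run test:coverage
+ ```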
+
+ ## Trade-offs
+
+ ### Vitest vs Jest
+
+ **Chosen: Vitest**
+
+ **Why Vitest:**
+ - 10x faster than Jest
+ - Native ESM support
+ - Built-in TypeScript support
+ - Modern, actively maintained
+ - API compatible with Jest
+
+ **Trade-off:**
+ - Newer ecosystem than Jest
+ - Fewer community examples
+
+ ### Mocking vs. the Real API
+
+ **Chosen: Mock fixtures**
+
+ **Why Mock:**
+ - Fast and reliable
+ - Error scenarios can be tested
+ - No external dependencies
+
+ **Trade-off:**
+ - Doesn't catch API integration bugs
+ - Mocks may become outdated
+
+ **Mitigation:**
+ - Validate fixtures periodically against demo.ckan.org
+ - Document the fixture structure
+ - Keep tests focused on the API contract
+
+ ### Coverage Target 80%
+
+ **Why 80%:**
+ - High enough to catch most bugs
+ - Not so high as to be wasteful
+ - An industry standard for a first iteration
+
+ **Trade-off:**
+ - 20% of the code remains untested
+ - Edge cases may slip through
+
+ **Mitigation:**
+ - Focus testing on critical paths
+ - Add tests for bugs found in production
+ - Incrementally improve coverage over time
+
+ ## Future Enhancements
+
+ ### Short Term
+ - Add performance benchmarks
+ - Test output formatting edge cases
+ - Add more error scenario tests
+
+ ### Medium Term
+ - Increase coverage to 90%
+ - Add contract tests for the CKAN API
+ - Visual regression tests for markdown output
+
+ ### Long Term
+ - Fuzz testing for input validation
+ - Chaos testing for error handling
+ - Integration with external testing services
+
+ ## Conclusion
+
+ This testing strategy balances:
+ - **Speed**: Unit tests + mocked integration tests
+ - **Reliability**: Deterministic mocks, no external dependencies
+ - **Maintainability**: Clear patterns, simple fixtures
+ - **Effectiveness**: 80% coverage, focus on critical paths
+
+ The incremental approach ensures we get value from tests early while keeping the investment manageable.
package/openspec/changes/archive/add-automated-tests/proposal.md
@@ -0,0 +1,167 @@
+ # Proposal: Add Automated Testing
+
+ **Status:** Draft
+ **Created:** 2026-01-08
+ **Author:** OpenCode
+
+ ## Summary
+
+ Add automated tests to the CKAN MCP Server project using the Vitest framework, focusing on unit tests for utilities and integration tests for tools with mocked CKAN API responses.
+
+ ## Motivation
+
+ The project currently has no automated tests (mentioned in CLAUDE.md), which creates several risks:
+ - No regression detection when modifying code
+ - Difficult to refactor confidently
+ - No validation of tool behavior against CKAN API specifications
+ - Harder for new contributors to understand expected behavior
+ - No safety net for future changes
+
+ Adding automated tests will:
+ - Catch bugs early, before deployment
+ - Enable confident refactoring
+ - Document expected behavior through tests
+ - Improve code quality and maintainability
+ - Lower the barrier for contributions
+
+ ## Scope
+
+ ### Included
+ - **Unit tests** for utility functions (`src/utils/formatting.ts`, `src/utils/http.ts`)
+ - **Integration tests** for MCP tools with mocked CKAN API responses
+ - Test setup and configuration (Vitest)
+ - Mock fixtures for CKAN API responses
+ - Test scripts in package.json
+ - CI/CD integration for running tests (if CI exists)
+ - Documentation for running and writing tests
+
+ ### Excluded
+ - E2E tests with real CKAN servers (too slow/unstable)
+ - Performance/benchmark tests (future enhancement)
+ - UI/interaction tests (no UI in this project)
+ - Manual testing procedures (already documented in CLAUDE.md)
+
+ ## Proposed Changes
+
+ ### Technology Stack
+
+ **Framework**: Vitest
+ - Fast and modern test runner
+ - Native TypeScript support
+ - Jest-compatible API (easy to learn)
+ - Built-in mocking and spying
+ - Excellent watch mode for development
+
+ **Mock Library**: Vitest's built-in mocking (`vi.fn()`, `vi.mock()`)
+ - No additional dependencies
+ - Simple and powerful mocking
+ - Easy to mock CKAN API responses
+
+ **Coverage**: c8 (Vitest's built-in coverage tool)
+ - Generates coverage reports
+ - Integrates with Vitest
+
+ ### Test Strategy
+
+ #### Phase 1: Foundation (Priority: High)
+ - Configure Vitest with TypeScript support
+ - Create test directory structure
+ - Add test scripts to package.json
+ - Set up coverage reporting (see the configuration sketch after this list)
+
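+ A minimal configuration sketch, assuming Vitest defaults (the options shown are illustrative, not the final configuration):
+
+ ```typescript
+ // vitest.config.ts (hypothetical starting point)
+ import { defineConfig } from 'vitest/config';
+
+ export default defineConfig({
+   test: {
+     coverage: {
+       // Print a summary to the terminal and write an HTML report
+       reporter: ['text', 'html'],
+     },
+   },
+ });
+ ```
+
+ The matching package.json scripts could look like:
+
+ ```json
+ {
+   "scripts": {
+     "test": "vitest run",
+     "test:watch": "vitest",
+     "test:coverage": "vitest run --coverage"
+   }
+ }
+ ```
+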
+ #### Phase 2: Unit Tests (Priority: High)
+ - `src/utils/formatting.ts`: truncateText, formatDate, formatBytes
+ - `src/utils/http.ts`: makeCkanRequest (with mocked axios)
+
+ #### Phase 3: Integration Tests (Priority: Medium)
+ - `tools/status.ts`: ckan_status_show
+ - `tools/package.ts`: ckan_package_search, ckan_package_show
+ - `tools/organization.ts`: ckan_organization_list, ckan_organization_show, ckan_organization_search
+ - `tools/datastore.ts`: ckan_datastore_search
+
+ ### Mock Strategy
+
+ Create fixture files with realistic CKAN API responses:
+ - `fixtures/responses/status-success.json`
+ - `fixtures/responses/package-search-success.json`
+ - `fixtures/responses/organization-list-success.json`
+ - `fixtures/responses/datastore-search-success.json`
+ - Error scenarios: timeouts, 404, 500 errors
+
+ Mock `axios` to return fixture data without making real HTTP requests, as in the sketch below.
+
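+ To make this concrete, an integration test might wire a fixture into the mocked client as follows; the import paths and tool parameters are assumptions about the final layout:
+
+ ```typescript
+ import { vi, test, expect, type Mock } from 'vitest';
+ import axios from 'axios';
+ // Hypothetical paths; adjust to the final test layout
+ import searchFixture from '../fixtures/responses/package-search-success.json';
+ import { ckan_package_search } from '../../src/tools/package';
+
+ vi.mock('axios');
+
+ test('ckan_package_search serves results from fixture data', async () => {
+   // Arrange: axios resolves with the canned CKAN API response
+   (axios.get as Mock).mockResolvedValue({ data: searchFixture });
+
+   // Act (the parameter shape here is assumed)
+   const result = await ckan_package_search({ q: 'water' });
+
+   // Assert: output was produced without any real HTTP request
+   expect(axios.get).toHaveBeenCalledOnce();
+   expect(result.content[0].type).toBe('text');
+ });
+ ```
+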
+ ### Coverage Target
+
+ **Initial goal**: 80% code coverage
+ - All utility functions: 100%
+ - Tool implementations: 75%+
+ - Focus on critical paths, not edge cases
+
+ **Future goal**: 90%+ (incremental improvement)
+
+ ### Incremental Approach
+
+ Start small and iterate:
+ 1. Test 1-2 tools plus all utils
+ 2. Validate that the approach works
+ 3. Add tests for the remaining tools
+ 4. Improve coverage over time
+ 5. Add more edge case tests as needed
+
+ ## Alternatives Considered
+
+ 1. **Jest** as test framework
+    - *Rejected*: Vitest is faster, has better TypeScript support, and is more modern
+
+ 2. **E2E tests with real CKAN servers**
+    - *Rejected*: Too slow (network calls), unstable (depends on external servers), non-deterministic
+
+ 3. **No mocking: test against demo.ckan.org**
+    - *Rejected*: Requires network access, slow, test results vary, can't test error scenarios easily
+
+ 4. **100% coverage from the start**
+    - *Rejected*: Too much effort, diminishing returns; 80% is reasonable for a first iteration
+
+ 5. **Testing framework with separate dependencies** (supertest, nock)
+    - *Rejected*: Vitest's built-in mocking is sufficient for this project
+
+ ## Impact Assessment
+
+ ### Benefits
+ - Early bug detection
+ - Safer refactoring
+ - Behavior documentation through tests
+ - Improved code quality
+ - Easier onboarding for new contributors
+ - Faster development cycle (tests catch issues early)
+
+ ### Risks
+ - Initial time investment for writing the first tests
+ - Test maintenance overhead as the code evolves
+ - Brittle tests are possible if mocking is not done well
+ - A false sense of security if tests don't cover real scenarios
+
+ ### Mitigation
+ - Start with critical paths only (incremental approach)
+ - Keep mocks simple and focused on API contracts
+ - Review tests in code review
+ - Update tests when the CKAN API changes
+ - Regular test maintenance in future development
+
+ ## Open Questions
+
+ None; clarified with the project owner.
+
+ ## Dependencies
+
+ None; this is a new capability that doesn't depend on other changes.
+
+ ## Success Criteria
+
+ - [ ] Vitest configured and running
+ - [ ] Test scripts in package.json work
+ - [ ] Unit tests for utils passing
+ - [ ] Integration tests for at least 2 tools passing
+ - [ ] Coverage of at least 80% for tested code
+ - [ ] Tests documented in README or test files
+ - [ ] Validation: `openspec validate add-automated-tests --strict` passes
package/openspec/changes/archive/add-automated-tests/specs/automated-testing/spec.md
@@ -0,0 +1,143 @@
+ # Spec: Automated Testing
+
+ Defines requirements for automated testing infrastructure and test coverage.
+
+ ## ADDED Requirements
+
+ ### Requirement: Automated test suite
+
+ The project SHALL have an automated test suite using Vitest that runs unit tests for utility functions and integration tests for MCP tools.
+
+ #### Scenario: Run test suite
+ Given the project has tests configured
+ When a developer runs `npm test`
+ Then all tests pass
+ And the test run completes in under 10 seconds
+ And the output shows pass/fail status for each test
+
+ #### Scenario: Run tests in watch mode
+ Given a developer is actively writing code
+ When they run `npm run test:watch`
+ Then Vitest starts in watch mode
+ And tests re-run automatically when files change
+ And the developer receives immediate feedback
+
+ ### Requirement: Test coverage reporting
+
+ The project SHALL provide code coverage reporting with a minimum threshold of 80% for tested code.
+
+ #### Scenario: Generate coverage report
+ Given the project has tests configured
+ When a developer runs `npm run test:coverage`
+ Then a coverage report is generated
+ And the report shows line-by-line coverage
+ And coverage meets the 80% threshold
+ And the report is saved in the coverage/ directory
+
+ #### Scenario: Coverage below threshold
+ Given new code is added without tests
+ When coverage is checked
+ Then the coverage report shows coverage below 80%
+ And the report highlights untested lines
+ And the developer knows which files need tests
+
+ ### Requirement: Unit tests for utilities
+
+ The project SHALL have unit tests for all utility functions in the `src/utils/` directory.
+
+ #### Scenario: Unit test for formatting function
+ Given a utility function exists (e.g., formatDate)
+ When a unit test is written for it
+ Then the test covers all branches
+ And the test provides realistic inputs
+ And the test validates the expected output
+
+ #### Scenario: Unit test for HTTP client
+ Given the makeCkanRequest function
+ When a unit test is written for it
+ Then axios is mocked
+ And the test validates successful responses
+ And the test validates error scenarios (404, 500, timeout)
+ And the test validates URL normalization
+
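+ As a sketch of the HTTP client scenario above (the function signature, import path, and error message are assumptions):
+
+ ```typescript
+ import { vi, test, expect, type Mock } from 'vitest';
+ import axios from 'axios';
+ // Hypothetical path; adjust to the final layout
+ import { makeCkanRequest } from '../../src/utils/http';
+
+ vi.mock('axios');
+
+ test('makeCkanRequest surfaces a 404 as a readable error', async () => {
+   (axios.get as Mock).mockRejectedValue(new Error('Request failed with status code 404'));
+
+   await expect(makeCkanRequest('package_show', { id: 'missing' })).rejects.toThrow('404');
+ });
+ ```
+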
+ ### Requirement: Integration tests for tools
+
+ The project SHALL have integration tests for MCP tools that validate tool behavior with mocked CKAN API responses.
+
+ #### Scenario: Integration test for search tool
+ Given the ckan_package_search tool
+ When an integration test is written for it
+ Then CKAN API responses are mocked
+ And the test validates the output format (markdown/json)
+ And the test validates a successful search
+ And the test validates error handling
+
+ #### Scenario: Integration test for DataStore tool
+ Given the ckan_datastore_search tool
+ When an integration test is written for it
+ Then DataStore API responses are mocked
+ And the test validates query processing
+ And the test validates output formatting
+ And the test validates filter and sort parameters
+
+ ### Requirement: Mock fixtures for API responses
+
+ The project SHALL provide mock fixtures that represent realistic CKAN API v3 responses for both success and error scenarios.
+
+ #### Scenario: Success response fixture
+ Given a CKAN API success response is needed
+ When a fixture is created
+ Then the fixture follows the CKAN API v3 format
+ And the fixture includes "success": true
+ And the fixture includes realistic "result" data
+
+ #### Scenario: Error response fixture
+ Given an error scenario is tested
+ When an error fixture is created
+ Then the fixture follows the CKAN API v3 format
+ And the fixture includes "success": false
+ And the fixture includes error details
+
+ #### Scenario: Timeout error fixture
+ Given timeout scenarios are tested
+ When a timeout fixture is used
+ Then axios is mocked to throw a timeout error
+ And the tool handles the error gracefully
+ And the tool returns a user-friendly error message
+
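+ A sketch of the timeout scenario above; axios timeout errors carry the `ECONNABORTED` code, and the rest is assumed:
+
+ ```typescript
+ import { vi, type Mock } from 'vitest';
+ import axios from 'axios';
+
+ vi.mock('axios');
+
+ // Simulate an axios timeout instead of loading a JSON fixture
+ const timeoutError = Object.assign(new Error('timeout of 10000ms exceeded'), {
+   code: 'ECONNABORTED',
+ });
+ (axios.get as Mock).mockRejectedValue(timeoutError);
+
+ // The tool under test should catch this and return a readable message
+ ```
+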
+ ### Requirement: Test documentation
+
+ The project SHALL provide documentation for running and writing tests to help developers understand testing practices.
+
+ #### Scenario: Running tests
+ Given a new developer joins the project
+ When they read the test documentation
+ Then they know how to run tests
+ And they know how to run tests in watch mode
+ And they know how to generate coverage reports
+
+ #### Scenario: Writing tests
+ Given a developer needs to write a new test
+ When they read the test documentation
+ Then they see examples of unit tests
+ And they see examples of integration tests
+ And they understand the mocking strategy
+ And they follow the naming conventions
+
+ ### Requirement: Incremental test development
+
+ The project SHALL allow incremental development of tests, starting with critical tools and expanding over time.
+
+ #### Scenario: Initial test coverage
+ Given the first iteration of tests
+ When tests are implemented
+ Then at least 2 tools have tests
+ And all utility functions have tests
+ And coverage meets the 80% threshold for tested code
+
+ #### Scenario: Expanding test coverage
+ Given new tools are added to the project
+ When a developer adds tests for the new tool
+ Then the tests follow existing patterns
+ And the tests are added incrementally
+ And the project maintains overall quality