@grimoire-cc/cli 0.13.3 → 0.14.0

Files changed (50)
  1. package/dist/commands/update.d.ts.map +1 -1
  2. package/dist/commands/update.js +14 -0
  3. package/dist/commands/update.js.map +1 -1
  4. package/dist/enforce.d.ts +3 -1
  5. package/dist/enforce.d.ts.map +1 -1
  6. package/dist/enforce.js +18 -6
  7. package/dist/enforce.js.map +1 -1
  8. package/dist/setup.d.ts.map +1 -1
  9. package/dist/setup.js +47 -0
  10. package/dist/setup.js.map +1 -1
  11. package/dist/summary.d.ts.map +1 -1
  12. package/dist/summary.js +9 -0
  13. package/dist/summary.js.map +1 -1
  14. package/package.json +1 -1
  15. package/packs/dev-pack/agents/grimoire.tdd-specialist.md +194 -27
  16. package/packs/dev-pack/grimoire.json +0 -38
  17. package/packs/dev-pack/skills/grimoire.conventional-commit/SKILL.md +69 -65
  18. package/packs/dotnet-pack/agents/grimoire.csharp-coder.md +110 -113
  19. package/packs/dotnet-pack/grimoire.json +23 -5
  20. package/packs/dotnet-pack/skills/grimoire.unit-testing-dotnet/SKILL.md +252 -0
  21. package/packs/{dev-pack/skills/grimoire.tdd-specialist → dotnet-pack/skills/grimoire.unit-testing-dotnet}/reference/anti-patterns.md +78 -0
  22. package/packs/dotnet-pack/skills/grimoire.unit-testing-dotnet/reference/tdd-workflow-patterns.md +259 -0
  23. package/packs/go-pack/grimoire.json +19 -0
  24. package/packs/go-pack/skills/grimoire.unit-testing-go/SKILL.md +256 -0
  25. package/packs/go-pack/skills/grimoire.unit-testing-go/reference/anti-patterns.md +244 -0
  26. package/packs/go-pack/skills/grimoire.unit-testing-go/reference/tdd-workflow-patterns.md +259 -0
  27. package/packs/python-pack/grimoire.json +19 -0
  28. package/packs/python-pack/skills/grimoire.unit-testing-python/SKILL.md +239 -0
  29. package/packs/python-pack/skills/grimoire.unit-testing-python/reference/anti-patterns.md +244 -0
  30. package/packs/python-pack/skills/grimoire.unit-testing-python/reference/tdd-workflow-patterns.md +259 -0
  31. package/packs/rust-pack/grimoire.json +29 -0
  32. package/packs/rust-pack/skills/grimoire.unit-testing-rust/SKILL.md +243 -0
  33. package/packs/rust-pack/skills/grimoire.unit-testing-rust/reference/anti-patterns.md +244 -0
  34. package/packs/rust-pack/skills/grimoire.unit-testing-rust/reference/tdd-workflow-patterns.md +259 -0
  35. package/packs/ts-pack/agents/grimoire.typescript-coder.md +36 -1
  36. package/packs/ts-pack/grimoire.json +27 -1
  37. package/packs/ts-pack/skills/grimoire.unit-testing-typescript/SKILL.md +255 -0
  38. package/packs/ts-pack/skills/grimoire.unit-testing-typescript/reference/anti-patterns.md +244 -0
  39. package/packs/ts-pack/skills/grimoire.unit-testing-typescript/reference/tdd-workflow-patterns.md +259 -0
  40. package/packs/dev-pack/skills/grimoire.tdd-specialist/SKILL.md +0 -248
  41. package/packs/dev-pack/skills/grimoire.tdd-specialist/reference/language-frameworks.md +0 -388
  42. package/packs/dev-pack/skills/grimoire.tdd-specialist/reference/tdd-workflow-patterns.md +0 -135
  43. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/SKILL.md +0 -293
  44. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/reference/anti-patterns.md +0 -329
  45. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/reference/framework-guidelines.md +0 -361
  46. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/reference/parameterized-testing.md +0 -378
  47. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/reference/test-organization.md +0 -476
  48. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/reference/test-performance.md +0 -576
  49. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/templates/tunit-template.md +0 -438
  50. package/packs/dotnet-pack/skills/grimoire.dotnet-unit-testing/templates/xunit-template.md +0 -303
@@ -0,0 +1,244 @@
+ # Testing Anti-Patterns
+
+ Common testing mistakes that reduce test value and increase maintenance cost. These are language-agnostic — they apply to any test framework.
+
+ ## Table of Contents
+
+ - [The Liar](#the-liar)
+ - [The Giant](#the-giant)
+ - [Excessive Setup](#excessive-setup)
+ - [The Slow Poke](#the-slow-poke)
+ - [The Peeping Tom](#the-peeping-tom)
+ - [The Mockery](#the-mockery)
+ - [The Inspector](#the-inspector)
+ - [The Flaky Test](#the-flaky-test)
+ - [The Cargo Culter](#the-cargo-culter)
+ - [The Hard Test](#the-hard-test)
+
+ ## The Liar
+
+ **What it is:** A test that passes but doesn't actually verify the behavior it claims to test. It gives false confidence.
+
+ **How to spot it:**
+ - Test name says "validates input" but assertions only check the return type
+ - Assertions are too loose (`assert result is not None` instead of checking the actual value)
+ - Test catches exceptions broadly and passes regardless
+
+ **Fix:** Ensure assertions directly verify the specific behavior described in the test name. Every assertion should fail if the behavior breaks.
+
+ ```python
+ # Bad — passes even if discount logic is completely wrong
+ def test_apply_discount():
+     result = apply_discount(100, 10)
+     assert result is not None
+
+ # Good — fails if the calculation is wrong
+ def test_apply_discount_with_10_percent_returns_90():
+     result = apply_discount(100, 10)
+     assert result == 90.0
+ ```
+
+ ## The Giant
+
+ **What it is:** A single test that verifies too many things. When it fails, you can't tell which behavior broke.
+
+ **How to spot it:**
+ - Test has more than 8–10 assertions
+ - Test name uses "and" (e.g., "creates user and sends email and updates cache")
+ - Multiple Act phases in one test
+
+ **Fix:** Split into focused tests, each verifying one logical concept. Multiple assertions are fine if they verify aspects of the same behavior.
+
+ ```typescript
+ // Bad — four unrelated behaviors in one test
+ test('user registration works', () => {
+   const user = register({ name: 'Alice', email: 'alice@test.com' });
+   expect(user.id).toBeDefined();
+   expect(emailService.send).toHaveBeenCalled();
+   expect(cache.set).toHaveBeenCalledWith(`user:${user.id}`, user);
+   expect(auditLog.entries).toHaveLength(1);
+ });
+
+ // Good — separate tests for each behavior
+ test('register with valid data creates user with id', () => { ... });
+ test('register with valid data sends welcome email', () => { ... });
+ test('register with valid data caches the user', () => { ... });
+ test('register with valid data writes audit log entry', () => { ... });
+ ```
+
+ ## Excessive Setup
+
+ **What it is:** Tests that require dozens of lines of setup before the actual test logic. Often signals that the code under test has too many dependencies.
+
+ **How to spot it:**
+ - Arrange section is 20+ lines
+ - Multiple mocks configured with complex behaviors
+ - Shared setup methods that configure things most tests don't need
+
+ **Fix:** Use factory methods/builders for test data. Consider whether the code under test needs refactoring to reduce dependencies. Only set up what the specific test needs.
+
+ ```go
+ // Bad — every test sets up the entire world
+ func TestProcessOrder(t *testing.T) {
+     db := setupDatabase()
+     cache := setupCache()
+     logger := setupLogger()
+     emailClient := setupEmailClient()
+     validator := NewValidator(db)
+     processor := NewProcessor(cache)
+     service := NewOrderService(db, cache, logger, emailClient, validator, processor)
+     // ... 10 more lines of setup
+     _, err := service.ProcessOrder(ctx, order)
+     assert.NoError(t, err)
+ }
+
+ // Good — factory method hides irrelevant details
+ func TestProcessOrder_WithValidOrder_Succeeds(t *testing.T) {
+     service := newTestOrderService(t)
+     result, err := service.ProcessOrder(ctx, validOrder())
+     assert.NoError(t, err)
+     assert.Equal(t, "processed", result.Status)
+ }
+ ```
+
+ ## The Slow Poke
+
+ **What it is:** Tests that are slow because they use real I/O, network calls, or sleeps. Slow tests get run less frequently and slow down the feedback loop.
+
+ **How to spot it:**
+ - `time.Sleep()`, `Thread.sleep()`, `setTimeout` in tests
+ - Real HTTP calls, database connections, file system operations
+ - Test suite takes more than a few seconds for unit tests
+
+ **Fix:** Mock external dependencies. Use fake implementations for I/O. Replace time-based waits with event-based synchronization.
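As a minimal sketch of this fix (the `RateLimiter` and `FakeClock` names are hypothetical, not from any pack), injecting the clock lets a time-dependent test advance time instantly instead of sleeping:

```python
import time

class RateLimiter:
    """Allows one call per `interval` seconds. The clock is injected for testability."""
    def __init__(self, interval: float, clock=time.monotonic):
        self._interval = interval
        self._clock = clock
        self._last = None

    def allow(self) -> bool:
        now = self._clock()
        if self._last is None or now - self._last >= self._interval:
            self._last = now
            return True
        return False

class FakeClock:
    """Test double: time only moves when the test says so."""
    def __init__(self):
        self.now = 0.0
    def __call__(self):
        return self.now

def test_rate_limiter_allows_again_after_interval():
    clock = FakeClock()
    limiter = RateLimiter(interval=60, clock=clock)
    assert limiter.allow() is True
    assert limiter.allow() is False  # still inside the interval
    clock.now += 61                  # advance time instantly, no sleep
    assert limiter.allow() is True
```

In production code the `clock` parameter defaults to `time.monotonic`, so callers don't pay for the injection point.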
+
+ ## The Peeping Tom
+
+ **What it is:** Tests that access private/internal state to verify behavior instead of testing through the public interface.
+
+ **How to spot it:**
+ - Reflection to access private fields
+ - Testing internal method calls instead of observable results
+ - Assertions on implementation details (internal data structures, private counters)
+
+ **Fix:** Test through the public API. If you can't verify behavior through the public interface, the class may need a design change (e.g., expose a query method or extract a collaborator).
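A small illustrative sketch (the `ShoppingCart` class is hypothetical): the bad test couples itself to a private list, while the good test verifies the same behavior through a public query method:

```python
class ShoppingCart:
    def __init__(self):
        self._items = []  # private implementation detail

    def add(self, price: float) -> None:
        self._items.append(price)

    def total(self) -> float:
        """Public query method: test through this, not the private list."""
        return sum(self._items)

# Bad: peeks at private state; breaks if storage changes to, say, a dict
def test_add_peeping():
    cart = ShoppingCart()
    cart.add(10.0)
    assert cart._items == [10.0]

# Good: verifies observable behavior via the public API
def test_add_updates_total():
    cart = ShoppingCart()
    cart.add(10.0)
    cart.add(5.0)
    assert cart.total() == 15.0
```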
+
+ ## The Mockery
+
+ **What it is:** Tests that mock so heavily that they're testing mock configurations rather than real behavior. Every dependency is mocked, including simple value objects.
+
+ **How to spot it:**
+ - More mock setup lines than actual test logic
+ - Mocking concrete classes, value objects, or data structures
+ - Test passes but the real system fails because mocks don't match reality
+
+ **Fix:** Only mock at system boundaries (external services, databases, clocks). Use real implementations for in-process collaborators when practical.
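One way to sketch that boundary rule (the `PriceCalculator`/`PaymentService` names are hypothetical): the in-process calculator stays real, and only the external payment gateway is mocked:

```python
from unittest import mock

class PriceCalculator:
    """In-process collaborator: use the real thing, don't mock it."""
    def total(self, amounts):
        return sum(amounts)

class PaymentService:
    def __init__(self, calculator: PriceCalculator, gateway):
        self._calc = calculator
        self._gateway = gateway  # external client: this is the system boundary

    def charge(self, amounts):
        total = self._calc.total(amounts)
        self._gateway.charge(total)
        return total

def test_charge_sends_summed_total_to_gateway():
    gateway = mock.Mock()  # mock ONLY the boundary
    service = PaymentService(PriceCalculator(), gateway)
    result = service.charge([10.0, 15.0])
    assert result == 25.0  # real calculator did real work
    gateway.charge.assert_called_once_with(25.0)
```

Because the calculator is real, a bug in the summing logic fails this test; a fully mocked version would happily pass.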
+
+ ## The Inspector
+
+ **What it is:** Tests that verify exact method calls and their order rather than outcomes. They break whenever the implementation changes, even if behavior is preserved.
+
+ **How to spot it:**
+ - `verify(mock, times(1)).method()` for every mock interaction
+ - Assertions on call order
+ - Test breaks when you refactor without changing behavior
+
+ **Fix:** Verify state (the result) rather than interactions (how it got there). Only verify interactions for side effects that ARE the behavior (e.g., "email was sent").
+
+ ```java
+ // Bad — breaks if implementation changes sort algorithm
+ verify(sorter, times(1)).quickSort(any());
+ verify(sorter, never()).mergeSort(any());
+
+ // Good — verifies the outcome
+ assertThat(result).isSortedAccordingTo(naturalOrder());
+ ```
+
+ ## The Flaky Test
+
+ **What it is:** Tests that pass and fail intermittently without code changes. They erode trust in the test suite.
+
+ **Common causes:**
+ - Time-dependent logic (`new Date()`, `time.Now()`)
+ - Random data without fixed seeds
+ - Shared mutable state between tests
+ - Race conditions in async tests
+ - Dependency on test execution order
+
+ **Fix:** Inject time as a dependency. Use fixed seeds for randomness. Ensure test isolation. Use proper async synchronization.
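A minimal sketch of the first two fixes (the `generate_token` function is hypothetical): with the clock and the random generator injected, the function is deterministic given its inputs, so the test is repeatable:

```python
import random
from datetime import datetime, timezone

def generate_token(rng: random.Random, now: datetime) -> str:
    """Deterministic given its inputs: time and randomness are injected, not grabbed globally."""
    stamp = now.strftime("%Y%m%d")
    suffix = rng.randrange(10_000)
    return f"{stamp}-{suffix:04d}"

def test_generate_token_is_repeatable():
    rng = random.Random(42)                            # fixed seed
    now = datetime(2024, 1, 15, tzinfo=timezone.utc)   # fixed clock
    first = generate_token(rng, now)
    second = generate_token(random.Random(42), now)
    assert first == second                 # same inputs, same output, every run
    assert first.startswith("20240115-")   # date part comes from the injected clock
```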
+
+ ## The Cargo Culter
+
+ **What it is:** Writing tests to hit a coverage percentage target rather than to verify behavior. The tests exist to satisfy a metric, not to provide confidence.
+
+ **How to spot it:**
+ - Tests that assert trivially obvious things (e.g., `assert user.name == user.name`)
+ - Every private method has a corresponding test accessed via reflection
+ - 100% coverage but bugs still escape to production
+ - Test suite takes minutes to pass but developers don't trust it
+
+ **Fix:** Coverage is a diagnostic tool, not a goal. Use it to find untested gaps, not as a number to optimize. Coverage in the high 80s to low 90s emerges naturally from disciplined TDD. A test that only exists to push coverage up is worse than no test — it adds maintenance cost without adding confidence.
+
+ ```python
+ # Bad — written for coverage, not for confidence
+ def test_user_has_name():
+     user = User(name="Alice")
+     assert user.name is not None  # This verifies nothing meaningful
+
+ # Good — written to verify a business rule
+ def test_user_with_empty_name_raises_validation_error():
+     with pytest.raises(ValidationError, match="name cannot be empty"):
+         User(name="")
+ ```
+
+ > See: https://martinfowler.com/bliki/TestCoverage.html
+
+ ## The Hard Test
+
+ **What it is:** Not an anti-pattern in the test itself, but a signal from the test about the production code. When a test is painful, complex, or requires elaborate setup, the production code has a design problem.
+
+ **How to spot it:**
+ - Need to mock 5+ dependencies to test one class
+ - Need to access private internals to verify behavior
+ - Test requires a complex sequence of operations just to get to the state under test
+ - You find yourself thinking "testing this would be too hard"
+
+ **What it signals:**
+ - Too many responsibilities in one class (SRP violation)
+ - Hidden dependencies or tight coupling
+ - Poor separation of concerns
+ - Untestable architecture (e.g., side effects embedded in business logic)
+
+ **Fix:** Resist the urge to skip the test or work around it with clever mocking. Instead, fix the production code design. Extract classes, inject dependencies, separate concerns. A hard test is a free design review — take the feedback.
+
+ ```python
+ # Hard to test — service does too much
+ class OrderService:
+     def process(self, order):
+         db = Database()        # hidden dependency
+         email = EmailClient()  # hidden dependency
+         self._validate(order)
+         db.save(order)
+         email.send_confirmation(order)
+         self._update_inventory(order)  # another responsibility
+
+ # Easy to test — dependencies explicit, concerns separated
+ class OrderService:
+     def __init__(self, repo: OrderRepository, notifier: Notifier):
+         self._repo = repo
+         self._notifier = notifier
+
+     def process(self, order: Order) -> OrderResult:
+         self._validate(order)
+         saved = self._repo.save(order)
+         self._notifier.notify(saved)
+         return saved
+ ```
+
+ ---
+
+ ## Further Reading
+
+ - xUnit Patterns (Meszaros): http://xunitpatterns.com
+ - Codepipes testing anti-patterns: https://blog.codepipes.com/testing/software-testing-antipatterns.html
+ - Google SWE Book — Test Doubles: https://abseil.io/resources/swe-book/html/ch13.html
@@ -0,0 +1,259 @@
+ # TDD Workflow Patterns
+
+ Guidance on the test-driven development process, when to apply it, and advanced techniques.
+
+ ## Table of Contents
+
+ - [Canon TDD — Start with a Test List](#canon-tdd--start-with-a-test-list)
+ - [Red-Green-Refactor](#red-green-refactor)
+ - [Transformation Priority Premise](#transformation-priority-premise)
+ - [F.I.R.S.T. Principles](#first-principles)
+ - [London School vs Detroit School](#london-school-vs-detroit-school)
+ - [When to Use TDD](#when-to-use-tdd)
+ - [When TDD Is Less Effective](#when-tdd-is-less-effective)
+ - [BDD and ATDD Extensions](#bdd-and-atdd-extensions)
+ - [Advanced Techniques](#advanced-techniques)
+
+ ## Canon TDD — Start with a Test List
+
+ > Source: https://tidyfirst.substack.com/p/canon-tdd
+
+ Kent Beck's recommended starting point is not a single test but a **test list** — a written enumeration of all behaviors you intend to verify. This separates the creative work (what to test) from the mechanical work (write, make pass, refactor).
+
+ **Process:**
+ 1. Write down all behaviors the code needs — a flat list, not tests
+ 2. Pick the simplest item on the list
+ 3. Write one failing test for it
+ 4. Make it pass with the minimum code
+ 5. Refactor
+ 6. Cross off the item; repeat
+
+ **Why test order matters:** Starting with simpler behaviors forces simpler transformations (see TPP below) and lets the design emerge naturally. Jumping to complex cases early leads to over-engineered solutions. The test list keeps you focused and prevents scope creep.
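The process above can be sketched in miniature (the `slugify` function and its behaviors are a hypothetical example, not from the pack): the list is written first as plain prose, then the simplest item drives the first test and the minimum code to pass it:

```python
# Test list for a hypothetical `slugify` function, written BEFORE any code:
#   [ ] lowercase letters pass through unchanged
#   [ ] spaces become hyphens
#   [ ] consecutive spaces collapse to one hyphen
#   [ ] punctuation is dropped
#
# Steps 2-4: pick the simplest item, write one failing test,
# then make it pass with the minimum code:

def slugify(text: str) -> str:
    return text  # minimum code for the first item; later items force generalization

def test_lowercase_letters_pass_through_unchanged():
    assert slugify("hello") == "hello"
```

Each green test crosses one item off the list; the remaining items queue up the next Red step.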
+
+ ## Red-Green-Refactor
+
+ > Source: https://martinfowler.com/bliki/TestDrivenDevelopment.html
+
+ The core TDD cycle, repeated in small increments:
+
+ ### 1. Red — Write a Failing Test
+
+ Write the smallest test that describes the next piece of behavior. The test MUST fail before you write any production code. A test that passes immediately provides no confidence.
+
+ **Rules:**
+ - Write only ONE test at a time
+ - The test should compile/parse but fail at the assertion
+ - If the test passes immediately, it's either trivial or testing existing behavior
+
+ ### 2. Green — Make It Pass
+
+ Write the MINIMUM code to make the failing test pass. Do not add extra logic, handle cases not yet tested, or optimize.
+
+ **Rules:**
+ - Write the simplest code that makes the test pass
+ - It's OK to hardcode values initially — the next test will force generalization
+ - Do not add code for future tests
+ - All existing tests must still pass
+
+ ### 3. Refactor — Clean Up
+
+ With all tests green, improve the code structure without changing behavior. Tests give you the safety net.
+
+ **Rules:**
+ - No new functionality during refactoring
+ - All tests must remain green after each refactoring step
+ - Remove duplication, improve naming, extract methods
+ - Refactor both production code AND test code
+
+ ### Cycle Length
+
+ Each Red-Green-Refactor cycle should take 1–10 minutes. If you're spending more than 10 minutes in the Red or Green phase, the step is too large — break it down.
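Two cycles of the loop can be compressed into one illustrative sketch (the `add` example is hypothetical); the comments narrate what each phase did:

```python
# Cycle 1 - Red: test_add_two_and_three is written first and fails (add doesn't exist).
#           Green: `return 5`, a hardcoded constant, is the minimum code that passes.
# Cycle 2 - Red: test_add_one_and_two fails against the constant.
#           Green: generalize to `a + b`. Refactor: nothing to clean up yet.

def add(a: int, b: int) -> int:
    return a + b  # final state: the second test forced the generalization

def test_add_two_and_three():
    assert add(2, 3) == 5

def test_add_one_and_two():
    assert add(1, 2) == 3
```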
+
+ ## Transformation Priority Premise
+
+ > Source: http://blog.cleancoder.com/uncle-bob/2013/05/27/TheTransformationPriorityPremise.html
+
+ When going from Red to Green, prefer simpler transformations over complex ones. Listed from simplest to most complex:
+
+ 1. **Constant** — return a hardcoded value
+ 2. **Scalar** — replace constant with a variable
+ 3. **Direct** — replace unconditional with conditional (if/else)
+ 4. **Collection** — operate on a collection instead of a scalar
+ 5. **Iteration** — add a loop
+ 6. **Recursion** — add recursive call
+ 7. **Assignment** — replace computed value with mutation
+
+ **Example — building FizzBuzz with TDD:**
+
+ ```
+ Test 1: input 1 → "1"            Transformation: Constant
+ Test 2: input 2 → "2"            Transformation: Scalar (use the input)
+ Test 3: input 3 → "Fizz"         Transformation: Direct (add if)
+ Test 4: input 5 → "Buzz"         Transformation: Direct (add another if)
+ Test 5: input 15 → "FizzBuzz"    Transformation: Direct (add combined if)
+ Test 6: input 1-15 → full list   Transformation: Iteration (generalize)
+ ```
+
+ By following this priority, you avoid over-engineering early and let the design emerge naturally from the tests.
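The end state of that FizzBuzz sequence might look like the sketch below (one possible implementation, not canonical), with comments mapping each piece of code back to the transformation that forced it:

```python
def fizzbuzz(n: int) -> str:
    # Test 5 (input 15) forced this combined conditional: a Direct transformation
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:   # forced by Test 3 (Direct)
        return "Fizz"
    if n % 5 == 0:   # forced by Test 4 (Direct)
        return "Buzz"
    return str(n)    # Tests 1-2: Constant, then Scalar (use the input)

def fizzbuzz_list(upper: int) -> list:
    # Test 6 forced the Iteration transformation
    return [fizzbuzz(i) for i in range(1, upper + 1)]
```

Note that no test ever demanded more than the next transformation; the final shape fell out of the sequence.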
+
+ ## F.I.R.S.T. Principles
+
+ Every unit test must satisfy these five properties:
+
+ | Principle | Definition | Violation Signal |
+ |-----------|------------|------------------|
+ | **Fast** | Runs in milliseconds | Real I/O, network calls, `sleep()` |
+ | **Independent** | No dependency on other tests | Shared mutable state, ordered execution |
+ | **Repeatable** | Same result every run | System clock, random data without seed, race conditions |
+ | **Self-Validating** | Pass or fail without manual interpretation | Tests that print output for a human to read |
+ | **Timely** | Written before or alongside production code | Tests added weeks after a feature shipped |
+
+ F.I.R.S.T. is a diagnostic checklist: if a test violates any property, it will erode team trust and reduce the value of the suite.
+
+ ## London School vs Detroit School
+
+ > Source: https://martinfowler.com/articles/mocksArentStubs.html
+
+ Two schools of TDD with different philosophies on test doubles. Most teams use a hybrid.
+
+ ### Detroit School (Classicist, Inside-Out)
+
+ - **Unit definition**: A module of any size — can span multiple classes
+ - **Approach**: Bottom-up; start from domain logic, build outward
+ - **Test doubles**: Avoid mocks; use real objects when feasible
+ - **Verification**: State verification — examine the result after execution
+ - **Testing style**: Black-box; test through public API
+ - **Refactoring**: Safe — tests aren't coupled to implementation details
+ - **Best for**: Building confidence in real interactions; reducing brittleness
+
+ ### London School (Mockist, Outside-In)
+
+ - **Unit definition**: A single class in isolation
+ - **Approach**: Top-down; start from the API, work inward
+ - **Test doubles**: Mock all collaborators
+ - **Verification**: Behavior verification — confirm correct method calls occurred
+ - **Testing style**: White-box; tests know about internals
+ - **Refactoring**: Can be brittle — tests break when implementation changes
+ - **Best for**: Designing interactions upfront; driving architecture decisions
+
+ ### Recommended: Hybrid Approach
+
+ Apply Detroit discipline as the default — use real objects, verify state. Apply London mocking only at architectural boundaries (external APIs, databases, clocks). Never mock value objects, pure functions, or in-process helpers.
+
+ The most important rule: if you're mocking to make a test easy to write, that's often a design smell (see The Hard Test in anti-patterns). If you're mocking because the dependency is genuinely external or slow, that's the right use.
+
+ ## When to Use TDD
+
+ TDD is most valuable when:
+
+ - **Business logic** — Complex rules, calculations, state machines. TDD forces you to think through all cases before implementing.
+ - **Algorithm development** — Sorting, parsing, validation, transformation logic. Tests serve as a specification.
+ - **Bug fixes** — Write a test that reproduces the bug first (Red), then fix it (Green). This prevents regressions.
+ - **API/interface design** — Writing tests first helps you design interfaces from the consumer's perspective.
+ - **Refactoring** — Ensure tests exist before refactoring. If they don't, write characterization tests first, then refactor.
+
+ ## When TDD Is Less Effective
+
+ TDD is not universally optimal. Use judgment:
+
+ - **UI/visual components** — Layout, styling, and animations are hard to express as unit tests. Use visual regression testing or snapshot tests instead.
+ - **Exploratory/prototype code** — When you don't know what to build yet, writing tests first slows exploration. Spike first, then write tests.
+ - **Thin integration layers** — Simple pass-through code (e.g., a controller that calls a service) may not benefit from a test-first approach. Integration tests are more valuable here.
+ - **Infrastructure/glue code** — Database migrations, config files, build scripts. Test these with integration or end-to-end tests.
+ - **External API wrappers** — Thin clients wrapping external APIs are better tested with integration tests against the real (or sandboxed) API.
+
+ For these cases, write tests AFTER the implementation (test-last), but still write them.
+
+ ## BDD and ATDD Extensions
+
+ ### Behavior-Driven Development (BDD)
+
+ > Source: https://martinfowler.com/bliki/GivenWhenThen.html
+
+ BDD extends TDD by using natural language to describe behavior. Useful when tests need to be readable by non-developers.
+
+ **Given-When-Then** structure:
+
+ ```gherkin
+ Given a cart with items totaling $100
+ When a 10% discount is applied
+ Then the total should be $90
+ ```
+
+ Maps to test code:
+
+ ```python
+ def test_cart_with_10_percent_discount_totals_90():
+     # Given
+     cart = Cart(items=[Item(price=100)])
+
+     # When
+     cart.apply_discount(PercentageDiscount(10))
+
+     # Then
+     assert cart.total == 90.0
+ ```
+
+ ### Acceptance TDD (ATDD)
+
+ Write high-level acceptance tests before implementing a feature. These tests describe the feature from the user's perspective and drive the overall design. Unit tests (via TDD) then drive the implementation of each component.
+
+ **Flow:**
+ 1. Write acceptance test (fails — Red)
+ 2. Use TDD to implement components needed to pass it
+ 3. Acceptance test passes (Green)
+ 4. Refactor
+
+ ATDD is most valuable for features with clear acceptance criteria and when working with product owners or stakeholders.
+
+ ## Advanced Techniques
+
+ ### Property-Based Testing
+
+ Instead of writing individual input/output pairs, define **properties** that should always hold true and let a framework generate hundreds of test cases automatically.
+
+ **Best for:** Pure functions, algorithms, data transformations, serialization round-trips.
+
+ **Tools:**
+ - Python: [Hypothesis](https://hypothesis.readthedocs.io)
+ - JavaScript/TypeScript: [fast-check](https://fast-check.dev)
+ - Go: `testing/quick` (stdlib), [gopter](https://github.com/leanovate/gopter)
+ - Rust: [proptest](https://github.com/proptest-rs/proptest)
+ - Java: [jqwik](https://jqwik.net)
+ - Elixir: [StreamData](https://hexdocs.pm/stream_data)
+
+ **Example property** (Python/Hypothesis):
+
+ ```python
+ from hypothesis import given, strategies as st
+
+ @given(st.lists(st.integers()))
+ def test_sort_is_idempotent(lst):
+     assert sorted(sorted(lst)) == sorted(lst)
+ ```
+
+ ### Mutation Testing
+
+ Mutation testing introduces small code changes (mutations) and checks whether your tests catch them. A test suite that lets mutations survive has gaps in its coverage.
+
+ **Metric:** Mutation score = % of mutations killed. Target 80%+.
+
+ **Tools:**
+ - JavaScript/TypeScript/C#: [Stryker](https://stryker-mutator.io)
+ - Java: [PITest](https://pitest.org)
+ - Python: [mutmut](https://mutmut.readthedocs.io)
+ - Go: [go-mutesting](https://github.com/zimmski/go-mutesting)
+
+ Run mutation testing periodically (not on every commit) to identify weak spots in the test suite.
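What "a mutation survives" means can be shown without any tool. In this hand-rolled sketch (the `is_adult` example is hypothetical), a single `>=` to `>` mutation slips past a weak test but is killed by a boundary check:

```python
def is_adult(age: int) -> bool:
    return age >= 18

def mutant_is_adult(age: int) -> bool:
    return age > 18  # the mutation: >= became >

def weak_test(fn) -> bool:
    """Only checks values far from the boundary, so the mutant SURVIVES it."""
    return fn(30) is True and fn(10) is False

def strong_test(fn) -> bool:
    """Checks the boundary itself, so the mutant is KILLED by it."""
    return fn(18) is True

survived = weak_test(is_adult) and weak_test(mutant_is_adult)   # True: gap found
killed = strong_test(is_adult) and not strong_test(mutant_is_adult)  # True: gap closed
```

Tools like those listed above automate exactly this: generate mutants, rerun the suite, and report which mutants no test killed.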
+
+ ### Contract Testing
+
+ In microservice or distributed architectures, contract tests verify that services communicate correctly without running full integration tests.
+
+ **How it works:**
+ 1. Consumer defines a contract (expected interactions)
+ 2. Provider verifies it can fulfill the contract
+ 3. Both test independently — no need to spin up the full system
+
+ **Tool:** [Pact](https://pact.io) — supports most major languages.
+
+ Contract tests replace the expensive integration test layer for inter-service communication while still catching breaking API changes early.
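The three steps above can be sketched with a plain dict standing in for a real Pact file (everything here is a hypothetical illustration, not the Pact API): the consumer tests against a stub built from the contract, and the provider verifies its real handler against the same contract, with neither side running the other:

```python
# Step 1: the consumer defines the contract (expected request and response shape)
CONTRACT = {
    "request": {"method": "GET", "path": "/users/42"},
    "response_keys": {"id", "name"},
}

# Consumer side: a stub provider generated FROM the contract; no real provider needed
def stub_provider(request: dict) -> dict:
    assert request == CONTRACT["request"]
    return {"id": 42, "name": "Alice"}

def test_consumer_against_stub():
    body = stub_provider({"method": "GET", "path": "/users/42"})
    assert CONTRACT["response_keys"] <= set(body)  # consumer gets the fields it needs

# Step 2: provider side verifies its REAL handler satisfies the same contract
def real_handler(request: dict) -> dict:
    return {"id": 42, "name": "Alice", "email": "alice@test.com"}

def test_provider_fulfils_contract():
    body = real_handler(CONTRACT["request"])
    assert CONTRACT["response_keys"] <= set(body)  # extra fields are fine; missing ones fail
```

If the provider renames `name`, its own contract test fails before the consumer ever sees the break; that is the early-warning property step 3 describes.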
@@ -0,0 +1,29 @@
+ {
+   "name": "rust-pack",
+   "version": "1.0.0",
+   "agents": [],
+   "skills": [
+     {
+       "name": "grimoire.unit-testing-rust",
+       "path": "skills/grimoire.unit-testing-rust",
+       "description": "Rust unit testing specialist. Patterns and best practices for the built-in test framework, mockall, and proptest. Use when writing tests for .rs files, or asking about Rust testing patterns, test modules, mocking traits, property-based testing, integration tests.",
+       "version": "1.0.0",
+       "triggers": {
+         "keywords": ["cargo-test", "mockall", "proptest", "rstest"],
+         "file_extensions": [".rs"],
+         "patterns": [
+           "write.*test",
+           "add.*test",
+           "create.*test",
+           "test.*coverage",
+           "rust.*test",
+           "cargo.*test"
+         ],
+         "file_paths": [
+           "**/tests/**/*.rs",
+           "**/*_test.rs"
+         ]
+       }
+     }
+   ]
+ }