context-mode 1.0.107 → 1.0.108
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +2 -2
- package/.claude-plugin/plugin.json +1 -1
- package/.openclaw-plugin/openclaw.plugin.json +1 -1
- package/.openclaw-plugin/package.json +1 -1
- package/README.md +22 -18
- package/build/adapters/claude-code/index.js +26 -9
- package/build/adapters/opencode/index.js +5 -5
- package/build/cli.js +92 -12
- package/build/server.js +7 -0
- package/build/session/analytics.js +36 -13
- package/cli.bundle.mjs +117 -116
- package/hooks/ensure-deps.mjs +28 -12
- package/hooks/posttooluse.mjs +90 -80
- package/hooks/precompact.mjs +56 -46
- package/hooks/pretooluse.mjs +161 -167
- package/hooks/routing-block.mjs +2 -2
- package/hooks/run-hook.mjs +82 -0
- package/hooks/sessionstart.mjs +187 -155
- package/hooks/userpromptsubmit.mjs +69 -58
- package/openclaw.plugin.json +1 -1
- package/package.json +2 -1
- package/scripts/heal-better-sqlite3.mjs +108 -0
- package/scripts/postinstall.mjs +27 -0
- package/server.bundle.mjs +51 -51
- package/skills/UPSTREAM-CREDITS.md +51 -0
- package/skills/context-mode-ops/SKILL.md +147 -0
- package/skills/diagnose/SKILL.md +122 -0
- package/skills/diagnose/scripts/hitl-loop.template.sh +41 -0
- package/skills/grill-me/SKILL.md +15 -0
- package/skills/grill-with-docs/ADR-FORMAT.md +47 -0
- package/skills/grill-with-docs/CONTEXT-FORMAT.md +77 -0
- package/skills/grill-with-docs/SKILL.md +93 -0
- package/skills/improve-codebase-architecture/DEEPENING.md +37 -0
- package/skills/improve-codebase-architecture/INTERFACE-DESIGN.md +44 -0
- package/skills/improve-codebase-architecture/LANGUAGE.md +53 -0
- package/skills/improve-codebase-architecture/SKILL.md +76 -0
- package/skills/tdd/SKILL.md +114 -0
- package/skills/tdd/deep-modules.md +33 -0
- package/skills/tdd/interface-design.md +31 -0
- package/skills/tdd/mocking.md +59 -0
- package/skills/tdd/refactoring.md +10 -0
- package/skills/tdd/tests.md +61 -0
package/skills/improve-codebase-architecture/INTERFACE-DESIGN.md
@@ -0,0 +1,44 @@
# Interface Design

When the user wants to explore alternative interfaces for a chosen deepening candidate, use this parallel sub-agent pattern. Based on "Design It Twice" (Ousterhout) — your first idea is unlikely to be the best.

Uses the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**, **leverage**.

## Process

### 1. Frame the problem space

Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:

- The constraints any new interface would need to satisfy
- The dependencies it would rely on, and which category they fall into (see [DEEPENING.md](DEEPENING.md))
- A rough illustrative code sketch to ground the constraints — not a proposal, just a way to make the constraints concrete

Show this to the user, then immediately proceed to Step 2. The user reads and thinks while the sub-agents work in parallel.

### 2. Spawn sub-agents

Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.

Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category from [DEEPENING.md](DEEPENING.md), what sits behind the seam). The brief is independent of the user-facing problem-space explanation in Step 1. Give each agent a different design constraint:

- Agent 1: "Minimise the interface — aim for 1–3 entry points max. Maximise leverage per entry point."
- Agent 2: "Maximise flexibility — support many use cases and extension."
- Agent 3: "Optimise for the most common caller — make the default case trivial."
- Agent 4 (if applicable): "Design around ports & adapters for cross-seam dependencies."

Include both [LANGUAGE.md](LANGUAGE.md) vocabulary and CONTEXT.md vocabulary in the brief so each sub-agent names things consistently with the architecture language and the project's domain language.

Each sub-agent outputs:

1. Interface (types, methods, params — plus invariants, ordering, error modes)
2. Usage example showing how callers use it
3. What the implementation hides behind the seam
4. Dependency strategy and adapters (see [DEEPENING.md](DEEPENING.md))
5. Trade-offs — where leverage is high, where it's thin
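To ground the constraint list, here is a hedged sketch of what the first two briefs might return for a hypothetical invoice-rendering module (every name below is invented for illustration, not taken from any real candidate):

```typescript
// Agent 1 — minimise the interface: one entry point, all policy hidden.
// Lookup, tax rules and formatting live behind the seam.
function renderInvoice(orderId: string): string {
  return `invoice:${orderId}`; // stand-in for the hidden pipeline
}

// Agent 2 — maximise flexibility: options object plus an extension hook.
interface RenderOptions {
  format?: "text" | "html";
  decorate?: (body: string) => string;
}

function renderInvoiceFlexible(orderId: string, opts: RenderOptions = {}): string {
  const body = `invoice:${orderId}`;
  const formatted = opts.format === "html" ? `<pre>${body}</pre>` : body;
  return opts.decorate ? opts.decorate(formatted) : formatted;
}
```

Agent 1's caller learns one fact; Agent 2's caller gets knobs — exactly the trade-off Step 3 asks you to compare.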
### 3. Present and compare

Present designs sequentially so the user can absorb each one, then compare them in prose. Contrast by **depth** (leverage at the interface), **locality** (where change concentrates), and **seam placement**.

After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not a menu.
package/skills/improve-codebase-architecture/LANGUAGE.md
@@ -0,0 +1,53 @@
# Language

Shared vocabulary for every suggestion this skill makes. Use these terms exactly — don't substitute "component," "service," "API," or "boundary." Consistent language is the whole point.

## Terms

**Module**
Anything with an interface and an implementation. Deliberately scale-agnostic — applies equally to a function, class, package, or tier-spanning slice.
_Avoid_: unit, component, service.

**Interface**
Everything a caller must know to use the module correctly. Includes the type signature, but also invariants, ordering constraints, error modes, required configuration, and performance characteristics.
_Avoid_: API, signature (too narrow — those refer only to the type-level surface).

**Implementation**
What's inside a module — its body of code. Distinct from **Adapter**: a thing can be a small adapter with a large implementation (a Postgres repo) or a large adapter with a small implementation (an in-memory fake). Reach for "adapter" when the seam is the topic; "implementation" otherwise.

**Depth**
Leverage at the interface — the amount of behaviour a caller (or test) can exercise per unit of interface they have to learn. A module is **deep** when a large amount of behaviour sits behind a small interface. A module is **shallow** when the interface is nearly as complex as the implementation.

**Seam** _(from Michael Feathers)_
A place where you can alter behaviour without editing in that place. The *location* at which a module's interface lives. Choosing where to put the seam is its own design decision, distinct from what goes behind it.
_Avoid_: boundary (overloaded with DDD's bounded context).

**Adapter**
A concrete thing that satisfies an interface at a seam. Describes *role* (what slot it fills), not substance (what's inside).

**Leverage**
What callers get from depth. More capability per unit of interface they have to learn. One implementation pays back across N call sites and M tests.

**Locality**
What maintainers get from depth. Change, bugs, knowledge, and verification concentrate at one place rather than spreading across callers. Fix once, fixed everywhere.

## Principles

- **Depth is a property of the interface, not the implementation.** A deep module can be internally composed of small, mockable, swappable parts — they just aren't part of the interface. A module can have **internal seams** (private to its implementation, used by its own tests) as well as the **external seam** at its interface.
- **The deletion test.** Imagine deleting the module. If complexity vanishes, the module wasn't hiding anything (it was a pass-through). If complexity reappears across N callers, the module was earning its keep.
- **The interface is the test surface.** Callers and tests cross the same seam. If you want to test *past* the interface, the module is probably the wrong shape.
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a seam unless something actually varies across it.
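A minimal TypeScript sketch of these principles, with invented names: the deep version hides trimming, validation and splitting behind one entry point, while the shallow version makes every caller learn the steps and their order. Deleting `parseEmail` would make that complexity reappear at every call site — it passes the deletion test.

```typescript
// Deep: callers learn one function; the seam hides the pipeline.
function parseEmail(raw: string): { local: string; domain: string } {
  const s = raw.trim().toLowerCase();
  const at = s.indexOf("@");
  if (at <= 0 || at === s.length - 1) throw new Error(`invalid email: ${raw}`);
  return { local: s.slice(0, at), domain: s.slice(at + 1) };
}

// Shallow: the interface is nearly as complex as the implementation —
// three functions, and callers must compose them in the right order.
const normalizeEmail = (raw: string) => raw.trim().toLowerCase();
const splitEmail = (s: string) => s.split("@");
const isValidEmailParts = (parts: string[]) =>
  parts.length === 2 && parts[0].length > 0 && parts[1].length > 0;
```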
## Relationships

- A **Module** has exactly one **Interface** (the surface it presents to callers and tests).
- **Depth** is a property of a **Module**, measured against its **Interface**.
- A **Seam** is where a **Module**'s **Interface** lives.
- An **Adapter** sits at a **Seam** and satisfies the **Interface**.
- **Depth** produces **Leverage** for callers and **Locality** for maintainers.

## Rejected framings

- **Depth as ratio of implementation-lines to interface-lines** (Ousterhout): rewards padding the implementation. We use depth-as-leverage instead.
- **"Interface" as the TypeScript `interface` keyword or a class's public methods**: too narrow — interface here includes every fact a caller must know.
- **"Boundary"**: overloaded with DDD's bounded context. Say **seam** or **interface**.
package/skills/improve-codebase-architecture/SKILL.md
@@ -0,0 +1,76 @@
---
name: improve-codebase-architecture
description: Find deepening opportunities in a codebase, informed by the domain language in CONTEXT.md and the decisions in docs/adr/. Use when the user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more testable and AI-navigable.
---

# Improve Codebase Architecture

Surface architectural friction and propose **deepening opportunities** — refactors that turn shallow modules into deep ones. The aim is testability and AI-navigability.

## Glossary

Use these terms exactly in every suggestion. Consistent language is the point — don't drift into "component," "service," "API," or "boundary." Full definitions in [LANGUAGE.md](LANGUAGE.md).

- **Module** — anything with an interface and an implementation (function, class, package, slice).
- **Interface** — everything a caller must know to use the module: types, invariants, error modes, ordering, config. Not just the type signature.
- **Implementation** — the code inside.
- **Depth** — leverage at the interface: a lot of behaviour behind a small interface. **Deep** = high leverage. **Shallow** = interface nearly as complex as the implementation.
- **Seam** — where an interface lives; a place behaviour can be altered without editing in place. (Use this, not "boundary.")
- **Adapter** — a concrete thing satisfying an interface at a seam.
- **Leverage** — what callers get from depth.
- **Locality** — what maintainers get from depth: change, bugs, knowledge concentrated in one place.

Key principles (see [LANGUAGE.md](LANGUAGE.md) for the full list):

- **Deletion test**: imagine deleting the module. If complexity vanishes, it was a pass-through. If complexity reappears across N callers, it was earning its keep.
- **The interface is the test surface.**
- **One adapter = hypothetical seam. Two adapters = real seam.**

This skill is _informed_ by the project's domain model. The domain language gives names to good seams; ADRs record decisions the skill should not re-litigate.

## Process

### 1. Explore

Read the project's domain glossary and any ADRs in the area you're touching first.

Then use the Agent tool with `subagent_type=Explore` to walk the codebase. Don't follow rigid heuristics — explore organically and note where you experience friction:

- Where does understanding one concept require bouncing between many small modules?
- Where are modules **shallow** — interface nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called (no **locality**)?
- Where do tightly-coupled modules leak across their seams?
- Which parts of the codebase are untested, or hard to test through their current interface?

Apply the **deletion test** to anything you suspect is shallow: would deleting it concentrate complexity, or just move it? A "yes, concentrates" is the signal you want.

### 2. Present candidates

Present a numbered list of deepening opportunities. For each candidate:

- **Files** — which files/modules are involved
- **Problem** — why the current architecture is causing friction
- **Solution** — plain English description of what would change
- **Benefits** — explained in terms of locality and leverage, and also in how tests would improve

**Use CONTEXT.md vocabulary for the domain, and [LANGUAGE.md](LANGUAGE.md) vocabulary for the architecture.** If `CONTEXT.md` defines "Order," talk about "the Order intake module" — not "the FooBarHandler," and not "the Order service."

**ADR conflicts**: if a candidate contradicts an existing ADR, only surface it when the friction is real enough to warrant revisiting the ADR. Mark it clearly (e.g. _"contradicts ADR-0007 — but worth reopening because…"_). Don't list every theoretical refactor an ADR forbids.

Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"

### 3. Grilling loop

Once the user picks a candidate, drop into a grilling conversation. Walk the design tree with them — constraints, dependencies, the shape of the deepened module, what sits behind the seam, what tests survive.

Side effects happen inline as decisions crystallize:

- **Naming a deepened module after a concept not in `CONTEXT.md`?** Add the term to `CONTEXT.md` — same discipline as `/grill-with-docs` (see [CONTEXT-FORMAT.md](../grill-with-docs/CONTEXT-FORMAT.md)). Create the file lazily if it doesn't exist.
- **Sharpening a fuzzy term during the conversation?** Update `CONTEXT.md` right there.
- **User rejects the candidate with a load-bearing reason?** Offer an ADR, framed as: _"Want me to record this as an ADR so future architecture reviews don't re-suggest it?"_ Only offer when the reason would actually be needed by a future explorer to avoid re-suggesting the same thing — skip ephemeral reasons ("not worth it right now") and self-evident ones. See [ADR-FORMAT.md](../grill-with-docs/ADR-FORMAT.md).
- **Want to explore alternative interfaces for the deepened module?** See [INTERFACE-DESIGN.md](INTERFACE-DESIGN.md).

---

_Vendored from [mattpocock/skills](https://github.com/mattpocock/skills) @ `b843cb5` — MIT License. See [skills/UPSTREAM-CREDITS.md](../UPSTREAM-CREDITS.md) for refresh instructions._
@@ -0,0 +1,114 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: tdd
|
|
3
|
+
description: Test-driven development with red-green-refactor loop. Use when user wants to build features or fix bugs using TDD, mentions "red-green-refactor", wants integration tests, or asks for test-first development.
|
|
4
|
+
---
|
|
5
|
+
|
|
6
|
+
# Test-Driven Development
|
|
7
|
+
|
|
8
|
+
## Philosophy
|
|
9
|
+
|
|
10
|
+
**Core principle**: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.
|
|
11
|
+
|
|
12
|
+
**Good tests** are integration-style: they exercise real code paths through public APIs. They describe _what_ the system does, not _how_ it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
|
|
13
|
+
|
|
14
|
+
**Bad tests** are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
|
|
15
|
+
|
|
16
|
+
See [tests.md](tests.md) for examples and [mocking.md](mocking.md) for mocking guidelines.
|
|
17
|
+
|
|
18
|
+
## Anti-Pattern: Horizontal Slices
|
|
19
|
+
|
|
20
|
+
**DO NOT write all tests first, then all implementation.** This is "horizontal slicing" - treating RED as "write all tests" and GREEN as "write all code."
|
|
21
|
+
|
|
22
|
+
This produces **crap tests**:
|
|
23
|
+
|
|
24
|
+
- Tests written in bulk test _imagined_ behavior, not _actual_ behavior
|
|
25
|
+
- You end up testing the _shape_ of things (data structures, function signatures) rather than user-facing behavior
|
|
26
|
+
- Tests become insensitive to real changes - they pass when behavior breaks, fail when behavior is fine
|
|
27
|
+
- You outrun your headlights, committing to test structure before understanding the implementation
|
|
28
|
+
|
|
29
|
+
**Correct approach**: Vertical slices via tracer bullets. One test → one implementation → repeat. Each test responds to what you learned from the previous cycle. Because you just wrote the code, you know exactly what behavior matters and how to verify it.
|
|
30
|
+
|
|
31
|
+
```
|
|
32
|
+
WRONG (horizontal):
|
|
33
|
+
RED: test1, test2, test3, test4, test5
|
|
34
|
+
GREEN: impl1, impl2, impl3, impl4, impl5
|
|
35
|
+
|
|
36
|
+
RIGHT (vertical):
|
|
37
|
+
RED→GREEN: test1→impl1
|
|
38
|
+
RED→GREEN: test2→impl2
|
|
39
|
+
RED→GREEN: test3→impl3
|
|
40
|
+
...
|
|
41
|
+
```
|
|
42
|
+
|
|
43
|
+
## Workflow
|
|
44
|
+
|
|
45
|
+
### 1. Planning
|
|
46
|
+
|
|
47
|
+
When exploring the codebase, use the project's domain glossary so that test names and interface vocabulary match the project's language, and respect ADRs in the area you're touching.
|
|
48
|
+
|
|
49
|
+
Before writing any code:
|
|
50
|
+
|
|
51
|
+
- [ ] Confirm with user what interface changes are needed
|
|
52
|
+
- [ ] Confirm with user which behaviors to test (prioritize)
|
|
53
|
+
- [ ] Identify opportunities for [deep modules](deep-modules.md) (small interface, deep implementation)
|
|
54
|
+
- [ ] Design interfaces for [testability](interface-design.md)
|
|
55
|
+
- [ ] List the behaviors to test (not implementation steps)
|
|
56
|
+
- [ ] Get user approval on the plan
|
|
57
|
+
|
|
58
|
+
Ask: "What should the public interface look like? Which behaviors are most important to test?"
|
|
59
|
+
|
|
60
|
+
**You can't test everything.** Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
|
|
61
|
+
|
|
62
|
+
### 2. Tracer Bullet
|
|
63
|
+
|
|
64
|
+
Write ONE test that confirms ONE thing about the system:
|
|
65
|
+
|
|
66
|
+
```
|
|
67
|
+
RED: Write test for first behavior → test fails
|
|
68
|
+
GREEN: Write minimal code to pass → test passes
|
|
69
|
+
```
|
|
70
|
+
|
|
71
|
+
This is your tracer bullet - proves the path works end-to-end.
|
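A hedged TypeScript sketch of one full tracer-bullet cycle for an invented cart module — the test at the bottom was written first (RED), and the functions above it are the minimal code that made it pass (GREEN):

```typescript
// Invented cart module, shown after the first RED→GREEN cycle.
type Cart = { items: { price: number; qty: number }[] };

function createCart(): Cart {
  return { items: [] };
}

function addItem(cart: Cart, price: number, qty: number): void {
  cart.items.push({ price, qty });
}

// GREEN: the minimal code that made the first test pass.
function total(cart: Cart): number {
  return cart.items.reduce((sum, i) => sum + i.price * i.qty, 0);
}

// RED (written first): one test, one behavior, public interface only.
const cart = createCart();
addItem(cart, 5, 2);
if (total(cart) !== 10) throw new Error("tracer bullet failed");
```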
### 3. Incremental Loop

For each remaining behavior:

```
RED: Write next test → fails
GREEN: Minimal code to pass → passes
```

Rules:

- One test at a time
- Only enough code to pass current test
- Don't anticipate future tests
- Keep tests focused on observable behavior

### 4. Refactor

After all tests pass, look for [refactor candidates](refactoring.md):

- [ ] Extract duplication
- [ ] Deepen modules (move complexity behind simple interfaces)
- [ ] Apply SOLID principles where natural
- [ ] Consider what new code reveals about existing code
- [ ] Run tests after each refactor step

**Never refactor while RED.** Get to GREEN first.

## Checklist Per Cycle

```
[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added
```

---

_Vendored from [mattpocock/skills](https://github.com/mattpocock/skills) @ `b843cb5` — MIT License. See [skills/UPSTREAM-CREDITS.md](../UPSTREAM-CREDITS.md) for refresh instructions._
@@ -0,0 +1,33 @@
|
|
|
1
|
+
# Deep Modules
|
|
2
|
+
|
|
3
|
+
From "A Philosophy of Software Design":
|
|
4
|
+
|
|
5
|
+
**Deep module** = small interface + lots of implementation
|
|
6
|
+
|
|
7
|
+
```
|
|
8
|
+
┌─────────────────────┐
|
|
9
|
+
│ Small Interface │ ← Few methods, simple params
|
|
10
|
+
├─────────────────────┤
|
|
11
|
+
│ │
|
|
12
|
+
│ │
|
|
13
|
+
│ Deep Implementation│ ← Complex logic hidden
|
|
14
|
+
│ │
|
|
15
|
+
│ │
|
|
16
|
+
└─────────────────────┘
|
|
17
|
+
```
|
|
18
|
+
|
|
19
|
+
**Shallow module** = large interface + little implementation (avoid)
|
|
20
|
+
|
|
21
|
+
```
|
|
22
|
+
┌─────────────────────────────────┐
|
|
23
|
+
│ Large Interface │ ← Many methods, complex params
|
|
24
|
+
├─────────────────────────────────┤
|
|
25
|
+
│ Thin Implementation │ ← Just passes through
|
|
26
|
+
└─────────────────────────────────┘
|
|
27
|
+
```
|
|
28
|
+
|
|
29
|
+
When designing interfaces, ask:
|
|
30
|
+
|
|
31
|
+
- Can I reduce the number of methods?
|
|
32
|
+
- Can I simplify the parameters?
|
|
33
|
+
- Can I hide more complexity inside?
|
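Applying the three questions to an invented retry helper — one method, one parameter, and the retry policy hidden inside rather than exposed as knobs (a sketch, not a recommendation for any real module):

```typescript
// Deep: the interface is just retry(fn). Attempt count and
// error handling are implementation policy, hidden from callers.
function retry<T>(fn: () => T): T {
  const maxAttempts = 3; // hidden policy — not part of the interface
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return fn();
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
}
```

A shallow version would expose `attempts`, `delayMs`, and `backoff` parameters — more interface for every caller to learn, with little behaviour gained.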
package/skills/tdd/interface-design.md
@@ -0,0 +1,31 @@
# Interface Design for Testability

Good interfaces make testing natural:

1. **Accept dependencies, don't create them**

```typescript
// Testable
function processOrder(order, paymentGateway) {}

// Hard to test
function processOrder(order) {
  const gateway = new StripeGateway();
}
```

2. **Return results, don't produce side effects**

```typescript
// Testable
function calculateDiscount(cart): Discount {}

// Hard to test
function applyDiscount(cart, discount): void {
  cart.total -= discount; // mutates input; nothing to assert on directly
}
```

3. **Small surface area**
   - Fewer methods = fewer tests needed
   - Fewer params = simpler test setup
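A runnable version of pattern 2, with an invented `Cart` shape and a flat discount rule: the pure function is asserted on directly, with no setup or teardown.

```typescript
type Cart = { total: number };

// Testable: returns a result; the input is never mutated.
function calculateDiscount(cart: Cart): number {
  return cart.total >= 100 ? 10 : 0; // invented rule for illustration
}

const order: Cart = { total: 120 };
const discount = calculateDiscount(order);
// order.total is still 120 — nothing to reset between tests
```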
package/skills/tdd/mocking.md
@@ -0,0 +1,59 @@
# When to Mock

Mock at **system boundaries** only:

- External APIs (payment, email, etc.)
- Databases (sometimes - prefer test DB)
- Time/randomness
- File system (sometimes)

Don't mock:

- Your own classes/modules
- Internal collaborators
- Anything you control

## Designing for Mockability

At system boundaries, design interfaces that are easy to mock:

**1. Use dependency injection**

Pass external dependencies in rather than creating them internally:

```typescript
// Easy to mock
function processPayment(order, paymentClient) {
  return paymentClient.charge(order.total);
}

// Hard to mock
function processPayment(order) {
  const client = new StripeClient(process.env.STRIPE_KEY);
  return client.charge(order.total);
}
```

**2. Prefer SDK-style interfaces over generic fetchers**

Create specific functions for each external operation instead of one generic function with conditional logic:

```typescript
// GOOD: Each function is independently mockable
const api = {
  getUser: (id) => fetch(`/users/${id}`),
  getOrders: (userId) => fetch(`/users/${userId}/orders`),
  createOrder: (data) => fetch('/orders', { method: 'POST', body: data }),
};

// BAD: Mocking requires conditional logic inside the mock
const api = {
  fetch: (endpoint, options) => fetch(endpoint, options),
};
```

The SDK approach means:

- Each mock returns one specific shape
- No conditional logic in test setup
- Easier to see which endpoints a test exercises
- Type safety per endpoint
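For example, a test can swap an SDK-style client for an in-memory fake with no endpoint branching (all names below are invented for the sketch):

```typescript
type User = { id: string; name: string };

interface Api {
  getUser(id: string): Promise<User>;
}

// The code under test depends on the SDK-style interface, not on fetch.
async function greet(api: Api, id: string): Promise<string> {
  const user = await api.getUser(id);
  return `Hello, ${user.name}`;
}

// Each fake method returns one specific shape — no conditional logic.
const fakeApi: Api = {
  getUser: async (id) => ({ id, name: "Alice" }),
};
```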
package/skills/tdd/refactoring.md
@@ -0,0 +1,10 @@
# Refactor Candidates

After TDD cycle, look for:

- **Duplication** → Extract function/class
- **Long methods** → Break into private helpers (keep tests on public interface)
- **Shallow modules** → Combine or deepen
- **Feature envy** → Move logic to where data lives
- **Primitive obsession** → Introduce value objects
- **Existing code** the new code reveals as problematic
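One of these in miniature — **primitive obsession → introduce value objects** — with invented names: money as a bare number leaks currency and rounding rules into every caller, while a small value object owns them in one place.

```typescript
class Money {
  constructor(readonly cents: number, readonly currency: string) {}

  // The invariant (same currency) lives here, not at every call site.
  add(other: Money): Money {
    if (other.currency !== this.currency) throw new Error("currency mismatch");
    return new Money(this.cents + other.cents, this.currency);
  }

  // So does the formatting rule.
  toString(): string {
    return `${(this.cents / 100).toFixed(2)} ${this.currency}`;
  }
}
```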
package/skills/tdd/tests.md
@@ -0,0 +1,61 @@
# Good and Bad Tests

## Good Tests

**Integration-style**: Test through real interfaces, not mocks of internal parts.

```typescript
// GOOD: Tests observable behavior
test("user can checkout with valid cart", async () => {
  const cart = createCart();
  cart.add(product);
  const result = await checkout(cart, paymentMethod);
  expect(result.status).toBe("confirmed");
});
```

Characteristics:

- Tests behavior users/callers care about
- Uses public API only
- Survives internal refactors
- Describes WHAT, not HOW
- One logical assertion per test

## Bad Tests

**Implementation-detail tests**: Coupled to internal structure.

```typescript
// BAD: Tests implementation details
test("checkout calls paymentService.process", async () => {
  const processSpy = jest.spyOn(paymentService, "process");
  await checkout(cart, payment);
  expect(processSpy).toHaveBeenCalledWith(cart.total);
});
```

Red flags:

- Mocking internal collaborators
- Testing private methods
- Asserting on call counts/order
- Test breaks when refactoring without behavior change
- Test name describes HOW not WHAT
- Verifying through external means instead of interface

```typescript
// BAD: Bypasses interface to verify
test("createUser saves to database", async () => {
  await createUser({ name: "Alice" });
  const row = await db.query("SELECT * FROM users WHERE name = ?", ["Alice"]);
  expect(row).toBeDefined();
});

// GOOD: Verifies through interface
test("createUser makes user retrievable", async () => {
  const user = await createUser({ name: "Alice" });
  const retrieved = await getUser(user.id);
  expect(retrieved.name).toBe("Alice");
});
```