@kodrunhq/opencode-autopilot 1.4.0 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/assets/commands/brainstorm.md +7 -0
- package/assets/commands/stocktake.md +7 -0
- package/assets/commands/tdd.md +7 -0
- package/assets/commands/update-docs.md +7 -0
- package/assets/commands/write-plan.md +7 -0
- package/assets/skills/brainstorming/SKILL.md +295 -0
- package/assets/skills/code-review/SKILL.md +241 -0
- package/assets/skills/e2e-testing/SKILL.md +266 -0
- package/assets/skills/git-worktrees/SKILL.md +296 -0
- package/assets/skills/go-patterns/SKILL.md +240 -0
- package/assets/skills/plan-executing/SKILL.md +258 -0
- package/assets/skills/plan-writing/SKILL.md +278 -0
- package/assets/skills/python-patterns/SKILL.md +255 -0
- package/assets/skills/rust-patterns/SKILL.md +293 -0
- package/assets/skills/strategic-compaction/SKILL.md +217 -0
- package/assets/skills/systematic-debugging/SKILL.md +299 -0
- package/assets/skills/tdd-workflow/SKILL.md +311 -0
- package/assets/skills/typescript-patterns/SKILL.md +278 -0
- package/assets/skills/verification/SKILL.md +240 -0
- package/package.json +1 -1
- package/src/index.ts +4 -0
- package/src/orchestrator/skill-injection.ts +38 -0
- package/src/review/sanitize.ts +1 -1
- package/src/skills/adaptive-injector.ts +122 -0
- package/src/skills/dependency-resolver.ts +88 -0
- package/src/skills/linter.ts +113 -0
- package/src/skills/loader.ts +88 -0
- package/src/templates/skill-template.ts +4 -0
- package/src/tools/create-skill.ts +12 -0
- package/src/tools/stocktake.ts +170 -0
- package/src/tools/update-docs.ts +116 -0
@@ -0,0 +1,278 @@
---
name: plan-writing
description: Methodology for decomposing features into bite-sized implementation tasks with file paths, dependencies, and verification criteria
stacks: []
requires: []
---

# Plan Writing

A systematic methodology for breaking down features, refactors, and bug fixes into bite-sized implementation tasks. Each task has exact file paths, clear actions, verification commands, and dependency ordering. Plans are the bridge between "what we want" and "what we build."

## When to Use

- **New feature implementation** — any feature touching more than 3 files needs a plan
- **Refactoring existing code** — without a plan, refactors sprawl and break things
- **Multi-step bug fixes** — when the fix spans multiple files or modules
- **Any work that will take more than 60 minutes** — break it into trackable tasks
- **Work that others need to review** — a plan makes the approach reviewable before code is written
- **Work you might not finish in one session** — a plan lets you (or someone else) resume cleanly

A plan is not overhead — it is the work. Writing the plan forces you to think through the approach, identify dependencies, and surface problems before you write any code. The time spent planning is recovered 3x during implementation.

## The Plan Writing Process

### Step 1: Define the Goal

State what must be TRUE when this work is complete. Goals are outcome-shaped, not task-shaped.

**Good goals:**
- "Users can log in with email and password, receiving a JWT on success and a clear error on failure"
- "The review engine filters agents by detected project stack, loading only relevant agents"
- "All API endpoints validate input with Zod schemas and return structured error responses"

**Bad goals:**
- "Build the auth system" (too vague — what does "build" mean?)
- "Refactor the code" (refactor what, to achieve what outcome?)
- "Fix the bug" (which bug? what is the expected behavior?)

**Process:**
1. Write the goal as a single sentence starting with a noun ("Users can...", "The system...", "All endpoints...")
2. Include the observable behavior (what a user or developer would see)
3. Include the key constraint or quality attribute (performance, security, correctness)
4. If you cannot state the goal in one sentence, you have multiple goals — write multiple plans

### Step 2: List Required Artifacts

For each goal, list the concrete files that must exist or be modified. Use exact file paths.

**Process:**
1. List every source file that must be created or modified
2. List every test file that must be created or modified
3. List every configuration file affected (schemas, migrations, config)
4. List every type/interface file needed
5. Use exact paths relative to the project root: `src/auth/login.ts`, not "the login module"

**Example:**
```
Goal: Users can log in with email and password

Artifacts:
- src/auth/login.ts (new — login endpoint handler)
- src/auth/login.test.ts (new — tests for login)
- src/auth/token.ts (new — JWT creation and verification)
- src/auth/token.test.ts (new — tests for token utilities)
- src/types/auth.ts (new — LoginRequest, LoginResponse types)
- src/middleware/auth.ts (modify — add JWT verification middleware)
- src/index.ts (modify — register login route)
```

**Why file paths matter:** Vague artifact descriptions ("create the auth module") leave too much ambiguity. Exact file paths make the scope visible, reviewable, and trackable. If you cannot name the file, you do not understand the implementation well enough to plan it.

### Step 3: Map Dependencies

For each artifact, identify what must exist before it can be built.

**Process:**
1. For each file, ask: "What does this file import or depend on?"
2. Draw arrows from dependencies to dependents
3. Files with no dependencies are starting points
4. Files that everything depends on are critical path items

**Example:**
```
src/types/auth.ts → depends on: nothing (pure types)
src/auth/token.ts → depends on: src/types/auth.ts
src/auth/login.ts → depends on: src/types/auth.ts, src/auth/token.ts
src/middleware/auth.ts → depends on: src/auth/token.ts
src/auth/token.test.ts → depends on: src/auth/token.ts
src/auth/login.test.ts → depends on: src/auth/login.ts
src/index.ts → depends on: src/auth/login.ts, src/middleware/auth.ts
```

**Dependency rules:**
- Types and interfaces have no dependencies (they go first)
- Utility functions depend on types but not on business logic
- Business logic depends on types and utilities
- Tests depend on the code they test
- Wiring/registration depends on everything it wires together

### Step 4: Group into Tasks

Each task is a unit of work that can be completed, verified, and committed independently.

**Task sizing rules:**
- **1-3 files per task** — enough to make progress, small enough to verify
- **15-60 minutes of work** — less than 15 means combine with another task, more than 60 means split
- **Single concern** — one task should not mix unrelated changes
- **Independently verifiable** — each task has a way to prove it works

**Each task must have:**
1. **Name** — action-oriented verb phrase ("Create auth types and token utilities")
2. **Files** — exact file paths created or modified
3. **Action** — specific instructions for what to implement
4. **Verification** — command or check that proves the task is done
5. **Done criteria** — measurable statement of completeness

**Example task:**
```
Task 1: Create auth types and token utilities
Files: src/types/auth.ts, src/auth/token.ts, src/auth/token.test.ts
Action: Define LoginRequest (email: string, password: string) and
  LoginResponse (token: string, expiresAt: number) types.
  Implement createToken(userId) and verifyToken(token) using jose.
  Write tests for both functions including expired token and invalid token cases.
Verification: bun test src/auth/token.test.ts
Done: Token creation and verification work with test coverage for happy path and error cases.
```

### Step 5: Assign Waves

Group tasks into dependency waves for execution ordering.

**Process:**
1. **Wave 1** — tasks with no dependencies on other tasks (can run in parallel)
2. **Wave 2** — tasks that depend only on Wave 1 tasks
3. **Wave 3** — tasks that depend on Wave 1 or Wave 2 tasks
4. Continue until all tasks are assigned

**Principles:**
- More waves of smaller tasks is better than fewer waves of larger tasks
- Tasks in the same wave can theoretically run in parallel
- Each wave should leave the codebase in a working state
- The final wave typically handles wiring, integration, and end-to-end verification

**Example:**
```
Wave 1 (no dependencies):
  Task 1: Create auth types and token utilities
  Task 2: Create password hashing utilities

Wave 2 (depends on Wave 1):
  Task 3: Create login endpoint handler
  Task 4: Create auth middleware

Wave 3 (depends on Wave 2):
  Task 5: Wire login route and middleware into app
  Task 6: Add end-to-end login test
```
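Wave assignment is just a topological leveling of the task dependency graph. A minimal sketch in Python (the `assign_waves` helper and the task names are illustrative, not part of the toolchain):

```python
def assign_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into waves: each wave depends only on earlier waves."""
    waves: list[list[str]] = []
    done: set[str] = set()
    remaining = dict(deps)
    while remaining:
        # Tasks whose dependencies are all satisfied form the next wave.
        ready = sorted(t for t, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        waves.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return waves

deps = {
    "auth types + token utils": set(),
    "password hashing utils": set(),
    "login endpoint": {"auth types + token utils", "password hashing utils"},
    "auth middleware": {"auth types + token utils"},
    "wire routes": {"login endpoint", "auth middleware"},
}
```

Note that an empty `ready` set with tasks still remaining is exactly the Circular Dependencies failure mode described later.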

### Step 6: Add Verification

Every task needs a verification command. The plan as a whole needs an end-to-end verification step.

**Per-task verification:**
- A test command: `bun test src/auth/token.test.ts`
- A build check: `bunx tsc --noEmit`
- A lint check: `bun run lint`
- A runtime check: "Start the server and POST to /login with valid credentials"

**Plan-level verification:**
- Run the full test suite: `bun test`
- Run the linter: `bun run lint`
- Verify the goal: "A user can log in with email/password and receive a JWT"
- Check for regressions: "All previously passing tests still pass"

## Task Sizing Guide

### Too Small (Less Than 15 Minutes)

**Symptoms:** "Create the User type" (one file, one type, 5 minutes)

**Fix:** Combine with a related task. Types + the first function that uses them is a natural grouping.

### Right Size (15-60 Minutes)

**Symptoms:** Touches 1-3 files. Single concern. Clear done criteria. You can explain the task in one sentence.

**Examples:**
- "Create auth types and token utilities with tests" (3 files, 30 min)
- "Add input validation to all API endpoints" (3-4 files, 45 min)
- "Implement the review agent selection logic with stack gating" (2 files, 60 min)

### Too Large (More Than 60 Minutes)

**Symptoms:** Touches 5+ files. Multiple concerns mixed together. Done criteria is vague. You need sub-steps to explain it.

**Fix:** Split by one of these dimensions:
- **By file:** Types in one task, implementation in another, tests in a third
- **By concern:** Validation in one task, business logic in another
- **By layer:** Data access first, business logic second, wiring third
- **By feature slice:** User creation first, user login second (vertical slices over horizontal layers)

## Anti-Pattern Catalog

### Anti-Pattern: Vague Tasks

**What goes wrong:** "Set up the database" — what tables? What columns? What constraints? What migrations? The implementer has to make all the decisions that should have been made during planning.

**Instead:** "Add User and Project models to schema.prisma with UUID primary keys, email unique constraint on User, and a one-to-many relation from User to Project."

### Anti-Pattern: No File Paths

**What goes wrong:** "Create the auth module" — which files? What directory structure? What naming convention? The implementer makes different choices than the planner intended.

**Instead:** "Create `src/auth/login.ts` with a POST handler accepting `{ email: string, password: string }` and returning `{ token: string }`."

### Anti-Pattern: Horizontal Layers

**What goes wrong:** "Create all models, then all APIs, then all UIs." This means nothing works end-to-end until the last layer is done. Integration issues are discovered late.

**Instead:** Vertical slices — "User feature (model + API + test), then Product feature (model + API + test)." Each slice delivers a working feature.

### Anti-Pattern: Missing Verification

**What goes wrong:** Tasks without a way to prove they are done. The implementer finishes the code and says "looks good" — but nothing was verified.

**Instead:** Every task has a verification command. If you cannot write a verification step, the task is not well-defined enough.

### Anti-Pattern: No Dependencies Mapped

**What goes wrong:** The implementer starts Task 3 and discovers it depends on something from Task 5. They either hack around it or rearrange on the fly, losing time and introducing bugs.

**Instead:** Map dependencies explicitly in Step 3. If Task 3 depends on Task 5, reorder them.

### Anti-Pattern: Plan as Documentation

**What goes wrong:** The plan is written after the code, as documentation of what was built. This defeats the purpose — the plan should guide the implementation, not describe it.

**Instead:** Write the plan before writing any code. Review the plan (are the tasks right-sized? dependencies correct? verification clear?) before implementing.

## Integration with Our Tools

- **`oc_orchestrate`** — Execute the plan automatically. The orchestrator reads the plan and dispatches tasks to implementation agents
- **`oc_plan`** — Track task completion status as implementation progresses
- **plan-executing skill** — Use the companion skill for the execution methodology (how to work through the plan task by task)
- **`oc_review`** — After writing the plan, review it for completeness before implementation begins

## Failure Modes

### Plan Too Large

**Symptom:** More than 5-6 tasks in a single plan, or estimated total time exceeds 4 hours.

**Fix:** Split into multiple plans of 2-4 tasks each. Each plan should deliver a working increment. Plan A provides the foundation, Plan B builds on it.

### Circular Dependencies

**Symptom:** Task A depends on Task B, which depends on Task A. The dependency graph has a cycle.

**Fix:** The cycle means the tasks are not properly separated. Extract the shared dependency into its own task (usually types or interfaces). Both Task A and Task B depend on the new task instead of each other.

### Tasks Keep Growing

**Symptom:** "This task was supposed to be 30 minutes but it has been 2 hours." Implementation reveals more work than planned.

**Fix:** You are combining concerns. Stop, re-plan the remaining work. Split the current task into smaller tasks. The sunk time is gone — do not let it cascade into more wasted time.

### Verification Cannot Be Automated

**Symptom:** The verification step is "manually check that it works" — no test command, no build check, nothing automated.

**Fix:** If you truly cannot automate verification, write a manual verification checklist with specific steps ("Open the browser, navigate to /login, enter email and password, verify token appears in response"). But first, ask: can this be a test? Usually it can.

### Scope Creep During Planning

**Symptom:** The plan keeps growing as you discover more work. What started as 3 tasks is now 12.

**Fix:** Separate "must have for the goal" from "nice to have." The plan delivers the goal — everything else goes into a follow-up plan. A plan that does one thing well is better than a plan that does five things partially.
@@ -0,0 +1,255 @@
---
name: python-patterns
description: Pythonic patterns covering type hints, error handling, async, testing with pytest, and project organization
stacks:
- python
requires: []
---

# Python Patterns

Pythonic patterns for writing clean, typed, and testable Python code. Covers type hints, error handling, async programming, testing with pytest, project organization, and common anti-patterns. Apply these when writing, reviewing, or refactoring Python code.

## 1. Type Hints

**DO:** Use type hints on all function signatures and module-level variables for clarity and static analysis.

- Annotate all function parameters and return types:
  ```python
  def fetch_user(user_id: str) -> User | None:
      ...
  ```
- Use `from __future__ import annotations` at the top of every module for forward references and PEP 604 union syntax in older Python versions
- Use `TypedDict` for structured dictionaries that cross boundaries:
  ```python
  class UserResponse(TypedDict):
      id: str
      name: str
      email: str
      is_active: bool
  ```
- Use `@dataclass` for value objects with automatic `__init__`, `__eq__`, and `__repr__`:
  ```python
  @dataclass(frozen=True)
  class Coordinate:
      lat: float
      lon: float
  ```
- Use `Protocol` for structural subtyping (duck typing with type safety):
  ```python
  class Readable(Protocol):
      def read(self, n: int = -1) -> bytes: ...

  def process(source: Readable) -> bytes:
      return source.read()
  # Any object with a read() method satisfies Readable
  ```
- Use `Literal` for constrained string/int values:
  ```python
  def set_log_level(level: Literal["debug", "info", "warning", "error"]) -> None:
      ...
  ```

**DON'T:**

- Use `Any` without justification -- prefer `object` for truly unknown types or narrow with `isinstance`
- Use `dict` when `TypedDict` or a dataclass better describes the structure
- Use `Optional[X]` -- prefer `X | None` (clearer, shorter)
- Omit return type annotations -- even `-> None` is valuable documentation
- Use `Union[X, Y]` -- prefer `X | Y` (PEP 604 syntax)
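The first DON'T in practice: type the unknown value as `object` and narrow it with `isinstance` instead of reaching for `Any`. A minimal sketch (the `parse_port` helper is hypothetical):

```python
def parse_port(raw: object) -> int:
    """Narrow an unknown config value instead of typing it Any."""
    if isinstance(raw, int):
        return raw
    if isinstance(raw, str) and raw.isdigit():
        return int(raw)
    raise TypeError(f"port must be an int or numeric string, got {type(raw).__name__}")
```

With `object`, the type checker forces the `isinstance` checks; with `Any`, every misuse would silently pass.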

## 2. Error Handling

**DO:** Use specific exceptions and context managers for clean resource management.

- Raise specific exceptions with descriptive messages:
  ```python
  raise ValueError(f"age must be positive, got {age}")
  ```
- Create a domain exception hierarchy:
  ```python
  class AppError(Exception):
      """Base for all application errors."""

  class AuthError(AppError):
      """Authentication or authorization failure."""

  class NotFoundError(AppError):
      """Requested resource does not exist."""
  ```
- Use context managers for resource cleanup:
  ```python
  with open(path, "r") as f:
      data = json.load(f)
  ```
- Chain exceptions to preserve the original cause:
  ```python
  try:
      result = parse_config(raw)
  except json.JSONDecodeError as e:
      raise ConfigError(f"invalid config at {path}") from e
  ```
- Use `logging.exception()` in catch blocks to capture the full traceback:
  ```python
  except DatabaseError:
      logger.exception("Failed to connect to database")
      raise
  ```

**DON'T:**

- Use bare `except:` -- it catches `KeyboardInterrupt` and `SystemExit`. Use `except Exception:` at minimum
- Catch and silently swallow: `except Exception: pass` is almost always a bug
- Use exceptions for control flow -- check conditions with `if` before potentially failing operations
- Raise `Exception("something")` -- use specific types
- Log and re-raise without `from` -- you lose the traceback chain

## 3. Async Patterns

**DO:** Use `async`/`await` for I/O-bound operations and structured concurrency for parallelism.

- Use `async`/`await` for network, file, and database operations:
  ```python
  async def fetch_user(session: aiohttp.ClientSession, user_id: str) -> User:
      async with session.get(f"/users/{user_id}") as resp:
          data = await resp.json()
          return User(**data)
  ```
- Use `asyncio.gather()` for concurrent independent tasks:
  ```python
  users, orders = await asyncio.gather(
      fetch_users(session),
      fetch_orders(session),
  )
  ```
- Use `asyncio.TaskGroup` (Python 3.11+) for structured concurrency with automatic cancellation:
  ```python
  async with asyncio.TaskGroup() as tg:
      task1 = tg.create_task(fetch_users())
      task2 = tg.create_task(fetch_orders())
  # Both tasks complete or all are cancelled on first failure
  ```
- Use `async with` for async context managers (database connections, HTTP sessions)
- Use `asyncio.Semaphore` for rate limiting concurrent operations:
  ```python
  sem = asyncio.Semaphore(10)

  async def limited_fetch(url: str) -> bytes:
      async with sem:
          return await fetch(url)
  ```

**DON'T:**

- Mix sync and async in the same module -- pick one paradigm
- Use `asyncio.run()` inside an already-running event loop
- Block the event loop with CPU-bound work -- use `asyncio.to_thread()` or `ProcessPoolExecutor`
- Use `time.sleep()` in async code -- use `await asyncio.sleep()`
- Create tasks without awaiting them -- a task with no live reference can be garbage-collected mid-run and its exception is silently lost; keep a reference and await it
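The CPU-bound DON'T above can be sketched with `asyncio.to_thread` (the `checksum` helper is illustrative):

```python
import asyncio
import hashlib

def checksum(data: bytes) -> str:
    # CPU-bound work that would stall the event loop if run inline.
    return hashlib.sha256(data).hexdigest()

async def main() -> str:
    # Off-load to a worker thread so other coroutines keep running.
    return await asyncio.to_thread(checksum, b"payload")

digest = asyncio.run(main())
```

For work heavy enough to saturate a core, a `ProcessPoolExecutor` via `loop.run_in_executor` sidesteps the GIL entirely.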

## 4. Testing with pytest

**DO:** Write focused tests using pytest fixtures, parametrize, and clear assertion patterns.

- Use `@pytest.fixture` for test setup and dependency injection:
  ```python
  @pytest.fixture
  def db_connection():
      conn = create_test_db()
      yield conn
      conn.close()

  def test_insert_user(db_connection):
      db_connection.execute("INSERT INTO users ...")
      assert db_connection.query("SELECT count(*) FROM users") == 1
  ```
- Parametrize tests for multiple inputs:
  ```python
  @pytest.mark.parametrize("input,expected", [
      ("hello", "HELLO"),
      ("", ""),
      ("Hello World", "HELLO WORLD"),
  ])
  def test_uppercase(input: str, expected: str):
      assert uppercase(input) == expected
  ```
- Use `conftest.py` for shared fixtures across test modules
- Use `pytest.raises` for exception testing:
  ```python
  with pytest.raises(ValueError, match="must be positive"):
      validate_age(-1)
  ```
- Use `tmp_path` fixture for temporary file tests:
  ```python
  def test_write_config(tmp_path: Path):
      config_file = tmp_path / "config.json"
      write_config(config_file, {"key": "value"})
      assert config_file.read_text() == '{"key": "value"}'
  ```
- Use `monkeypatch` for mocking environment variables and module attributes
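The `monkeypatch` bullet can be sketched as follows (the `database_url` helper is hypothetical; `monkeypatch` restores the environment after each test):

```python
import os

def database_url() -> str:
    # Reads configuration from the environment at call time, with a fallback.
    return os.environ.get("DATABASE_URL", "sqlite:///:memory:")

def test_database_url_from_env(monkeypatch):
    monkeypatch.setenv("DATABASE_URL", "postgres://test")
    assert database_url() == "postgres://test"

def test_database_url_default(monkeypatch):
    monkeypatch.delenv("DATABASE_URL", raising=False)
    assert database_url() == "sqlite:///:memory:"
```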

**DON'T:**

- Use `unittest.TestCase` in new code -- pytest's function-based tests are simpler and more powerful
- Create test fixtures with complex inheritance hierarchies
- Test implementation details -- test behavior through the public API
- Use `mock.patch` on internal functions -- inject dependencies instead
- Write tests that depend on execution order

## 5. Project Organization

**DO:** Use modern Python project structure with `pyproject.toml` and the `src/` layout.

- Standard project structure:
  ```
  project/
    pyproject.toml
    src/
      mypackage/
        __init__.py
        models/
        services/
        api/
    tests/
      conftest.py
      test_models.py
      test_services.py
  ```
- Use `pyproject.toml` as the single source for project metadata, dependencies, and tool configuration
- Use `__init__.py` for public API exports only -- keep them minimal:
  ```python
  # src/mypackage/__init__.py
  from mypackage.client import Client
  from mypackage.errors import AppError

  __all__ = ["Client", "AppError"]
  ```
- Separate concerns: `models/` for data structures, `services/` for business logic, `api/` for HTTP layer
- Use `pydantic` for data validation at system boundaries (API input, config files, external data)
- Pin dependencies with lock files (`uv.lock`, `poetry.lock`, `requirements.txt` with hashes)

**DON'T:**

- Use `setup.py` for new projects -- `pyproject.toml` is the standard
- Put everything in `__init__.py` -- it becomes a maintenance burden
- Import from `tests/` in production code
- Use relative imports across package boundaries -- absolute imports are clearer

## 6. Anti-Pattern Catalog

**Anti-Pattern: Mutable Default Arguments**
`def f(items=[])` shares the same list across all calls. The default is evaluated once at function definition, not per call. Instead: `def f(items: list[str] | None = None): items = items if items is not None else []`

**Anti-Pattern: Bare Except**
`except:` catches everything including `KeyboardInterrupt`, `SystemExit`, and `GeneratorExit`. This prevents Ctrl+C from working and hides real bugs. Instead: `except Exception:` for broad catching, or specific exception types.

**Anti-Pattern: Star Imports**
`from module import *` pollutes the namespace, makes it impossible to trace where names come from, and breaks static analysis. Instead: import explicitly: `from module import ClassName, function_name`

**Anti-Pattern: God Class**
A single class with 20+ methods handling validation, database access, formatting, and business logic. Instead: split into focused classes with single responsibilities. Use composition over inheritance.

**Anti-Pattern: String Formatting for SQL**
`f"SELECT * FROM users WHERE id = '{user_id}'"` is a SQL injection vulnerability. Instead: use parameterized queries: `cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))`
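A minimal `sqlite3` sketch showing that a `?` placeholder treats hostile input as a plain value rather than as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("u1", "Ada"))

user_id = "u1' OR '1'='1"  # hostile input is bound as a literal string
rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
# rows is [] -- the injection attempt matches nothing
```

The same query built with an f-string would return every row in the table.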

**Anti-Pattern: Nested Try/Except**
Three levels of `try/except` blocks handling different errors. Instead: use early returns, separate the operations into functions, or use a result type pattern.
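One way to flatten the nesting, sketched with a hypothetical config loader that combines early returns with the `from e` chaining shown in section 2:

```python
import json
from pathlib import Path

class ConfigError(Exception):
    """Raised when configuration cannot be loaded."""

def load_config(path: Path) -> dict:
    # One flat handler per failure mode instead of nested try blocks.
    if not path.exists():
        raise ConfigError(f"config file not found: {path}")
    try:
        raw = path.read_text()
    except OSError as e:
        raise ConfigError(f"cannot read {path}") from e
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        raise ConfigError(f"invalid JSON in {path}") from e
```

Callers handle a single `ConfigError`, and the original cause is preserved on `__cause__` for debugging.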
|