@kodrunhq/opencode-autopilot 1.4.0 → 1.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/assets/commands/brainstorm.md +7 -0
- package/assets/commands/stocktake.md +7 -0
- package/assets/commands/tdd.md +7 -0
- package/assets/commands/update-docs.md +7 -0
- package/assets/commands/write-plan.md +7 -0
- package/assets/skills/brainstorming/SKILL.md +295 -0
- package/assets/skills/code-review/SKILL.md +241 -0
- package/assets/skills/e2e-testing/SKILL.md +266 -0
- package/assets/skills/git-worktrees/SKILL.md +296 -0
- package/assets/skills/go-patterns/SKILL.md +240 -0
- package/assets/skills/plan-executing/SKILL.md +258 -0
- package/assets/skills/plan-writing/SKILL.md +278 -0
- package/assets/skills/python-patterns/SKILL.md +255 -0
- package/assets/skills/rust-patterns/SKILL.md +293 -0
- package/assets/skills/strategic-compaction/SKILL.md +217 -0
- package/assets/skills/systematic-debugging/SKILL.md +299 -0
- package/assets/skills/tdd-workflow/SKILL.md +311 -0
- package/assets/skills/typescript-patterns/SKILL.md +278 -0
- package/assets/skills/verification/SKILL.md +240 -0
- package/package.json +1 -1
- package/src/index.ts +4 -0
- package/src/orchestrator/skill-injection.ts +38 -0
- package/src/review/sanitize.ts +1 -1
- package/src/skills/adaptive-injector.ts +122 -0
- package/src/skills/dependency-resolver.ts +88 -0
- package/src/skills/linter.ts +113 -0
- package/src/skills/loader.ts +88 -0
- package/src/templates/skill-template.ts +4 -0
- package/src/tools/create-skill.ts +12 -0
- package/src/tools/stocktake.ts +170 -0
- package/src/tools/update-docs.ts +116 -0

@@ -0,0 +1,240 @@ package/assets/skills/go-patterns/SKILL.md

---
name: go-patterns
description: Idiomatic Go patterns covering error handling, concurrency, interfaces, and testing conventions
stacks:
  - go
requires: []
---

# Go Patterns

Idiomatic Go patterns for writing clean, concurrent, and testable code. Covers error handling, concurrency primitives, interface design, testing, package organization, and common anti-patterns. Apply these when writing, reviewing, or refactoring Go code.

## 1. Error Handling

**DO:** Treat errors as values. Check every error and provide context for debugging.

- Always check errors immediately after the call:

  ```go
  f, err := os.Open(path)
  if err != nil {
      return fmt.Errorf("open config %s: %w", path, err)
  }
  defer f.Close()
  ```

- Wrap errors with `%w` for unwrapping with `errors.Is()` and `errors.As()`:

  ```go
  if err := db.Connect(); err != nil {
      return fmt.Errorf("database connection: %w", err)
  }
  ```

- Use sentinel errors for expected conditions:

  ```go
  var ErrNotFound = errors.New("not found")
  var ErrConflict = errors.New("conflict")

  // Caller checks:
  if errors.Is(err, ErrNotFound) { ... }
  ```

- Create custom error types for rich context:

  ```go
  type ValidationError struct {
      Field   string
      Message string
  }

  func (e *ValidationError) Error() string {
      return fmt.Sprintf("validation: %s: %s", e.Field, e.Message)
  }
  ```

- Add context that helps debugging -- include the operation, the input, and the wrapped cause

**DON'T:**

- Ignore errors with `_` unless there is a comment explaining why: `_ = f.Close() // best-effort cleanup`
- Use `panic` for recoverable errors -- reserve `panic` for truly unrecoverable bugs (nil dereference, impossible state)
- Return generic `errors.New("something failed")` without context
- Log AND return the same error -- choose one to avoid duplicate noise
- Use `fmt.Errorf` without `%w` when the caller might need to inspect the cause

## 2. Concurrency Patterns

**DO:** Use goroutines and channels for communication, mutexes for shared state protection.

- Always pass `context.Context` as the first parameter for cancellation and timeouts:

  ```go
  func fetchUser(ctx context.Context, id string) (*User, error) {
      select {
      case <-ctx.Done():
          return nil, ctx.Err()
      default:
      }
      // ... fetch logic
  }
  ```

- Use `errgroup.Group` for parallel tasks with error collection:

  ```go
  g, ctx := errgroup.WithContext(ctx)
  for _, url := range urls {
      url := url // not needed on Go 1.22+; on older versions this copy avoids sharing the loop variable
      g.Go(func() error {
          return fetch(ctx, url)
      })
  }
  if err := g.Wait(); err != nil {
      return fmt.Errorf("parallel fetch: %w", err)
  }
  ```

- Every goroutine must have a clear shutdown path:

  ```go
  func worker(ctx context.Context, jobs <-chan Job) {
      for {
          select {
          case <-ctx.Done():
              return
          case job, ok := <-jobs:
              if !ok {
                  return
              }
              process(job)
          }
      }
  }
  ```

- Use `sync.WaitGroup` for fan-out when you don't need error collection
- Use `sync.Once` for lazy initialization of shared resources
- Use `sync.Mutex` only when channels are impractical (protecting a shared map, counter, or cache)
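The `sync.Once` bullet can be made concrete. This `ConfigLoader` is illustrative only; it keeps the `Once` inside a struct rather than behind a package-level variable, which also sidesteps the global-state anti-pattern the catalog warns about:

```go
package main

import (
	"fmt"
	"sync"
)

type Config struct{ Addr string }

// ConfigLoader builds its Config exactly once, even under concurrent access;
// later callers block until the first Do finishes, then reuse the result.
type ConfigLoader struct {
	once sync.Once
	cfg  *Config
}

func (l *ConfigLoader) Get() *Config {
	l.once.Do(func() {
		l.cfg = &Config{Addr: "localhost:8080"} // stand-in for real config parsing
	})
	return l.cfg
}

func main() {
	var (
		loader ConfigLoader
		wg     sync.WaitGroup
	)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = loader.Get() // safe from many goroutines; only one runs the init
		}()
	}
	wg.Wait()
	fmt.Println(loader.Get().Addr) // prints "localhost:8080"
}
```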

**DON'T:**

- Start a goroutine without a way to stop it -- every goroutine needs a cancellation signal
- Use `time.Sleep()` for synchronization -- use channels or `sync.WaitGroup`
- Communicate by sharing memory -- share memory by communicating (Go proverb)
- Use unbuffered channels when the producer and consumer run at different speeds
- Forget `defer mu.Unlock()` after `mu.Lock()` -- always pair them on adjacent lines

## 3. Interface Design

**DO:** Keep interfaces small, define them at the consumer, and accept them as parameters.

- Keep interfaces to 1-3 methods. The smaller the interface, the more types satisfy it:

  ```go
  type Reader interface {
      Read(p []byte) (n int, err error)
  }
  ```

- Define interfaces where they are used, not where they are implemented:

  ```go
  // In the service package (consumer), not the repository package (provider)
  type UserStore interface {
      FindByID(ctx context.Context, id string) (*User, error)
  }
  ```

- Accept interfaces, return structs:

  ```go
  func NewService(store UserStore) *Service { ... }        // accepts an interface
  func NewPostgresStore(db *sql.DB) *PostgresStore { ... } // returns a concrete type
  ```

- Use the standard library interfaces (`io.Reader`, `io.Writer`, `fmt.Stringer`) when possible
- Satisfaction is implicit -- there is no `implements` keyword. If the methods match, the type satisfies the interface

**DON'T:**

- Create interfaces with 5+ methods -- break them into smaller, composable interfaces
- Define interfaces before you need them -- extract when you have 2+ implementations or need testing
- Put all interfaces in a single `interfaces.go` file -- define them next to their consumers
- Use the empty interface (`interface{}` or `any`) when a more specific type is possible

## 4. Testing Patterns

**DO:** Write table-driven tests, use `t.Helper()`, and keep tests close to the code they test.

- Table-driven tests for multiple inputs:

  ```go
  func TestValidate(t *testing.T) {
      tests := []struct {
          name  string
          input string
          want  error
      }{
          {"valid email", "a@b.com", nil},
          {"missing @", "ab.com", ErrInvalidEmail},
          {"empty", "", ErrRequired},
      }
      for _, tt := range tests {
          t.Run(tt.name, func(t *testing.T) {
              err := Validate(tt.input)
              if !errors.Is(err, tt.want) {
                  t.Errorf("Validate(%q) = %v, want %v", tt.input, err, tt.want)
              }
          })
      }
  }
  ```

- Use `t.Helper()` in test helper functions for better error locations:

  ```go
  func assertNoError(t *testing.T, err error) {
      t.Helper()
      if err != nil {
          t.Fatalf("unexpected error: %v", err)
      }
  }
  ```

- Use `t.Parallel()` for independent tests to run faster
- Use a `testdata/` directory for test fixtures (Go tooling ignores this directory)
- Use `package foo_test` for black-box testing of the public API
- Use `package foo` for white-box testing of internal behavior

**DON'T:**

- Use `assert` libraries that hide what's being tested -- prefer standard `t.Errorf` with context
- Test private functions directly -- test through the public API
- Use global test state -- each test case should be independent
- Skip `t.Run` -- subtest names appear in failure output and make debugging easier

## 5. Package Organization

**DO:** Keep packages flat, focused, and named by what they provide.

- Flat structure -- avoid deep nesting:

  ```
  // DO
  auth/
  user/
  order/

  // DON'T
  pkg/services/auth/handlers/middleware/
  ```

- Use `internal/` for implementation details that other packages should not import
- Use `cmd/` for entry points -- one `main.go` per binary:

  ```
  cmd/server/main.go
  cmd/cli/main.go
  ```

- Name packages by what they provide, not what they contain: `auth` not `authutils`, `http` not `httphandlers`
- One package per concern -- don't create `utils` or `helpers` grab-bag packages
- Keep `main.go` thin -- parse flags, wire dependencies, call `Run()`

**DON'T:**

- Create a `models` or `types` package -- put types with the code that uses them
- Use package names that stutter: `user.UserService` -- prefer `user.Service`
- Import from `internal/` across module boundaries -- it won't compile
- Put everything in one package to avoid import cycles -- fix the design instead
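The "keep `main.go` thin" bullet can be sketched as follows; the flag name and the `run` signature are illustrative conventions, not something this package enforces:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// run holds all logic so it can return errors (and be tested); main stays thin.
func run(args []string) error {
	fs := flag.NewFlagSet("server", flag.ContinueOnError)
	addr := fs.String("addr", ":8080", "listen address")
	if err := fs.Parse(args); err != nil {
		return err
	}
	fmt.Println("would listen on", *addr) // stand-in for wiring dependencies and serving
	return nil
}

func main() {
	if err := run(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Pushing everything into `run` means tests can call it with a fake argument list and assert on the returned error, instead of spawning the binary.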

## 6. Anti-Pattern Catalog

**Anti-Pattern: Naked Returns**
Using `return` without values in functions with named return parameters. Named returns are fine for documentation, but naked returns obscure what's being returned. Be explicit: `return user, nil`.

**Anti-Pattern: Interface Pollution**
Defining an interface before it has two implementations or a testing need. Interfaces are for decoupling -- premature interfaces add indirection without benefit. Wait until you need polymorphism, then extract.

**Anti-Pattern: Global State**
Package-level `var db *sql.DB` or `var logger *Logger`. Global state makes testing painful, creates hidden coupling, and breaks concurrent test execution. Pass dependencies via function parameters or struct fields.

**Anti-Pattern: Init Function Overuse**
Putting complex logic in `func init()` -- database connections, HTTP clients, file parsing. Init functions run at import time with no error handling. Move initialization to explicit `New()` or `Setup()` functions that return errors.

**Anti-Pattern: Error String Matching**
Checking `err.Error() == "not found"` instead of using `errors.Is(err, ErrNotFound)`. String matching is fragile -- error messages change, wrapping adds context. Use sentinel errors or custom types.

**Anti-Pattern: Goroutine Leak**
Starting a goroutine that blocks forever on a channel or context that's never cancelled. Every goroutine must have a clear exit condition. Use `context.WithCancel`, `context.WithTimeout`, or close the channel.

@@ -0,0 +1,258 @@ package/assets/skills/plan-executing/SKILL.md

---
name: plan-executing
description: Batch execution methodology for implementing plans with verification checkpoints after each task
stacks: []
requires:
  - plan-writing
---

# Plan Executing

A systematic methodology for working through implementation plans task by task. Each task is executed, verified, and committed before moving to the next. Deviations are logged, failures are diagnosed, and progress is tracked throughout.

## When to Use

- **After writing a plan** (using the plan-writing skill) — the plan provides the task list; this skill provides the execution discipline
- **When implementing a multi-task feature** — any work with more than 2 tasks benefits from structured execution
- **When running through a task list systematically** — avoids skipping steps, forgetting verification, or losing track of progress
- **When multiple people are implementing the same plan** — a consistent execution methodology keeps everyone aligned
- **When resuming work after a break** — the execution log tells you exactly where you left off and what state things are in

## The Execution Process

### Step 1: Read the Full Plan

Before implementing anything, read every task in the plan. Do not start coding after reading just the first task.

**Process:**
1. Read the plan objective — what must be true when this work is complete?
2. Read every task, including its files, action, verification, and done criteria
3. Understand the dependency graph — which tasks depend on which?
4. Identify the critical path — which tasks, if delayed, delay everything?
5. Note any tasks that can run in parallel (same wave, no shared files)

**Why read everything first:**
- You may spot dependency errors before they block you
- You will understand how early tasks set up later tasks
- You can identify shared patterns and avoid redundant work
- You will catch scope issues before investing implementation time

### Step 2: Execute Wave by Wave

Start with Wave 1 tasks (no dependencies). Complete each task fully before starting the next.

**Per-task execution flow:**
1. **Read the task** — files, action, verification, done criteria
2. **Check prerequisites** — are all dependency tasks complete? Are their outputs available?
3. **Implement** — follow the action description. If it says "create X with Y," create X with Y
4. **Run verification** — execute the task's verification command
5. **Check done criteria** — does the implementation meet the stated criteria?
6. **Commit** — one commit per task, referencing the task number

**Wave transition:**
- After all Wave N tasks are complete and verified, move to Wave N+1
- Do not start a Wave N+1 task until all its Wave N dependencies are complete
- If a Wave N task fails, fix it before moving forward
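For illustration only (the `Task` shape here is hypothetical, not this package's plan schema), the wave rule above can be sketched as repeated topological grouping: Wave 1 is every task with no dependencies, and each later wave is every task whose dependencies are already done.

```go
package main

import "fmt"

// Task is a hypothetical plan entry: an ID plus the IDs it depends on.
type Task struct {
	ID   string
	Deps []string
}

// waves groups tasks into execution waves: a task joins the next wave
// only once all of its dependencies are in earlier waves.
func waves(tasks []Task) [][]string {
	done := map[string]bool{}
	var out [][]string
	for len(done) < len(tasks) {
		var wave []string
		for _, t := range tasks {
			if done[t.ID] {
				continue
			}
			ready := true
			for _, d := range t.Deps {
				if !done[d] {
					ready = false
					break
				}
			}
			if ready {
				wave = append(wave, t.ID)
			}
		}
		if len(wave) == 0 {
			break // no task is ready: the plan has a dependency cycle
		}
		for _, id := range wave {
			done[id] = true
		}
		out = append(out, wave)
	}
	return out
}

func main() {
	plan := []Task{
		{ID: "types"},
		{ID: "token", Deps: []string{"types"}},
		{ID: "endpoint", Deps: []string{"types", "token"}},
	}
	fmt.Println(waves(plan)) // prints "[[types] [token] [endpoint]]"
}
```

Tasks inside one wave have no dependency ordering between them, which is exactly why the parallelization check later in this skill is about shared files, not about dependencies.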

### Step 3: Verify After Each Task

Verification is not optional. Every task has a verification step, and you must run it.

**Verification hierarchy:**
1. **Task-specific verification** — the command listed in the task (e.g., `bun test tests/auth/token.test.ts`)
2. **Build check** — `bunx tsc --noEmit` to catch type errors across the project
3. **Full test suite** — `bun test` to catch regressions in other modules
4. **Lint check** — `bun run lint` to catch formatting and style issues

**Rules:**
- Run at least the task-specific verification after every task
- Run the full test suite after every 2-3 tasks (or after every task if the project is small)
- If any verification fails, fix it before proceeding — do NOT continue with a broken base
- If a test that was passing before your change is now failing, you introduced a regression — fix it

### Step 4: Track Progress

Keep a running log of what is done, what deviated from the plan, and what remains.

**Track:**
- Completed tasks with commit hashes
- Time spent per task (helps calibrate future estimates)
- Deviations from the plan (scope changes, unexpected issues, reordered tasks)
- New tasks discovered during implementation (add them to the plan; do not just do them ad hoc)
- Blockers encountered and how they were resolved

**Why track:**
- If you are interrupted, you (or someone else) can resume from the log
- Deviations documented during implementation are easier to review than deviations discovered later
- Time tracking reveals whether your task sizing is accurate (improving future plans)
- New tasks discovered during implementation are visible for review (preventing scope creep)

### Step 5: Handle Failures

When something goes wrong (and it will), follow a structured response.

**Task verification fails:**
1. Read the error message carefully — what specifically failed?
2. Decide: is this a problem with the implementation or with the test?
3. Use the systematic-debugging skill for non-obvious failures
4. Fix the issue, re-run verification, confirm it passes
5. Log the failure and the fix as a deviation

**Unexpected dependency discovered:**
1. The task requires something that is not in the plan
2. Check: is this a missing task, or a missing prerequisite of an existing task?
3. Add the missing work to the plan (a new task or an expanded existing task)
4. Re-evaluate wave assignments — does this change the dependency graph?
5. Log it as a deviation

**Scope creep detected:**
1. While implementing Task N, you discover that "it would be nice to also do X"
2. Ask: is X required for the plan's goal, or just a nice-to-have?
3. If required: add it to the plan as a new task with proper sizing and dependencies
4. If nice-to-have: log it as a follow-up item; do NOT implement it now
5. Every unplanned addition increases risk — be disciplined

**Blocked by an external factor:**
1. You cannot proceed due to a missing API key, an unavailable service, a pending PR review, etc.
2. Document the blocker: what is blocked, what is needed, and who can unblock it
3. Skip to the next non-blocked task (if one exists in the current wave)
4. Do NOT implement workarounds that will need to be undone later

### Step 6: Final Verification

After all tasks are complete, run the plan-level verification.

**Process:**
1. Run the full test suite: `bun test`
2. Run the linter: `bun run lint`
3. Run the type checker: `bunx tsc --noEmit`
4. Verify the plan objective — is the stated goal actually achieved?
5. Check for regressions — are all previously passing tests still passing?
6. Review all deviations — do they make sense? Are they documented?

**This is the "ship it" gate.** If final verification passes, the work is complete. If it fails, the work is not complete — regardless of how many tasks are checked off.

## Commit Strategy

One commit per task. No exceptions.

**Commit message format:**
```
type(scope): concise description (task N/M)

- Key change 1
- Key change 2
```

**Examples:**
```
feat(auth): create login types and token utilities (task 1/5)

- Add LoginRequest and LoginResponse types
- Implement createToken and verifyToken with jose
- Add tests for token creation and expired token handling
```

```
fix(auth): add rate limiting to login endpoint (task 4/5)

- Limit to 5 attempts per minute per IP
- Return 429 with retry-after header
```

**Rules:**
- Each commit should leave the codebase in a working state (tests pass, builds succeed)
- Never commit broken code — if verification fails, fix first, then commit
- Never batch multiple tasks into one commit — the commit history should match the plan
- If a task requires no code changes (e.g., documentation-only), commit the docs

## Anti-Pattern Catalog

### Anti-Pattern: Skipping Verification

**What goes wrong:** "I will test it all at the end." You implement 5 tasks, run the tests, and 3 fail. Now you have to debug failures across 5 tasks' worth of changes with no idea which task introduced which failure.

**Instead:** Verify after every task. When a test fails, you know exactly which change caused it (the one you just made).

### Anti-Pattern: Continuing on Failures

**What goes wrong:** Task 2 verification fails, but you start Task 3 anyway because "I will fix it later." Task 3 depends on Task 2 working correctly, so now Task 3 is also broken. The failure cascades.

**Instead:** Fix Task 2 before starting Task 3. A broken foundation makes everything built on top of it unreliable.

### Anti-Pattern: Not Committing

**What goes wrong:** You complete 5 tasks and make one giant commit. If something goes wrong, you cannot revert a single task — you revert everything. Code review is painful because the diff is enormous.

**Instead:** Commit after each verified task. Small, focused commits are easier to review, revert, and bisect.

### Anti-Pattern: Deviating Without Logging

**What goes wrong:** You change the plan on the fly — reordering tasks, adding new ones, modifying scope — without documenting why. Later, reviewers do not understand why the implementation differs from the plan.

**Instead:** Log every deviation with what changed, why, and what impact it has. Deviations are normal — undocumented deviations are not.

### Anti-Pattern: Gold Plating

**What goes wrong:** Task 3 says "implement the login endpoint." You implement login, registration, password reset, and email verification because "we will need them eventually."

**Instead:** Implement exactly what the task says — nothing more. Additional features go into additional tasks in additional plans. Scope discipline is the difference between plans that finish on time and plans that never finish.

### Anti-Pattern: Parallelizing Without Understanding

**What goes wrong:** You see two tasks in the same wave and assume they can be done simultaneously. But they modify the same file, causing merge conflicts.

**Instead:** Check for file conflicts before parallelizing. Two tasks in the same wave can run in parallel only if they do not modify the same files.

## Integration with Our Tools

- **`oc_orchestrate`** — Autonomous plan execution. The orchestrator reads the plan, dispatches tasks to agents, verifies each task, and tracks progress automatically. Use it for hands-off execution of well-defined plans.
- **`oc_quick`** — Single-task execution, for when you want to implement one specific task from the plan.
- **`oc_review`** — Run after each task for automated code review. Catches issues the verification command might miss (code quality, security, naming).
- **`oc_state`** — Track pipeline state during execution. Shows the current phase, completed tasks, and any blockers.
- **`oc_phase`** — Check phase transitions. Useful when a plan spans the boundary between two pipeline phases.
- **`oc_session_stats`** — Monitor session health during long execution runs. Check for accumulating errors or performance degradation.

## Failure Modes

### All Tasks Fail

**Symptom:** Every task's verification fails. Nothing works.

**Diagnosis:** The plan itself may be fundamentally flawed — wrong assumptions, missing infrastructure, incorrect dependency ordering. Go back to the plan-writing skill and re-plan from scratch. Examine: are the dependencies right? Are the task actions actually implementable?

### Velocity Is Too Slow

**Symptom:** Tasks that were estimated at 30 minutes are taking 2 hours each. The plan will take 3x longer than expected.

**Diagnosis:** Tasks are too large or too vaguely defined. Split them. A task taking 2 hours probably has 3-4 sub-tasks hiding inside it. Re-plan the remaining tasks with smaller granularity.

### Tests Pass but the Feature Does Not Work

**Symptom:** All unit tests pass, but the feature fails when used for real. The tests are testing the wrong things.

**Diagnosis:** A missing integration or end-to-end test. Unit tests verify individual pieces; integration tests verify that the pieces work together. Add an integration test that exercises the actual feature path.

### Cascading Failures After One Task

**Symptom:** Task 3 passes verification, but then Tasks 4, 5, and 6 all fail because Task 3 changed something they depend on.

**Diagnosis:** Task 3's verification was insufficient — it checked its own output but not its impact on downstream consumers. Add broader verification (the full test suite) after tasks that modify shared interfaces.

### Plan Becomes Obsolete Mid-Execution

**Symptom:** After implementing 3 of 6 tasks, you realize the remaining tasks no longer make sense because the first 3 revealed a better approach.

**Diagnosis:** This is normal. Plans are a best estimate based on current knowledge. When the plan becomes obsolete, stop and re-plan the remaining tasks. Do not force an outdated plan. The work already completed is not wasted — it informed the better approach.

## Quick Reference

**Per-task cycle:**
1. Read task
2. Check prerequisites
3. Implement
4. Verify
5. Commit
6. Log progress

**Verification after every task. Commit after every task. Log deviations in real time.**