oco-claude-plugin 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,64 @@
---
name: oco-trace-stack
description: Analyze a stack trace or runtime error to identify the root cause. Use when a stack trace or runtime error is present.
triggers:
- "stacktrace"
- "stack trace"
- "traceback"
- "runtime error"
- "panic"
- "exception"
- "crash"
- "error at line"
---

# OCO: Trace Stack Error

You are analyzing a runtime error or stack trace. Follow this evidence-based workflow.

## Step 1: Parse the Stack Trace

Extract from the error:
- **Error type and message**
- **File paths and line numbers** (ordered by stack depth)
- **Relevant variable values** if visible
- **Error chain** (caused by / wrapped errors)

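Most of this extraction is mechanical. A minimal sketch of Step 1 for Python-style tracebacks — the sample trace and the `sed` pattern are illustrative, not part of this plugin:

```shell
# Sample traceback (stand-in for pasted input).
tb=$(cat <<'EOF'
Traceback (most recent call last):
  File "app/main.py", line 10, in handler
    user = get_user(uid)
  File "app/db.py", line 42, in get_user
    return cache[uid].name
AttributeError: 'NoneType' object has no attribute 'name'
EOF
)
# Frames, deepest last, as path:line:function.
frames=$(printf '%s\n' "$tb" |
  sed -n 's/^ *File "\(.*\)", line \([0-9]*\), in \(.*\)$/\1:\2:\3/p')
# The final line carries "ErrorType: message".
error=$(printf '%s\n' "$tb" | tail -n1)
```

The deepest frame (`app/db.py:42:get_user` here) is where Step 2's inspection starts.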
## Step 2: Map to Codebase

Use the `oco.trace_error` MCP tool, if available:

```
oco.trace_error({ stacktrace: "<paste stacktrace>", workspace: "." })
```

Otherwise, manually inspect the files referenced in the stack trace, starting from the deepest application frame (skip library and framework frames).

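For the manual path, pulling context around a reported frame is easy to script; a sketch, where the path and line number are placeholders that would come from the parsed trace (a stand-in file is generated so the snippet is self-contained):

```shell
# Print ~11 lines of context around a frame location (path:line
# taken from the stack trace).
file=$(mktemp); line=42
seq 1 100 | sed 's/^/src-line-/' > "$file"   # stand-in source file
start=$((line > 5 ? line - 5 : 1))
sed -n "${start},$((line + 5))p" "$file"
```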
## Step 3: Inspect Likely Root Cause Regions

For each candidate location:
1. Read the file at the specific line
2. Read the surrounding context (function scope)
3. Check for: null/undefined access, type mismatches, missing error handling, race conditions, invalid state

## Step 4: Generate Hypotheses

Produce 1-3 ranked hypotheses:
- **H1** (most likely): description + evidence
- **H2** (alternative): description + evidence
- **H3** (edge case): description + evidence

## Step 5: Verify Before Claiming a Fix

**Do NOT propose a fix until you have:**
1. Confirmed the hypothesis by reading the actual failing code
2. Checked whether the error is reproducible from the described scenario
3. Verified that the fix won't introduce regressions

## Rules

- Never guess at a fix without reading the code
- If the stack trace references more than 5 files, delegate deep reading to `@codebase-investigator`
- Always state which hypothesis you're most confident in and why
- After applying a fix, run the verification workflow described in the `oco-verify-fix` skill (build, test, lint, typecheck)
- Use `oco.collect_findings` to synthesize evidence and open questions before concluding
@@ -0,0 +1,82 @@
---
name: oco-verify-fix
description: Run the verification suite after code changes. Enforces build, test, lint, and typecheck discipline with evidence-based completion.
triggers:
- "verify"
- "check my changes"
- "run tests"
- "does it build"
- "make sure it works"
- "validate"
---

# OCO: Verify Fix

You are verifying that code changes are correct and complete. Follow this structured verification workflow.

## Step 1: Identify What Changed

List all modified files:
```bash
git diff --name-only HEAD 2>/dev/null || git status --short
```

## Step 2: Detect Project Type and Available Checks

Detect the verification suite from project manifests:

| Signal | Build | Types | Lint | Test |
|--------|-------|-------|------|------|
| `Cargo.toml` | `cargo build` | `cargo check` | `cargo clippy` | `cargo test` |
| `package.json` | `npm run build` | `tsc --noEmit` | `npm run lint` | `npm test` |
| `pyproject.toml` | - | `mypy .` | `ruff check .` | `pytest` |
| `go.mod` | `go build ./...` | `go vet ./...` | `golangci-lint run` | `go test ./...` |

Use the `oco.verify_patch` MCP tool, if available, for automated detection and execution.

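The manifest-based detection in the table can be sketched in shell — an illustrative fallback for when the MCP tool is unavailable, showing the test column only:

```shell
# Pick the test command based on which manifest is present
# in the current directory; extend per the table as needed.
detect_test_cmd() {
  if   [ -f Cargo.toml ];     then echo "cargo test"
  elif [ -f package.json ];   then echo "npm test"
  elif [ -f pyproject.toml ]; then echo "pytest"
  elif [ -f go.mod ];         then echo "go test ./..."
  else echo ""
  fi
}
```

Note the ordering is a judgment call for polyglot repos (e.g. a `package.json` next to a `Cargo.toml`); the first match wins.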
## Step 3: Run Verification Sequence

Execute in order (stop on first failure):

1. **Build** — Does it compile?
2. **Type check** — Are types consistent?
3. **Lint** — Are there style/quality issues?
4. **Test** — Do tests pass? Are new tests needed?

For each step, report:
- Status: pass / fail / skip (not available)
- Output summary (compact, not a raw dump)

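Stop-on-first-failure is just sequential short-circuiting; a generic sketch (the check commands passed in are placeholders for whatever Step 2 detected):

```shell
# Run checks in order, stop at the first failure, and report
# which step broke. Each argument is one check command.
run_checks() {
  for step in "$@"; do
    if ! eval "$step"; then
      echo "FAIL at: $step"
      return 1
    fi
  done
  echo "PASS"
}
# Example with stand-in commands; in practice, something like:
# run_checks "npm run build" "npx tsc --noEmit" "npm run lint" "npm test"
run_checks "true" "false" "true"
```

Stopping early matters: a broken build would otherwise hide behind a stale green test run.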
## Step 4: Assess Results

Produce a verification verdict:

```
VERDICT: PASS | FAIL | PARTIAL
- Build: [pass/fail/skip]
- Types: [pass/fail/skip]
- Lint: [pass/fail/skip]
- Tests: [pass/fail/skip]
- Missing coverage: [description if applicable]
```

## Step 5: Handle Failures

If any check fails:
1. Identify the specific failure
2. Fix it
3. Re-run the failing check
4. Continue the sequence

## Step 6: Verification Complete

After all checks pass, the PostToolUse hook automatically detects verification commands (`cargo test`, `npm test`, etc.) and marks the session as verified. The Stop hook then allows completion without a warning.

No manual marker is needed — the hook system handles this automatically.

## Rules

- Never skip a check that's available in the project
- Never report PASS if any check failed
- If tests are missing for the changed code, flag it explicitly
- Keep output summaries compact — report failures in detail, successes briefly