opencode-plugin-coding 0.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

@@ -0,0 +1,155 @@
---
name: playwright
description: Browser automation and testing via Playwright MCP server. Navigate, interact, screenshot, and assert web application behavior.
---

# Playwright

Browser automation and end-to-end testing via the Playwright MCP (Model Context Protocol) server. Use this skill to interact with web applications — navigate pages, fill forms, click buttons, take screenshots, and assert DOM state.

**This skill requires an external MCP server** — it is not a CLI-based tool.

---

## When to Activate

- User asks to test a web application interactively
- End-to-end testing of a web UI
- Visual verification of UI changes
- Debugging UI issues that require browser interaction
- User says "test in browser", "check the UI", "open the page"

---

## Prerequisites

### MCP Server Required

The Playwright MCP server must be configured in your OpenCode setup. Add to your `opencode.json`:

```json
{
  "mcp": {
    "playwright": {
      "command": "npx",
      "args": ["@anthropic/mcp-playwright"]
    }
  }
}
```

Or use the Docker variant:

```json
{
  "mcp": {
    "playwright": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "mcr.microsoft.com/playwright/mcp"]
    }
  }
}
```

### Fallback When MCP is Unavailable

If the Playwright MCP server is not configured:

1. Report to the user: "Playwright MCP server is not available. Install it with: `npm install -g @anthropic/mcp-playwright`"
2. Suggest alternatives:
   - Write Playwright test scripts that the user can run manually
   - Use `curl` for API-level testing
   - Describe expected UI behavior for manual verification

---

## Available MCP Tools

When the Playwright MCP server is running, these tools are available:

| Tool | Description |
|------|-------------|
| `playwright_navigate` | Navigate to a URL |
| `playwright_screenshot` | Take a screenshot of the current page |
| `playwright_click` | Click an element (by selector or text) |
| `playwright_fill` | Fill an input field |
| `playwright_select` | Select from a dropdown |
| `playwright_evaluate` | Execute JavaScript in the browser |
| `playwright_get_text` | Get text content of an element |
| `playwright_wait_for` | Wait for an element or condition |

---

## Workflow

### 1. Start the Application

Ensure the application is running locally:

```bash
# Check if already running
curl -s http://localhost:<port> > /dev/null 2>&1 && echo "Running" || echo "Not running"

# Start if needed (use the project's dev command)
<packageManager> run dev &
```
+
97
+ ### 2. Navigate and Interact
98
+
99
+ Use MCP tools to interact with the application:
100
+
101
+ ```
102
+ Navigate to http://localhost:<port>
103
+ Screenshot the page
104
+ Click on "Login" button
105
+ Fill the email field with "test@example.com"
106
+ Screenshot to verify the form state
107
+ ```
108
+
109
+ ### 3. Assert and Verify
110
+
111
+ Check that the UI matches expectations:
112
+
113
+ ```
114
+ Get text of the ".welcome-message" element
115
+ Screenshot the final state
116
+ ```
117
+
118
+ ### 4. Report Results
119
+
120
+ Return:
121
+
122
+ 1. **What was tested** — pages visited, interactions performed
123
+ 2. **Screenshots** — key states captured
124
+ 3. **Issues found** — any UI bugs or unexpected behavior
125
+ 4. **Pass/Fail** — overall assessment
126
+
127
+ ---
128
+
129
+ ## Writing Playwright Tests
130
+
131
+ For persistent test coverage, write Playwright test files:
132
+
133
+ ```typescript
134
+ import {test, expect} from '@playwright/test';
135
+
136
+ test('should display welcome message after login', async ({page}) => {
137
+ await page.goto('http://localhost:3000');
138
+ await page.fill('[name="email"]', 'test@example.com');
139
+ await page.fill('[name="password"]', 'password');
140
+ await page.click('button[type="submit"]');
141
+ await expect(page.locator('.welcome-message')).toHaveText('Welcome, Test User');
142
+ });
143
+ ```
144
+
145
+ Place test files following the project's existing pattern (e.g., `e2e/`, `tests/`, `__tests__/`).
146
+
147
+ ---
148
+
149
+ ## Rules
150
+
151
+ - **Always screenshot key states** — before and after interactions, to document what happened
152
+ - **Use stable selectors** — prefer `data-testid`, `role`, or semantic selectors over fragile CSS classes
153
+ - **Wait for readiness** — use `waitFor` before asserting on elements that load asynchronously
154
+ - **Report MCP unavailability** — if tools are missing, tell the user how to install the MCP server
155
+ - **Do not assume the app is running** — check first, start if needed
@@ -0,0 +1,145 @@
---
name: systematic-debugging
description: Systematic 4-phase debugging — reproduce, hypothesize, isolate, fix — for resolving bugs, test failures, or abnormal behaviors.
---

# Systematic Debugging

A structured 4-phase approach to debugging: reproduce → hypothesize → isolate → fix. Prevents the common anti-pattern of guessing at fixes without understanding the root cause.

Inspired by the systematic-debugging skill from Superpowers.

---

## When to Activate

- A test is failing and the cause isn't obvious
- User reports a bug or unexpected behavior
- A verification command fails during orchestration
- Build errors that resist simple fixes
- User says "debug", "investigate", "figure out why"

---

## Phase 1: Reproduce

**Goal:** Get a reliable, minimal reproduction of the problem.

1. **Understand the symptom** — What exactly is failing? Error message? Wrong output? Crash?
2. **Run the failing command** — Execute and capture the full output:
   ```bash
   <failing command> 2>&1
   ```
3. **Identify the minimal trigger** — Can you reduce the input or test case to the smallest reproduction?
4. **Document the reproduction:**
   ```
   **Bug:** <one-line description>
   **Symptom:** <what happens>
   **Expected:** <what should happen>
   **Reproduction:** <exact command to reproduce>
   **Environment:** <relevant versions, OS, config>
   ```

**Do not proceed to Phase 2 until you have a reliable reproduction.** If you can't reproduce it, gather more information first.

---

## Phase 2: Hypothesize

**Goal:** Form 2–3 concrete hypotheses about the root cause.

1. **Read the error carefully** — Parse stack traces, error codes, and messages
2. **Trace the code path** — Read the source code along the execution path: the file where the error originates, its callers, and the data flow
3. **Form hypotheses** — List 2–3 possible causes, ranked by likelihood:
   ```
   Hypotheses:
   1. [Most likely] <description> — because <evidence>
   2. [Possible] <description> — because <evidence>
   3. [Less likely] <description> — because <evidence>
   ```
4. **Check recent changes** — Could a recent commit have introduced this?
   ```bash
   git log --oneline -20
   git diff HEAD~5..HEAD -- <relevant files>
   ```

**Do not jump to fixing.** The goal is to understand, not to patch.

---

## Phase 3: Isolate

**Goal:** Confirm which hypothesis is correct by testing each one.

For each hypothesis:

1. **Design a test** — What would confirm or disprove this hypothesis?
2. **Execute the test** — Add logging, run with different inputs, or write a targeted unit test
3. **Record the result:**
   ```
   Hypothesis 1: [CONFIRMED / DISPROVED]
   Evidence: <what you observed>
   ```

Techniques:

- **Binary search** — If the bug is in a range of commits, use `git bisect`
- **Add logging** — Temporarily add `console.log` or equivalent at key points
- **Simplify inputs** — Remove complexity until the bug disappears, then add back
- **Unit test isolation** — Write a test that directly exercises the suspected code path

**Continue until exactly one hypothesis is confirmed.** If all are disproved, return to Phase 2 with new information.
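The "simplify inputs" technique above can be sketched as a shrink loop over a failing input (a minimal illustration, not part of this package; `fails` stands for any reliable reproduction predicate):

```typescript
// Greedy input shrinking: given a failing input (modeled as an array of
// parts) and a predicate that reports whether the bug still reproduces,
// repeatedly drop parts that are not needed to trigger the failure.
function shrink<T>(input: T[], fails: (candidate: T[]) => boolean): T[] {
  let current = [...input];
  let progress = true;
  while (progress) {
    progress = false;
    for (let i = 0; i < current.length; i++) {
      const candidate = current.filter((_, j) => j !== i);
      if (fails(candidate)) {
        current = candidate; // part i was irrelevant to the bug; drop it
        progress = true;
        break;
      }
    }
  }
  return current; // a locally minimal failing input
}
```

Debug the returned minimal input; each part that could not be removed is evidence about which hypothesis holds.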

---

## Phase 4: Fix

**Goal:** Apply the correct fix and verify it resolves the issue without regressions.

1. **Write the fix** — Target the confirmed root cause, not the symptom
2. **Add a regression test** — Write a test that would have caught this bug:
   ```
   it('should <expected behavior> when <condition that caused the bug>')
   ```
3. **Remove debug artifacts** — Remove any temporary logging added in Phase 3
4. **Run the reproduction** — Confirm the original symptom is gone
5. **Run the full test suite** — Ensure no regressions:
   ```bash
   <test command from workflow.json>
   ```
6. **Run all verification commands** — Typecheck, lint, etc.
7. **Document the root cause:**
   ```
   **Root cause:** <what was wrong>
   **Fix:** <what was changed>
   **Regression test:** <test file and name>
   ```

---

## Anti-Patterns

Avoid these common debugging mistakes:

- **Shotgun debugging** — Making random changes and hoping one works. Always have a hypothesis.
- **Fixing symptoms** — Suppressing an error message without understanding the cause.
- **Skipping reproduction** — Trying to fix a bug you can't reliably reproduce.
- **Tunnel vision** — Getting stuck on one hypothesis. If it's disproved, move on.
- **Leaving debug code** — Forgetting to remove `console.log`, `debugger`, or test hacks.

---

## Output

Return to the user or orchestrator:

1. **Root cause** — what was wrong
2. **Fix applied** — what was changed (file paths and description)
3. **Regression test** — test file and test name
4. **Verification** — all gate commands pass
5. **Learnings** — anything that should be added to `AGENTS.md` or `doc/TODO.md`
@@ -0,0 +1,114 @@
---
name: test-driven-development
description: Enforce the RED-GREEN-REFACTOR test-driven development cycle. Write a failing test first, implement minimal code, then refactor. Opt-in per project.
---

# Test-Driven Development

Enforce the RED-GREEN-REFACTOR test-driven development cycle. This skill is **opt-in** — enable it in `workflow.json` → `testDrivenDevelopment.enabled: true`.

When active, the core-coder must write failing tests before implementation code. This ensures test coverage is a first-class deliverable, not an afterthought.

Inspired by the test-driven-development skill from Superpowers.

---

## When to Activate

- `workflow.json` → `testDrivenDevelopment.enabled` is `true`
- User explicitly requests a TDD approach
- The orchestrator or core-coder decides TDD is appropriate for a task

---

## The RED-GREEN-REFACTOR Cycle

### RED: Write a Failing Test

1. Identify the behavior to implement from the plan
2. Write a test that asserts the expected behavior
3. Run the test — it **must fail** (if it passes, the test is wrong or the behavior already exists)
4. Commit the failing test: `test: add failing test for <behavior>`

### GREEN: Make It Pass

1. Write the **minimum** code to make the failing test pass
2. Do not add extra features, optimizations, or edge cases yet
3. Run the test — it **must pass**
4. Run all other tests — ensure nothing is broken
5. Commit: `feat(<scope>): implement <behavior>`

### REFACTOR: Clean Up

1. Improve code quality without changing behavior
2. Look for: duplication, naming, structure, readability
3. Run all tests after each refactoring step — they must still pass
4. Commit: `refactor(<scope>): <what was improved>`

### Repeat

Move to the next behavior from the plan and start a new RED cycle.

---

## Process

For each task in the plan:

1. **Analyze the task** — What testable behaviors does it introduce?
2. **List test cases** — Before writing any code, list the test cases:
   ```
   Test cases for Task N:
   - [ ] When <input/condition>, should <expected behavior>
   - [ ] When <edge case>, should <expected behavior>
   - [ ] When <error case>, should <expected behavior>
   ```
3. **RED-GREEN-REFACTOR** — Execute the cycle for each test case
4. **Verify** — Run the full test suite after completing all cases for a task

### Test File Conventions

- Place tests adjacent to source files or in a `__tests__/` directory, following the project's existing pattern
- Name test files consistently: `*.test.ts`, `*.spec.ts`, or match the existing convention
- Use descriptive test names: `it('should return empty array when no items match filter')`

### When TDD Doesn't Fit

Some changes don't benefit from TDD:

- Pure configuration changes
- Documentation updates
- Trivial one-line fixes with obvious correctness
- UI layout changes (use visual testing instead)

For these, write tests after implementation (or skip if truly trivial).

---

## Integration with Orchestrate

When `testDrivenDevelopment.enabled` is true, the `orchestrate` skill modifies the core-coder implementation template:

```
Implement the approved plan using TDD (RED-GREEN-REFACTOR cycle).

For each task:
1. Write failing tests first
2. Implement minimal code to pass
3. Refactor
4. Run full test suite before moving to next task

Follow the test-driven-development skill instructions.
```

Reviewers should verify TDD compliance: are there tests for new behavior? Were tests written before implementation (check commit order)?

---

## Rules

- **Tests must fail first.** If a test passes immediately, investigate — the behavior may already exist or the test may be wrong.
- **Minimum code to pass.** Do not over-engineer in the GREEN phase. Save improvements for REFACTOR.
- **All tests must pass after GREEN.** Never leave the codebase with failing tests.
- **Commit at each phase.** RED, GREEN, and REFACTOR each get their own commit for traceability.
- **Do not skip REFACTOR.** Even if the GREEN code looks clean, take a moment to evaluate.
@@ -0,0 +1,27 @@
---
status: not_started
branch:
created:
---

# Plan: [Title]

## Context

[Brief description of what this plan accomplishes]

## Tasks

- [ ] Task 1: [Description]
  - Files: [exact file paths]
  - Acceptance: [criteria]

## Verification

- Build: `[command]`
- Tests: `[command]`
- Lint: `[command]`

## Learnings (updated during execution)

- (none yet)
@@ -0,0 +1,51 @@
---
branch:
date:
pr:
rounds: # integer: total review rounds completed
---

# Retrospective: [Title]

## Summary

[Brief summary of the completed work]

## Reviewer Scores

| Reviewer | Model | Focus Areas | Score | Notes |
|----------|-------|-------------|-------|-------|
| | | | | |

## Key Findings

### Critical Issues Found

- (none)

### Patterns Observed

- (none)

## Proposed AGENTS.md Updates

### New Gotchas

- (none)

### Convention Updates

- (none)

## Reviewer Knowledge Updates

[Replace each placeholder below with actual values and merge into `.opencode/reviewer-knowledge.json`. Example format:]

```json
{
  "models": {
    "<model-id>": {
      "areas": {
        "<area>": { "totalScore": <N>, "reviewCount": <N> }
      }
    }
  },
  "lastUpdated": "<YYYY-MM-DD>"
}
```
@@ -0,0 +1,33 @@
{
  "project": {
    "name": "",
    "repo": "",
    "defaultBranch": "master"
  },
  "stack": {
    "language": "",
    "framework": "",
    "packageManager": ""
  },
  "commands": {
    "build": "",
    "lint": "",
    "test": "",
    "typecheck": ""
  },
  "agents": {
    "coreCoder": { "name": "core-coder", "model": "" },
    "coreReviewers": [
      { "name": "core-reviewer-1", "model": "" },
      { "name": "core-reviewer-2", "model": "" }
    ],
    "reviewers": [
      { "name": "reviewer-1", "model": "" }
    ],
    "securityReviewers": []
  },
  "testDrivenDevelopment": { "enabled": false },
  "docsToRead": [],
  "reviewFocus": []
}