@nano-step/skill-manager 5.6.2 → 5.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,58 @@
# Checkpoint System

Reviews are resumable via checkpoints saved at each phase. If the agent crashes mid-review, you can resume from the last completed phase instead of starting over.

## Checkpoint Directory

**For PR Reviews:** `$REVIEW_DIR/.checkpoints/` (inside the temp clone directory)
**For Local Reviews (`--staged`):** `{current_working_directory}/.checkpoints/`

Checkpoints are automatically removed when the clone directory is deleted (Phase 6 cleanup).

## Checkpoint Files

| File | Content | Updated When |
|------|---------|--------------|
| `manifest.json` | Master state tracker | After every phase |
| `phase-0-clone.json` | Clone metadata (clone_dir, branches, head_sha, files_changed) | Phase 0 |
| `phase-1-context.json` | PR metadata, file classifications | Phase 1 |
| `phase-1.5-linear.json` | Linear ticket context, acceptance criteria | Phase 1.5 |
| `phase-2-tracing.json` | Smart tracing results per file | Phase 2 |
| `phase-2.5-summary.json` | PR summary text | Phase 2.5 |
| `phase-3-subagents.json` | Subagent findings (updated after EACH subagent completes) | Phase 3 |
| `phase-4-refined.json` | Deduplicated/filtered findings | Phase 4 |
| `phase-4.5-verification.json` | Verification results (verified/false/unverifiable counts, dropped/downgraded findings) | Phase 4.5 |
| `phase-4.6-confidence.json` | Result confidence score, per-finding confidence, gate action | Phase 4.6 |
| `phase-5-report.md` | Copy of final report | Phase 5 |

## Manifest Schema

```json
{
  "version": "1.0",
  "pr": { "repo": "owner/repo", "number": 123, "url": "..." },
  "clone_dir": "/tmp/pr-review-...",
  "started_at": "ISO-8601",
  "last_updated": "ISO-8601",
  "completed_phase": 2,
  "next_phase": 2.5,
  "phase_status": {
    "0": "complete", "1": "complete", "1.5": "complete",
    "2": "complete", "2.5": "pending", "3": "pending",
    "4": "pending", "4.5": "pending", "4.6": "pending", "5": "pending"
  },
  "subagent_status": {
    "explore": "pending", "oracle": "pending",
    "librarian": "pending", "general": "pending"
  }
}
```

## Phase 3 Special Handling

Phase 3 runs 4 parallel subagents. After **EACH** subagent completes:
1. Update `phase-3-subagents.json` with that subagent's findings and status
2. Update `manifest.json` `subagent_status` to `"complete"` for that subagent
3. On resume: only run subagents with status != `"complete"`

This allows resuming mid-Phase-3 if only some subagents completed before a crash.
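The resume decision can be sketched as follows (a hypothetical helper, not part of the skill; phase IDs mirror the manifest schema above):

```typescript
// Hypothetical sketch: given a manifest's phase_status, pick the phases
// that still need to run on resume, in order.
type PhaseStatus = Record<string, "complete" | "pending">;

const PHASE_ORDER = ["0", "1", "1.5", "2", "2.5", "3", "4", "4.5", "4.6", "5"];

function phasesToRun(status: PhaseStatus): string[] {
  // Only phases not yet marked complete are re-run.
  return PHASE_ORDER.filter((p) => status[p] !== "complete");
}

// Example: crash after Phase 2, matching the manifest example above.
const remaining = phasesToRun({
  "0": "complete", "1": "complete", "1.5": "complete", "2": "complete",
  "2.5": "pending", "3": "pending", "4": "pending",
  "4.5": "pending", "4.6": "pending", "5": "pending",
});
```

Phase 3 gets the finer-grained `subagent_status` treatment described above because its subagents complete independently.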
@@ -0,0 +1,98 @@
# Confidence Scoring (Phase 4.6)

Re-review the final findings to score how confident we are in the review's **results** — are the findings correct and complete?

**Always run this phase**, even if no critical/warning findings remain. The confidence score reflects the quality of the entire review, not just individual findings.

## Per-Finding Confidence

For each surviving finding (post Phase 4.5), assign a confidence level:

| Level | Criteria |
|-------|----------|
| 🟢 High | Verified with concrete file:line evidence AND 2+ agents independently flagged it |
| 🟡 Medium | Verified but single agent, OR 2+ agents agreed but evidence is indirect |
| 🔴 Low | Unverifiable, pattern-based reasoning, no agent consensus |
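As an illustrative sketch only (this hypothetical helper approximates the rubric above; the table remains the source of truth):

```typescript
// Hypothetical sketch of the per-finding rubric. Inputs mirror the
// criteria: Phase 4.5 verification, concrete evidence, agent consensus.
type Confidence = "high" | "medium" | "low";

function findingConfidence(
  verified: boolean,     // confirmed real in Phase 4.5
  hasEvidence: boolean,  // cites concrete file:line proof
  agentCount: number     // subagents that flagged it independently
): Confidence {
  if (verified && hasEvidence && agentCount >= 2) return "high";
  if (verified || agentCount >= 2) return "medium";
  return "low";
}
```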
## Overall Confidence Score (0–100)

Compute from three measurable rates:

```
# Inputs (from Phase 4 + 4.5)
raw_findings       = total findings from all 4 subagents (before dedup)
false_positives    = findings dropped in Phase 4.5 as verified:false
verified           = findings confirmed real in Phase 4.5 (verified:true)
unverifiable       = findings marked unverifiable in Phase 4.5
final_findings     = all findings in the final report (critical + warning + improvement + suggestion)
consensus_findings = final findings that were flagged by 2+ agents independently
evidence_findings  = final findings that cite concrete file:line proof

# Rates
accuracy_rate  = verified / max(verified + false_positives, 1)  # How often we're right (0.0–1.0)
consensus_rate = consensus_findings / max(final_findings, 1)    # Agreement level (0.0–1.0)
evidence_rate  = evidence_findings / max(final_findings, 1)     # Proof quality (0.0–1.0)

# Weighted score
overall = round((accuracy_rate * 40) + (consensus_rate * 30) + (evidence_rate * 30))
```
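The pseudocode above translates directly (a hypothetical helper; the weights and `max(..., 1)` guards mirror the formula):

```typescript
// Sketch of the weighted confidence score defined above.
function overallScore(
  verified: number,
  falsePositives: number,
  finalFindings: number,
  consensusFindings: number,
  evidenceFindings: number
): number {
  const accuracyRate = verified / Math.max(verified + falsePositives, 1);
  const consensusRate = consensusFindings / Math.max(finalFindings, 1);
  const evidenceRate = evidenceFindings / Math.max(finalFindings, 1);
  return Math.round(accuracyRate * 40 + consensusRate * 30 + evidenceRate * 30);
}

// The checkpoint example later in this file: 5 verified, 3 false positives,
// 6 final findings, 4 with consensus, 6 with evidence.
const score = overallScore(5, 3, 6, 4, 6); // 75, labelled "medium"
```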

## Special Cases

- If Phase 4.5 was skipped (no critical/warning to verify): `accuracy_rate = 1.0` (no false positives possible)
- If `final_findings == 0` (clean review, no issues found): `overall = 100` (nothing to be wrong about)
- If all subagents returned empty findings: `overall = 100` with note "No issues found by any agent"

## Confidence Thresholds & Gate Behavior

| Score | Label | Gate Action |
|-------|-------|-------------|
| 80–100 | 🟢 High | Proceed normally — findings are reliable |
| 60–79 | 🟡 Medium | Add ⚠️ in report: "Some findings may be inaccurate — verify manually" |
| < 60 | 🔴 Low | Add 🔴 in report: "Low confidence review — significant uncertainty in findings. Manual review recommended." |

## Display Format (included in Phase 5 TL;DR)

```
📊 Result Confidence: 🟢 94/100
Accuracy: 85% (3 false positives caught and removed)
Consensus: 100% (all findings had 2+ agent agreement)
Evidence: 100% (all findings cite file:line proof)
```

For low confidence:
```
📊 Result Confidence: 🔴 47/100
Accuracy: 50% (4 of 8 findings were false positives)
Consensus: 30% (most findings from single agent only)
Evidence: 60% (some findings lack concrete proof)
⚠️ Low confidence — manual review recommended for: [list specific uncertain findings]
```

## Checkpoint Schema

Save results to `.checkpoints/phase-4.6-confidence.json`:

```json
{
  "raw_findings": 16,
  "false_positives": 3,
  "verified": 5,
  "unverifiable": 0,
  "final_findings": 6,
  "consensus_findings": 4,
  "evidence_findings": 6,
  "accuracy_rate": 0.625,
  "consensus_rate": 0.667,
  "evidence_rate": 1.0,
  "overall_score": 75,
  "label": "medium",
  "per_finding_confidence": [
    { "file": "src/service.js", "line": 42, "severity": "warning", "confidence": "high" },
    { "file": "src/utils.js", "line": 10, "severity": "improvement", "confidence": "medium" }
  ],
  "gate_action": "Add warning to report"
}
```

Update `manifest.json` (`completed_phase: 4.6`, `next_phase: 5`).
@@ -0,0 +1,58 @@
# Next.js Code Review Rules

## Critical Rules

### Data Fetching
- `getServerSideProps` fetching data that doesn't change per-request → use `getStaticProps` + ISR
- Missing `revalidate` in `getStaticProps` for dynamic content → stale data
- Calling internal API routes from `getServerSideProps` → call the handler function directly
- `fetch` in Server Components without `{ cache: 'no-store' }` when fresh data is required → unexpected caching

### Routing & Navigation
- `useRouter().push()` for external URLs → use `window.location.href`
- Missing `loading.tsx` for slow data fetches → no loading state shown
- Dynamic route params accessed before null check → crash on direct URL access

### App Router (Next.js 13+)
- `'use client'` on a component that has no client-side interactivity → unnecessary client bundle
- `useState` / `useEffect` in a Server Component → runtime error
- Passing non-serializable props from a Server Component to a Client Component → crash

### Server Actions
- No input validation in Server Actions → security hole
- Missing `revalidatePath` / `revalidateTag` after mutations → stale cache

## Warning Rules

### Performance
- Large `node_modules` imports in `'use client'` components → bundle bloat (use dynamic imports)
- `<img>` tags instead of `next/image` → no lazy loading or optimization
- Google Fonts loaded without `next/font` → layout shift (CLS)
- `useEffect` for data fetching → request waterfall; prefer Server Components

### Auth & Security
- Middleware not protecting `/api` routes that require auth
- `cookies()` / `headers()` in cached Server Components → dynamic rendering forced silently
- Sensitive values in env vars carrying the `NEXT_PUBLIC_` prefix → the prefix bundles them into client code

## Suggestions
- Use `next/dynamic` with `ssr: false` for heavy components not needed server-side
- Use `unstable_cache` for expensive DB queries in Server Components
- Wrap `<Suspense>` around async Server Components for streaming

## Detection Patterns

```tsx
// CRITICAL: useState in Server Component
// app/page.tsx (no 'use client')
const [count, setCount] = useState(0) // Error: hooks in Server Component

// CRITICAL: internal API fetch in getServerSideProps
export async function getServerSideProps() {
  const res = await fetch('http://localhost:3000/api/data') // Wrong
  // Should: import and call the handler directly
}

// WARNING: img instead of next/image
<img src={user.avatar} /> // Use <Image> from 'next/image'
```
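The "call the handler directly" fix can be sketched like this (the `getData` helper, its file locations, and the returned data are hypothetical, for illustration only):

```typescript
// Hypothetical sketch: put the data logic in a shared function so both
// the API route and getServerSideProps use it without an HTTP round-trip.

// lib/get-data.ts (hypothetical shared module)
function getData() {
  return { items: [1, 2, 3] }; // stand-in for the real DB/service call
}

// pages/api/data.ts would respond with getData() as JSON for client callers.

// pages/index.tsx: direct call instead of fetch('http://localhost:3000/api/data')
async function getServerSideProps() {
  const data = getData(); // no self-HTTP request
  return { props: { data } };
}
```

This avoids the extra network hop, the hard-coded origin, and a request that fails when the server isn't reachable at that URL.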
@@ -0,0 +1,54 @@
# Prisma Code Review Rules

## Critical Rules

### Data Safety
- `prisma.user.deleteMany({})` with an empty where → deletes all rows
- Raw queries with string interpolation → SQL injection (`prisma.$queryRaw` must use the `Prisma.sql` tagged template literal)
- Missing `select` / `omit` on queries that return sensitive fields (e.g., `password`, `token`) → data leak
- Operations that should be atomic not wrapped in `prisma.$transaction` → partial write on failure
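The empty-`where` hazard can be caught mechanically with a small guard (a hypothetical helper, not a Prisma API):

```typescript
// Hypothetical guard: refuse bulk deletes with an empty filter, which
// would otherwise delete every row in the table.
function assertBoundedWhere(where: Record<string, unknown>): void {
  if (Object.keys(where).length === 0) {
    throw new Error("deleteMany with an empty where would delete all rows");
  }
}

// Usage sketch: assertBoundedWhere(where) before prisma.user.deleteMany({ where })
```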
### Migrations
- Renaming a column by adding new + deleting old in one migration → data loss if deployed without backfill
- `@default(now())` added to an existing non-nullable column → migration fails on a non-empty table
- Missing `migrate deploy` step noted in the release process for schema changes

## Warning Rules

### Performance
- `findMany` without `take` (limit) → unbounded query, potential OOM
- N+1: loop calling `findUnique` per item → use `findMany` with an `in` filter or `include`
- Missing index on columns used in `where`, `orderBy`, or joins → full table scan
- `include` nesting 3+ levels deep → over-fetching

### Patterns
- Using `upsert` when only create or only update is semantically correct → confusing intent
- `updateMany` used for a single record → silent no-op if the record is missing (prefer `update`, which throws when not found)
- Accessing `prisma` directly in route handlers → should go through a repository/service layer

## Suggestions
- Use `select` to fetch only needed columns (reduces payload size)
- Use `prisma.$transaction([...])` for related writes that must succeed together
- Add `@@index` directives for common query patterns in the schema

## Detection Patterns

```typescript
// CRITICAL: SQL injection via raw query
const user = await prisma.$queryRaw(`SELECT * FROM users WHERE id = ${userId}`)
// SAFE:
const user = await prisma.$queryRaw(Prisma.sql`SELECT * FROM users WHERE id = ${userId}`)

// CRITICAL: missing field exclusion
const user = await prisma.user.findUnique({ where: { id } })
// user.password is returned — should use select or omit

// WARNING: N+1
for (const order of orders) {
  const user = await prisma.user.findUnique({ where: { id: order.userId } })
}
// FIX: findMany with where: { id: { in: orders.map(o => o.userId) } }

// WARNING: unbounded query
const all = await prisma.product.findMany() // missing take
```
@@ -0,0 +1,61 @@
# React Code Review Rules

## Critical Rules

### Hooks
- Hooks called conditionally → violates Rules of Hooks, runtime error
- `useEffect` with missing dependency → stale closure / infinite loop
- `useEffect` with object/array literal in deps → new reference every render, infinite loop
- Mutating state directly (`state.items.push(x)`) → no re-render triggered
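The object/array-literal-in-deps pitfall comes down to reference equality: React compares dependencies with `Object.is`, so a fresh literal never matches the previous render's value. A plain-TypeScript illustration:

```typescript
// Two structurally identical literals are still distinct references,
// which is what a dependency-array comparison sees on every render.
const a = { filter: "active" };
const b = { filter: "active" };

const sameReference = Object.is(a, b);      // false: distinct objects
const sameContents = a.filter === b.filter; // true: equal values

// This is why an inline literal in useEffect deps retriggers the effect
// every render: hoist it out of render, or memoize it with useMemo.
```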
### State & Props
- Derived state stored in `useState` when it can be computed inline → sync issues
- Prop drilling 3+ levels deep without context or state manager → maintenance burden (flag as warning)
- Reading stale ref value in async callback after unmount → memory leak / crash

### Event Handling
- Missing cleanup in `useEffect` (subscriptions, timers, event listeners) → memory leak
- `onClick` on non-interactive element without keyboard equivalent → accessibility gap

## Warning Rules

### Performance
- Anonymous functions / object literals as props to memoized components → memo is bypassed
- Missing `useCallback` on functions passed to child components with `React.memo`
- Large lists rendered without virtualization (`react-window` / `react-virtual`) → slow paint
- `useContext` on frequently-changing context → unnecessary re-renders

### Patterns
- `index` as `key` in lists that can reorder/filter → wrong element reconciled
- Multiple `useState` calls for related state → use `useReducer`
- Distinguish `useEffect` that synchronizes with an external system from effects misused for derived state or event logic

## Suggestions
- Use `React.lazy` + `<Suspense>` for code-splitting heavy routes
- Consider `useMemo` for expensive calculations with stable inputs
- Use `<ErrorBoundary>` around async components

## Detection Patterns

```tsx
// CRITICAL: conditional hook
if (user) {
  const [name, setName] = useState(user.name) // Error: conditional hook
}

// CRITICAL: missing cleanup
useEffect(() => {
  const sub = eventBus.subscribe(handler)
  // Missing: return () => sub.unsubscribe()
}, [])

// CRITICAL: stale closure
useEffect(() => {
  setInterval(() => {
    console.log(count) // stale if count not in deps
  }, 1000)
}, []) // Missing: count in deps

// WARNING: object literal breaks memo
<MyMemo style={{ color: 'red' }} /> // new object every render
```
@@ -5,30 +5,26 @@ nano-brain provides persistent memory across sessions. The reviewer uses it for
 - Known issues and tech debt in affected modules
 - Prior discussions about the same code areas

- ## Access Method: CLI
+ ## Access Method: HTTP API

- All nano-brain operations use the CLI via Bash tool:
+ All nano-brain operations use the HTTP API (nano-brain runs as a Docker service on port 3100):

- | Need | CLI Command |
- |------|-------------|
- | Hybrid search (best quality) | `npx nano-brain query "search terms"` |
- | Keyword search (function name, error) | `npx nano-brain query "exact term" -c codebase` |
- | Save review findings | `npx nano-brain write "content" --tags=review` |
-
- ## Setup
-
- Run `/nano-brain-init` in the workspace, or `npx nano-brain init --root=/path/to/workspace`.
+ | Need | Command |
+ |------|---------|
+ | Hybrid search (best quality) | `curl -s localhost:3100/api/query -d '{"query":"search terms"}'` |
+ | Keyword search (function name, error) | `curl -s localhost:3100/api/search -d '{"query":"exact term"}'` |
+ | Save review findings | `curl -s localhost:3100/api/write -d '{"content":"...","tags":"review"}'` |

 ## Phase 1 Memory Queries

 For each significantly changed file/module:
- - **Hybrid search**: `npx nano-brain query "<module-name>"` (best quality, combines BM25 + vector + reranking)
- - **Scoped search**: `npx nano-brain query "<function-name>" -c codebase` for code-specific results
+ - **Hybrid search**: `curl -s localhost:3100/api/query -d '{"query":"<module-name>"}'` (best quality, combines BM25 + vector + reranking)
+ - **Scoped search**: `curl -s localhost:3100/api/query -d '{"query":"<function-name>","collection":"codebase"}'` for code-specific results

 Specific queries:
- - Past review findings: `npx nano-brain query "review <module-name>"`
- - Architectural decisions: `npx nano-brain query "<module-name> architecture design decision"`
- - Known issues: `npx nano-brain query "<function-name> bug issue regression"`
+ - Past review findings: `curl -s localhost:3100/api/query -d '{"query":"review <module-name>"}'`
+ - Architectural decisions: `curl -s localhost:3100/api/query -d '{"query":"<module-name> architecture design decision"}'`
+ - Known issues: `curl -s localhost:3100/api/query -d '{"query":"<function-name> bug issue regression"}'`

 Collect relevant memory hits as `projectMemory` context for subagents.

@@ -36,7 +32,7 @@ Collect relevant memory hits as `projectMemory` context for subagents.

 Query nano-brain for known issues:
 ```bash
- npx nano-brain query "<function-name> bug issue edge case regression"
+ curl -s localhost:3100/api/query -d '{"query":"<function-name> bug issue edge case regression"}'
 ```

 ## Phase 5.5: Save Review to nano-brain
@@ -44,18 +40,7 @@ npx nano-brain query "<function-name> bug issue edge case regression"
 After generating the report, save key findings for future sessions:

 ```bash
- npx nano-brain write "## Code Review: PR #<number> - <title>
- Date: <date>
- Files: <changed_files>
-
- ### Key Findings
- <critical_issues_summary>
- <warnings_summary>
-
- ### Decisions
- <architectural_decisions_noted>
-
- ### Recommendation: <APPROVE|REQUEST_CHANGES|COMMENT>" --tags=review,pr-<number>
+ curl -s localhost:3100/api/write -d '{"content":"## Code Review: PR #<number> - <title>\nDate: <date>\nFiles: <changed_files>\n\n### Key Findings\n<critical_issues_summary>\n<warnings_summary>\n\n### Decisions\n<architectural_decisions_noted>\n\n### Recommendation: <APPROVE|REQUEST_CHANGES|COMMENT>","tags":"review,pr-<number>"}'
 ```

 This ensures future reviews can reference past findings on the same codebase areas.
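For illustration, the query payloads used above can be built programmatically (the endpoint shapes come from the table; the helper itself is hypothetical):

```typescript
// Hypothetical helper: build the JSON body for POST /api/query.
// "collection" is optional, matching the scoped-search example above.
function buildQueryPayload(query: string, collection?: string): string {
  return JSON.stringify(collection ? { query, collection } : { query });
}

const hybrid = buildQueryPayload("auth-service");
const scoped = buildQueryPayload("validateToken", "codebase");
// These strings are what the curl -d arguments above carry, e.g.
// fetch("http://localhost:3100/api/query", { method: "POST", body: scoped })
```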
@@ -30,6 +30,11 @@ Create `.opencode/reviews/` if it does not exist.
 {If Phase 4.5 verified all findings: "🔍 All findings verified"}
 {If Phase 4.5 was skipped (no critical/warning): omit this line entirely}

+ 📊 **Result Confidence: {emoji} {score}/100**
+ Accuracy: {accuracy_rate}% ({false_positives} false positive(s) caught) | Consensus: {consensus_rate}% | Evidence: {evidence_rate}%
+ {If score < 80: "⚠️ {gate_message}"}
+ {If score >= 80: omit the warning line}
+
 ## What This PR Does

 {1-3 sentences. Start with action verb. Include business impact if clear.}
@@ -0,0 +1,207 @@
# Setup Wizard (Phase -2)

Runs once when no `.opencode/code-reviewer.json` exists, or when user runs `/review --setup`.

## Wizard Flow

Ask each question conversationally. Accept number or name. If the user says "both" or lists multiple, select all that apply.

---

### Question 1: Frontend framework

```
What frontend framework does this project use?

1. Nuxt 3
2. Next.js
3. Vue 3 (SPA, no SSR)
4. React (CRA / Vite)
5. None (API-only project)
```

→ Maps to `stack.frontend`: `"nuxt"` | `"nextjs"` | `"vue"` | `"react"` | `null`

---

### Question 2: Backend framework

```
What backend framework?

1. Express
2. NestJS
3. Fastify
4. None (frontend-only project)
```

→ Maps to `stack.backend`: `"express"` | `"nestjs"` | `"fastify"` | `null`

---

### Question 3: ORM / database access

```
How does the project access the database?

1. TypeORM
2. Prisma
3. Sequelize
4. Raw SQL / query builder
5. No database
```

→ Maps to `stack.orm`: `"typeorm"` | `"prisma"` | `"sequelize"` | `"raw"` | `null`

---

### Question 4: Language

```
TypeScript, JavaScript, or mixed?

1. TypeScript (strict)
2. TypeScript (loose / partial)
3. JavaScript
4. Mixed
```

→ Maps to `stack.language`: `"typescript-strict"` | `"typescript"` | `"javascript"` | `"mixed"`

---

### Question 5: State management (skip if backend-only)

```
Frontend state management?

1. Pinia
2. Vuex
3. Redux / Zustand
4. React hooks only
5. None / not applicable
```

→ Maps to `stack.state`: `"pinia"` | `"vuex"` | `"redux"` | `"hooks"` | `null`

---

### Question 6: Agent knowledge base

```
Does your workspace have an AGENTS.md or a .agents/ knowledge folder?

1. Yes — I have AGENTS.md + .agents/ folder at workspace root
2. Yes — I have only AGENTS.md (no .agents/ folder)
3. No — I don't have any of these
```

If yes:
- Ask for the **workspace root path** (the folder that contains all repos).
  Example: `/Users/tamlh/workspaces/NUSTechnology/Projects/zengamingx`
- Auto-detect by going one level up from the current repo and checking for `AGENTS.md`.
- Verify: check if `.agents/_repos/` and `.agents/_domains/` directories exist.

→ Maps to `agents.workspace_root`, `agents.has_repos_dir`, `agents.has_domains_dir`

---

## Saving the Config

After all questions, write `.opencode/code-reviewer.json`:

```json
{
  "version": "3.3.0",
  "stack": {
    "frontend": "nuxt",
    "backend": "express",
    "orm": "typeorm",
    "language": "typescript",
    "state": "pinia"
  },
  "agents": {
    "workspace_root": "/Users/tamlh/workspaces/NUSTechnology/Projects/zengamingx",
    "has_agents_md": true,
    "has_repos_dir": true,
    "has_domains_dir": true
  },
  "workspace": {
    "name": "",
    "github": {
      "owner": "",
      "default_base": "main"
    },
    "linear": {
      "enabled": true,
      "extract_from": ["branch_name", "pr_description", "pr_title"],
      "fetch_comments": true
    }
  },
  "output": {
    "directory": ".opencode/reviews",
    "filename_pattern": "{type}_{identifier}_{date}"
  },
  "report": {
    "verbosity": "compact",
    "show_suggestions": false,
    "show_improvements": true
  }
}
```

Then show a confirmation:

```
✅ Setup complete! Code reviewer configured for:
Frontend: Nuxt 3
Backend: Express
ORM: TypeORM
Language: TypeScript
State: Pinia

Knowledge base:
✅ AGENTS.md found at workspace root
✅ .agents/_repos/ — 37 repo files
✅ .agents/_domains/ — 7 domain files

Framework rules that will be applied:
• references/framework-rules/vue-nuxt.md
• references/framework-rules/express.md
• references/framework-rules/typeorm.md
• references/framework-rules/typescript.md

Config saved to: .opencode/code-reviewer.json
Run /review PR#123 to start your first review.
```

---

## Framework Rule File Mapping

| Stack value | Rule file | Applies to agent |
|-------------|-----------|-----------------|
| `frontend: "nuxt"` or `"vue"` | `framework-rules/vue-nuxt.md` | Quality + Security |
| `frontend: "nextjs"` | `framework-rules/nextjs.md` | Quality + Security |
| `frontend: "react"` | `framework-rules/react.md` | Quality + Security |
| `backend: "express"` | `framework-rules/express.md` | Security + Quality |
| `backend: "nestjs"` | `framework-rules/nestjs.md` | Security + Quality |
| `orm: "typeorm"` | `framework-rules/typeorm.md` | Security + Quality |
| `orm: "prisma"` | `framework-rules/prisma.md` | Security + Quality |
| `language: "typescript*"` | `framework-rules/typescript.md` | Quality |

Load ONLY the files that match the configured stack. Do not load all framework rules.
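A sketch of that mapping logic (a hypothetical helper; the file names come from the table above):

```typescript
// Hypothetical sketch: resolve rule files for a configured stack,
// per the mapping table above. Only matching files are returned.
interface Stack {
  frontend: string | null;
  backend: string | null;
  orm: string | null;
  language: string;
}

function ruleFiles(stack: Stack): string[] {
  const files: string[] = [];
  if (stack.frontend === "nuxt" || stack.frontend === "vue") files.push("framework-rules/vue-nuxt.md");
  if (stack.frontend === "nextjs") files.push("framework-rules/nextjs.md");
  if (stack.frontend === "react") files.push("framework-rules/react.md");
  if (stack.backend === "express") files.push("framework-rules/express.md");
  if (stack.backend === "nestjs") files.push("framework-rules/nestjs.md");
  if (stack.orm === "typeorm") files.push("framework-rules/typeorm.md");
  if (stack.orm === "prisma") files.push("framework-rules/prisma.md");
  if (stack.language.startsWith("typescript")) files.push("framework-rules/typescript.md");
  return files;
}

// The confirmation example above (Nuxt + Express + TypeORM + TypeScript):
const files = ruleFiles({ frontend: "nuxt", backend: "express", orm: "typeorm", language: "typescript" });
```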

---

## Re-running Setup

Users can reset at any time:

```
/review --setup
```

This overwrites `.opencode/code-reviewer.json` entirely. Warn the user before overwriting existing config.