codingbuddy-rules 5.2.0 → 5.4.0

Files changed (46)
  1. package/.ai-rules/adapters/antigravity.md +2 -0
  2. package/.ai-rules/adapters/claude-code.md +271 -46
  3. package/.ai-rules/adapters/codex.md +2 -0
  4. package/.ai-rules/adapters/cursor.md +2 -0
  5. package/.ai-rules/adapters/kiro.md +2 -0
  6. package/.ai-rules/adapters/opencode.md +14 -8
  7. package/.ai-rules/adapters/q.md +2 -0
  8. package/.ai-rules/adapters/windsurf.md +2 -0
  9. package/.ai-rules/agent-stacks/api-development.json +17 -0
  10. package/.ai-rules/agent-stacks/data-pipeline.json +12 -0
  11. package/.ai-rules/agent-stacks/frontend-polish.json +17 -0
  12. package/.ai-rules/agent-stacks/full-stack.json +18 -0
  13. package/.ai-rules/agent-stacks/ml-infrastructure.json +17 -0
  14. package/.ai-rules/agent-stacks/security-audit.json +16 -0
  15. package/.ai-rules/agents/README.md +6 -4
  16. package/.ai-rules/agents/accessibility-specialist.json +1 -0
  17. package/.ai-rules/agents/act-mode.json +2 -1
  18. package/.ai-rules/agents/architecture-specialist.json +1 -0
  19. package/.ai-rules/agents/auto-mode.json +14 -8
  20. package/.ai-rules/agents/code-quality-specialist.json +1 -0
  21. package/.ai-rules/agents/code-reviewer.json +1 -0
  22. package/.ai-rules/agents/documentation-specialist.json +1 -0
  23. package/.ai-rules/agents/eval-mode.json +1 -0
  24. package/.ai-rules/agents/event-architecture-specialist.json +1 -0
  25. package/.ai-rules/agents/i18n-specialist.json +1 -0
  26. package/.ai-rules/agents/integration-specialist.json +1 -0
  27. package/.ai-rules/agents/observability-specialist.json +1 -0
  28. package/.ai-rules/agents/performance-specialist.json +1 -0
  29. package/.ai-rules/agents/plan-mode.json +14 -8
  30. package/.ai-rules/agents/security-specialist.json +1 -0
  31. package/.ai-rules/agents/seo-specialist.json +1 -0
  32. package/.ai-rules/agents/solution-architect.json +6 -6
  33. package/.ai-rules/agents/technical-planner.json +11 -11
  34. package/.ai-rules/agents/test-strategy-specialist.json +1 -0
  35. package/.ai-rules/agents/ui-ux-designer.json +1 -0
  36. package/.ai-rules/rules/core.md +51 -25
  37. package/.ai-rules/rules/parallel-execution.md +59 -0
  38. package/.ai-rules/rules/pr-review-cycle.md +272 -0
  39. package/.ai-rules/rules/severity-classification.md +214 -0
  40. package/.ai-rules/schemas/agent.schema.json +2 -2
  41. package/.ai-rules/skills/incident-response/severity-classification.md +17 -141
  42. package/.ai-rules/skills/pr-review/SKILL.md +2 -0
  43. package/.ai-rules/skills/ship/SKILL.md +35 -0
  44. package/.ai-rules/skills/systematic-debugging/SKILL.md +39 -0
  45. package/lib/init/scaffold.js +4 -0
  46. package/package.json +2 -1
@@ -593,6 +593,8 @@ When AUTO keyword is detected, Antigravity calls `parse_mode` MCP tool which ret
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Configuration

  Configure in `codingbuddy.config.json`:
@@ -529,67 +529,79 @@ When `parse_mode` returns `dispatch="auto"` or `dispatchReady` with specialist a

  **Rule:** Every listed specialist MUST be dispatched. Skipping any specialist is a protocol violation.

- #### Teams-Based Specialist Dispatch (Preferred)
+ #### Red Flags

- Teams are preferred over the Agent tool for specialist dispatch because they enable structured coordination and message-based reporting:
+ | Thought | Reality |
+ |---------|---------|
+ | "I can handle this analysis myself" | Specialists have domain expertise. Dispatch them. |
+ | "It's just a small change" | dispatch="auto" means the system determined specialists are needed. |
+ | "I'll save time by skipping" | Skipping causes missed issues that cost more later. |
+ | "I'll dispatch later" | Dispatch IMMEDIATELY when dispatch="auto" is returned. |

- ```
- 1. TeamCreate({ team_name: "<task>-specialists" })
- 2. Spawn specialists as teammates:
-    Agent({ team_name, name: "security-specialist", subagent_type: "general-purpose", prompt: ... })
-    Agent({ team_name, name: "code-quality-specialist", subagent_type: "general-purpose", prompt: ... })
- 3. Create and assign tasks:
-    TaskCreate({ subject: "Security review of auth module" })
-    TaskUpdate({ taskId, owner: "security-specialist" })
- 4. Specialists work autonomously, report via SendMessage:
-    SendMessage({ to: "team-lead", message: "## Security Findings\n- ...", summary: "Security review done" })
- 5. Team lead collects all findings
- 6. Shutdown: SendMessage({ to: "security-specialist", message: { type: "shutdown_request" } })
- ```
+ ### Execution Model: Outer Transport vs Inner Coordination

- #### SendMessage-Based Reporting
+ CodingBuddy uses a **nested execution model** with two distinct layers:

- Specialists report findings through `SendMessage` to the team lead. This enables:
- - Structured collection of all specialist outputs
- - Consolidated summary for the user
- - Clear audit trail of what each specialist found
+ | Layer | Role | Tool | Scope |
+ |-------|------|------|-------|
+ | **Outer transport** | Parallel task execution across isolated environments | **TaskMaestro** (tmux + git worktree) or **SubAgent** (background agents) | One pane/agent per issue or task |
+ | **Inner coordination** | Specialist collaboration within a single session | **Teams** (experimental) | Multiple specialists within one pane/session |

- **Report format:**
- ```markdown
- ## [Specialist Name] Findings
+ > **Key distinction:** TaskMaestro and SubAgent are alternatives for the *outer* layer. Teams is an *inner* layer that can optionally run inside either outer strategy.

- ### Critical
- - [finding]
+ #### Nested Execution Examples

- ### High
- - [finding]
+ **Example 1: TaskMaestro (outer) + Teams (inner)**

- ### Medium
- - [finding]
+ ```
+ TaskMaestro session (outer)
+ ├── Pane 1: Issue #101 (auth feature)
+ │   └── Teams session (inner, optional)
+ │       ├── security-specialist → reviews auth impl
+ │       └── test-strategy-specialist → validates test coverage
+ ├── Pane 2: Issue #102 (dashboard UI)
+ │   └── Single agent (no inner Teams needed)
+ └── Pane 3: Issue #103 (API refactor)
+     └── Teams session (inner, optional)
+         ├── architecture-specialist → validates API design
+         └── performance-specialist → checks query efficiency
+ ```

- ### Recommendations
- - [recommendation]
+ **Example 2: SubAgent (outer) without inner Teams**
+
+ ```
+ SubAgent dispatch (outer)
+ ├── Agent 1: security-specialist (run_in_background)
+ ├── Agent 2: accessibility-specialist (run_in_background)
+ └── Agent 3: performance-specialist (run_in_background)
+ → Collect results via TaskOutput
  ```

- #### Fallback: Agent Tool
+ **Example 3: TaskMaestro (outer) + SubAgent (inner, within worker)**

- If Teams-based dispatch fails, fall back to the Agent tool:
- - Use `run_in_background: true` for each specialist
- - Collect results via `TaskOutput`
- - Document the fallback reason in your response
+ ```
+ TaskMaestro session (outer, conductor)
+ ├── Pane 1: Worker for Issue #101 (auth feature)
+ │   ├── Explore subAgent → researches existing auth patterns
+ │   ├── Plan subAgent → drafts TDD test plan
+ │   ├── [Worker writes code directly in its own worktree]
+ │   └── [Worker commits, pushes, creates PR, writes RESULT.json]
+ ├── Pane 2: Worker for Issue #102 (dashboard UI)
+ │   └── Worker uses sub-agents for component research
+ │       (no cross-pane interference because each worker owns its worktree)
+ └── Pane 3: Review Agent (from review cycle protocol)
+     └── EVAL mode reviewer for completed PRs
+ ```

- #### Red Flags
+ This is the **recommended pattern for complex worker tasks** where parallel research or context protection would benefit the worker. The conductor still uses TaskMaestro for the outer dispatch — only the worker's internal orchestration uses sub-agents.

- | Thought | Reality |
- |---------|---------|
- | "I can handle this analysis myself" | Specialists have domain expertise. Dispatch them. |
- | "It's just a small change" | dispatch="auto" means the system determined specialists are needed. |
- | "I'll save time by skipping" | Skipping causes missed issues that cost more later. |
- | "I'll dispatch later" | Dispatch IMMEDIATELY when dispatch="auto" is returned. |
+ **Key invariant:** Sub-agents dispatched by a worker operate inside that worker's git worktree. Cross-pane file conflicts are impossible because each pane's worker owns its own isolated worktree.
+
+ See [`../rules/parallel-execution.md`](../rules/parallel-execution.md) "Conductor vs Worker Context" section for the authoritative rule.

  ### Execution Strategy Selection (MANDATORY)

- When `parse_mode` returns `availableStrategies`:
+ When `parse_mode` returns `availableStrategies`, select the **outer transport** strategy:

  1. **Check `availableStrategies`** in the response
  2. **If both strategies available** (`["subagent", "taskmaestro"]`), ask user with AskUserQuestion:
@@ -600,8 +612,8 @@ When `parse_mode` returns `availableStrategies`:
     - Yes → invoke `/taskmaestro` skill to guide installation, then re-check
     - No → proceed with subagent
  4. **Call `dispatch_agents`** with chosen `executionStrategy` parameter:
-    - `dispatch_agents({ mode, specialists, executionStrategy: "subagent" })` — existing Agent tool flow
-    - `dispatch_agents({ mode, specialists, executionStrategy: "taskmaestro" })` — returns tmux assignments
+    - `dispatch_agents({ mode, specialists, executionStrategy: "subagent" })` — Agent tool flow
+    - `dispatch_agents({ mode, specialists, executionStrategy: "taskmaestro" })` — tmux pane assignments
  5. **Execute** based on strategy:
     - **subagent**: Use `dispatchParams` with Agent tool (`run_in_background: true`)
     - **taskmaestro**: Follow `executionHint` — start panes, assign prompts, monitor, collect results
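The selection flow above reduces to a small decision helper. A hypothetical sketch (the `chooseStrategy` helper and the `askUser` callback are illustrative stand-ins, not part of the adapter's API; only the `"subagent"`/`"taskmaestro"` strategy names come from the text):

```typescript
// Illustrative sketch of the outer-transport selection steps above.
// `available` is the availableStrategies array from parse_mode;
// `askUser` stands in for the AskUserQuestion prompt in step 2.
type Strategy = "subagent" | "taskmaestro";

function chooseStrategy(
  available: Strategy[],
  askUser: (options: Strategy[]) => Strategy
): Strategy {
  if (available.length === 0) {
    // Nothing advertised: fall back to the plain Agent-tool flow.
    return "subagent";
  }
  if (available.length === 1) {
    // Only one transport installed: use it directly.
    return available[0];
  }
  // Both available: the user decides (step 2 above).
  return askUser(available);
}
```

The chosen value is then passed as the `executionStrategy` parameter of `dispatch_agents`, per step 4.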
@@ -626,6 +638,71 @@ When `executionStrategy: "taskmaestro"` is chosen, `dispatch_agents` returns:

  Execute by following the `executionHint` commands sequentially.

+ ### Teams as Inner Coordination Layer (Experimental)
+
+ > **Capability gate:** Teams-based coordination is experimental and depends on Claude Code native Teams support being available at runtime. If Teams APIs (`TeamCreate`, `SendMessage`, etc.) are not available, fall back to the SubAgent dispatch pattern.
+
+ Teams provide structured specialist coordination **within** a single session or TaskMaestro pane. Use Teams when a task benefits from multiple specialists collaborating and reporting back to a coordinator, rather than running independently.
+
+ #### When to Use Inner Teams
+
+ - A single task (or pane) needs input from 2+ specialists who should coordinate
+ - Specialist findings need to be collected and consolidated by a team lead
+ - The task requires structured message-based reporting between specialists
+
+ #### When NOT to Use Inner Teams
+
+ - Each specialist can run independently with no cross-specialist dependencies
+ - You are dispatching specialists across separate issues/tasks (use outer transport instead)
+ - Teams APIs are not available at runtime
+
+ #### Teams Workflow (within a session)
+
+ ```
+ 1. TeamCreate({ team_name: "<task>-specialists" })
+ 2. Spawn specialists as teammates:
+    Agent({ team_name, name: "security-specialist", subagent_type: "general-purpose", prompt: ... })
+    Agent({ team_name, name: "code-quality-specialist", subagent_type: "general-purpose", prompt: ... })
+ 3. Create and assign tasks:
+    TaskCreate({ subject: "Security review of auth module" })
+    TaskUpdate({ taskId, owner: "security-specialist" })
+ 4. Specialists work autonomously, report via SendMessage:
+    SendMessage({ to: "team-lead", message: "## Security Findings\n- ...", summary: "Security review done" })
+ 5. Team lead collects all findings
+ 6. Shutdown: SendMessage({ to: "security-specialist", message: { type: "shutdown_request" } })
+ ```
+
+ #### SendMessage-Based Reporting
+
+ Specialists report findings through `SendMessage` to the team lead. This enables:
+ - Structured collection of all specialist outputs
+ - Consolidated summary for the user
+ - Clear audit trail of what each specialist found
+
+ **Report format:**
+ ```markdown
+ ## [Specialist Name] Findings
+
+ ### Critical
+ - [finding]
+
+ ### High
+ - [finding]
+
+ ### Medium
+ - [finding]
+
+ ### Recommendations
+ - [recommendation]
+ ```
+
+ #### Fallback: SubAgent Dispatch
+
+ If Teams APIs are unavailable or Teams-based dispatch fails:
+ - Use SubAgent with `run_in_background: true` for each specialist
+ - Collect results via `TaskOutput`
+ - Document the fallback reason in your response
+
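The capability gate and fallback rules above amount to a small guard. A hypothetical sketch (the `planInnerDispatch` helper and `DispatchPlan` shape are invented for illustration; the Teams-first preference and the requirement to record the fallback reason come from the text):

```typescript
// Illustrative sketch of the capability gate: prefer Teams when the
// APIs exist, otherwise fall back to SubAgent dispatch and record why.
interface DispatchPlan {
  layer: "teams" | "subagent";
  fallbackReason?: string; // documented in the response, per the fallback rules
}

function planInnerDispatch(teamsAvailable: boolean, teamsFailed = false): DispatchPlan {
  if (teamsAvailable && !teamsFailed) {
    return { layer: "teams" };
  }
  return {
    layer: "subagent",
    fallbackReason: teamsAvailable
      ? "Teams-based dispatch failed"
      : "Teams APIs not available at runtime",
  };
}
```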
  ## PR All-in-One Skill

  Unified commit and PR workflow that:
@@ -715,6 +792,8 @@ AUTO implement user authentication with JWT
  - **Success**: `Critical = 0 AND High = 0` severity issues
  - **Failure**: Max iterations reached (default: 3, configurable via `auto.maxIterations`)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The approval loop Claude Code runs over a PR (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Configuration

  Configure AUTO mode in `codingbuddy.config.json`:
@@ -763,3 +842,149 @@ module.exports = {
  | Iterations | Single pass per mode | Multiple cycles until quality met |
  | Exit | User decides completion | Quality criteria or max iterations |
  | Intervention | Required for each step | Only when requested or on failure |
+
+ ## EVAL Review Agent Prompt Template
+
+ Canonical template for review agents that evaluate PRs in EVAL mode. Use this when a conductor, review pane, or solo workflow needs to generate a structured review prompt.
+
+ ### When to Use
+
+ - **Conductor review**: The conductor generates this prompt for itself or a dedicated review pane
+ - **TaskMaestro review pane**: The review agent receives this prompt as its `TASK.md`
+ - **Solo workflow**: A developer enters EVAL mode to review their own PR before requesting human review
+
+ ### Template
+
+ The review agent prompt follows this structure:
+
+ ```
+ EVAL: review PR #<PR_NUMBER> for issue #<ISSUE_NUMBER>
+
+ Review the PR against the linked issue's acceptance criteria.
+ Use review_pr MCP tool, dispatch recommended specialists, and follow pr-review-cycle.md protocol.
+ Approve only when Critical = 0 AND High = 0.
+ ```
+
+ ### Step-by-Step Execution
+
+ When the review agent receives the prompt above, it MUST execute these steps in order:
+
+ #### 1. Enter EVAL Mode
+
+ ```typescript
+ const result = await parse_mode({
+   prompt: "EVAL: review PR #<PR_NUMBER> for issue #<ISSUE_NUMBER>"
+ });
+ ```
+
+ This returns:
+ - `mode: "EVAL"` with code-reviewer as primary agent
+ - `parallelAgentsRecommendation` with EVAL-mode specialists
+ - `dispatchReady` (if auto-dispatch is enabled)
+
+ #### 2. Fetch Structured Review Data
+
+ ```typescript
+ const reviewData = await review_pr({
+   pr_number: <PR_NUMBER>,
+   issue_number: <ISSUE_NUMBER> // optional, for spec compliance
+ });
+ ```
+
+ The `review_pr` tool returns:
+ - PR metadata (title, author, base/head branches)
+ - Diff summary and changed files list
+ - Auto-generated checklists for changed file domains
+ - Recommended specialist agents based on file patterns
+
+ #### 3. Dispatch Recommended Specialists
+
+ Use the specialists from `parse_mode` or `review_pr` response:
+
+ ```typescript
+ // Option A: Auto-dispatch from parse_mode (preferred)
+ if (result.dispatchReady?.parallelAgents) {
+   for (const agent of result.dispatchReady.parallelAgents) {
+     Agent({
+       subagent_type: agent.dispatchParams.subagent_type,
+       prompt: agent.dispatchParams.prompt,
+       description: agent.dispatchParams.description,
+       run_in_background: true,
+     });
+   }
+ }
+
+ // Option B: Manual dispatch from review_pr recommendations
+ const dispatch = await dispatch_agents({
+   mode: "EVAL",
+   specialists: reviewData.recommendedSpecialists,
+   taskDescription: `Review PR #${prNumber}`,
+   targetFiles: reviewData.changedFiles,
+ });
+ ```
+
+ Typical EVAL-mode specialists: `security-specialist`, `accessibility-specialist`, `performance-specialist`, `code-quality-specialist`.
+
+ #### 4. Collect and Classify Findings
+
+ Collect all specialist results and classify each finding using the [Code Review Severity](../rules/severity-classification.md#code-review-severity) scale:
+
+ | Severity | Meaning | Merge Gate |
+ |----------|---------|------------|
+ | `critical` | Blocks approval, must fix | BLOCKED |
+ | `high` | Should fix before merge | BLOCKED (unless explicitly deferred) |
+ | `medium` | Worth addressing, does not gate merge | APPROVED with comments |
+ | `low` | Style/polish suggestions | APPROVED |
+
+ #### 5. Write the Review
+
+ Post the structured review on the PR:
+
+ ```bash
+ gh pr review <PR_NUMBER> --comment --body "<structured review>"
+ ```
+
+ Review body format (per [`pr-review-cycle.md`](../rules/pr-review-cycle.md)):
+
+ ```markdown
+ ## Review: [APPROVE | CHANGES_REQUESTED]
+ ### CI Status: [PASS | FAIL]
+ ### Issues Found:
+ - [critical]: <description> — <file:line>
+ - [high]: <description> — <file:line>
+ - [medium]: <description> — <file:line>
+ ### Recommendation: [APPROVE | REQUEST_CHANGES]
+ ```
+
+ #### 6. Approval Gate
+
+ Follow the approval criteria from [`pr-review-cycle.md`](../rules/pr-review-cycle.md):
+
+ - **Approve** when: CI green, Critical = 0, High = 0 (or explicitly deferred with ticket)
+ - **Request changes** when: any Critical or High finding remains unresolved
+
+ ```bash
+ # Approve (when reviewer is not the PR author)
+ gh pr review <PR_NUMBER> --approve --body "LGTM - all review comments addressed"
+
+ # Request changes
+ gh pr review <PR_NUMBER> --request-changes --body "<structured review>"
+ ```
+
+ ### Complete Prompt Example
+
+ For a conductor generating a review agent task:
+
+ ```markdown
+ EVAL: review PR #42 for issue #40
+
+ Review the PR against the linked issue's acceptance criteria.
+ Use review_pr MCP tool, dispatch recommended specialists, and follow pr-review-cycle.md protocol.
+ Approve only when Critical = 0 AND High = 0.
+ ```
+
+ ### Canonical References
+
+ - **Severity scale**: [`severity-classification.md`](../rules/severity-classification.md#code-review-severity) — Critical/High/Medium/Low definitions
+ - **Review protocol**: [`pr-review-cycle.md`](../rules/pr-review-cycle.md) — CI gate, review steps, approval criteria, commit hygiene
+ - **MCP tool**: `review_pr(pr_number, issue_number?, timeout?)` — structured PR review data
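Steps 4 and 6 above combine into a single merge-gate predicate. A hypothetical sketch (the `Finding` shape and `reviewVerdict` name are invented for illustration; the gate itself follows the severity table: CI green plus Critical = 0 and High = 0, with explicitly deferred high findings allowed through):

```typescript
// Illustrative merge gate from the severity table and approval criteria.
type Severity = "critical" | "high" | "medium" | "low";

interface Finding {
  severity: Severity;
  deferred?: boolean; // high findings may be deferred with a ticket
}

function reviewVerdict(
  ciGreen: boolean,
  findings: Finding[]
): "APPROVE" | "REQUEST_CHANGES" {
  // critical always blocks; high blocks unless explicitly deferred;
  // medium/low never gate the merge.
  const blocking = findings.filter(
    (f) => f.severity === "critical" || (f.severity === "high" && !f.deferred)
  );
  return ciGreen && blocking.length === 0 ? "APPROVE" : "REQUEST_CHANGES";
}
```

The predicate is the same `Critical = 0 AND High = 0` condition the prompt template states, with the CI gate applied first.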
@@ -619,6 +619,8 @@ AUTO implement user authentication with JWT
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Copilot Integration

  When using GitHub Copilot Chat with AUTO mode:
@@ -429,6 +429,8 @@ When AUTO keyword is detected, Cursor calls `parse_mode` MCP tool which returns
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Configuration

  Configure in `codingbuddy.config.json`:
@@ -552,6 +552,8 @@ When AUTO keyword is detected, Kiro calls `parse_mode` MCP tool which returns AU
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Configuration

  Configure in `codingbuddy.config.json`:
@@ -145,11 +145,13 @@ Update your configuration file (`.opencode.json` or `crush.json`):

  | Codingbuddy Agent | OpenCode Agent | Purpose |
  |------------------|----------------|---------|
- | **plan-mode.json** | `plan-mode` | PLAN mode workflow (delegates to frontend-developer) |
- | **act-mode.json** | `act-mode` | ACT mode workflow (delegates to frontend-developer) |
+ | **plan-mode.json** | `plan-mode` | PLAN mode workflow (delegates to solution-architect or technical-planner based on task complexity) |
+ | **act-mode.json** | `act-mode` | ACT mode workflow (delegates to software-engineer or domain specialist per ACT resolution rules) |
  | **eval-mode.json** | `eval-mode` | EVAL mode workflow (delegates to code-reviewer) |
  | **auto-mode.json** | N/A (keyword-triggered) | AUTO mode workflow (autonomous PLAN→ACT→EVAL cycle) |
- | **frontend-developer.json** | N/A (delegate) | Primary development implementation |
+ | **solution-architect.json** | N/A (delegate) | PLAN mode system-level design and architecture |
+ | **technical-planner.json** | N/A (delegate) | PLAN mode implementation-level TDD planning |
+ | **frontend-developer.json** | N/A (delegate) | ACT mode implementation for frontend projects |
  | **backend-developer.json** | `backend` | Backend development (Node.js, Python, Go, Java, Rust) |
  | **code-reviewer.json** | N/A (delegate) | Code quality evaluation implementation |
  | **architecture-specialist.json** | `architect` | Architecture and design patterns |
@@ -162,7 +164,7 @@ Update your configuration file (`.opencode.json` or `crush.json`):

  - **Mode Agents** (`plan-mode`, `act-mode`, `eval-mode`, `auto-mode`): Workflow orchestrators that delegate to appropriate implementation agents
  - **Specialist Agents** (`architect`, `security`, etc.): Domain-specific expertise for specialized tasks
- - **Delegate Agents** (`frontend-developer`, `code-reviewer`): Implementation agents that Mode Agents delegate to
+ - **Delegate Agents**: PLAN mode delegates to `solution-architect` or `technical-planner`; ACT mode delegates to `software-engineer` or a domain specialist (e.g., `frontend-developer`, `backend-developer`); EVAL mode delegates to `code-reviewer`

  ### 3. MCP Server Integration

@@ -292,15 +294,17 @@ The `parse_mode` tool now returns additional Mode Agent information and dynamic
    "language": "en",
    "languageInstruction": "Always respond in English.",
    "agent": "plan-mode",
-   "delegates_to": "frontend-developer",
+   "delegates_to": "solution-architect",
    "delegate_agent_info": {
-     "name": "Frontend Developer",
-     "description": "React/Next.js expert, TDD and design system experience",
-     "expertise": ["React", "Next.js", "TDD", "TypeScript"]
+     "name": "Solution Architect",
+     "description": "High-level system design and architecture planning specialist",
+     "expertise": ["System Architecture", "Technology Selection", "Integration Patterns", "Scalability Planning"]
    }
  }
  ```

+ > **Note:** `delegates_to` is resolved dynamically based on prompt intent. System-level design prompts resolve to `solution-architect`; implementation-level planning prompts resolve to `technical-planner`.
+
  **New Fields:**
  - `language`: Language code from codingbuddy.config.json
  - `languageInstruction`: Formatted instruction text for AI assistants (🆕)
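The dynamic `delegates_to` resolution described in this hunk's note can be pictured as an intent heuristic. A purely illustrative sketch (the keyword regex and `resolvePlanDelegate` name are invented for illustration; the package's actual resolver is not shown in this diff):

```typescript
// Hypothetical sketch: system-level design prompts resolve to
// solution-architect, implementation-level planning prompts to
// technical-planner, per the note above. The keyword list is a guess.
function resolvePlanDelegate(
  prompt: string
): "solution-architect" | "technical-planner" {
  const systemLevel =
    /\b(architecture|system design|technology selection|integration|scalability)\b/i;
  return systemLevel.test(prompt) ? "solution-architect" : "technical-planner";
}
```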
@@ -774,6 +778,8 @@ AUTO Build a new user authentication feature
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### OpenCode Agent Integration

  AUTO mode describes an autonomous PLAN→ACT→EVAL cycle. However, agent switching behavior differs by platform:
@@ -198,6 +198,8 @@ AUTO create a new Lambda function
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### AWS Integration with AUTO Mode

  Amazon Q's AWS expertise complements AUTO mode:
@@ -299,6 +299,8 @@ Use the `AUTO` keyword at the start of your message:
  - Success: `Critical = 0 AND High = 0`
  - Failure: Max iterations reached (default: 3)

+ > **Severity and review-cycle canonical sources:** The `Critical`/`High` levels above are the **Code Review Severity** scale defined in [`../rules/severity-classification.md`](../rules/severity-classification.md#code-review-severity). The PR approval loop (CI gate → review → fix → re-review → approve) is specified in [`../rules/pr-review-cycle.md`](../rules/pr-review-cycle.md). Follow those canonical sources rather than re-deriving severity or approval criteria from this adapter.
+
  ### Configuration

  Configure in `codingbuddy.config.json`:
@@ -0,0 +1,17 @@
+ {
+   "name": "api-development",
+   "description": "API development team for building and maintaining backend services",
+   "category": "backend",
+   "tags": ["api", "backend", "rest", "graphql"],
+   "primary_agent": "backend-developer",
+   "specialist_agents": [
+     "security-specialist",
+     "test-engineer",
+     "performance-specialist",
+     "documentation-specialist"
+   ],
+   "recommended_for": {
+     "file_patterns": ["*.controller.ts", "*.service.ts", "*.resolver.ts", "*.route.ts"],
+     "modes": ["PLAN", "ACT"]
+   }
+ }
@@ -0,0 +1,12 @@
+ {
+   "name": "data-pipeline",
+   "description": "Data pipeline team for data engineering, analysis, and monitoring",
+   "category": "data",
+   "tags": ["data", "pipeline", "etl", "analytics"],
+   "primary_agent": "data-engineer",
+   "specialist_agents": ["data-scientist", "performance-specialist", "observability-specialist"],
+   "recommended_for": {
+     "file_patterns": ["*.sql", "*.py", "*migration*", "*pipeline*"],
+     "modes": ["PLAN", "ACT"]
+   }
+ }
@@ -0,0 +1,17 @@
+ {
+   "name": "frontend-polish",
+   "description": "Frontend polish team for UI/UX refinement, accessibility, and performance",
+   "category": "frontend",
+   "tags": ["ui", "ux", "accessibility", "frontend", "seo"],
+   "primary_agent": "frontend-developer",
+   "specialist_agents": [
+     "ui-ux-designer",
+     "accessibility-specialist",
+     "performance-specialist",
+     "seo-specialist"
+   ],
+   "recommended_for": {
+     "file_patterns": ["*.tsx", "*.css", "*.scss", "*.module.css"],
+     "modes": ["EVAL", "ACT"]
+   }
+ }
@@ -0,0 +1,18 @@
+ {
+   "name": "full-stack",
+   "description": "Full-stack development team for end-to-end web application development",
+   "category": "development",
+   "tags": ["web", "fullstack", "frontend", "backend"],
+   "primary_agent": "software-engineer",
+   "specialist_agents": [
+     "frontend-developer",
+     "backend-developer",
+     "security-specialist",
+     "test-engineer",
+     "performance-specialist"
+   ],
+   "recommended_for": {
+     "file_patterns": ["*.ts", "*.tsx", "*.js", "*.jsx"],
+     "modes": ["PLAN", "ACT", "AUTO"]
+   }
+ }
@@ -0,0 +1,17 @@
+ {
+   "name": "ml-infrastructure",
+   "description": "ML infrastructure team for AI/ML model development and deployment",
+   "category": "ml",
+   "tags": ["ml", "ai", "model", "inference", "training"],
+   "primary_agent": "ai-ml-engineer",
+   "specialist_agents": [
+     "data-scientist",
+     "performance-specialist",
+     "observability-specialist",
+     "test-engineer"
+   ],
+   "recommended_for": {
+     "file_patterns": ["*.py", "*model*", "*predict*", "*train*"],
+     "modes": ["PLAN", "ACT", "EVAL"]
+   }
+ }
@@ -0,0 +1,16 @@
+ {
+   "name": "security-audit",
+   "description": "Security audit team for vulnerability assessment and security hardening",
+   "category": "security",
+   "tags": ["security", "audit", "vulnerability", "compliance"],
+   "primary_agent": "security-engineer",
+   "specialist_agents": [
+     "security-specialist",
+     "test-strategy-specialist",
+     "code-quality-specialist"
+   ],
+   "recommended_for": {
+     "file_patterns": ["*auth*", "*token*", "*session*", "*crypto*"],
+     "modes": ["EVAL", "AUTO"]
+   }
+ }
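The `recommended_for.file_patterns` entries in these stacks use simple `*` globs. A hypothetical sketch of how changed files might be matched to stacks (the matcher is illustrative, not the package's actual selection logic; only the JSON shape and the sample patterns come from the files above):

```typescript
// Illustrative stack matcher: widen the simple "*" globs used in
// recommended_for.file_patterns into anchored regexes and test a file.
interface AgentStack {
  name: string;
  recommended_for: { file_patterns: string[]; modes: string[] };
}

function globToRegExp(pattern: string): RegExp {
  // Escape regex metacharacters, then widen "*" to ".*".
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&")
    .replace(/\*/g, ".*");
  return new RegExp(`^${escaped}$`);
}

function matchStacks(stacks: AgentStack[], changedFile: string, mode: string): string[] {
  return stacks
    .filter((s) => s.recommended_for.modes.includes(mode))
    .filter((s) =>
      s.recommended_for.file_patterns.some((p) => globToRegExp(p).test(changedFile))
    )
    .map((s) => s.name);
}
```

Under this reading, `*auth*` matches any path containing `auth`, and `*.controller.ts` matches any file ending in `.controller.ts`, gated by the stack's allowed modes.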
@@ -342,7 +342,7 @@ This ensures Mode Agents appear first in agent selection interfaces.
  Mode Agents handle workflow orchestration and delegate to implementation experts:

  - **Plan Mode** (`plan-mode.json`): Analysis and planning (delegates to primary developer)
- - **Act Mode** (`act-mode.json`): Implementation execution (delegates to primary developer)
+ - **Act Mode** (`act-mode.json`): Implementation execution (delegates to software-engineer or a domain specialist)
  - **Eval Mode** (`eval-mode.json`): Quality evaluation (delegates to code reviewer)
  - **Auto Mode** (`auto-mode.json`): Autonomous PLAN→ACT→EVAL cycle until quality achieved (Critical=0, High=0)

@@ -350,9 +350,11 @@ Mode Agents handle workflow orchestration and delegate to implementation experts

  These agents are automatically activated via Mode Agent delegation:

- - **Primary Developer Agent**: Activated by plan-mode/act-mode
-   - Example: `frontend-developer.json` (React/Next.js projects)
-   - Customize per project: `backend-developer.json`, `mobile-developer.json`, etc.
+ - **Planning Agents**: Activated by plan-mode (resolved by intent)
+   - `solution-architect.json`: System-level design (new features, architecture decisions, technology selection)
+   - `technical-planner.json`: Implementation-level planning (bite-sized TDD tasks with exact file paths)
+ - **Implementation Agent**: Activated by act-mode (resolved by intent / project config)
+   - Example: `frontend-developer.json` (React/Next.js projects), `backend-developer.json`, `mobile-developer.json`, or the language-agnostic `software-engineer.json`
  - **Code Reviewer** (`code-reviewer.json`): Activated by eval-mode

  ### Domain Specialists
@@ -4,6 +4,7 @@
    "color": "#27AE60",
    "role": {
      "title": "Accessibility Engineer",
+     "type": "specialist",
      "expertise": [
        "WCAG 2.1 AA compliance planning and verification",
        "ARIA attributes planning and verification",
@@ -4,8 +4,9 @@
    "color": "#82B366",
    "role": {
      "title": "Act Mode Agent",
+     "type": "utility",
      "mode": "ACT",
-     "purpose": "Mode Agent - delegates to Primary Developer Agent",
+     "purpose": "Mode Agent - delegates to the resolved implementation agent (software-engineer by default, domain specialist when selected)",
      "expertise": [
        "TDD cycle execution (Red → Green → Refactor)",
        "Code quality standards compliance",
@@ -4,6 +4,7 @@
    "color": "#2C3E80",
    "role": {
      "title": "Architecture Engineer",
+     "type": "specialist",
      "expertise": [
        "Layer placement planning and verification",
        "Dependency direction design and verification",