oh-my-opencode-gpt-slim 0.1.9 → 0.1.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +2 -2
- package/dist/agents/metis.d.ts +1 -1
- package/dist/cli/index.js +127 -79
- package/dist/features/builtin-commands/templates/refactor.d.ts +1 -1
- package/dist/index.js +441 -149
- package/dist/plugin/ultrawork-mode.d.ts +1 -0
- package/dist/tools/ast-grep/cli.d.ts +9 -0
- package/dist/tools/ast-grep/index.d.ts +1 -0
- package/dist/tools/ast-grep/language-support.d.ts +4 -0
- package/dist/tools/ast-grep/result-formatter.d.ts +2 -0
- package/dist/tools/ast-grep/sg-cli-path.d.ts +2 -0
- package/dist/tools/ast-grep/sg-compact-json-output.d.ts +2 -0
- package/dist/tools/ast-grep/tools.d.ts +3 -0
- package/dist/tools/ast-grep/types.d.ts +35 -0
- package/dist/tools/index.d.ts +1 -0
- package/package.json +10 -10
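A diff like the one below can be regenerated locally with npm's built-in `diff` subcommand (npm 7 or later; registry access assumed):

```shell
# Print the unified diff between the two published versions.
npm diff --diff=oh-my-opencode-gpt-slim@0.1.9 --diff=oh-my-opencode-gpt-slim@0.1.11

# Limit the output to a single file, e.g. the manifest.
npm diff --diff=oh-my-opencode-gpt-slim@0.1.9 --diff=oh-my-opencode-gpt-slim@0.1.11 package.json
```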
package/README.md
CHANGED

````diff
@@ -7,7 +7,7 @@
 
 ```diff
 - 38+ hooks → 11 hooks (error recovery, fallback, injectors, guards)
-- 16+ tools →
+- 16+ tools → leaner toolset (restored ast_grep_search; removed call-omo-agent, interactive-bash, look-at, skill-mcp)
 - 9 agents → 8 agents (removed Hephaestus)
 - ~2000 line Sisyphus prompt → ~180 line Codex-style prompt
 - 388 files changed, 41,610 deletions
@@ -38,7 +38,7 @@
 ### Removed
 
 - 33 hooks (todo-continuation-enforcer, think-mode, anthropic-*, session-notification, comment-checker, ralph-loop, etc.)
--
+- 4 tools (call-omo-agent, interactive-bash, look-at, skill-mcp)
 - Hephaestus agent
 - Gemini / Anthropic prompt patches
 
````
package/dist/agents/metis.d.ts
CHANGED

```diff
@@ -13,7 +13,7 @@ import type { AgentPromptMetadata } from "./types";
  * - Generate clarifying questions for the user
  * - Prepare directives for the planner agent
  */
-export declare const METIS_SYSTEM_PROMPT = "# Metis - Pre-Planning Consultant\n\n## CONSTRAINTS\n...";
+export declare const METIS_SYSTEM_PROMPT = "# Metis - Pre-Planning Consultant\n\n## CONSTRAINTS\n...";
export declare function createMetisAgent(model: string): AgentConfig;
export declare namespace createMetisAgent {
    var mode: "subagent";
```

The two versions of the embedded `METIS_SYSTEM_PROMPT` string (abridged above) are identical except for one bullet in the Refactoring "Tool Guidance" list: `` - `ast_grep_replace(dryRun=true)`: Preview transformations `` becomes `` - Use `ast_grep_search` to inspect structure before applying LSP/Edit changes ``.
package/dist/cli/index.js
CHANGED

```diff
@@ -2145,7 +2145,7 @@ var package_default;
 var init_package = __esm(() => {
   package_default = {
     name: "oh-my-opencode-gpt-slim",
-    version: "0.1.
+    version: "0.1.11",
     description: "GPT-optimized lean fork of oh-my-openagent \u2014 33 hooks removed, 5 tools removed, Sisyphus prompt rewritten based on OpenAI Codex prompt.md",
     main: "dist/index.js",
     types: "dist/index.d.ts",
@@ -2223,15 +2223,15 @@ var init_package = __esm(() => {
       typescript: "^5.7.3"
     },
     optionalDependencies: {
-      "oh-my-opencode-gpt-slim-darwin-arm64": "0.1.
-      "oh-my-opencode-gpt-slim-darwin-x64": "0.1.
-      "oh-my-opencode-gpt-slim-darwin-x64-baseline": "0.1.
-      "oh-my-opencode-gpt-slim-linux-arm64": "0.1.
-      "oh-my-opencode-gpt-slim-linux-arm64-musl": "0.1.
-      "oh-my-opencode-gpt-slim-linux-x64": "0.1.
-      "oh-my-opencode-gpt-slim-linux-x64-baseline": "0.1.
-      "oh-my-opencode-gpt-slim-linux-x64-musl": "0.1.
-      "oh-my-opencode-gpt-slim-linux-x64-musl-baseline": "0.1.
+      "oh-my-opencode-gpt-slim-darwin-arm64": "0.1.11",
+      "oh-my-opencode-gpt-slim-darwin-x64": "0.1.11",
+      "oh-my-opencode-gpt-slim-darwin-x64-baseline": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-arm64": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-arm64-musl": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-x64": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-x64-baseline": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-x64-musl": "0.1.11",
+      "oh-my-opencode-gpt-slim-linux-x64-musl-baseline": "0.1.11"
     },
     overrides: {
       "@opencode-ai/sdk": "^1.2.17"
```
```diff
@@ -27720,10 +27720,60 @@ async function checkConfig() {
 }
 
 // src/cli/doctor/checks/dependencies.ts
-
-import {
+import { existsSync as existsSync26 } from "fs";
+import { createRequire as createRequire2 } from "module";
+import { dirname as dirname5, join as join23 } from "path";
+
+// src/tools/ast-grep/sg-cli-path.ts
+import { existsSync as existsSync25, statSync as statSync3 } from "fs";
 import { createRequire } from "module";
 import { dirname as dirname4, join as join22 } from "path";
+function isValidBinary(filePath) {
+  try {
+    return statSync3(filePath).size > 1e4;
+  } catch {
+    return false;
+  }
+}
+function findWithBunWhich(binaryName) {
+  try {
+    const filePath = Bun.which(binaryName);
+    if (filePath && isValidBinary(filePath)) {
+      return filePath;
+    }
+  } catch {}
+  return null;
+}
+function findBundledCliBinary() {
+  const binaryName = process.platform === "win32" ? "sg.exe" : "sg";
+  try {
+    const require2 = createRequire(import.meta.url);
+    const packageJsonPath = require2.resolve("@ast-grep/cli/package.json");
+    const packageDir = dirname4(packageJsonPath);
+    const filePath = join22(packageDir, binaryName);
+    if (existsSync25(filePath) && isValidBinary(filePath)) {
+      return filePath;
+    }
+  } catch {}
+  return null;
+}
+function findHomebrewBinary() {
+  if (process.platform !== "darwin") {
+    return null;
+  }
+  for (const filePath of ["/opt/homebrew/bin/sg", "/usr/local/bin/sg"]) {
+    if (existsSync25(filePath) && isValidBinary(filePath)) {
+      return filePath;
+    }
+  }
+  return null;
+}
+function findSgCliPathSync() {
+  return findWithBunWhich("sg") ?? findWithBunWhich("ast-grep") ?? findBundledCliBinary() ?? findHomebrewBinary();
+}
+
+// src/cli/doctor/checks/dependencies.ts
+init_spawn_with_windows_hide();
 async function checkBinaryExists(binary2) {
   try {
     const path3 = Bun.which(binary2);
```
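In the new `sg-cli-path.ts` module above, `findSgCliPathSync` chains its lookup strategies with `??`, so each strategy runs only if every earlier one returned null. The pattern in isolation (the finder stubs below are illustrative, not from the package):

```javascript
// Generic "first non-null result" over a list of finder functions,
// mirroring the ?? chain in findSgCliPathSync.
function firstHit(...finders) {
  for (const find of finders) {
    const hit = find();
    if (hit != null) return hit; // skip null/undefined, keep falling through
  }
  return null;
}

// Illustrative stand-ins for the real strategies (PATH lookup, bundled copy, Homebrew).
const fromPath = () => null; // pretend `sg` is not on PATH
const fromBundle = () => "/project/node_modules/@ast-grep/cli/sg";
const fromHomebrew = () => "/opt/homebrew/bin/sg";

console.log(firstHit(fromPath, fromBundle, fromHomebrew)); // → /project/node_modules/@ast-grep/cli/sg
```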
```diff
@@ -27746,26 +27796,24 @@ async function getBinaryVersion(binary2) {
   return null;
 }
 async function checkAstGrepCli() {
-  const
-
-  const binary2 = binaryCheck.exists ? binaryCheck : altBinaryCheck;
-  if (!binary2 || !binary2.exists) {
+  const resolvedPath = findSgCliPathSync();
+  if (!resolvedPath) {
     return {
       name: "AST-Grep CLI",
       required: false,
       installed: false,
       version: null,
       path: null,
-      installHint: "Install:
+      installHint: "Install: bun add -g @ast-grep/cli or brew install ast-grep"
     };
   }
-  const version2 = await getBinaryVersion(
+  const version2 = await getBinaryVersion(resolvedPath);
   return {
     name: "AST-Grep CLI",
     required: false,
     installed: true,
     version: version2,
-    path:
+    path: resolvedPath
   };
 }
 async function checkAstGrepNapi() {
```
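The rewritten check reduces to "resolve once, then branch on the result". A standalone sketch of that control flow (the result-object shape is taken from the diff; the resolver and version lookup are stubbed for illustration, whereas the package uses `findSgCliPathSync` and `getBinaryVersion`):

```javascript
// Stubbed doctor check: resolvedPath/version are passed in instead of resolved.
function checkAstGrepCli(resolvedPath, version) {
  if (!resolvedPath) {
    return {
      name: "AST-Grep CLI",
      required: false,
      installed: false,
      version: null,
      path: null,
      installHint: "Install: bun add -g @ast-grep/cli or brew install ast-grep"
    };
  }
  return { name: "AST-Grep CLI", required: false, installed: true, version, path: resolvedPath };
}

console.log(checkAstGrepCli(null).installed);                     // → false
console.log(checkAstGrepCli("/usr/local/bin/sg", "0.37.0").path); // → /usr/local/bin/sg
```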
```diff
@@ -27779,15 +27827,15 @@
       path: null
     };
   } catch {
-    const { existsSync:
-    const { join: join23 } = await import("path");
+    const { existsSync: existsSync27 } = await import("fs");
     const { homedir: homedir7 } = await import("os");
+    const { join: join24 } = await import("path");
     const pathsToCheck = [
-
-
+      join24(homedir7(), ".config", "opencode", "node_modules", "@ast-grep", "napi"),
+      join24(process.cwd(), "node_modules", "@ast-grep", "napi")
     ];
     for (const napiPath of pathsToCheck) {
-      if (
+      if (existsSync27(napiPath)) {
         return {
           name: "AST-Grep NAPI",
           required: false,
```
@@ -27810,10 +27858,10 @@ async function checkAstGrepNapi() {
 function findCommentCheckerPackageBinary() {
   const binaryName = process.platform === "win32" ? "comment-checker.exe" : "comment-checker";
   try {
-    const require2 =
+    const require2 = createRequire2(import.meta.url);
     const pkgPath = require2.resolve("@code-yeongyu/comment-checker/package.json");
-    const binaryPath =
-    if (
+    const binaryPath = join23(dirname5(pkgPath), "bin", binaryName);
+    if (existsSync26(binaryPath))
       return binaryPath;
   } catch {}
   return null;
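
The hunk above resolves a dependency's own shipped binary via `createRequire`. A minimal standalone sketch of the same pattern (the function and package names below are placeholders, not from the bundle):

```javascript
import { createRequire } from "module";
import { existsSync } from "fs";
import { dirname, join } from "path";

// Resolve a package's package.json, then probe for a binary it ships
// under bin/. Returns null when the package is absent or has no binary.
function findPackageBinary(pkgName, binaryName) {
  try {
    const req = createRequire(import.meta.url);
    const pkgPath = req.resolve(`${pkgName}/package.json`);
    const binaryPath = join(dirname(pkgPath), "bin", binaryName);
    if (existsSync(binaryPath)) return binaryPath;
  } catch {
    // require.resolve throws when the package is not installed
  }
  return null;
}

console.log(findPackageBinary("surely-not-an-installed-package-xyz", "nope"));
```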
@@ -27933,14 +27981,14 @@ init_omo_config_file();
 
 // src/tools/lsp/server-installation.ts
 init_shared();
-import { existsSync as
-import { join as
+import { existsSync as existsSync27 } from "fs";
+import { join as join24 } from "path";
 function isServerInstalled(command) {
   if (command.length === 0)
     return false;
   const cmd = command[0];
   if (cmd.includes("/") || cmd.includes("\\")) {
-    if (
+    if (existsSync27(cmd))
       return true;
   }
   const isWindows = process.platform === "win32";
@@ -27962,23 +28010,23 @@ function isServerInstalled(command) {
   const paths = pathEnv.split(pathSeparator);
   for (const p2 of paths) {
     for (const suffix of exts) {
-      if (
+      if (existsSync27(join24(p2, cmd + suffix))) {
         return true;
       }
     }
   }
   const cwd = process.cwd();
   const configDir = getOpenCodeConfigDir({ binary: "opencode" });
-  const dataDir =
+  const dataDir = join24(getDataDir(), "opencode");
   const additionalBases = [
-
-
-
-
+    join24(cwd, "node_modules", ".bin"),
+    join24(configDir, "bin"),
+    join24(configDir, "node_modules", ".bin"),
+    join24(dataDir, "bin")
   ];
   for (const base of additionalBases) {
     for (const suffix of exts) {
-      if (
+      if (existsSync27(join24(base, cmd + suffix))) {
         return true;
       }
     }
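
The PATH scan in `isServerInstalled` above can be factored into a pure helper. This sketch (hypothetical names; the existence check is injected so the logic is testable without touching the real filesystem) mirrors the separator-and-suffix handling in the hunk:

```javascript
import { join } from "path";

// Look for `cmd` in each PATH entry, trying platform-appropriate
// executable suffixes; `exists` is injected for deterministic testing.
function findOnPath(cmd, pathEnv, exists, isWindows = false) {
  const separator = isWindows ? ";" : ":";
  const suffixes = isWindows ? [".exe", ".cmd", ".bat", ""] : [""];
  for (const dir of pathEnv.split(separator)) {
    for (const suffix of suffixes) {
      if (exists(join(dir, cmd + suffix))) return true;
    }
  }
  return false;
}

// In-memory "filesystem" keeps the example deterministic.
const files = new Set([join("/usr/local/bin", "tsserver")]);
console.log(findOnPath("tsserver", "/usr/bin:/usr/local/bin", (p) => files.has(p)));
```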
@@ -28012,21 +28060,21 @@ function getLspServerStats(servers) {
 
 // src/cli/doctor/checks/tools-mcp.ts
 init_shared();
-import { existsSync as
+import { existsSync as existsSync28, readFileSync as readFileSync20 } from "fs";
 import { homedir as homedir7 } from "os";
-import { join as
+import { join as join25 } from "path";
 var BUILTIN_MCP_SERVERS = ["context7", "grep_app"];
 function getMcpConfigPaths() {
   return [
-
-
-
+    join25(homedir7(), ".claude", ".mcp.json"),
+    join25(process.cwd(), ".mcp.json"),
+    join25(process.cwd(), ".claude", ".mcp.json")
   ];
 }
 function loadUserMcpConfig() {
   const servers = {};
   for (const configPath of getMcpConfigPaths()) {
-    if (!
+    if (!existsSync28(configPath))
       continue;
     try {
       const content = readFileSync20(configPath, "utf-8");
@@ -28094,9 +28142,9 @@ function buildToolIssues(summary) {
     issues.push({
       title: "AST-Grep unavailable",
       description: "Neither AST-Grep CLI nor NAPI backend is available.",
-      fix: "Install @ast-grep/cli
+      fix: "Install ast-grep with `bun add -g @ast-grep/cli` or `brew install ast-grep`",
       severity: "warning",
-      affects: ["ast_grep_search"
+      affects: ["ast_grep_search"]
     });
   }
   if (!summary.commentChecker) {
@@ -28431,11 +28479,11 @@ async function doctor(options = { mode: "default" }) {
 
 // src/features/mcp-oauth/storage.ts
 init_shared();
-import { chmodSync, existsSync as
-import { dirname as
+import { chmodSync, existsSync as existsSync29, mkdirSync as mkdirSync5, readFileSync as readFileSync21, unlinkSync as unlinkSync2, writeFileSync as writeFileSync7 } from "fs";
+import { dirname as dirname6, join as join26 } from "path";
 var STORAGE_FILE_NAME = "mcp-oauth.json";
 function getMcpOauthStoragePath() {
-  return
+  return join26(getOpenCodeConfigDir({ binary: "opencode" }), STORAGE_FILE_NAME);
 }
 function normalizeHost(serverHost) {
   let host = serverHost.trim();
@@ -28472,7 +28520,7 @@ function buildKey(serverHost, resource) {
 }
 function readStore() {
   const filePath = getMcpOauthStoragePath();
-  if (!
+  if (!existsSync29(filePath)) {
     return null;
   }
   try {
@@ -28485,8 +28533,8 @@ function readStore() {
 function writeStore(store2) {
   const filePath = getMcpOauthStoragePath();
   try {
-    const dir =
-    if (!
+    const dir = dirname6(filePath);
+    if (!existsSync29(dir)) {
       mkdirSync5(dir, { recursive: true });
     }
     writeFileSync7(filePath, JSON.stringify(store2, null, 2), { encoding: "utf-8", mode: 384 });
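
Note the `mode: 384` in `writeStore` above: 384 is the decimal form of octal `0o600`, so the OAuth token file is created owner read/write only. A small sketch of the same ensure-directory-then-write pattern (the helper and file names here are hypothetical):

```javascript
import { existsSync, mkdirSync, writeFileSync, readFileSync } from "fs";
import { dirname, join } from "path";
import { tmpdir } from "os";

// Create the parent directory if needed, then write JSON with
// owner-only permissions (0o600 === 384, the literal in the bundle).
function writeOwnerOnlyJson(filePath, data) {
  const dir = dirname(filePath);
  if (!existsSync(dir)) {
    mkdirSync(dir, { recursive: true });
  }
  writeFileSync(filePath, JSON.stringify(data, null, 2), { encoding: "utf-8", mode: 0o600 });
}

const storePath = join(tmpdir(), "oauth-demo", "mcp-oauth.json");
writeOwnerOnlyJson(storePath, { example: true });
console.log(JSON.parse(readFileSync(storePath, "utf-8")).example);
```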
@@ -28521,7 +28569,7 @@ function deleteToken(serverHost, resource) {
   if (Object.keys(store2).length === 0) {
     try {
       const filePath = getMcpOauthStoragePath();
-      if (
+      if (existsSync29(filePath)) {
         unlinkSync2(filePath);
       }
       return true;
@@ -29052,8 +29100,8 @@ function createMcpOAuthCommand() {
 }
 
 // src/cli/validate-skill/validate-skill.ts
-import { existsSync as
-import { join as
+import { existsSync as existsSync31 } from "fs";
+import { join as join31 } from "path";
 
 // src/features/skill-creator/eval-grader.ts
 function includesAll(text, values) {
@@ -29132,23 +29180,23 @@ function gradeEvalReport(report, sourceReport) {
 }
 // src/features/skill-creator/eval-runner.ts
 import { mkdtemp, mkdir, cp, rm } from "fs/promises";
-import { dirname as
+import { dirname as dirname7, join as join29, resolve as resolve2 } from "path";
 import { tmpdir as tmpdir2 } from "os";
 
 // src/tools/session-manager/storage.ts
 init_shared();
 init_opencode_message_dir();
 init_opencode_storage_detection();
-import { existsSync as
+import { existsSync as existsSync30 } from "fs";
 import { readdir, readFile } from "fs/promises";
-import { join as
+import { join as join28 } from "path";
 
 // src/tools/session-manager/constants.ts
 init_shared();
 init_shared();
-import { join as
-var TODO_DIR =
-var TRANSCRIPT_DIR =
+import { join as join27 } from "path";
+var TODO_DIR = join27(getClaudeConfigDir(), "todos");
+var TRANSCRIPT_DIR = join27(getClaudeConfigDir(), "transcripts");
 
 // src/tools/session-manager/storage.ts
 init_opencode_message_dir();
@@ -29211,7 +29259,7 @@ async function readSessionMessages(sessionID) {
     }
   }
   const messageDir = getMessageDir(sessionID);
-  if (!messageDir || !
+  if (!messageDir || !existsSync30(messageDir))
     return [];
   const messages = [];
   try {
@@ -29220,7 +29268,7 @@ async function readSessionMessages(sessionID) {
       if (!file2.endsWith(".json"))
         continue;
       try {
-        const content = await readFile(
+        const content = await readFile(join28(messageDir, file2), "utf-8");
         const meta3 = JSON.parse(content);
         const parts = await readParts(meta3.id);
         messages.push({
@@ -29244,8 +29292,8 @@ async function readSessionMessages(sessionID) {
   });
 }
 async function readParts(messageID) {
-  const partDir =
-  if (!
+  const partDir = join28(PART_STORAGE, messageID);
+  if (!existsSync30(partDir))
     return [];
   const parts = [];
   try {
@@ -29254,7 +29302,7 @@ async function readParts(messageID) {
       if (!file2.endsWith(".json"))
         continue;
       try {
-        const content = await readFile(
+        const content = await readFile(join28(partDir, file2), "utf-8");
         parts.push(JSON.parse(content));
       } catch {}
     }
@@ -29336,14 +29384,14 @@ function detectSkillInvocation(messages, skillName) {
   }));
 }
 async function createEvalWorkspace(skillDir, skillName, evalBaseDir, files) {
-  const workspaceDir = await mkdtemp(
-  const skillTargetDir =
-  await mkdir(
+  const workspaceDir = await mkdtemp(join29(tmpdir2(), "omo-skill-eval-"));
+  const skillTargetDir = join29(workspaceDir, ".opencode", "skills", skillName);
+  await mkdir(dirname7(skillTargetDir), { recursive: true });
   await cp(skillDir, skillTargetDir, { recursive: true });
   for (const file2 of files) {
     const sourcePath = resolve2(evalBaseDir, file2);
-    const targetPath =
-    await mkdir(
+    const targetPath = join29(workspaceDir, file2);
+    await mkdir(dirname7(targetPath), { recursive: true });
     await cp(sourcePath, targetPath, { recursive: true });
   }
   return {
@@ -29354,7 +29402,7 @@ async function createEvalWorkspace(skillDir, skillName, evalBaseDir, files) {
   };
 }
 async function runSkillEvalSuite(options) {
-  const evalBaseDir =
+  const evalBaseDir = dirname7(options.evalFilePath);
   const cases = [];
   for (const evalCase of options.suite.evals) {
     const { workspaceDir, cleanup } = await createEvalWorkspace(options.skillDir, options.skillName, evalBaseDir, evalCase.files ?? []);
@@ -29449,7 +29497,7 @@ async function loadSkillEvalSuite(evalFilePath) {
 }
 // src/features/skill-creator/skill-validator.ts
 import { access, readFile as readFile3 } from "fs/promises";
-import { dirname as
+import { dirname as dirname8, join as join30, resolve as resolve3 } from "path";
 init_frontmatter();
 var ALLOWED_FRONTMATTER_KEYS = new Set([
   "name",
@@ -29476,13 +29524,13 @@ async function pathExists(path3) {
   }
 }
 async function resolveSkillFilePath(skillPath) {
-  const directSkillFile = skillPath.endsWith("SKILL.md") ? skillPath :
+  const directSkillFile = skillPath.endsWith("SKILL.md") ? skillPath : join30(skillPath, "SKILL.md");
   const exists = await pathExists(directSkillFile);
   if (!exists) {
     throw new Error(`SKILL.md not found at ${directSkillFile}`);
   }
   return {
-    skillDir:
+    skillDir: dirname8(directSkillFile),
    skillFilePath: directSkillFile
   };
 }
@@ -29543,7 +29591,7 @@ async function validateSkillDirectory(skillPath, evalFilePath) {
   if (evalFilePath) {
     try {
       const suite = await loadSkillEvalSuite(evalFilePath);
-      const evalBaseDir =
+      const evalBaseDir = dirname8(evalFilePath);
       if (skillName && suite.skill_name && suite.skill_name !== skillName) {
         issues.push({
           code: "skill-name-mismatch",
@@ -29598,11 +29646,11 @@ async function validateSkillDirectory(skillPath, evalFilePath) {
 // src/cli/validate-skill/validate-skill.ts
 function findDefaultEvalFile(skillDir) {
   const candidates = [
-
-
-
+    join31(skillDir, "evals", "evals.json"),
+    join31(skillDir, "evals", "evals.yaml"),
+    join31(skillDir, "evals", "evals.yml")
   ];
-  return candidates.find((candidate) =>
+  return candidates.find((candidate) => existsSync31(candidate));
 }
 async function validateSkill(skillPath, options = {}) {
   const evalFile = options.evalFile ?? findDefaultEvalFile(skillPath);
@@ -1 +1 @@
-export declare const REFACTOR_TEMPLATE = "# Intelligent Refactor Command\n\n## Usage\n```\n/refactor <refactoring-target> [--scope=<file|module|project>] [--strategy=<safe|aggressive>]\n\nArguments:\n refactoring-target: What to refactor. Can be:\n - File path: src/auth/handler.ts\n - Symbol name: \"AuthService class\"\n - Pattern: \"all functions using deprecated API\"\n - Description: \"extract validation logic into separate module\"\n\nOptions:\n --scope: Refactoring scope (default: module)\n - file: Single file only\n - module: Module/directory scope\n - project: Entire codebase\n\n --strategy: Risk tolerance (default: safe)\n - safe: Conservative, maximum test coverage required\n - aggressive: Allow broader changes with adequate coverage\n```\n\n## What This Command Does\n\nPerforms intelligent, deterministic refactoring with full codebase awareness. Unlike blind search-and-replace, this command:\n\n1. **Understands your intent** - Analyzes what you actually want to achieve\n2. **Maps the codebase** - Builds a definitive codemap before touching anything\n3. **Assesses risk** - Evaluates test coverage and determines verification strategy\n4. **Plans meticulously** - Creates a detailed plan with Plan agent\n5. **Executes precisely** - Step-by-step refactoring with LSP and AST-grep\n6. 
**Verifies constantly** - Runs tests after each change to ensure zero regression\n\n---\n\n# PHASE 0: INTENT GATE (MANDATORY FIRST STEP)\n\n**BEFORE ANY ACTION, classify and validate the request.**\n\n## Step 0.1: Parse Request Type\n\n| Signal | Classification | Action |\n|--------|----------------|--------|\n| Specific file/symbol | Explicit | Proceed to codebase analysis |\n| \"Refactor X to Y\" | Clear transformation | Proceed to codebase analysis |\n| \"Improve\", \"Clean up\" | Open-ended | **MUST ask**: \"What specific improvement?\" |\n| Ambiguous scope | Uncertain | **MUST ask**: \"Which modules/files?\" |\n| Missing context | Incomplete | **MUST ask**: \"What's the desired outcome?\" |\n\n## Step 0.2: Validate Understanding\n\nBefore proceeding, confirm:\n- [ ] Target is clearly identified\n- [ ] Desired outcome is understood\n- [ ] Scope is defined (file/module/project)\n- [ ] Success criteria can be articulated\n\n**If ANY of above is unclear, ASK CLARIFYING QUESTION:**\n\n```\nI want to make sure I understand the refactoring goal correctly.\n\n**What I understood**: [interpretation]\n**What I'm unsure about**: [specific ambiguity]\n\nOptions I see:\n1. [Option A] - [implications]\n2. 
[Option B] - [implications]\n\n**My recommendation**: [suggestion with reasoning]\n\nShould I proceed with [recommendation], or would you prefer differently?\n```\n\n## Step 0.3: Create Initial Todos\n\n**IMMEDIATELY after understanding the request, create todos:**\n\n```\nTodoWrite([\n {\"id\": \"phase-1\", \"content\": \"PHASE 1: Codebase Analysis - launch parallel explore agents\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-2\", \"content\": \"PHASE 2: Build Codemap - map dependencies and impact zones\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-3\", \"content\": \"PHASE 3: Test Assessment - analyze test coverage and verification strategy\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-4\", \"content\": \"PHASE 4: Plan Generation - invoke Plan agent for detailed refactoring plan\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-5\", \"content\": \"PHASE 5: Execute Refactoring - step-by-step with continuous verification\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-6\", \"content\": \"PHASE 6: Final Verification - full test suite and regression check\", \"status\": \"pending\", \"priority\": \"high\"}\n])\n```\n\n---\n\n# PHASE 1: CODEBASE ANALYSIS (PARALLEL EXPLORATION)\n\n**Mark phase-1 as in_progress.**\n\n## 1.1: Launch Parallel Explore Agents (BACKGROUND)\n\nFire ALL of these simultaneously using `call_omo_agent`:\n\n```\n// Agent 1: Find the refactoring target\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find all occurrences and definitions of [TARGET]. 
\n Report: file paths, line numbers, usage patterns.\"\n)\n\n// Agent 2: Find related code\ncall_omo_agent(\n subagent_type=\"explore\", \n run_in_background=true,\n prompt=\"Find all code that imports, uses, or depends on [TARGET].\n Report: dependency chains, import graphs.\"\n)\n\n// Agent 3: Find similar patterns\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find similar code patterns to [TARGET] in the codebase.\n Report: analogous implementations, established conventions.\"\n)\n\n// Agent 4: Find tests\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find all test files related to [TARGET].\n Report: test file paths, test case names, coverage indicators.\"\n)\n\n// Agent 5: Architecture context\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find architectural patterns and module organization around [TARGET].\n Report: module boundaries, layer structure, design patterns in use.\"\n)\n```\n\n## 1.2: Direct Tool Exploration (WHILE AGENTS RUN)\n\nWhile background agents are running, use direct tools:\n\n### LSP Tools for Precise Analysis:\n\n```typescript\n// Find definition(s)\nLspGotoDefinition(filePath, line, character) // Where is it defined?\n\n// Find ALL usages across workspace\nLspFindReferences(filePath, line, character, includeDeclaration=true)\n\n// Get file structure\nLspDocumentSymbols(filePath) // Hierarchical outline\nLspWorkspaceSymbols(filePath, query=\"[target_symbol]\") // Search by name\n\n// Get current diagnostics\nlsp_diagnostics(filePath) // Errors, warnings before we start\n```\n\n### AST-Grep for Pattern Analysis:\n\n```typescript\n// Find structural patterns\nast_grep_search(\n pattern=\"function $NAME($$$) { $$$ }\", // or relevant pattern\n lang=\"typescript\", // or relevant language\n paths=[\"src/\"]\n)\n\n// Preview refactoring (DRY RUN)\nast_grep_replace(\n pattern=\"[old_pattern]\",\n rewrite=\"[new_pattern]\",\n 
lang=\"[language]\",\n dryRun=true // ALWAYS preview first\n)\n```\n\n### Grep for Text Patterns:\n\n```\ngrep(pattern=\"[search_term]\", path=\"src/\", include=\"*.ts\")\n```\n\n## 1.3: Collect Background Results\n\n```\nbackground_output(task_id=\"[agent_1_id]\")\nbackground_output(task_id=\"[agent_2_id]\")\n...\n```\n\n**Mark phase-1 as completed after all results collected.**\n\n---\n\n# PHASE 2: BUILD CODEMAP (DEPENDENCY MAPPING)\n\n**Mark phase-2 as in_progress.**\n\n## 2.1: Construct Definitive Codemap\n\nBased on Phase 1 results, build:\n\n```\n## CODEMAP: [TARGET]\n\n### Core Files (Direct Impact)\n- `path/to/file.ts:L10-L50` - Primary definition\n- `path/to/file2.ts:L25` - Key usage\n\n### Dependency Graph\n```\n[TARGET] \n\u251C\u2500\u2500 imports from: \n\u2502 \u251C\u2500\u2500 module-a (types)\n\u2502 \u2514\u2500\u2500 module-b (utils)\n\u251C\u2500\u2500 imported by:\n\u2502 \u251C\u2500\u2500 consumer-1.ts\n\u2502 \u251C\u2500\u2500 consumer-2.ts\n\u2502 \u2514\u2500\u2500 consumer-3.ts\n\u2514\u2500\u2500 used by:\n \u251C\u2500\u2500 handler.ts (direct call)\n \u2514\u2500\u2500 service.ts (dependency injection)\n```\n\n### Impact Zones\n| Zone | Risk Level | Files Affected | Test Coverage |\n|------|------------|----------------|---------------|\n| Core | HIGH | 3 files | 85% covered |\n| Consumers | MEDIUM | 8 files | 70% covered |\n| Edge | LOW | 2 files | 50% covered |\n\n### Established Patterns\n- Pattern A: [description] - used in N places\n- Pattern B: [description] - established convention\n```\n\n## 2.2: Identify Refactoring Constraints\n\nBased on codemap:\n- **MUST follow**: [existing patterns identified]\n- **MUST NOT break**: [critical dependencies]\n- **Safe to change**: [isolated code zones]\n- **Requires migration**: [breaking changes impact]\n\n**Mark phase-2 as completed.**\n\n---\n\n# PHASE 3: TEST ASSESSMENT (VERIFICATION STRATEGY)\n\n**Mark phase-3 as in_progress.**\n\n## 3.1: Detect Test Infrastructure\n\n```bash\n# Check 
for test commands\ncat package.json | jq '.scripts | keys[] | select(test(\"test\"))'\n\n# Or for Python\nls -la pytest.ini pyproject.toml setup.cfg\n\n# Or for Go\nls -la *_test.go\n```\n\n## 3.2: Analyze Test Coverage\n\n```\n// Find all tests related to target\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=false, // Need this synchronously\n prompt=\"Analyze test coverage for [TARGET]:\n 1. Which test files cover this code?\n 2. What test cases exist?\n 3. Are there integration tests?\n 4. What edge cases are tested?\n 5. Estimated coverage percentage?\"\n)\n```\n\n## 3.3: Determine Verification Strategy\n\nBased on test analysis:\n\n| Coverage Level | Strategy |\n|----------------|----------|\n| HIGH (>80%) | Run existing tests after each step |\n| MEDIUM (50-80%) | Run tests + add safety assertions |\n| LOW (<50%) | **PAUSE**: Propose adding tests first |\n| NONE | **BLOCK**: Refuse aggressive refactoring |\n\n**If coverage is LOW or NONE, ask user:**\n\n```\nTest coverage for [TARGET] is [LEVEL].\n\n**Risk Assessment**: Refactoring without adequate tests is dangerous.\n\nOptions:\n1. Add tests first, then refactor (RECOMMENDED)\n2. Proceed with extra caution, manual verification required\n3. Abort refactoring\n\nWhich approach do you prefer?\n```\n\n## 3.4: Document Verification Plan\n\n```\n## VERIFICATION PLAN\n\n### Test Commands\n- Unit: `bun test` / `npm test` / `pytest` / etc.\n- Integration: [command if exists]\n- Type check: `tsc --noEmit` / `pyright` / etc.\n\n### Verification Checkpoints\nAfter each refactoring step:\n1. lsp_diagnostics \u2192 zero new errors\n2. Run test command \u2192 all pass\n3. 
Type check \u2192 clean\n\n### Regression Indicators\n- [Specific test that must pass]\n- [Behavior that must be preserved]\n- [API contract that must not change]\n```\n\n**Mark phase-3 as completed.**\n\n---\n\n# PHASE 4: PLAN GENERATION (PLAN AGENT)\n\n**Mark phase-4 as in_progress.**\n\n## 4.1: Invoke Plan Agent\n\n```\nTask(\n subagent_type=\"plan\",\n prompt=\"Create a detailed refactoring plan:\n\n ## Refactoring Goal\n [User's original request]\n\n ## Codemap (from Phase 2)\n [Insert codemap here]\n\n ## Test Coverage (from Phase 3)\n [Insert verification plan here]\n\n ## Constraints\n - MUST follow existing patterns: [list]\n - MUST NOT break: [critical paths]\n - MUST run tests after each step\n\n ## Requirements\n 1. Break down into atomic refactoring steps\n 2. Each step must be independently verifiable\n 3. Order steps by dependency (what must happen first)\n 4. Specify exact files and line ranges for each step\n 5. Include rollback strategy for each step\n 6. Define commit checkpoints\"\n)\n```\n\n## 4.2: Review and Validate Plan\n\nAfter receiving plan from Plan agent:\n\n1. **Verify completeness**: All identified files addressed?\n2. **Verify safety**: Each step reversible?\n3. **Verify order**: Dependencies respected?\n4. **Verify verification**: Test commands specified?\n\n## 4.3: Register Detailed Todos\n\nConvert Plan agent output into granular todos:\n\n```\nTodoWrite([\n // Each step from the plan becomes a todo\n {\"id\": \"refactor-1\", \"content\": \"Step 1: [description]\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"verify-1\", \"content\": \"Verify Step 1: run tests\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"refactor-2\", \"content\": \"Step 2: [description]\", \"status\": \"pending\", \"priority\": \"medium\"},\n {\"id\": \"verify-2\", \"content\": \"Verify Step 2: run tests\", \"status\": \"pending\", \"priority\": \"medium\"},\n // ... 
continue for all steps\n])\n```\n\n**Mark phase-4 as completed.**\n\n---\n\n# PHASE 5: EXECUTE REFACTORING (DETERMINISTIC EXECUTION)\n\n**Mark phase-5 as in_progress.**\n\n## 5.1: Execution Protocol\n\nFor EACH refactoring step:\n\n### Pre-Step\n1. Mark step todo as `in_progress`\n2. Read current file state\n3. Verify lsp_diagnostics is baseline\n\n### Execute Step\nUse appropriate tool:\n\n**For Symbol Renames:**\n```typescript\nlsp_prepare_rename(filePath, line, character) // Validate rename is possible\nlsp_rename(filePath, line, character, newName) // Execute rename\n```\n\n**For Pattern Transformations:**\n```typescript\n// Preview first\nast_grep_replace(pattern, rewrite, lang, dryRun=true)\n\n// If preview looks good, execute\nast_grep_replace(pattern, rewrite, lang, dryRun=false)\n```\n\n**For Structural Changes:**\n```typescript\n// Use Edit tool for precise changes\nedit(filePath, oldString, newString)\n```\n\n### Post-Step Verification (MANDATORY)\n\n```typescript\n// 1. Check diagnostics\nlsp_diagnostics(filePath) // Must be clean or same as baseline\n\n// 2. Run tests\nbash(\"bun test\") // Or appropriate test command\n\n// 3. Type check\nbash(\"tsc --noEmit\") // Or appropriate type check\n```\n\n### Step Completion\n1. If verification passes \u2192 Mark step todo as `completed`\n2. If verification fails \u2192 **STOP AND FIX**\n\n## 5.2: Failure Recovery Protocol\n\nIf ANY verification fails:\n\n1. **STOP** immediately\n2. **REVERT** the failed change\n3. **DIAGNOSE** what went wrong\n4. 
**OPTIONS**:\n - Fix the issue and retry\n - Skip this step (if optional)\n - Consult oracle agent for help\n - Ask user for guidance\n\n**NEVER proceed to next step with broken tests.**\n\n## 5.3: Commit Checkpoints\n\nAfter each logical group of changes:\n\n```bash\ngit add [changed-files]\ngit commit -m \"refactor(scope): description\n\n[details of what was changed and why]\"\n```\n\n**Mark phase-5 as completed when all refactoring steps done.**\n\n---\n\n# PHASE 6: FINAL VERIFICATION (REGRESSION CHECK)\n\n**Mark phase-6 as in_progress.**\n\n## 6.1: Full Test Suite\n\n```bash\n# Run complete test suite\nbun test # or npm test, pytest, go test, etc.\n```\n\n## 6.2: Type Check\n\n```bash\n# Full type check\ntsc --noEmit # or equivalent\n```\n\n## 6.3: Lint Check\n\n```bash\n# Run linter\neslint . # or equivalent\n```\n\n## 6.4: Build Verification (if applicable)\n\n```bash\n# Ensure build still works\nbun run build # or npm run build, etc.\n```\n\n## 6.5: Final Diagnostics\n\n```typescript\n// Check all changed files\nfor (file of changedFiles) {\n lsp_diagnostics(file) // Must all be clean\n}\n```\n\n## 6.6: Generate Summary\n\n```markdown\n## Refactoring Complete\n\n### What Changed\n- [List of changes made]\n\n### Files Modified\n- `path/to/file.ts` - [what changed]\n- `path/to/file2.ts` - [what changed]\n\n### Verification Results\n- Tests: PASSED (X/Y passing)\n- Type Check: CLEAN\n- Lint: CLEAN\n- Build: SUCCESS\n\n### No Regressions Detected\nAll existing tests pass. 
No new errors introduced.\n```\n\n**Mark phase-6 as completed.**\n\n---\n\n# CRITICAL RULES\n\n## NEVER DO\n- Skip lsp_diagnostics check after changes\n- Proceed with failing tests\n- Make changes without understanding impact\n- Use `as any`, `@ts-ignore`, `@ts-expect-error`\n- Delete tests to make them pass\n- Commit broken code\n- Refactor without understanding existing patterns\n\n## ALWAYS DO\n- Understand before changing\n- Preview before applying (ast_grep dryRun=true)\n- Verify after every change\n- Follow existing codebase patterns\n- Keep todos updated in real-time\n- Commit at logical checkpoints\n- Report issues immediately\n\n## ABORT CONDITIONS\nIf any of these occur, **STOP and consult user**:\n- Test coverage is zero for target code\n- Changes would break public API\n- Refactoring scope is unclear\n- 3 consecutive verification failures\n- User-defined constraints violated\n\n---\n\n# Tool Usage Philosophy\n\nYou already know these tools. Use them intelligently:\n\n## LSP Tools\nLeverage LSP tools for precision analysis. Key patterns:\n- **Understand before changing**: `LspGotoDefinition` to grasp context\n- **Impact analysis**: `LspFindReferences` to map all usages before modification\n- **Safe refactoring**: `lsp_prepare_rename` \u2192 `lsp_rename` for symbol renames\n- **Continuous verification**: `lsp_diagnostics` after every change\n\n## AST-Grep\nUse `ast_grep_search` and `ast_grep_replace` for structural transformations.\n**Critical**: Always `dryRun=true` first, review, then execute.\n\n## Agents\n- `explore`: Parallel codebase pattern discovery\n- `plan`: Detailed refactoring plan generation\n- `oracle`: Read-only consultation for complex architectural decisions and debugging\n- `librarian`: **Use proactively** when encountering deprecated methods or library migration tasks. 
Query official docs and OSS examples for modern replacements.\n\n## Deprecated Code & Library Migration\nWhen you encounter deprecated methods/APIs during refactoring:\n1. Fire `librarian` to find the recommended modern alternative\n2. **DO NOT auto-upgrade to latest version** unless user explicitly requests migration\n3. If user requests library migration, use `librarian` to fetch latest API docs before making changes\n\n---\n\n**Remember: Refactoring without tests is reckless. Refactoring without understanding is destructive. This command ensures you do neither.**\n\n<user-request>\n$ARGUMENTS\n</user-request>\n";
+export declare const REFACTOR_TEMPLATE = "# Intelligent Refactor Command\n\n## Usage\n```\n/refactor <refactoring-target> [--scope=<file|module|project>] [--strategy=<safe|aggressive>]\n\nArguments:\n refactoring-target: What to refactor. Can be:\n - File path: src/auth/handler.ts\n - Symbol name: \"AuthService class\"\n - Pattern: \"all functions using deprecated API\"\n - Description: \"extract validation logic into separate module\"\n\nOptions:\n --scope: Refactoring scope (default: module)\n - file: Single file only\n - module: Module/directory scope\n - project: Entire codebase\n\n --strategy: Risk tolerance (default: safe)\n - safe: Conservative, maximum test coverage required\n - aggressive: Allow broader changes with adequate coverage\n```\n\n## What This Command Does\n\nPerforms intelligent, deterministic refactoring with full codebase awareness. Unlike blind search-and-replace, this command:\n\n1. **Understands your intent** - Analyzes what you actually want to achieve\n2. **Maps the codebase** - Builds a definitive codemap before touching anything\n3. **Assesses risk** - Evaluates test coverage and determines verification strategy\n4. **Plans meticulously** - Creates a detailed plan with Plan agent\n5. **Executes precisely** - Step-by-step refactoring with LSP and AST-grep\n6. 
**Verifies constantly** - Runs tests after each change to ensure zero regression\n\n---\n\n# PHASE 0: INTENT GATE (MANDATORY FIRST STEP)\n\n**BEFORE ANY ACTION, classify and validate the request.**\n\n## Step 0.1: Parse Request Type\n\n| Signal | Classification | Action |\n|--------|----------------|--------|\n| Specific file/symbol | Explicit | Proceed to codebase analysis |\n| \"Refactor X to Y\" | Clear transformation | Proceed to codebase analysis |\n| \"Improve\", \"Clean up\" | Open-ended | **MUST ask**: \"What specific improvement?\" |\n| Ambiguous scope | Uncertain | **MUST ask**: \"Which modules/files?\" |\n| Missing context | Incomplete | **MUST ask**: \"What's the desired outcome?\" |\n\n## Step 0.2: Validate Understanding\n\nBefore proceeding, confirm:\n- [ ] Target is clearly identified\n- [ ] Desired outcome is understood\n- [ ] Scope is defined (file/module/project)\n- [ ] Success criteria can be articulated\n\n**If ANY of above is unclear, ASK CLARIFYING QUESTION:**\n\n```\nI want to make sure I understand the refactoring goal correctly.\n\n**What I understood**: [interpretation]\n**What I'm unsure about**: [specific ambiguity]\n\nOptions I see:\n1. [Option A] - [implications]\n2. 
[Option B] - [implications]\n\n**My recommendation**: [suggestion with reasoning]\n\nShould I proceed with [recommendation], or would you prefer differently?\n```\n\n## Step 0.3: Create Initial Todos\n\n**IMMEDIATELY after understanding the request, create todos:**\n\n```\nTodoWrite([\n {\"id\": \"phase-1\", \"content\": \"PHASE 1: Codebase Analysis - launch parallel explore agents\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-2\", \"content\": \"PHASE 2: Build Codemap - map dependencies and impact zones\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-3\", \"content\": \"PHASE 3: Test Assessment - analyze test coverage and verification strategy\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-4\", \"content\": \"PHASE 4: Plan Generation - invoke Plan agent for detailed refactoring plan\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-5\", \"content\": \"PHASE 5: Execute Refactoring - step-by-step with continuous verification\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"phase-6\", \"content\": \"PHASE 6: Final Verification - full test suite and regression check\", \"status\": \"pending\", \"priority\": \"high\"}\n])\n```\n\n---\n\n# PHASE 1: CODEBASE ANALYSIS (PARALLEL EXPLORATION)\n\n**Mark phase-1 as in_progress.**\n\n## 1.1: Launch Parallel Explore Agents (BACKGROUND)\n\nFire ALL of these simultaneously using `call_omo_agent`:\n\n```\n// Agent 1: Find the refactoring target\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find all occurrences and definitions of [TARGET]. 
\n Report: file paths, line numbers, usage patterns.\"\n)\n\n// Agent 2: Find related code\ncall_omo_agent(\n subagent_type=\"explore\", \n run_in_background=true,\n prompt=\"Find all code that imports, uses, or depends on [TARGET].\n Report: dependency chains, import graphs.\"\n)\n\n// Agent 3: Find similar patterns\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find similar code patterns to [TARGET] in the codebase.\n Report: analogous implementations, established conventions.\"\n)\n\n// Agent 4: Find tests\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find all test files related to [TARGET].\n Report: test file paths, test case names, coverage indicators.\"\n)\n\n// Agent 5: Architecture context\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=true,\n prompt=\"Find architectural patterns and module organization around [TARGET].\n Report: module boundaries, layer structure, design patterns in use.\"\n)\n```\n\n## 1.2: Direct Tool Exploration (WHILE AGENTS RUN)\n\nWhile background agents are running, use direct tools:\n\n### LSP Tools for Precise Analysis:\n\n```typescript\n// Find definition(s)\nLspGotoDefinition(filePath, line, character) // Where is it defined?\n\n// Find ALL usages across workspace\nLspFindReferences(filePath, line, character, includeDeclaration=true)\n\n// Get file structure\nLspDocumentSymbols(filePath) // Hierarchical outline\nLspWorkspaceSymbols(filePath, query=\"[target_symbol]\") // Search by name\n\n// Get current diagnostics\nlsp_diagnostics(filePath) // Errors, warnings before we start\n```\n\n### AST-Grep for Pattern Analysis:\n\n```typescript\n// Find structural patterns\nast_grep_search(\n pattern=\"function $NAME($$$) { $$$ }\", // or relevant pattern\n lang=\"typescript\", // or relevant language\n paths=[\"src/\"]\n)\n```\n\n### Grep for Text Patterns:\n\n```\ngrep(pattern=\"[search_term]\", path=\"src/\", include=\"*.ts\")\n```\n\n## 1.3: 
Collect Background Results\n\n```\nbackground_output(task_id=\"[agent_1_id]\")\nbackground_output(task_id=\"[agent_2_id]\")\n...\n```\n\n**Mark phase-1 as completed after all results collected.**\n\n---\n\n# PHASE 2: BUILD CODEMAP (DEPENDENCY MAPPING)\n\n**Mark phase-2 as in_progress.**\n\n## 2.1: Construct Definitive Codemap\n\nBased on Phase 1 results, build:\n\n```\n## CODEMAP: [TARGET]\n\n### Core Files (Direct Impact)\n- `path/to/file.ts:L10-L50` - Primary definition\n- `path/to/file2.ts:L25` - Key usage\n\n### Dependency Graph\n```\n[TARGET] \n\u251C\u2500\u2500 imports from: \n\u2502 \u251C\u2500\u2500 module-a (types)\n\u2502 \u2514\u2500\u2500 module-b (utils)\n\u251C\u2500\u2500 imported by:\n\u2502 \u251C\u2500\u2500 consumer-1.ts\n\u2502 \u251C\u2500\u2500 consumer-2.ts\n\u2502 \u2514\u2500\u2500 consumer-3.ts\n\u2514\u2500\u2500 used by:\n \u251C\u2500\u2500 handler.ts (direct call)\n \u2514\u2500\u2500 service.ts (dependency injection)\n```\n\n### Impact Zones\n| Zone | Risk Level | Files Affected | Test Coverage |\n|------|------------|----------------|---------------|\n| Core | HIGH | 3 files | 85% covered |\n| Consumers | MEDIUM | 8 files | 70% covered |\n| Edge | LOW | 2 files | 50% covered |\n\n### Established Patterns\n- Pattern A: [description] - used in N places\n- Pattern B: [description] - established convention\n```\n\n## 2.2: Identify Refactoring Constraints\n\nBased on codemap:\n- **MUST follow**: [existing patterns identified]\n- **MUST NOT break**: [critical dependencies]\n- **Safe to change**: [isolated code zones]\n- **Requires migration**: [breaking changes impact]\n\n**Mark phase-2 as completed.**\n\n---\n\n# PHASE 3: TEST ASSESSMENT (VERIFICATION STRATEGY)\n\n**Mark phase-3 as in_progress.**\n\n## 3.1: Detect Test Infrastructure\n\n```bash\n# Check for test commands\ncat package.json | jq '.scripts | keys[] | select(test(\"test\"))'\n\n# Or for Python\nls -la pytest.ini pyproject.toml setup.cfg\n\n# Or for Go\nls -la 
*_test.go\n```\n\n## 3.2: Analyze Test Coverage\n\n```\n// Find all tests related to target\ncall_omo_agent(\n subagent_type=\"explore\",\n run_in_background=false, // Need this synchronously\n prompt=\"Analyze test coverage for [TARGET]:\n 1. Which test files cover this code?\n 2. What test cases exist?\n 3. Are there integration tests?\n 4. What edge cases are tested?\n 5. Estimated coverage percentage?\"\n)\n```\n\n## 3.3: Determine Verification Strategy\n\nBased on test analysis:\n\n| Coverage Level | Strategy |\n|----------------|----------|\n| HIGH (>80%) | Run existing tests after each step |\n| MEDIUM (50-80%) | Run tests + add safety assertions |\n| LOW (<50%) | **PAUSE**: Propose adding tests first |\n| NONE | **BLOCK**: Refuse aggressive refactoring |\n\n**If coverage is LOW or NONE, ask user:**\n\n```\nTest coverage for [TARGET] is [LEVEL].\n\n**Risk Assessment**: Refactoring without adequate tests is dangerous.\n\nOptions:\n1. Add tests first, then refactor (RECOMMENDED)\n2. Proceed with extra caution, manual verification required\n3. Abort refactoring\n\nWhich approach do you prefer?\n```\n\n## 3.4: Document Verification Plan\n\n```\n## VERIFICATION PLAN\n\n### Test Commands\n- Unit: `bun test` / `npm test` / `pytest` / etc.\n- Integration: [command if exists]\n- Type check: `tsc --noEmit` / `pyright` / etc.\n\n### Verification Checkpoints\nAfter each refactoring step:\n1. lsp_diagnostics \u2192 zero new errors\n2. Run test command \u2192 all pass\n3. 
Type check \u2192 clean\n\n### Regression Indicators\n- [Specific test that must pass]\n- [Behavior that must be preserved]\n- [API contract that must not change]\n```\n\n**Mark phase-3 as completed.**\n\n---\n\n# PHASE 4: PLAN GENERATION (PLAN AGENT)\n\n**Mark phase-4 as in_progress.**\n\n## 4.1: Invoke Plan Agent\n\n```\nTask(\n subagent_type=\"plan\",\n prompt=\"Create a detailed refactoring plan:\n\n ## Refactoring Goal\n [User's original request]\n\n ## Codemap (from Phase 2)\n [Insert codemap here]\n\n ## Test Coverage (from Phase 3)\n [Insert verification plan here]\n\n ## Constraints\n - MUST follow existing patterns: [list]\n - MUST NOT break: [critical paths]\n - MUST run tests after each step\n\n ## Requirements\n 1. Break down into atomic refactoring steps\n 2. Each step must be independently verifiable\n 3. Order steps by dependency (what must happen first)\n 4. Specify exact files and line ranges for each step\n 5. Include rollback strategy for each step\n 6. Define commit checkpoints\"\n)\n```\n\n## 4.2: Review and Validate Plan\n\nAfter receiving plan from Plan agent:\n\n1. **Verify completeness**: All identified files addressed?\n2. **Verify safety**: Each step reversible?\n3. **Verify order**: Dependencies respected?\n4. **Verify verification**: Test commands specified?\n\n## 4.3: Register Detailed Todos\n\nConvert Plan agent output into granular todos:\n\n```\nTodoWrite([\n // Each step from the plan becomes a todo\n {\"id\": \"refactor-1\", \"content\": \"Step 1: [description]\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"verify-1\", \"content\": \"Verify Step 1: run tests\", \"status\": \"pending\", \"priority\": \"high\"},\n {\"id\": \"refactor-2\", \"content\": \"Step 2: [description]\", \"status\": \"pending\", \"priority\": \"medium\"},\n {\"id\": \"verify-2\", \"content\": \"Verify Step 2: run tests\", \"status\": \"pending\", \"priority\": \"medium\"},\n // ... 
continue for all steps\n])\n```\n\n**Mark phase-4 as completed.**\n\n---\n\n# PHASE 5: EXECUTE REFACTORING (DETERMINISTIC EXECUTION)\n\n**Mark phase-5 as in_progress.**\n\n## 5.1: Execution Protocol\n\nFor EACH refactoring step:\n\n### Pre-Step\n1. Mark step todo as `in_progress`\n2. Read current file state\n3. Verify lsp_diagnostics is baseline\n\n### Execute Step\nUse appropriate tool:\n\n**For Symbol Renames:**\n```typescript\nlsp_prepare_rename(filePath, line, character) // Validate rename is possible\nlsp_rename(filePath, line, character, newName) // Execute rename\n```\n\n**For Pattern Transformations:**\n```typescript\n// First identify the exact structural sites\nast_grep_search(pattern, lang, paths)\n\n// Then apply precise edits with Edit or symbol-safe LSP operations\nedit(filePath, oldString, newString)\n```\n\n**For Structural Changes:**\n```typescript\n// Use Edit tool for precise changes\nedit(filePath, oldString, newString)\n```\n\n### Post-Step Verification (MANDATORY)\n\n```typescript\n// 1. Check diagnostics\nlsp_diagnostics(filePath) // Must be clean or same as baseline\n\n// 2. Run tests\nbash(\"bun test\") // Or appropriate test command\n\n// 3. Type check\nbash(\"tsc --noEmit\") // Or appropriate type check\n```\n\n### Step Completion\n1. If verification passes \u2192 Mark step todo as `completed`\n2. If verification fails \u2192 **STOP AND FIX**\n\n## 5.2: Failure Recovery Protocol\n\nIf ANY verification fails:\n\n1. **STOP** immediately\n2. **REVERT** the failed change\n3. **DIAGNOSE** what went wrong\n4. 
**OPTIONS**:\n - Fix the issue and retry\n - Skip this step (if optional)\n - Consult oracle agent for help\n - Ask user for guidance\n\n**NEVER proceed to next step with broken tests.**\n\n## 5.3: Commit Checkpoints\n\nAfter each logical group of changes:\n\n```bash\ngit add [changed-files]\ngit commit -m \"refactor(scope): description\n\n[details of what was changed and why]\"\n```\n\n**Mark phase-5 as completed when all refactoring steps done.**\n\n---\n\n# PHASE 6: FINAL VERIFICATION (REGRESSION CHECK)\n\n**Mark phase-6 as in_progress.**\n\n## 6.1: Full Test Suite\n\n```bash\n# Run complete test suite\nbun test # or npm test, pytest, go test, etc.\n```\n\n## 6.2: Type Check\n\n```bash\n# Full type check\ntsc --noEmit # or equivalent\n```\n\n## 6.3: Lint Check\n\n```bash\n# Run linter\neslint . # or equivalent\n```\n\n## 6.4: Build Verification (if applicable)\n\n```bash\n# Ensure build still works\nbun run build # or npm run build, etc.\n```\n\n## 6.5: Final Diagnostics\n\n```typescript\n// Check all changed files\nfor (file of changedFiles) {\n lsp_diagnostics(file) // Must all be clean\n}\n```\n\n## 6.6: Generate Summary\n\n```markdown\n## Refactoring Complete\n\n### What Changed\n- [List of changes made]\n\n### Files Modified\n- `path/to/file.ts` - [what changed]\n- `path/to/file2.ts` - [what changed]\n\n### Verification Results\n- Tests: PASSED (X/Y passing)\n- Type Check: CLEAN\n- Lint: CLEAN\n- Build: SUCCESS\n\n### No Regressions Detected\nAll existing tests pass. 
No new errors introduced.\n```\n\n**Mark phase-6 as completed.**\n\n---\n\n# CRITICAL RULES\n\n## NEVER DO\n- Skip lsp_diagnostics check after changes\n- Proceed with failing tests\n- Make changes without understanding impact\n- Use `as any`, `@ts-ignore`, `@ts-expect-error`\n- Delete tests to make them pass\n- Commit broken code\n- Refactor without understanding existing patterns\n\n## ALWAYS DO\n- Understand before changing\n- Use `ast_grep_search` to confirm structural matches before editing\n- Verify after every change\n- Follow existing codebase patterns\n- Keep todos updated in real-time\n- Commit at logical checkpoints\n- Report issues immediately\n\n## ABORT CONDITIONS\nIf any of these occur, **STOP and consult user**:\n- Test coverage is zero for target code\n- Changes would break public API\n- Refactoring scope is unclear\n- 3 consecutive verification failures\n- User-defined constraints violated\n\n---\n\n# Tool Usage Philosophy\n\nYou already know these tools. Use them intelligently:\n\n## LSP Tools\nLeverage LSP tools for precision analysis. Key patterns:\n- **Understand before changing**: `LspGotoDefinition` to grasp context\n- **Impact analysis**: `LspFindReferences` to map all usages before modification\n- **Safe refactoring**: `lsp_prepare_rename` \u2192 `lsp_rename` for symbol renames\n- **Continuous verification**: `lsp_diagnostics` after every change\n\n## AST-Grep\nUse `ast_grep_search` to locate structural patterns that plain text grep would miss.\nApply the actual change with Edit or LSP after confirming the match shape.\n\n## Agents\n- `explore`: Parallel codebase pattern discovery\n- `plan`: Detailed refactoring plan generation\n- `oracle`: Read-only consultation for complex architectural decisions and debugging\n- `librarian`: **Use proactively** when encountering deprecated methods or library migration tasks. 
Query official docs and OSS examples for modern replacements.\n\n## Deprecated Code & Library Migration\nWhen you encounter deprecated methods/APIs during refactoring:\n1. Fire `librarian` to find the recommended modern alternative\n2. **DO NOT auto-upgrade to latest version** unless user explicitly requests migration\n3. If user requests library migration, use `librarian` to fetch latest API docs before making changes\n\n---\n\n**Remember: Refactoring without tests is reckless. Refactoring without understanding is destructive. This command ensures you do neither.**\n\n<user-request>\n$ARGUMENTS\n</user-request>\n";