@oh-my-pi/pi-coding-agent 14.1.0 → 14.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +79 -0
- package/package.json +8 -8
- package/src/async/job-manager.ts +43 -10
- package/src/commit/agentic/tools/analyze-file.ts +1 -2
- package/src/config/mcp-schema.json +1 -1
- package/src/config/model-equivalence.ts +1 -0
- package/src/config/model-registry.ts +63 -34
- package/src/config/model-resolver.ts +111 -15
- package/src/config/settings-schema.ts +4 -3
- package/src/config/settings.ts +1 -1
- package/src/cursor.ts +64 -23
- package/src/edit/index.ts +254 -89
- package/src/edit/modes/chunk.ts +336 -57
- package/src/edit/modes/hashline.ts +51 -26
- package/src/edit/modes/patch.ts +16 -10
- package/src/edit/modes/replace.ts +15 -7
- package/src/edit/renderer.ts +248 -94
- package/src/export/html/template.generated.ts +1 -1
- package/src/export/html/template.js +6 -4
- package/src/extensibility/custom-tools/types.ts +0 -3
- package/src/extensibility/extensions/loader.ts +16 -0
- package/src/extensibility/extensions/runner.ts +2 -7
- package/src/extensibility/extensions/types.ts +8 -4
- package/src/internal-urls/docs-index.generated.ts +3 -3
- package/src/ipy/executor.ts +447 -52
- package/src/ipy/kernel.ts +39 -13
- package/src/lsp/client.ts +54 -0
- package/src/lsp/index.ts +8 -0
- package/src/lsp/types.ts +6 -0
- package/src/main.ts +0 -1
- package/src/modes/acp/acp-agent.ts +4 -1
- package/src/modes/components/bash-execution.ts +16 -4
- package/src/modes/components/status-line/presets.ts +17 -6
- package/src/modes/components/status-line/segments.ts +15 -0
- package/src/modes/components/status-line-segment-editor.ts +1 -0
- package/src/modes/components/status-line.ts +7 -1
- package/src/modes/components/tool-execution.ts +145 -75
- package/src/modes/controllers/command-controller.ts +24 -1
- package/src/modes/controllers/event-controller.ts +4 -1
- package/src/modes/controllers/extension-ui-controller.ts +28 -5
- package/src/modes/controllers/input-controller.ts +9 -3
- package/src/modes/controllers/selector-controller.ts +4 -1
- package/src/modes/interactive-mode.ts +19 -3
- package/src/modes/print-mode.ts +13 -4
- package/src/modes/prompt-action-autocomplete.ts +3 -5
- package/src/modes/rpc/rpc-mode.ts +8 -2
- package/src/modes/shared.ts +2 -2
- package/src/modes/types.ts +1 -0
- package/src/modes/utils/ui-helpers.ts +1 -0
- package/src/prompts/tools/bash.md +2 -2
- package/src/prompts/tools/chunk-edit.md +191 -163
- package/src/prompts/tools/hashline.md +11 -11
- package/src/prompts/tools/patch.md +10 -5
- package/src/prompts/tools/{await.md → poll.md} +1 -1
- package/src/prompts/tools/read-chunk.md +3 -3
- package/src/prompts/tools/task.md +2 -2
- package/src/prompts/tools/vim.md +98 -0
- package/src/sdk.ts +754 -724
- package/src/session/agent-session.ts +164 -34
- package/src/session/session-manager.ts +50 -4
- package/src/slash-commands/builtin-registry.ts +17 -0
- package/src/task/executor.ts +4 -4
- package/src/task/index.ts +3 -5
- package/src/task/types.ts +2 -2
- package/src/tools/bash.ts +26 -8
- package/src/tools/find.ts +5 -2
- package/src/tools/grep.ts +77 -8
- package/src/tools/index.ts +48 -19
- package/src/tools/{await-tool.ts → poll-tool.ts} +36 -30
- package/src/tools/python.ts +293 -278
- package/src/tools/submit-result.ts +5 -2
- package/src/tools/todo-write.ts +8 -2
- package/src/tools/vim.ts +966 -0
- package/src/utils/edit-mode.ts +2 -1
- package/src/utils/session-color.ts +55 -0
- package/src/utils/title-generator.ts +15 -6
- package/src/vim/buffer.ts +309 -0
- package/src/vim/commands.ts +382 -0
- package/src/vim/engine.ts +2426 -0
- package/src/vim/parser.ts +151 -0
- package/src/vim/render.ts +252 -0
- package/src/vim/types.ts +197 -0
@@ -10,16 +10,16 @@ export const EMBEDDED_DOCS: Readonly<Record<string, string>> = {
"custom-tools.md": "# Custom Tools\n\nCustom tools are model-callable functions that plug into the same tool execution pipeline as built-in tools.\n\nA custom tool is a TypeScript/JavaScript module that exports a factory. The factory receives a host API (`CustomToolAPI`) and returns one tool or an array of tools.\n\n## What this is (and is not)\n\n- **Custom tool**: callable by the model during a turn (`execute` + TypeBox schema).\n- **Extension**: lifecycle/event framework that can register tools and intercept/modify events.\n- **Hook**: external pre/post command scripts.\n- **Skill**: static guidance/context package, not executable tool code.\n\nIf you need the model to call code directly, use a custom tool.\n\n## Integration paths in current code\n\nThere are two active integration styles:\n\n1. **SDK-provided custom tools** (`options.customTools`)\n - Wrapped into agent tools via `CustomToolAdapter` or extension wrappers.\n - Always included in the initial active tool set in SDK bootstrap.\n\n2. **Filesystem-discovered modules via loader API** (`discoverAndLoadCustomTools` / `loadCustomTools`)\n - Exposed as library APIs in `src/extensibility/custom-tools/loader.ts`.\n - Host code can call these to discover and load tool modules from config/provider/plugin paths.\n\n```text\nModel tool call flow\n\nLLM tool call\n │\n ▼\nTool registry (built-ins + custom tool adapters)\n │\n ▼\nCustomTool.execute(toolCallId, params, onUpdate, ctx, signal)\n │\n ├─ onUpdate(...) -> streamed partial result\n └─ return result -> final tool content/details\n```\n\n## Discovery locations (loader API)\n\n`discoverAndLoadCustomTools(configuredPaths, cwd, builtInToolNames)` merges:\n\n1. Capability providers (`toolCapability`), including:\n - Native OMP config (`~/.omp/agent/tools`, `.omp/tools`)\n - Claude config (`~/.claude/tools`, `.claude/tools`)\n - Codex config (`~/.codex/tools`, `.codex/tools`)\n - Claude marketplace plugin cache provider\n2. 
Installed plugin manifests (`~/.omp/plugins/node_modules/*` via plugin loader)\n3. Explicit configured paths passed to the loader\n\n### Important behavior\n\n- Duplicate resolved paths are deduplicated.\n- Tool name conflicts are rejected against built-ins and already-loaded custom tools.\n- `.md` and `.json` files are discovered as tool metadata by some providers, but the executable module loader rejects them as runnable tools.\n- Relative configured paths are resolved from `cwd`; `~` is expanded.\n\n## Module contract\n\nA custom tool module must export a function (default export preferred):\n\n```ts\nimport type { CustomToolFactory } from \"@oh-my-pi/pi-coding-agent\";\n\nconst factory: CustomToolFactory = (pi) => ({\n\tname: \"repo_stats\",\n\tlabel: \"Repo Stats\",\n\tdescription: \"Counts tracked TypeScript files\",\n\tparameters: pi.typebox.Type.Object({\n\t\tglob: pi.typebox.Type.Optional(pi.typebox.Type.String({ default: \"**/*.ts\" })),\n\t}),\n\n\tasync execute(toolCallId, params, onUpdate, ctx, signal) {\n\t\tonUpdate?.({\n\t\t\tcontent: [{ type: \"text\", text: \"Scanning files...\" }],\n\t\t\tdetails: { phase: \"scan\" },\n\t\t});\n\n\t\tconst result = await pi.exec(\"git\", [\"ls-files\", params.glob ?? 
\"**/*.ts\"], { signal, cwd: pi.cwd });\n\t\tif (result.killed) {\n\t\t\tthrow new Error(\"Scan was cancelled\");\n\t\t}\n\t\tif (result.code !== 0) {\n\t\t\tthrow new Error(result.stderr || \"git ls-files failed\");\n\t\t}\n\n\t\tconst files = result.stdout.split(\"\\n\").filter(Boolean);\n\t\treturn {\n\t\t\tcontent: [{ type: \"text\", text: `Found ${files.length} files` }],\n\t\t\tdetails: { count: files.length, sample: files.slice(0, 10) },\n\t\t};\n\t},\n\n\tonSession(event) {\n\t\tif (event.reason === \"shutdown\") {\n\t\t\t// cleanup resources if needed\n\t\t}\n\t},\n});\n\nexport default factory;\n```\n\nFactory return type:\n\n- `CustomTool`\n- `CustomTool[]`\n- `Promise<CustomTool | CustomTool[]>`\n\n## API surface passed to factories (`CustomToolAPI`)\n\nFrom `types.ts` and `loader.ts`:\n\n- `cwd`: host working directory\n- `exec(command, args, options?)`: process execution helper\n- `ui`: UI context (can be no-op in headless modes)\n- `hasUI`: `false` in non-interactive flows\n- `logger`: shared file logger\n- `typebox`: injected `@sinclair/typebox`\n- `pi`: injected `@oh-my-pi/pi-coding-agent` exports\n- `pushPendingAction(action)`: register a preview action for hidden `resolve` tool (`docs/resolve-tool-runtime.md`)\n\nLoader starts with a no-op UI context and requires host code to call `setUIContext(...)` when real UI is ready.\n\n## Execution contract and typing\n\n`CustomTool.execute` signature:\n\n```ts\nexecute(toolCallId, params, onUpdate, ctx, signal)\n```\n\n- `params` is statically typed from your TypeBox schema via `Static<TParams>`.\n- Runtime argument validation happens before execution in the agent loop.\n- `onUpdate` emits partial results for UI streaming.\n- `ctx` includes session/model state and an `abort()` helper.\n- `signal` carries cancellation.\n\n`CustomToolAdapter` bridges this to the agent tool interface and forwards calls in the correct argument order.\n\n## How tools are exposed to the model\n\n- Tools are wrapped into 
`AgentTool` instances (`CustomToolAdapter` or extension wrappers).\n- They are inserted into the session tool registry by name.\n- In SDK bootstrap, custom and extension-registered tools are force-included in the initial active set.\n- CLI `--tools` currently validates only built-in tool names; custom tool inclusion is handled through discovery/registration paths and SDK options.\n\n## Rendering hooks\n\nOptional rendering hooks:\n\n- `renderCall(args, theme)`\n- `renderResult(result, options, theme, args?)`\n\nRuntime behavior in TUI:\n\n- If hooks exist, tool output is rendered inside a `Box` container.\n- `renderResult` receives `{ expanded, isPartial, spinnerFrame? }`.\n- Renderer errors are caught and logged; UI falls back to default text rendering.\n\n## Session/state handling\n\nOptional `onSession(event, ctx)` receives session lifecycle events, including:\n\n- `start`, `switch`, `branch`, `tree`, `shutdown`\n- `auto_compaction_start`, `auto_compaction_end`\n- `auto_retry_start`, `auto_retry_end`\n- `ttsr_triggered`, `todo_reminder`\n\nUse `ctx.sessionManager` to reconstruct state from history when branch/session context changes.\n\n## Failures and cancellation semantics\n\n### Synchronous/async failures\n\n- Throwing (or rejected promises) in `execute` is treated as tool failure.\n- Agent runtime converts failures into tool result messages with `isError: true` and error text content.\n- With extension wrappers, `tool_result` handlers can further rewrite content/details and even override error status.\n\n### Cancellation\n\n- Agent abort propagates through `AbortSignal` to `execute`.\n- Forward `signal` to subprocess work (`pi.exec(..., { signal })`) for cooperative cancellation.\n- `ctx.abort()` lets a tool request abort of the current agent operation.\n\n### onSession errors\n\n- `onSession` errors are caught and logged as warnings; they do not crash the session.\n\n## Real constraints to design for\n\n- Tool names must be globally unique in the active 
registry.\n- Prefer deterministic, schema-shaped outputs in `details` for renderer/state reconstruction.\n- Guard UI usage with `pi.hasUI`.\n- Treat `.md`/`.json` in tool directories as metadata, not executable modules.\n",
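The "Important behavior" notes in the embedded custom-tools doc above (duplicate resolved paths are deduplicated; tool name conflicts are rejected against built-ins and already-loaded tools) can be sketched as a small merge pass. This is a hypothetical illustration only — the function and type names below are invented, not the package's actual loader API:

```typescript
// Hypothetical sketch of the described loader behavior: dedupe candidate
// modules by resolved path, then reject name conflicts without aborting
// the whole load.
type LoadedTool = { name: string; path: string };

function mergeToolCandidates(
	builtInNames: ReadonlySet<string>,
	candidates: LoadedTool[],
): { loaded: LoadedTool[]; errors: string[] } {
	const seenPaths = new Set<string>();
	const seenNames = new Set(builtInNames);
	const loaded: LoadedTool[] = [];
	const errors: string[] = [];
	for (const c of candidates) {
		// Duplicate resolved path: silently deduplicated, first seen wins.
		if (seenPaths.has(c.path)) continue;
		seenPaths.add(c.path);
		// Name conflict with a built-in or an already-loaded custom tool:
		// rejected per-tool, load of the remaining candidates continues.
		if (seenNames.has(c.name)) {
			errors.push(`tool name conflict: ${c.name}`);
			continue;
		}
		seenNames.add(c.name);
		loaded.push(c);
	}
	return { loaded, errors };
}
```

The per-path error collection mirrors the documented contract that one bad module does not abort the whole load.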
"environment-variables.md": "# Environment Variables (Current Runtime Reference)\n\nThis reference is derived from current code paths in:\n\n- `packages/coding-agent/src/**`\n- `packages/ai/src/**` (provider/auth resolution used by coding-agent)\n- `packages/utils/src/**` and `packages/tui/src/**` where those vars directly affect coding-agent runtime\n\nIt documents only active behavior.\n\n## Resolution model and precedence\n\nMost runtime lookups use `$env` from `@oh-my-pi/pi-utils` (`packages/utils/src/env.ts`).\n\n`$env` loading order:\n\n1. Existing process environment (`Bun.env`)\n2. Project `.env` (`$PWD/.env`) for keys not already set\n3. Home `.env` (`~/.env`) for keys not already set\n\nAdditional rule in `.env` files: `OMP_*` keys are mirrored to `PI_*` keys during parse.\n\n---\n\n## 1) Model/provider authentication\n\nThese are consumed via `getEnvApiKey()` (`packages/ai/src/stream.ts`) unless noted otherwise.\n\n### Core provider credentials\n\n| Variable | Used for | Required when | Notes / precedence |\n|---------------------------------|---|---------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|\n| `ANTHROPIC_OAUTH_TOKEN` | Anthropic API auth | Using Anthropic with OAuth token auth | Takes precedence over `ANTHROPIC_API_KEY` for provider auth resolution |\n| `ANTHROPIC_API_KEY` | Anthropic API auth | Using Anthropic without OAuth token | Fallback after `ANTHROPIC_OAUTH_TOKEN` |\n| `ANTHROPIC_FOUNDRY_API_KEY` | Anthropic via Azure Foundry / enterprise gateway | `CLAUDE_CODE_USE_FOUNDRY` enabled | Takes precedence over `ANTHROPIC_OAUTH_TOKEN` and `ANTHROPIC_API_KEY` when Foundry mode is enabled |\n| `OPENAI_API_KEY` | OpenAI auth | Using OpenAI-family providers without explicit apiKey argument | Used by OpenAI Completions/Responses providers |\n| `GEMINI_API_KEY` | Google Gemini auth | Using `google` provider models | Primary key for Gemini 
provider mapping |\n| `GOOGLE_API_KEY` | Gemini image tool auth fallback | Using `gemini_image` tool without `GEMINI_API_KEY` | Used by coding-agent image tool fallback path |\n| `GROQ_API_KEY` | Groq auth | Using Groq models | |\n| `CEREBRAS_API_KEY` | Cerebras auth | Using Cerebras models | |\n| `TOGETHER_API_KEY` | Together auth | Using `together` provider | |\n| `HUGGINGFACE_HUB_TOKEN` | Hugging Face auth | Using `huggingface` provider | Primary Hugging Face token env var |\n| `HF_TOKEN` | Hugging Face auth | Using `huggingface` provider | Fallback when `HUGGINGFACE_HUB_TOKEN` is unset |\n| `SYNTHETIC_API_KEY` | Synthetic auth | Using Synthetic models | |\n| `NVIDIA_API_KEY` | NVIDIA auth | Using `nvidia` provider | |\n| `NANO_GPT_API_KEY` | NanoGPT auth | Using `nanogpt` provider | |\n| `VENICE_API_KEY` | Venice auth | Using `venice` provider | |\n| `LITELLM_API_KEY` | LiteLLM auth | Using `litellm` provider | OpenAI-compatible LiteLLM proxy key |\n| `LM_STUDIO_API_KEY` | LM Studio auth (optional) | Using `lm-studio` provider with authenticated hosts | Local LM Studio usually runs without auth; any non-empty token works when a key is required |\n| `OLLAMA_API_KEY` | Ollama auth (optional) | Using `ollama` provider with authenticated hosts | Local Ollama usually runs without auth; any non-empty token works when a key is required |\n| `LLAMA_CPP_API_KEY` | Ollama auth (optional) | Using `llama-server` with `--api-key` parameter | Local llama.cpp usually runs without auth; any non-empty token works when a key is configured |\n| `XIAOMI_API_KEY` | Xiaomi MiMo auth | Using `xiaomi` provider | |\n| `MOONSHOT_API_KEY` | Moonshot auth | Using `moonshot` provider | |\n| `XAI_API_KEY` | xAI auth | Using xAI models | |\n| `OPENROUTER_API_KEY` | OpenRouter auth | Using OpenRouter models | Also used by image tool when preferred/auto provider is OpenRouter |\n| `MISTRAL_API_KEY` | Mistral auth | Using Mistral models | |\n| `ZAI_API_KEY` | z.ai auth | Using z.ai models | 
Also used by z.ai web search provider |\n| `MINIMAX_API_KEY` | MiniMax auth | Using `minimax` provider | |\n| `MINIMAX_CODE_API_KEY` | MiniMax Code auth | Using `minimax-code` provider | |\n| `MINIMAX_CODE_CN_API_KEY` | MiniMax Code CN auth | Using `minimax-code-cn` provider | |\n| `OPENCODE_API_KEY` | OpenCode auth | Using OpenCode models | |\n| `QIANFAN_API_KEY` | Qianfan auth | Using `qianfan` provider | |\n| `QWEN_OAUTH_TOKEN` | Qwen Portal auth | Using `qwen-portal` with OAuth token | Takes precedence over `QWEN_PORTAL_API_KEY` |\n| `QWEN_PORTAL_API_KEY` | Qwen Portal auth | Using `qwen-portal` with API key | Fallback after `QWEN_OAUTH_TOKEN` |\n| `ZENMUX_API_KEY` | ZenMux auth | Using `zenmux` provider | Used for ZenMux OpenAI and Anthropic-compatible routes |\n| `VLLM_API_KEY` | vLLM auth/discovery opt-in | Using `vllm` provider (local OpenAI-compatible servers) | Any non-empty value works for no-auth local servers |\n| `CURSOR_ACCESS_TOKEN` | Cursor provider auth | Using Cursor provider | |\n| `AI_GATEWAY_API_KEY` | Vercel AI Gateway auth | Using `vercel-ai-gateway` provider | |\n| `CLOUDFLARE_AI_GATEWAY_API_KEY` | Cloudflare AI Gateway auth | Using `cloudflare-ai-gateway` provider | Base URL must be configured as `https://gateway.ai.cloudflare.com/v1/<account>/<gateway>/anthropic` |\n\n### GitHub/Copilot token chains\n\n| Variable | Used for | Chain |\n|---|---|---|\n| `COPILOT_GITHUB_TOKEN` | GitHub Copilot provider auth | `COPILOT_GITHUB_TOKEN` → `GH_TOKEN` → `GITHUB_TOKEN` |\n| `GH_TOKEN` | Copilot fallback; GitHub API auth in web scraper | In web scraper: `GITHUB_TOKEN` → `GH_TOKEN` |\n| `GITHUB_TOKEN` | Copilot fallback; GitHub API auth in web scraper | In web scraper: checked before `GH_TOKEN` |\n\n---\n\n## 2) Provider-specific runtime configuration\n\n### Anthropic Foundry Gateway (Azure / enterprise proxy)\n\nWhen `CLAUDE_CODE_USE_FOUNDRY` is enabled, Anthropic requests switch to Foundry mode:\n\n- Base URL resolves from `FOUNDRY_BASE_URL` 
(fallback remains model/default base URL if unset).\n- API key resolution for provider `anthropic` becomes:\n `ANTHROPIC_FOUNDRY_API_KEY` → `ANTHROPIC_OAUTH_TOKEN` → `ANTHROPIC_API_KEY`.\n- `ANTHROPIC_CUSTOM_HEADERS` is parsed as comma/newline-separated `key: value` pairs and merged into request headers.\n- TLS client/server material can be injected from env values:\n `NODE_EXTRA_CA_CERTS`, `CLAUDE_CODE_CLIENT_CERT`, `CLAUDE_CODE_CLIENT_KEY`.\n Each accepts either:\n - a filesystem path to PEM content, or\n - inline PEM (including escaped `\\n` sequences).\n\n| Variable | Value type | Behavior |\n|---|---|---|\n| `CLAUDE_CODE_USE_FOUNDRY` | Boolean-like string (`1`, `true`, `yes`, `on`) | Enables Foundry mode for Anthropic provider |\n| `FOUNDRY_BASE_URL` | URL string | Anthropic endpoint base URL in Foundry mode |\n| `ANTHROPIC_FOUNDRY_API_KEY` | Token string | Used for `Authorization: Bearer <token>` |\n| `ANTHROPIC_CUSTOM_HEADERS` | Header list string | Extra headers; format `header-a: value, header-b: value` or newline-separated |\n| `NODE_EXTRA_CA_CERTS` | PEM path or inline PEM | Extra CA chain for server certificate validation |\n| `CLAUDE_CODE_CLIENT_CERT` | PEM path or inline PEM | mTLS client certificate |\n| `CLAUDE_CODE_CLIENT_KEY` | PEM path or inline PEM | mTLS client private key (must be paired with cert) |\n\n### Amazon Bedrock\n\n| Variable | Default / behavior |\n|---|---|\n| `AWS_REGION` | Primary region source |\n| `AWS_DEFAULT_REGION` | Fallback if `AWS_REGION` unset |\n| `AWS_PROFILE` | Enables named profile auth path |\n| `AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY` | Enables IAM key auth path |\n| `AWS_BEARER_TOKEN_BEDROCK` | Enables bearer token auth path |\n| `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` / `AWS_CONTAINER_CREDENTIALS_FULL_URI` | Enables ECS task credential path |\n| `AWS_WEB_IDENTITY_TOKEN_FILE` + `AWS_ROLE_ARN` | Enables web identity auth path |\n| `AWS_BEDROCK_SKIP_AUTH` | If `1`, injects dummy credentials (proxy/non-auth 
scenarios) |\n| `AWS_BEDROCK_FORCE_HTTP1` | If `1`, forces Node HTTP/1 request handler |\n\nRegion fallback in provider code: `options.region` → `AWS_REGION` → `AWS_DEFAULT_REGION` → `us-east-1`.\n\n### Azure OpenAI Responses\n\n| Variable | Default / behavior |\n|---|---|\n| `AZURE_OPENAI_API_KEY` | Required unless API key passed as option |\n| `AZURE_OPENAI_API_VERSION` | Default `v1` |\n| `AZURE_OPENAI_BASE_URL` | Direct base URL override |\n| `AZURE_OPENAI_RESOURCE_NAME` | Used to construct base URL: `https://<resource>.openai.azure.com/openai/v1` |\n| `AZURE_OPENAI_DEPLOYMENT_NAME_MAP` | Optional mapping string: `modelId=deploymentName,model2=deployment2` |\n\nBase URL resolution: option `azureBaseUrl` → env `AZURE_OPENAI_BASE_URL` → option/env resource name → `model.baseUrl`.\n\n### Google Vertex AI\n\n| Variable | Required? | Notes |\n|---|---|---|\n| `GOOGLE_CLOUD_PROJECT` | Yes (unless passed in options) | Fallback: `GCLOUD_PROJECT` |\n| `GCLOUD_PROJECT` | Fallback | Used as alternate project ID source |\n| `GOOGLE_CLOUD_LOCATION` | Yes (unless passed in options) | No default in provider |\n| `GOOGLE_APPLICATION_CREDENTIALS` | Conditional | If set, file must exist; otherwise ADC fallback path is checked (`~/.config/gcloud/application_default_credentials.json`) |\n\n### Kimi\n\n| Variable | Default / behavior |\n|---|---|\n| `KIMI_CODE_OAUTH_HOST` | Primary OAuth host override |\n| `KIMI_OAUTH_HOST` | Fallback OAuth host override |\n| `KIMI_CODE_BASE_URL` | Overrides Kimi usage endpoint base URL (`usage/kimi.ts`) |\n\nOAuth host chain: `KIMI_CODE_OAUTH_HOST` → `KIMI_OAUTH_HOST` → `https://auth.kimi.com`.\n\n### Antigravity/Gemini image compatibility\n\n| Variable | Default / behavior |\n|---|---|\n| `PI_AI_ANTIGRAVITY_VERSION` | Overrides Antigravity user-agent version tag in Gemini CLI provider |\n\n### OpenAI Codex responses (feature/debug controls)\n\n| Variable | Behavior |\n|---|---|\n| `PI_CODEX_DEBUG` | `1`/`true` enables Codex provider debug logging 
|\n| `PI_CODEX_WEBSOCKET` | `1`/`true` enables websocket transport preference |\n| `PI_CODEX_WEBSOCKET_V2` | `1`/`true` enables websocket v2 path |\n| `PI_CODEX_WEBSOCKET_IDLE_TIMEOUT_MS` | Positive integer override (default 300000) |\n| `PI_CODEX_WEBSOCKET_RETRY_BUDGET` | Non-negative integer override (default 5) |\n| `PI_CODEX_WEBSOCKET_RETRY_DELAY_MS` | Positive integer base backoff override (default 500) |\n\n### Cursor provider debug\n\n| Variable | Behavior |\n|---|---|\n| `DEBUG_CURSOR` | Enables provider debug logs; `2`/`verbose` for detailed payload snippets |\n| `DEBUG_CURSOR_LOG` | Optional file path for JSONL debug log output |\n\n### Prompt cache compatibility switch\n\n| Variable | Behavior |\n|---|---|\n| `PI_CACHE_RETENTION` | If `long`, enables long retention where supported (`anthropic`, `openai-responses`, Bedrock retention resolution) |\n\n---\n\n## 3) Web search subsystem\n\n### Search provider credentials\n\n| Variable | Used by |\n|---|---|\n| `EXA_API_KEY` | Exa search provider and Exa MCP tools |\n| `BRAVE_API_KEY` | Brave search provider |\n| `PERPLEXITY_API_KEY` | Perplexity search provider API-key mode |\n| `TAVILY_API_KEY` | Tavily search provider |\n| `ZAI_API_KEY` | z.ai search provider (also checks stored OAuth in `agent.db`) |\n| `OPENAI_API_KEY` / Codex OAuth in DB | Codex search provider availability/auth |\n\n### Anthropic web search auth chain\n\n`packages/coding-agent/src/web/search/auth.ts` resolves Anthropic web-search credentials in this order:\n\n1. `ANTHROPIC_SEARCH_API_KEY` (+ optional `ANTHROPIC_SEARCH_BASE_URL`)\n2. `models.json` provider entry with `api: \"anthropic-messages\"`\n3. Anthropic OAuth credentials from `agent.db` (must not expire within 5-minute buffer)\n4. 
Generic Anthropic env fallback: provider key (`ANTHROPIC_FOUNDRY_API_KEY`/`ANTHROPIC_OAUTH_TOKEN`/`ANTHROPIC_API_KEY`) + optional `ANTHROPIC_BASE_URL` (`FOUNDRY_BASE_URL` when Foundry mode is enabled)\n\nRelated vars:\n\n| Variable | Default / behavior |\n|---|---|\n| `ANTHROPIC_SEARCH_API_KEY` | Highest-priority explicit search key |\n| `ANTHROPIC_SEARCH_BASE_URL` | Defaults to `https://api.anthropic.com` when omitted |\n| `ANTHROPIC_SEARCH_MODEL` | Defaults to `claude-haiku-4-5` |\n| `ANTHROPIC_BASE_URL` | Generic fallback base URL for tier-4 auth path |\n\n### Perplexity OAuth flow behavior flag\n\n| Variable | Behavior |\n|---|---|\n| `PI_AUTH_NO_BORROW` | If set, disables macOS native-app token borrowing path in Perplexity login flow |\n\n---\n\n## 4) Python tooling and kernel runtime\n\n| Variable | Default / behavior |\n|---|---|\n| `PI_PY` | Python tool mode override: `0`/`bash`=`bash-only`, `1`/`py`=`ipy-only`, `mix`/`both`=`both`; invalid values ignored |\n| `PI_PYTHON_SKIP_CHECK` | If `1`, skips Python kernel availability checks/warm checks |\n| `PI_PYTHON_GATEWAY_URL` | If set, uses external kernel gateway instead of local shared gateway |\n| `PI_PYTHON_GATEWAY_TOKEN` | Optional auth token for external gateway (`Authorization: token <value>`) |\n| `PI_PYTHON_IPC_TRACE` | If `1`, enables low-level IPC trace path in kernel module |\n| `VIRTUAL_ENV` | Highest-priority venv path for Python runtime resolution |\n\nExtra conditional behavior:\n\n- If `BUN_ENV=test` or `NODE_ENV=test`, Python availability checks are treated as OK and warming is skipped.\n- Python env filtering denies common API keys and allows safe base vars + `LC_`, `XDG_`, `PI_` prefixes.\n\n---\n\n## 5) Agent/runtime behavior toggles\n\n| Variable | Default / behavior |\n|----------------------------|----------------------------------------------------------------------------------------------|\n| `PI_SMOL_MODEL` | Ephemeral model-role override for `smol` (CLI `--smol` takes precedence) 
|\n| `PI_SLOW_MODEL` | Ephemeral model-role override for `slow` (CLI `--slow` takes precedence) |\n| `PI_PLAN_MODEL` | Ephemeral model-role override for `plan` (CLI `--plan` takes precedence) |\n| `PI_NO_TITLE` | If set (any non-empty value), disables auto session title generation on first user message |\n| `NULL_PROMPT` | If `true`, system prompt builder returns empty string |\n| `PI_BLOCKED_AGENT` | Blocks a specific subagent type in task tool |\n| `PI_SUBPROCESS_CMD` | Overrides subagent spawn command (`omp` / `omp.cmd` resolution bypass) |\n| `PI_TASK_MAX_OUTPUT_BYTES` | Max captured output bytes per subagent (default `500000`) |\n| `PI_TASK_MAX_OUTPUT_LINES` | Max captured output lines per subagent (default `5000`) |\n| `PI_TIMING` | If `1`, enables startup/tool timing instrumentation logs |\n| `PI_DEBUG_STARTUP` | Enables startup stage debug prints to stderr in multiple startup paths |\n| `PI_PACKAGE_DIR` | Overrides package asset base dir resolution (docs/examples/changelog path lookup) |\n| `PI_DISABLE_LSPMUX` | If `1`, disables lspmux detection/integration and forces direct LSP server spawning |\n| `LM_STUDIO_BASE_URL` | Default implicit LM Studio discovery base URL override (`http://127.0.0.1:1234/v1` if unset) |\n| `OLLAMA_BASE_URL` | Default implicit Ollama discovery base URL override (`http://127.0.0.1:11434` if unset) |\n| `LLAMA_CPP_BASE_URL` | Default implicit Llama.cpp discovery base URL override (`http://127.0.0.1:8080` if unset) |\n| `PI_EDIT_VARIANT` | If `hashline`, forces hashline read/grep display mode when edit tool available |\n| `PI_NO_PTY` | If `1`, disables interactive PTY path for bash tool |\n\n`PI_NO_PTY` is also set internally when CLI `--no-pty` is used.\n\n---\n\n## 6) Storage and config root paths\n\nThese are consumed via `@oh-my-pi/pi-utils/dirs` and affect where coding-agent stores data.\n\n| Variable | Default / behavior |\n|---|---|\n| `PI_CONFIG_DIR` | Config root dirname under home (default `.omp`) |\n| 
`PI_CODING_AGENT_DIR` | Full override for agent directory (default `~/<PI_CONFIG_DIR or .omp>/agent`) |\n| `PWD` | Used when matching canonical current working directory in path helpers |\n\n---\n\n## 7) Shell/tool execution environment\n\n(From `packages/utils/src/procmgr.ts` and coding-agent bash tool integration.)\n\n| Variable | Behavior |\n|---|---|\n| `PI_BASH_NO_CI` | Suppresses automatic `CI=true` injection into spawned shell env |\n| `CLAUDE_BASH_NO_CI` | Legacy alias fallback for `PI_BASH_NO_CI` |\n| `PI_BASH_NO_LOGIN` | Intended to disable login shell mode |\n| `CLAUDE_BASH_NO_LOGIN` | Legacy alias fallback for `PI_BASH_NO_LOGIN` |\n| `PI_SHELL_PREFIX` | Optional command prefix wrapper |\n| `CLAUDE_CODE_SHELL_PREFIX` | Legacy alias fallback for `PI_SHELL_PREFIX` |\n| `VISUAL` | Preferred external editor command |\n| `EDITOR` | Fallback external editor command |\n\nCurrent implementation note: `PI_BASH_NO_LOGIN`/`CLAUDE_BASH_NO_LOGIN` are read, but current `getShellArgs()` returns `['-l','-c']` in both branches (effectively no-op today).\n\n---\n\n## 8) UI/theme/session detection (auto-detected env)\n\nThese are read as runtime signals; they are usually set by the terminal/OS rather than manually configured.\n\n| Variable | Used for |\n|---|---|\n| `COLORTERM`, `TERM`, `WT_SESSION` | Color capability detection (theme color mode) |\n| `COLORFGBG` | Terminal background light/dark auto-detection |\n| `TERM_PROGRAM`, `TERM_PROGRAM_VERSION`, `TERMINAL_EMULATOR` | Terminal identity in system prompt/context |\n| `KDE_FULL_SESSION`, `XDG_CURRENT_DESKTOP`, `DESKTOP_SESSION`, `XDG_SESSION_DESKTOP`, `GDMSESSION`, `WINDOWMANAGER` | Desktop/window-manager detection in system prompt/context |\n| `KITTY_WINDOW_ID`, `TMUX_PANE`, `TERM_SESSION_ID`, `WT_SESSION` | Stable per-terminal session breadcrumb IDs |\n| `SHELL`, `ComSpec`, `TERM_PROGRAM`, `TERM` | System info diagnostics |\n| `APPDATA`, `XDG_CONFIG_HOME` | lspmux config path resolution |\n| `HOME` | Path shortening 
in MCP command UI |\n\n---\n\n## 9) Native loader/debug flags\n\n| Variable | Behavior |\n|---|---|\n| `PI_DEV` | Enables verbose native addon load diagnostics in `packages/natives` |\n\n## 10) TUI runtime flags (shared package, affects coding-agent UX)\n\n| Variable | Behavior |\n|---|---|\n| `PI_NOTIFICATIONS` | `off` / `0` / `false` suppress desktop notifications |\n| `PI_TUI_WRITE_LOG` | If set, logs TUI writes to file |\n| `PI_HARDWARE_CURSOR` | If `1`, enables hardware cursor mode |\n| `PI_CLEAR_ON_SHRINK` | If `1`, clears empty rows when content shrinks |\n| `PI_DEBUG_REDRAW` | If `1`, enables redraw debug logging |\n| `PI_TUI_DEBUG` | If `1`, enables deep TUI debug dump path |\n\n---\n\n## 11) Commit generation controls\n\n| Variable | Behavior |\n|---|---|\n| `PI_COMMIT_TEST_FALLBACK` | If `true` (case-insensitive), force commit fallback generation path |\n| `PI_COMMIT_NO_FALLBACK` | If `true`, disables fallback when agent returns no proposal |\n| `PI_COMMIT_MAP_REDUCE` | If `false`, disables map-reduce commit analysis path |\n| `DEBUG` | If set, commit agent error stack traces are printed |\n\n---\n\n## Security-sensitive variables\n\nTreat these as secrets; do not log or commit them:\n\n- Provider/API keys and OAuth/bearer credentials (all `*_API_KEY`, `*_TOKEN`, OAuth access/refresh tokens)\n- Cloud credentials (`AWS_*`, `GOOGLE_APPLICATION_CREDENTIALS` path may expose service-account material)\n- Search/provider auth vars (`EXA_API_KEY`, `BRAVE_API_KEY`, `PERPLEXITY_API_KEY`, Anthropic search keys)\n- Foundry mTLS material (`CLAUDE_CODE_CLIENT_CERT`, `CLAUDE_CODE_CLIENT_KEY`, `NODE_EXTRA_CA_CERTS` when it points to private CA bundles)\n\nPython runtime also explicitly strips many common key vars before spawning kernel subprocesses (`packages/coding-agent/src/ipy/runtime.ts`).\n",
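The `$env` resolution model described at the top of the embedded environment-variables doc (process env first, then project `.env`, then home `.env`, with `OMP_*` keys in `.env` files mirrored to `PI_*` during parse) can be modeled as a layered merge. A minimal sketch, assuming this precedence — the helper names here are invented for illustration and are not the `@oh-my-pi/pi-utils` API:

```typescript
// Hedged model of the documented $env load order; not the real implementation.
type Env = Record<string, string>;

// During .env parsing, each OMP_* key is mirrored to the matching PI_* key
// (without overwriting an explicit PI_* entry in the same file).
function mirrorOmpKeys(file: Env): Env {
	const out: Env = { ...file };
	for (const [k, v] of Object.entries(file)) {
		if (k.startsWith("OMP_")) {
			const piKey = "PI_" + k.slice("OMP_".length);
			if (!(piKey in out)) out[piKey] = v;
		}
	}
	return out;
}

// Layers fill in only keys not already set: process env wins over project
// .env, which wins over home .env.
function resolveEnv(processEnv: Env, projectDotEnv: Env, homeDotEnv: Env): Env {
	const out: Env = { ...processEnv };
	for (const layer of [mirrorOmpKeys(projectDotEnv), mirrorOmpKeys(homeDotEnv)]) {
		for (const [k, v] of Object.entries(layer)) {
			if (!(k in out)) out[k] = v;
		}
	}
	return out;
}
```

Under this model, setting a variable in the shell always beats a `.env` file, and a project `.env` shadows the home `.env` for any key it defines.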
"extension-loading.md": "# Extension Loading (TypeScript/JavaScript Modules)\n\nThis document covers how the coding agent discovers and loads **extension modules** (`.ts`/`.js`) at startup.\n\nIt does **not** cover `gemini-extension.json` manifest extensions (documented separately).\n\n## What this subsystem does\n\nExtension loading builds a list of module entry files, imports each module with Bun, executes its factory, and returns:\n\n- loaded extension definitions\n- per-path load errors (without aborting the whole load)\n- a shared extension runtime object used later by `ExtensionRunner`\n\n## Primary implementation files\n\n- `src/extensibility/extensions/loader.ts` — path discovery + import/execution\n- `src/extensibility/extensions/index.ts` — public exports\n- `src/extensibility/extensions/runner.ts` — runtime/event execution after load\n- `src/discovery/builtin.ts` — native auto-discovery provider for extension modules\n- `src/config/settings.ts` — loads merged `extensions` / `disabledExtensions` settings\n\n---\n\n## Inputs to extension loading\n\n### 1) Auto-discovered native extension modules\n\n`discoverAndLoadExtensions()` first asks discovery providers for `extension-module` capability items, then keeps only provider `native` items.\n\nEffective native locations:\n\n- Project: `<cwd>/.omp/extensions`\n- User: `~/.omp/agent/extensions`\n\nPath roots come from the native provider (`SOURCE_PATHS.native`).\n\nNotes:\n\n- Native auto-discovery is currently `.omp` based.\n- Legacy `.pi` is still accepted in `package.json` manifest keys (`pi.extensions`), but not as a native root here.\n\n### 2) Explicitly configured paths\n\nAfter auto-discovery, configured paths are appended and resolved.\n\nConfigured path sources in the main session startup path (`sdk.ts`):\n\n1. CLI-provided paths (`--extension/-e`, and `--hook` is also treated as an extension path)\n2. 
Settings `extensions` array (merged global + project settings)\n\nGlobal settings file:\n\n- `~/.omp/agent/config.yml` (or custom agent dir via `PI_CODING_AGENT_DIR`)\n\nProject settings file:\n\n- `<cwd>/.omp/settings.json`\n\nExamples:\n\n```yaml\n# ~/.omp/agent/config.yml\nextensions:\n - ~/my-exts/safety.ts\n - ./local/ext-pack\n```\n\n```json\n{\n \"extensions\": [\"./.omp/extensions/my-extra\"]\n}\n```\n\n---\n\n## Enable/disable controls\n\n### Disable discovery\n\n- CLI: `--no-extensions`\n- SDK option: `disableExtensionDiscovery`\n\nBehavior split:\n\n- SDK: when `disableExtensionDiscovery=true`, it still loads `additionalExtensionPaths` via `loadExtensions()`.\n- CLI path building (`main.ts`) currently clears CLI extension paths when `--no-extensions` is set, so explicit `-e/--hook` are not forwarded in that mode.\n\n### Disable specific extension modules\n\n`disabledExtensions` setting filters by extension id format:\n\n- `extension-module:<derivedName>`\n\n`derivedName` is based on entry path (`getExtensionNameFromPath`), for example:\n\n- `/x/foo.ts` -> `foo`\n- `/x/bar/index.ts` -> `bar`\n\nExample:\n\n```yaml\ndisabledExtensions:\n - extension-module:foo\n```\n\n---\n\n## Path and entry resolution\n\n### Path normalization\n\nFor configured paths:\n\n1. Normalize unicode spaces\n2. Expand `~`\n3. If relative, resolve against current `cwd`\n\n### If configured path is a file\n\nIt is used directly as a module entry candidate.\n\n### If configured path is a directory\n\nResolution order:\n\n1. `package.json` in that directory with `omp.extensions` (or legacy `pi.extensions`) -> use declared entries\n2. `index.ts`\n3. `index.js`\n4. 
Otherwise scan one level for extension entries:\n - direct `*.ts` / `*.js`\n - subdir `index.ts` / `index.js`\n - subdir `package.json` with `omp.extensions` / `pi.extensions`\n\nRules and constraints:\n\n- no recursive discovery beyond one subdirectory level\n- declared `extensions` manifest entries are resolved relative to that package directory\n- declared entries are included only if file exists/access is allowed\n- in `*/index.{ts,js}` pairs, TypeScript is preferred over JavaScript\n- symlinks are treated as eligible files/directories\n\n### Ignore behavior differs by source\n\n- Native auto-discovery (`discoverExtensionModulePaths` in discovery helpers) uses native glob with `gitignore: true` and `hidden: false`.\n- Explicit configured directory scanning in `loader.ts` uses `readdir` rules and does **not** apply gitignore filtering.\n\n---\n\n## Load order and precedence\n\n`discoverAndLoadExtensions()` builds one ordered list and then calls `loadExtensions()`.\n\nOrder:\n\n1. Native auto-discovered modules\n2. Explicit configured paths (in provided order)\n\nIn `sdk.ts`, configured order is:\n\n1. CLI additional paths\n2. Settings `extensions`\n\nDe-duplication:\n\n- absolute path based\n- first seen path wins\n- later duplicates are ignored\n\nImplication: if the same module path is both auto-discovered and explicitly configured, it is loaded once at the first position (auto-discovered stage).\n\n---\n\n## Module import and factory contract\n\nEach candidate path is loaded with dynamic import:\n\n- `await import(resolvedPath)`\n- factory is `module.default ?? 
module`\n- factory must be a function (`ExtensionFactory`)\n\nIf export is not a function, that path fails with a structured error and loading continues.\n\n---\n\n## Failure handling and isolation\n\n### During loading\n\nPer extension path, failures are captured as `{ path, error }` and do not stop other paths from loading.\n\nCommon cases:\n\n- import failure / missing file\n- invalid factory export (non-function)\n- exception thrown while executing factory\n\n### Runtime isolation model\n\n- Extensions are **not sandboxed** (same process/runtime).\n- They share one `EventBus` and one `ExtensionRuntime` instance.\n- During load, runtime action methods intentionally throw `ExtensionRuntimeNotInitializedError`; action wiring happens later in `ExtensionRunner.initialize()`.\n\n### After loading\n\nWhen events run through `ExtensionRunner`, handler exceptions are caught and emitted as extension errors instead of crashing the runner loop.\n\n---\n\n## Minimal user/project layout examples\n\n### User-level\n\n```text\n~/.omp/agent/\n config.yml\n extensions/\n guardrails.ts\n audit/\n index.ts\n```\n\n### Project-level\n\n```text\n<repo>/\n .omp/\n settings.json\n extensions/\n checks/\n package.json\n lint-gates.ts\n```\n\n`checks/package.json`:\n\n```json\n{\n \"omp\": {\n \"extensions\": [\"./src/check-a.ts\", \"./src/check-b.js\"]\n }\n}\n```\n\nLegacy manifest key still accepted:\n\n```json\n{\n \"pi\": {\n \"extensions\": [\"./index.ts\"]\n }\n}\n```\n",
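The `derivedName` rule used for `disabledExtensions` ids can be sketched as follows. `deriveExtensionName` is an illustrative reimplementation of the documented `getExtensionNameFromPath` behavior, not the shipped code:

```ts
import { basename, dirname, extname } from "node:path";

// Derive an extension name from its entry path, per the rules above:
//   /x/foo.ts       -> "foo"
//   /x/bar/index.ts -> "bar"  (index entries take the parent dir name)
function deriveExtensionName(entryPath: string): string {
	const base = basename(entryPath, extname(entryPath));
	return base === "index" ? basename(dirname(entryPath)) : base;
}

deriveExtensionName("/x/foo.ts"); // "foo"
deriveExtensionName("/x/bar/index.ts"); // "bar"
```

The resulting name is what gets prefixed as `extension-module:<derivedName>` when matching against the `disabledExtensions` setting.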
-
"extensions.md": "# Extensions\n\nPrimary guide for authoring runtime extensions in `packages/coding-agent`.\n\nThis document covers the current extension runtime in:\n\n- `src/extensibility/extensions/types.ts`\n- `src/extensibility/extensions/runner.ts`\n- `src/extensibility/extensions/wrapper.ts`\n- `src/extensibility/extensions/index.ts`\n- `src/modes/controllers/extension-ui-controller.ts`\n\nFor discovery paths and filesystem loading rules, see `docs/extension-loading.md`.\n\n## What an extension is\n\nAn extension is a TS/JS module exporting a default factory:\n\n```ts\nimport type { ExtensionAPI } from \"@oh-my-pi/pi-coding-agent\";\n\nexport default function myExtension(pi: ExtensionAPI) {\n\t// register handlers/tools/commands/renderers\n}\n```\n\nExtensions can combine all of the following in one module:\n\n- event handlers (`pi.on(...)`)\n- LLM-callable tools (`pi.registerTool(...)`)\n- slash commands (`pi.registerCommand(...)`)\n- keyboard shortcuts and flags\n- custom message rendering\n- session/message injection APIs (`sendMessage`, `sendUserMessage`, `appendEntry`)\n\n## Runtime model\n\n1. Extensions are imported and their factory functions run.\n2. During that load phase, registration methods are valid; runtime action methods are not yet initialized.\n3. `ExtensionRunner.initialize(...)` wires live actions/contexts for the active mode.\n4. Session/agent/tool lifecycle events are emitted to handlers.\n5. 
Every tool execution is wrapped with extension interception (`tool_call` / `tool_result`).\n\n```text\nExtension lifecycle (simplified)\n\nload paths\n │\n ▼\nimport module + run factory (registration only)\n │\n ▼\nExtensionRunner.initialize(mode/session/tool registry)\n │\n ├─ emit session/agent events to handlers\n ├─ wrap tool execution (tool_call/tool_result)\n └─ expose runtime actions (sendMessage, setActiveTools, ...)\n```\n\nImportant constraint from `loader.ts`:\n\n- calling action methods like `pi.sendMessage()` during extension load throws `ExtensionRuntimeNotInitializedError`\n- register first; perform runtime behavior from events/commands/tools\n\n## Quick start\n\n```ts\nimport type { ExtensionAPI } from \"@oh-my-pi/pi-coding-agent\";\nimport { Type } from \"@sinclair/typebox\";\n\nexport default function (pi: ExtensionAPI) {\n\tpi.setLabel(\"Safety + Utilities\");\n\n\tpi.on(\"session_start\", async (_event, ctx) => {\n\t\tctx.ui.notify(`Extension loaded in ${ctx.cwd}`, \"info\");\n\t});\n\n\tpi.on(\"tool_call\", async (event) => {\n\t\tif (event.toolName === \"bash\" && event.input.command?.includes(\"rm -rf\")) {\n\t\t\treturn { block: true, reason: \"Blocked by extension policy\" };\n\t\t}\n\t});\n\n\tpi.registerTool({\n\t\tname: \"hello_extension\",\n\t\tlabel: \"Hello Extension\",\n\t\tdescription: \"Return a greeting\",\n\t\tparameters: Type.Object({ name: Type.String() }),\n\t\tasync execute(_toolCallId, params, _signal, _onUpdate, _ctx) {\n\t\t\treturn {\n\t\t\t\tcontent: [{ type: \"text\", text: `Hello, ${params.name}` }],\n\t\t\t\tdetails: { greeted: params.name },\n\t\t\t};\n\t\t},\n\t});\n\n\tpi.registerCommand(\"hello-ext\", {\n\t\tdescription: \"Show queue state\",\n\t\thandler: async (_args, ctx) => {\n\t\t\tctx.ui.notify(`pending=${ctx.hasPendingMessages()}`, \"info\");\n\t\t},\n\t});\n}\n```\n\n## Extension API surfaces\n\n## 1) Registration and actions (`ExtensionAPI`)\n\nCore methods:\n\n- `on(event, handler)`\n- `registerTool`, 
`registerCommand`, `registerShortcut`, `registerFlag`\n- `registerMessageRenderer`\n- `sendMessage`, `sendUserMessage`, `appendEntry`\n- `getActiveTools`, `getAllTools`, `setActiveTools`\n- `setModel`, `getThinkingLevel`, `setThinkingLevel`\n- `registerProvider`\n- `events` (shared event bus)\n\nAlso exposed:\n\n- `pi.logger`\n- `pi.typebox`\n- `pi.pi` (package exports)\n\n### Message delivery semantics\n\n`pi.sendMessage(message, options)` supports:\n\n- `deliverAs: \"steer\"` (default) — interrupts current run\n- `deliverAs: \"followUp\"` — queued to run after current run\n- `deliverAs: \"nextTurn\"` — stored and injected on the next user prompt\n- `triggerTurn: true` — starts a turn when idle (`nextTurn` ignores this)\n\n`pi.sendUserMessage(content, { deliverAs })` always goes through prompt flow; while streaming it queues as steer/follow-up.\n\n## 2) Handler context (`ExtensionContext`)\n\nHandlers and tool `execute` receive `ctx` with:\n\n- `ui`\n- `hasUI`\n- `cwd`\n- `sessionManager` (read-only)\n- `modelRegistry`, `model`\n- `getContextUsage()`\n- `compact(...)`\n- `isIdle()`, `hasPendingMessages()`, `abort()`\n- `shutdown()`\n- `getSystemPrompt()`\n\n## 3) Command context (`ExtensionCommandContext`)\n\nCommand handlers additionally get:\n\n- `waitForIdle()`\n- `newSession(...)`\n- `switchSession(...)`\n- `branch(entryId)`\n- `navigateTree(targetId, { summarize })`\n- `reload()`\n\nUse command context for session-control flows; these methods are intentionally separated from general event handlers.\n\n## Event surface (current names and behavior)\n\nCanonical event unions and payload types are in `types.ts`.\n\n### Session lifecycle\n\n- `session_start`\n- `session_before_switch` / `session_switch`\n- `session_before_branch` / `session_branch`\n- `session_before_compact` / `session.compacting` / `session_compact`\n- `session_before_tree` / `session_tree`\n- `session_shutdown`\n\nCancelable pre-events:\n\n- `session_before_switch` → `{ cancel?: boolean }`\n- 
`session_before_branch` → `{ cancel?: boolean; skipConversationRestore?: boolean }`\n- `session_before_compact` → `{ cancel?: boolean; compaction?: CompactionResult }`\n- `session_before_tree` → `{ cancel?: boolean; summary?: { summary: string; details?: unknown } }`\n\n### Prompt and turn lifecycle\n\n- `input`\n- `before_agent_start`\n- `context`\n- `agent_start` / `agent_end`\n- `turn_start` / `turn_end`\n- `message_start` / `message_update` / `message_end`\n\n### Tool lifecycle\n\n- `tool_call` (pre-exec, may block)\n- `tool_result` (post-exec, may patch content/details/isError)\n- `tool_execution_start` / `tool_execution_update` / `tool_execution_end` (observability)\n\n`tool_result` is middleware-style: handlers run in extension order and each sees prior modifications.\n\n### Reliability/runtime signals\n\n- `auto_compaction_start` / `auto_compaction_end`\n- `auto_retry_start` / `auto_retry_end`\n- `ttsr_triggered`\n- `todo_reminder`\n\n### User command interception\n\n- `user_bash` (override with `{ result }`)\n- `user_python` (override with `{ result }`)\n\n### `resources_discover`\n\n`resources_discover` exists in extension types and `ExtensionRunner`.\nCurrent runtime note: `ExtensionRunner.emitResourcesDiscover(...)` is implemented, but there are no `AgentSession` callsites invoking it in the current codebase.\n\n## Tool authoring details\n\n`registerTool` uses `ToolDefinition` from `types.ts`.\n\nCurrent `execute` signature:\n\n```ts\nexecute(\n\ttoolCallId,\n\tparams,\n\tsignal,\n\tonUpdate,\n\tctx,\n): Promise<AgentToolResult>\n```\n\nTemplate:\n\n```ts\npi.registerTool({\n\tname: \"my_tool\",\n\tlabel: \"My Tool\",\n\tdescription: \"...\",\n\tparameters: Type.Object({}),\n\tasync execute(_id, _params, signal, onUpdate, ctx) {\n\t\tif (signal?.aborted) {\n\t\t\treturn { content: [{ type: \"text\", text: \"Cancelled\" }] };\n\t\t}\n\t\tonUpdate?.({ content: [{ type: \"text\", text: \"Working...\" }] });\n\t\treturn { content: [{ type: \"text\", text: 
\"Done\" }], details: {} };\n\t},\n\tonSession(event, ctx) {\n\t\t// reason: start|switch|branch|tree|shutdown\n\t},\n\trenderCall(args, theme) {\n\t\t// optional TUI render\n\t},\n\trenderResult(result, options, theme, args) {\n\t\t// optional TUI render\n\t},\n});\n```\n\n`tool_call`/`tool_result` intercept all tools once the registry is wrapped in `sdk.ts`, including built-ins and extension/custom tools.\n\n## UI integration points\n\n`ctx.ui` implements the `ExtensionUIContext` interface. Support differs by mode.\n\n### Interactive mode (`extension-ui-controller.ts`)\n\nSupported:\n\n- dialogs: `select`, `confirm`, `input`, `editor`\n- notifications/status/editor text/terminal input/custom overlays\n- theme listing/loading by name (`setTheme` supports string names)\n- tools expanded toggle\n\nCurrent no-op methods in this controller:\n\n- `setFooter`\n- `setHeader`\n- `setEditorComponent`\n\nAlso note: `setWidget` currently routes to status-line text via `setHookWidget(...)`.\n\n### RPC mode (`rpc-mode.ts`)\n\n`ctx.ui` is backed by RPC `extension_ui_request` events:\n\n- dialog methods (`select`, `confirm`, `input`, `editor`) round-trip to client responses\n- fire-and-forget methods emit requests (`notify`, `setStatus`, `setWidget` for string arrays, `setTitle`, `setEditorText`)\n\nUnsupported/no-op in RPC implementation:\n\n- `onTerminalInput`\n- `custom`\n- `setFooter`, `setHeader`, `setEditorComponent`\n- `setWorkingMessage`\n- theme switching/loading (`setTheme` returns failure)\n- tool expansion controls are inert\n\n### Print/headless/subagent paths\n\nWhen no UI context is supplied to runner init, `ctx.hasUI` is `false` and methods are no-op/default-returning.\n\n### Background interactive mode\n\nBackground mode installs a non-interactive UI context object. In current implementation, `ctx.hasUI` may still be `true` while interactive dialogs return defaults/no-op behavior.\n\n## Session and state patterns\n\nFor durable extension state:\n\n1. 
Persist with `pi.appendEntry(customType, data)`.\n2. Rebuild state from `ctx.sessionManager.getBranch()` on `session_start`, `session_branch`, `session_tree`.\n3. Keep tool result `details` structured when state should be visible/reconstructible from tool result history.\n\nExample reconstruction pattern:\n\n```ts\npi.on(\"session_start\", async (_event, ctx) => {\n\tlet latest;\n\tfor (const entry of ctx.sessionManager.getBranch()) {\n\t\tif (entry.type === \"custom\" && entry.customType === \"my-state\") {\n\t\t\tlatest = entry.data;\n\t\t}\n\t}\n\t// restore from latest\n});\n```\n\n## Rendering extension points\n\n## Custom message renderer\n\n```ts\npi.registerMessageRenderer(\"my-type\", (message, { expanded }, theme) => {\n\t// return pi-tui Component\n});\n```\n\nUsed by interactive rendering when custom messages are displayed.\n\n## Tool call/result renderer\n\nProvide `renderCall` / `renderResult` on `registerTool` definitions for custom tool visualization in TUI.\n\n## Constraints and pitfalls\n\n- Runtime actions are unavailable during extension load.\n- `tool_call` errors block execution (fail-closed).\n- Command name conflicts with built-ins are skipped with diagnostics.\n- Reserved shortcuts are ignored (`ctrl+c`, `ctrl+d`, `ctrl+z`, `ctrl+k`, `ctrl+p`, `ctrl+l`, `ctrl+o`, `ctrl+t`, `ctrl+g`, `shift+tab`, `shift+ctrl+p`, `alt+enter`, `escape`, `enter`).\n- Treat `ctx.reload()` as terminal for the current command handler frame.\n\n## Extensions vs hooks vs custom-tools\n\nUse the right surface:\n\n- **Extensions** (`src/extensibility/extensions/*`): unified system (events + tools + commands + renderers + provider registration).\n- **Hooks** (`src/extensibility/hooks/*`): separate legacy event API.\n- **Custom-tools** (`src/extensibility/custom-tools/*`): tool-focused modules; when loaded alongside extensions they are adapted and still pass through extension interception wrappers.\n\nIf you need one package that owns policy, tools, command UX, and 
rendering together, use extensions.\n",
+
"extensions.md": "# Extensions\n\nPrimary guide for authoring runtime extensions in `packages/coding-agent`.\n\nThis document covers the current extension runtime in:\n\n- `src/extensibility/extensions/types.ts`\n- `src/extensibility/extensions/runner.ts`\n- `src/extensibility/extensions/wrapper.ts`\n- `src/extensibility/extensions/index.ts`\n- `src/modes/controllers/extension-ui-controller.ts`\n\nFor discovery paths and filesystem loading rules, see `docs/extension-loading.md`.\n\n## What an extension is\n\nAn extension is a TS/JS module exporting a default factory:\n\n```ts\nimport type { ExtensionAPI } from \"@oh-my-pi/pi-coding-agent\";\n\nexport default function myExtension(pi: ExtensionAPI) {\n\t// register handlers/tools/commands/renderers\n}\n```\n\nExtensions can combine all of the following in one module:\n\n- event handlers (`pi.on(...)`)\n- LLM-callable tools (`pi.registerTool(...)`)\n- slash commands (`pi.registerCommand(...)`)\n- keyboard shortcuts and flags\n- custom message rendering\n- session/message injection APIs (`sendMessage`, `sendUserMessage`, `appendEntry`)\n\n## Runtime model\n\n1. Extensions are imported and their factory functions run.\n2. During that load phase, registration methods are valid; runtime action methods are not yet initialized.\n3. `ExtensionRunner.initialize(...)` wires live actions/contexts for the active mode.\n4. Session/agent/tool lifecycle events are emitted to handlers.\n5. 
Every tool execution is wrapped with extension interception (`tool_call` / `tool_result`).\n\n```text\nExtension lifecycle (simplified)\n\nload paths\n │\n ▼\nimport module + run factory (registration only)\n │\n ▼\nExtensionRunner.initialize(mode/session/tool registry)\n │\n ├─ emit session/agent events to handlers\n ├─ wrap tool execution (tool_call/tool_result)\n └─ expose runtime actions (sendMessage, setActiveTools, ...)\n```\n\nImportant constraint from `loader.ts`:\n\n- calling action methods like `pi.sendMessage()` during extension load throws `ExtensionRuntimeNotInitializedError`\n- register first; perform runtime behavior from events/commands/tools\n\n## Quick start\n\n```ts\nimport type { ExtensionAPI } from \"@oh-my-pi/pi-coding-agent\";\nimport { Type } from \"@sinclair/typebox\";\n\nexport default function (pi: ExtensionAPI) {\n\tpi.setLabel(\"Safety + Utilities\");\n\n\tpi.on(\"session_start\", async (_event, ctx) => {\n\t\tctx.ui.notify(`Extension loaded in ${ctx.cwd}`, \"info\");\n\t});\n\n\tpi.on(\"tool_call\", async (event) => {\n\t\tif (event.toolName === \"bash\" && event.input.command?.includes(\"rm -rf\")) {\n\t\t\treturn { block: true, reason: \"Blocked by extension policy\" };\n\t\t}\n\t});\n\n\tpi.registerTool({\n\t\tname: \"hello_extension\",\n\t\tlabel: \"Hello Extension\",\n\t\tdescription: \"Return a greeting\",\n\t\tparameters: Type.Object({ name: Type.String() }),\n\t\tasync execute(_toolCallId, params, _signal, _onUpdate, _ctx) {\n\t\t\treturn {\n\t\t\t\tcontent: [{ type: \"text\", text: `Hello, ${params.name}` }],\n\t\t\t\tdetails: { greeted: params.name },\n\t\t\t};\n\t\t},\n\t});\n\n\tpi.registerCommand(\"hello-ext\", {\n\t\tdescription: \"Show queue state\",\n\t\thandler: async (_args, ctx) => {\n\t\t\tctx.ui.notify(`pending=${ctx.hasPendingMessages()}`, \"info\");\n\t\t},\n\t});\n}\n```\n\n## Extension API surfaces\n\n## 1) Registration and actions (`ExtensionAPI`)\n\nCore methods:\n\n- `on(event, handler)`\n- `registerTool`, 
`registerCommand`, `registerShortcut`, `registerFlag`\n- `registerMessageRenderer`\n- `sendMessage`, `sendUserMessage`, `appendEntry`\n- `getActiveTools`, `getAllTools`, `setActiveTools`\n- `getSessionName`, `setSessionName`\n- `setModel`, `getThinkingLevel`, `setThinkingLevel`\n- `registerProvider`\n- `events` (shared event bus)\n\nIn interactive mode, `input` handlers run before the built-in first-message auto-title check. Extensions that call `await pi.setSessionName(...)` from `input` can set the persisted session name and prevent the default auto-generated title from running for that session.\n\nAlso exposed:\n\n- `pi.logger`\n- `pi.typebox`\n- `pi.pi` (package exports)\n\n### Message delivery semantics\n\n`pi.sendMessage(message, options)` supports:\n\n- `deliverAs: \"steer\"` (default) — interrupts current run\n- `deliverAs: \"followUp\"` — queued to run after current run\n- `deliverAs: \"nextTurn\"` — stored and injected on the next user prompt\n- `triggerTurn: true` — starts a turn when idle (`nextTurn` ignores this)\n\n`pi.sendUserMessage(content, { deliverAs })` always goes through prompt flow; while streaming it queues as steer/follow-up.\n\n## 2) Handler context (`ExtensionContext`)\n\nHandlers and tool `execute` receive `ctx` with:\n\n- `ui`\n- `hasUI`\n- `cwd`\n- `sessionManager` (read-only)\n- `modelRegistry`, `model`\n- `getContextUsage()`\n- `compact(...)`\n- `isIdle()`, `hasPendingMessages()`, `abort()`\n- `shutdown()`\n- `getSystemPrompt()`\n\n## 3) Command context (`ExtensionCommandContext`)\n\nCommand handlers additionally get:\n\n- `waitForIdle()`\n- `newSession(...)`\n- `switchSession(...)`\n- `branch(entryId)`\n- `navigateTree(targetId, { summarize })`\n- `reload()`\n\nUse command context for session-control flows; these methods are intentionally separated from general event handlers.\n\n## Event surface (current names and behavior)\n\nCanonical event unions and payload types are in `types.ts`.\n\n### Session lifecycle\n\n- 
`session_start`\n- `session_before_switch` / `session_switch`\n- `session_before_branch` / `session_branch`\n- `session_before_compact` / `session.compacting` / `session_compact`\n- `session_before_tree` / `session_tree`\n- `session_shutdown`\n\nCancelable pre-events:\n\n- `session_before_switch` → `{ cancel?: boolean }`\n- `session_before_branch` → `{ cancel?: boolean; skipConversationRestore?: boolean }`\n- `session_before_compact` → `{ cancel?: boolean; compaction?: CompactionResult }`\n- `session_before_tree` → `{ cancel?: boolean; summary?: { summary: string; details?: unknown } }`\n\n### Prompt and turn lifecycle\n\n- `input`\n- `before_agent_start`\n- `context`\n- `agent_start` / `agent_end`\n- `turn_start` / `turn_end`\n- `message_start` / `message_update` / `message_end`\n\n### Tool lifecycle\n\n- `tool_call` (pre-exec, may block)\n- `tool_result` (post-exec, may patch content/details/isError)\n- `tool_execution_start` / `tool_execution_update` / `tool_execution_end` (observability)\n\n`tool_result` is middleware-style: handlers run in extension order and each sees prior modifications.\n\n### Reliability/runtime signals\n\n- `auto_compaction_start` / `auto_compaction_end`\n- `auto_retry_start` / `auto_retry_end`\n- `ttsr_triggered`\n- `todo_reminder`\n\n### User command interception\n\n- `user_bash` (override with `{ result }`)\n- `user_python` (override with `{ result }`)\n\n### `resources_discover`\n\n`resources_discover` exists in extension types and `ExtensionRunner`.\nCurrent runtime note: `ExtensionRunner.emitResourcesDiscover(...)` is implemented, but there are no `AgentSession` callsites invoking it in the current codebase.\n\n## Tool authoring details\n\n`registerTool` uses `ToolDefinition` from `types.ts`.\n\nCurrent `execute` signature:\n\n```ts\nexecute(\n\ttoolCallId,\n\tparams,\n\tsignal,\n\tonUpdate,\n\tctx,\n): Promise<AgentToolResult>\n```\n\nTemplate:\n\n```ts\npi.registerTool({\n\tname: \"my_tool\",\n\tlabel: \"My Tool\",\n\tdescription: 
\"...\",\n\tparameters: Type.Object({}),\n\tasync execute(_id, _params, signal, onUpdate, ctx) {\n\t\tif (signal?.aborted) {\n\t\t\treturn { content: [{ type: \"text\", text: \"Cancelled\" }] };\n\t\t}\n\t\tonUpdate?.({ content: [{ type: \"text\", text: \"Working...\" }] });\n\t\treturn { content: [{ type: \"text\", text: \"Done\" }], details: {} };\n\t},\n\tonSession(event, ctx) {\n\t\t// reason: start|switch|branch|tree|shutdown\n\t},\n\trenderCall(args, theme) {\n\t\t// optional TUI render\n\t},\n\trenderResult(result, options, theme, args) {\n\t\t// optional TUI render\n\t},\n});\n```\n\n`tool_call`/`tool_result` intercept all tools once the registry is wrapped in `sdk.ts`, including built-ins and extension/custom tools.\n\n## UI integration points\n\n`ctx.ui` implements the `ExtensionUIContext` interface. Support differs by mode.\n\n### Interactive mode (`extension-ui-controller.ts`)\n\nSupported:\n\n- dialogs: `select`, `confirm`, `input`, `editor`\n- notifications/status/editor text/terminal input/custom overlays\n- theme listing/loading by name (`setTheme` supports string names)\n- tools expanded toggle\n\nCurrent no-op methods in this controller:\n\n- `setFooter`\n- `setHeader`\n- `setEditorComponent`\n\nAlso note: `setWidget` currently routes to status-line text via `setHookWidget(...)`.\n\n### RPC mode (`rpc-mode.ts`)\n\n`ctx.ui` is backed by RPC `extension_ui_request` events:\n\n- dialog methods (`select`, `confirm`, `input`, `editor`) round-trip to client responses\n- fire-and-forget methods emit requests (`notify`, `setStatus`, `setWidget` for string arrays, `setTitle`, `setEditorText`)\n\nUnsupported/no-op in RPC implementation:\n\n- `onTerminalInput`\n- `custom`\n- `setFooter`, `setHeader`, `setEditorComponent`\n- `setWorkingMessage`\n- theme switching/loading (`setTheme` returns failure)\n- tool expansion controls are inert\n\n### Print/headless/subagent paths\n\nWhen no UI context is supplied to runner init, `ctx.hasUI` is `false` and methods are 
no-op/default-returning.\n\n### Background interactive mode\n\nBackground mode installs a non-interactive UI context object. In current implementation, `ctx.hasUI` may still be `true` while interactive dialogs return defaults/no-op behavior.\n\n## Session and state patterns\n\nFor durable extension state:\n\n1. Persist with `pi.appendEntry(customType, data)`.\n2. Rebuild state from `ctx.sessionManager.getBranch()` on `session_start`, `session_branch`, `session_tree`.\n3. Keep tool result `details` structured when state should be visible/reconstructible from tool result history.\n\nExample reconstruction pattern:\n\n```ts\npi.on(\"session_start\", async (_event, ctx) => {\n\tlet latest;\n\tfor (const entry of ctx.sessionManager.getBranch()) {\n\t\tif (entry.type === \"custom\" && entry.customType === \"my-state\") {\n\t\t\tlatest = entry.data;\n\t\t}\n\t}\n\t// restore from latest\n});\n```\n\n## Rendering extension points\n\n## Custom message renderer\n\n```ts\npi.registerMessageRenderer(\"my-type\", (message, { expanded }, theme) => {\n\t// return pi-tui Component\n});\n```\n\nUsed by interactive rendering when custom messages are displayed.\n\n## Tool call/result renderer\n\nProvide `renderCall` / `renderResult` on `registerTool` definitions for custom tool visualization in TUI.\n\n## Constraints and pitfalls\n\n- Runtime actions are unavailable during extension load.\n- `tool_call` errors block execution (fail-closed).\n- Command name conflicts with built-ins are skipped with diagnostics.\n- Reserved shortcuts are ignored (`ctrl+c`, `ctrl+d`, `ctrl+z`, `ctrl+k`, `ctrl+p`, `ctrl+l`, `ctrl+o`, `ctrl+t`, `ctrl+g`, `shift+tab`, `shift+ctrl+p`, `alt+enter`, `escape`, `enter`).\n- Treat `ctx.reload()` as terminal for the current command handler frame.\n\n## Extensions vs hooks vs custom-tools\n\nUse the right surface:\n\n- **Extensions** (`src/extensibility/extensions/*`): unified system (events + tools + commands + renderers + provider registration).\n- **Hooks** 
(`src/extensibility/hooks/*`): separate legacy event API.\n- **Custom-tools** (`src/extensibility/custom-tools/*`): tool-focused modules; when loaded alongside extensions they are adapted and still pass through extension interception wrappers.\n\nIf you need one package that owns policy, tools, command UX, and rendering together, use extensions.\n",
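The middleware-style `tool_result` ordering described above can be sketched as a simple patch-folding chain. Types and names here are simplified stand-ins for illustration, not the real `ExtensionRunner` API:

```ts
// Minimal sketch of middleware-style tool_result handling: handlers run
// in extension order, each may return a partial patch, and each handler
// sees the result as modified by the handlers before it.
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };
type ResultHandler = (result: ToolResult) => Partial<ToolResult> | undefined;

function runToolResultChain(initial: ToolResult, handlers: ResultHandler[]): ToolResult {
	let current = initial;
	for (const handler of handlers) {
		const patch = handler(current);
		if (patch) current = { ...current, ...patch };
	}
	return current;
}
```

This is why a redacting extension loaded before a summarizing extension produces a summary of the redacted text, while the reverse order summarizes first and redacts the summary.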
"fs-scan-cache-architecture.md": "# Filesystem Scan Cache Architecture Contract\n\nThis document defines the current contract for the shared filesystem scan cache implemented in Rust (`crates/pi-natives/src/fs_cache.rs`) and consumed by native discovery/search APIs exposed to `packages/coding-agent`.\n\n## What this cache is\n\nThe cache stores full directory-scan entry lists (`GlobMatch[]`) keyed by scan scope and traversal policy, then lets higher-level operations (glob filtering, fuzzy scoring, grep file selection) run against those cached entries.\n\nPrimary goals:\n- avoid repeated filesystem walks for repeated discovery/search calls\n- keep consistency across `glob`, `fuzzyFind`, and `grep` when they share the same scan policy\n- allow explicit staleness recovery for empty results and explicit invalidation after file mutations\n\n## Ownership and public surface\n\n- Cache implementation and policy: `crates/pi-natives/src/fs_cache.rs`\n- Native consumers:\n - `crates/pi-natives/src/glob.rs`\n - `crates/pi-natives/src/fd.rs` (`fuzzyFind`)\n - `crates/pi-natives/src/grep.rs`\n- JS binding/export:\n - `packages/natives/src/glob/index.ts` (`invalidateFsScanCache`)\n - `packages/natives/src/glob/types.ts`\n - `packages/natives/src/grep/types.ts`\n- Coding-agent mutation invalidation helpers:\n - `packages/coding-agent/src/tools/fs-cache-invalidation.ts`\n\n## Cache key partitioning (hard contract)\n\nEach entry is keyed by:\n- canonicalized `root` directory path\n- `include_hidden` boolean\n- `use_gitignore` boolean\n\nImplications:\n- Hidden and non-hidden scans do **not** share entries.\n- Gitignore-respecting and ignore-disabled scans do **not** share entries.\n- Consumers must pass stable semantics for hidden/gitignore behavior; changing either flag creates a different cache partition.\n\n`node_modules` inclusion is **not** in the cache key. 
The cache stores entries with `node_modules` included; per-consumer filtering is applied after retrieval.\n\n## Scan collection behavior\n\nCache population uses a deterministic walker (`ignore::WalkBuilder`) configured by `include_hidden` and `use_gitignore`:\n- `follow_links(false)`\n- sorted by file path\n- `.git` is always skipped\n- `node_modules` is always collected at cache-scan time (and optionally filtered later)\n- entry file type + `mtime` are captured via `symlink_metadata`\n\nSearch roots are resolved by `resolve_search_path`:\n- relative paths are resolved against current cwd\n- target must be an existing directory\n- root is canonicalized when possible\n\n## Freshness and eviction policy\n\nGlobal policy (environment-overridable):\n- `FS_SCAN_CACHE_TTL_MS` (default `1000`)\n- `FS_SCAN_EMPTY_RECHECK_MS` (default `200`)\n- `FS_SCAN_CACHE_MAX_ENTRIES` (default `16`)\n\nBehavior:\n- `get_or_scan(...)`\n - if TTL is `0`: bypass cache entirely, always fresh scan (`cache_age_ms = 0`)\n - on cache hit within TTL: return cached entries + non-zero `cache_age_ms`\n - on expired hit: evict key, rescan, store fresh entry\n- max entry enforcement is oldest-first eviction by `created_at`\n\n## Empty-result fast recheck (separate from normal hits)\n\nNormal cache hit:\n- a cache hit inside TTL returns cached entries and does nothing else.\n\nEmpty-result fast recheck:\n- this is a **caller-side** policy using `ScanResult.cache_age_ms`\n- if filtered/query result is empty and cached scan age is at least `empty_recheck_ms()`, caller performs one `force_rescan(...)` and retries\n- intended to reduce stale-negative results when files were recently added but cache is still within TTL\n\nCurrent consumers:\n- `glob`: rechecks when filtered matches are empty and scan age exceeds threshold\n- `fuzzyFind` (`fd.rs`): rechecks only when query is non-empty and scored matches are empty\n- `grep`: rechecks when selected candidate file list is empty\n\n## Consumer defaults and 
cache usage\n\nCache is opt-in on all exposed APIs (`cache?: boolean`, default `false`).\n\nCurrent defaults in native APIs:\n- `glob`: `hidden=false`, `gitignore=true`, `cache=false`\n- `fuzzyFind`: `hidden=false`, `gitignore=true`, `cache=false`\n- `grep`: `hidden=true`, `cache=false`, and cache scan always uses `use_gitignore=true`\n\nCoding-agent callers today:\n- High-volume mention candidate discovery enables cache:\n - `packages/coding-agent/src/utils/file-mentions.ts`\n - profile: `hidden=true`, `gitignore=true`, `includeNodeModules=true`, `cache=true`\n- Tool-level `grep` integration currently disables scan cache (`cache: false`):\n - `packages/coding-agent/src/tools/grep.ts`\n\n## Invalidation contract\n\nNative invalidation entrypoint:\n- `invalidateFsScanCache(path?: string)`\n - with `path`: remove cache entries whose root is a prefix of target path\n - without path: clear all scan cache entries\n\nPath handling details:\n- relative invalidation paths are resolved against cwd\n- invalidation attempts canonicalization\n- if target does not exist (e.g., delete), fallback canonicalizes parent and reattaches filename when possible\n- this preserves invalidation behavior for create/delete/rename where one side may not exist\n\n## Coding-agent mutation flow responsibilities\n\nCoding-agent code must invalidate after successful filesystem mutations.\n\nCentral helpers:\n- `invalidateFsScanAfterWrite(path)`\n- `invalidateFsScanAfterDelete(path)`\n- `invalidateFsScanAfterRename(oldPath, newPath)` (invalidates both sides when paths differ)\n\nCurrent mutation tool callsites:\n- `packages/coding-agent/src/tools/write.ts`\n- `packages/coding-agent/src/patch/index.ts` (hashline/patch/replace flows)\n\nRule: if a flow mutates filesystem content or location and bypasses these helpers, cache staleness bugs are expected.\n\n## Adding a new cache consumer safely\n\nWhen introducing cache use in a new scanner/search path:\n\n1. 
**Use stable scan policy inputs**\n - decide hidden/gitignore semantics first\n - pass them consistently to `get_or_scan`/`force_rescan` so cache partitions are intentional\n\n2. **Treat cache data as pre-filtered only by traversal policy**\n - apply tool-specific filtering (glob patterns, type filters, node_modules rules) after retrieval\n - never assume cached entries already reflect your higher-level filters\n\n3. **Implement empty-result fast recheck only for stale-negative risk**\n - use `scan.cache_age_ms >= empty_recheck_ms()`\n - retry once with `force_rescan(..., store=true, ...)`\n - keep this path separate from normal cache-hit logic\n\n4. **Respect no-cache mode explicitly**\n - when caller disables cache, call `force_rescan(..., store=false, ...)`\n - do not populate shared cache in a no-cache request path\n\n5. **Wire mutation invalidation for any new write path**\n - after successful write/edit/delete/rename, call the coding-agent invalidation helper\n - for rename/move, invalidate both old and new paths\n\n6. **Do not add per-call TTL knobs**\n - current contract is global policy only (env-configured), no per-request TTL override\n\n## Known boundaries\n\n- Cache scope is process-local in-memory (`DashMap`), not persisted across process restarts.\n- Cache stores scan entries, not final tool results.\n- `glob`/`fuzzyFind`/`grep` share scan entries only when key dimensions (`root`, `hidden`, `gitignore`) match.\n- `.git` is always excluded at scan collection time regardless of caller options.\n",
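The caller-side empty-result fast recheck policy can be sketched as follows. This is a minimal illustration, not the real API surface: `searchWithRecheck`, `scanCached`, `forceRescan`, and `EMPTY_RECHECK_MS` are hypothetical stand-ins for the native `get_or_scan`/`force_rescan` bindings and the `FS_SCAN_EMPTY_RECHECK_MS`-configured `empty_recheck_ms()` value.

```typescript
// Sketch of the caller-side empty-result fast recheck (illustrative names).
interface ScanResult {
	entries: string[];
	cacheAgeMs: number;
}

const EMPTY_RECHECK_MS = 200; // mirrors the FS_SCAN_EMPTY_RECHECK_MS default

function searchWithRecheck(
	scanCached: () => ScanResult, // stand-in for get_or_scan(...)
	forceRescan: () => ScanResult, // stand-in for force_rescan(..., store=true, ...)
	matches: (entry: string) => boolean, // tool-specific filter applied after retrieval
): string[] {
	const scan = scanCached();
	let filtered = scan.entries.filter(matches);
	// Retry exactly once, and only when the filtered result is empty AND the
	// cached scan is old enough to plausibly be a stale negative.
	if (filtered.length === 0 && scan.cacheAgeMs >= EMPTY_RECHECK_MS) {
		filtered = forceRescan().entries.filter(matches);
	}
	return filtered;
}
```

Note that tool-specific filtering (glob patterns, type filters, `node_modules` rules) runs in `matches` after retrieval, per the consumer rules above.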
"gemini-manifest-extensions.md": "# Gemini Manifest Extensions (`gemini-extension.json`)\n\nThis document covers how the coding-agent discovers and parses Gemini-style manifest extensions (`gemini-extension.json`) into the `extensions` capability.\n\nIt does **not** cover TypeScript/JavaScript extension module loading (`extensions/*.ts`, `index.ts`, `package.json omp.extensions`), which is documented in `extension-loading.md`.\n\n## Implementation files\n\n- [`../src/discovery/gemini.ts`](../packages/coding-agent/src/discovery/gemini.ts)\n- [`../src/discovery/builtin.ts`](../packages/coding-agent/src/discovery/builtin.ts)\n- [`../src/discovery/helpers.ts`](../packages/coding-agent/src/discovery/helpers.ts)\n- [`../src/capability/extension.ts`](../packages/coding-agent/src/capability/extension.ts)\n- [`../src/capability/index.ts`](../packages/coding-agent/src/capability/index.ts)\n- [`../src/extensibility/extensions/loader.ts`](../packages/coding-agent/src/extensibility/extensions/loader.ts)\n\n---\n\n## What gets discovered\n\nThe Gemini provider (`id: gemini`, priority `60`) registers an `extensions` loader that scans two fixed roots:\n\n- User: `~/.gemini/extensions`\n- Project: `<cwd>/.gemini/extensions`\n\nPath resolution is direct from `ctx.home` and `ctx.cwd` via `getUserPath()` / `getProjectPath()`.\n\nImportant scope rule: project lookup is **cwd-only**. It does not walk parent directories.\n\n---\n\n## Directory scan rules\n\nFor each root (`~/.gemini/extensions` and `<cwd>/.gemini/extensions`), discovery does:\n\n1. `readDirEntries(root)`\n2. keep only direct child directories (`entry.isDirectory()`)\n3. for each child `<name>`, attempt to read exactly:\n - `<root>/<name>/gemini-extension.json`\n\nThere is no recursive scan beyond one directory level.\n\n### Hidden directories\n\nGemini manifest discovery does **not** filter out dot-prefixed directory names. 
If a hidden child directory exists and contains `gemini-extension.json`, it is considered.\n\n### Missing/unreadable files\n\nIf `gemini-extension.json` is missing or unreadable, that directory is skipped silently (no warning).\n\n---\n\n## Manifest shape (as implemented)\n\nThe capability type defines this manifest shape:\n\n```ts\ninterface ExtensionManifest {\n\tname?: string;\n\tdescription?: string;\n\tmcpServers?: Record<string, Omit<MCPServer, \"name\" | \"_source\">>;\n\ttools?: unknown[];\n\tcontext?: unknown;\n}\n```\n\nDiscovery-time behavior is intentionally loose:\n\n- JSON parse success is required.\n- There is no runtime schema validation for field types/content beyond JSON syntax.\n- The parsed object is stored as `manifest` on the capability item.\n\n### Name normalization\n\n`Extension.name` is set to:\n\n1. `manifest.name` if it is not `null`/`undefined`\n2. otherwise the extension directory name\n\nNo string-type enforcement is applied here.\n\n---\n\n## Materialization into capability items\n\nA valid parsed manifest creates one `Extension` capability item:\n\n```ts\n{\n\tname: manifest.name ?? 
<directory-name>,\n\tpath: <extension-directory>,\n\tmanifest: <parsed-json>,\n\tlevel: \"user\" | \"project\",\n\t_source: {\n\t\tprovider: \"gemini\",\n\t\tproviderName: \"Gemini CLI\" // attached by capability registry\n\t\tpath: <absolute-manifest-path>,\n\t\tlevel: \"user\" | \"project\"\n\t}\n}\n```\n\nNotes:\n\n- `_source.path` is normalized to an absolute path by `createSourceMeta()`.\n- Registry-level capability validation for `extensions` only checks presence of `name` and `path`.\n- Manifest internals (`mcpServers`, `tools`, `context`) are not validated during discovery.\n\n---\n\n## Error handling and warning semantics\n\n### Warned\n\n- Invalid JSON in a manifest file:\n - warning format: `Invalid JSON in <manifestPath>`\n\n### Not warned (silent skip)\n\n- `extensions` directory missing\n- child directory has no `gemini-extension.json`\n- unreadable manifest file\n- manifest JSON is syntactically valid but semantically odd/incomplete\n\nThis means partial validity is accepted: only syntactic JSON failure emits a warning.\n\n---\n\n## Precedence and deduplication with other sources\n\n`extensions` capability is aggregated across providers by the capability registry.\n\nCurrent providers for this capability:\n\n- `native` (`packages/coding-agent/src/discovery/builtin.ts`) priority `100`\n- `gemini` (`packages/coding-agent/src/discovery/gemini.ts`) priority `60`\n\nDedup key is `ext.name` (`extensionCapability.key = ext => ext.name`).\n\n### Cross-provider precedence\n\nHigher-priority provider wins on duplicate extension names.\n\n- If `native` and `gemini` both emit extension name `foo`, the native item is kept.\n- Lower-priority duplicate is retained only in `result.all` with `_shadowed = true`.\n\n### Intra-provider order effects\n\nBecause dedup is “first seen wins”, provider-local item order matters.\n\n- Gemini loader appends **user first**, then **project**.\n- Therefore, duplicate names between `~/.gemini/extensions` and 
`<cwd>/.gemini/extensions` keep the user entry and shadow the project entry.\n\nBy contrast, native provider builds config dir order differently (`project` then `user` in `getConfigDirs()`), so native intra-provider shadowing is the opposite direction.\n\n---\n\n## User vs project behavior summary\n\nFor Gemini manifests specifically:\n\n- Both user and project roots are scanned every load.\n- Project root is fixed to `<cwd>/.gemini/extensions` (no ancestor walk).\n- Duplicate names inside Gemini source resolve to user-first.\n- Duplicate names against higher-priority providers (notably native) lose by priority.\n\n---\n\n## Boundary: discovery metadata vs runtime extension loading\n\n`gemini-extension.json` discovery currently feeds capability metadata (`Extension` items). It does **not** directly load runnable TS/JS extension modules.\n\nRuntime module loading (`discoverAndLoadExtensions()` / `loadExtensions()`) uses `extension-modules` and explicit paths, and currently filters auto-discovered modules to provider `native` only.\n\nPractical implication:\n\n- Gemini manifest extensions are discoverable as capability records.\n- They are not, by themselves, executed as runtime extension modules by the extension loader pipeline.\n\nThis boundary is intentional in current implementation and explains why manifest discovery and executable module loading can diverge.\n",
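The scan and warning rules above can be condensed into a small sketch. `scanGeminiExtensions` is a hypothetical helper illustrating the documented behavior (one-level scan, silent skips, JSON-syntax-only warnings, directory-name fallback), not the actual `discovery/gemini.ts` code.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

interface ExtensionItem {
	name: string;
	path: string;
	manifest: unknown;
}

// Illustrative sketch: only direct child directories are considered, and only
// <child>/gemini-extension.json is read. No recursion, no dot-dir filtering.
function scanGeminiExtensions(root: string): { items: ExtensionItem[]; warnings: string[] } {
	const items: ExtensionItem[] = [];
	const warnings: string[] = [];
	let entries: fs.Dirent[];
	try {
		entries = fs.readdirSync(root, { withFileTypes: true });
	} catch {
		return { items, warnings }; // missing extensions root: silent skip
	}
	for (const entry of entries) {
		if (!entry.isDirectory()) continue;
		const manifestPath = path.join(root, entry.name, "gemini-extension.json");
		let raw: string;
		try {
			raw = fs.readFileSync(manifestPath, "utf8");
		} catch {
			continue; // missing/unreadable manifest: silent skip
		}
		try {
			const manifest = JSON.parse(raw) as { name?: string };
			// Name falls back to the directory name when manifest.name is nullish.
			items.push({ name: manifest.name ?? entry.name, path: path.join(root, entry.name), manifest });
		} catch {
			warnings.push(`Invalid JSON in ${manifestPath}`); // only syntactic failure warns
		}
	}
	return { items, warnings };
}
```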
"handoff-generation-pipeline.md": "# `/handoff` generation pipeline\n\nThis document describes how the coding-agent implements `/handoff` today: trigger path, generation prompt, completion capture, session switch, and context reinjection.\n\n## Scope\n\nCovers:\n\n- Interactive `/handoff` command dispatch\n- `AgentSession.handoff()` lifecycle and state transitions\n- How handoff output is captured from assistant output\n- How old/new sessions persist handoff data differently\n- UI behavior for success, cancel, and failure\n\nDoes not cover:\n\n- Generic tree navigation/branch internals\n- Non-handoff session commands (`/new`, `/fork`, `/resume`)\n\n## Implementation files\n\n- [`../src/modes/controllers/input-controller.ts`](../packages/coding-agent/src/modes/controllers/input-controller.ts)\n- [`../src/modes/controllers/command-controller.ts`](../packages/coding-agent/src/modes/controllers/command-controller.ts)\n- [`../src/session/agent-session.ts`](../packages/coding-agent/src/session/agent-session.ts)\n- [`../src/session/session-manager.ts`](../packages/coding-agent/src/session/session-manager.ts)\n- [`../src/extensibility/slash-commands.ts`](../packages/coding-agent/src/extensibility/slash-commands.ts)\n\n## Trigger path\n\n1. `/handoff` is declared in builtin slash command metadata (`slash-commands.ts`) with optional inline hint: `[focus instructions]`.\n2. In interactive input handling (`InputController`), submit text matching `/handoff` or `/handoff ...` is intercepted before normal prompt submission.\n3. The editor is cleared and `handleHandoffCommand(customInstructions?)` is called.\n4. `CommandController.handleHandoffCommand` performs a preflight guard using current entries:\n - Counts `type === \"message\"` entries.\n - If `< 2`, it warns: `Nothing to hand off (no messages yet)` and returns.\n\nThe same minimum-content guard exists again inside `AgentSession.handoff()` and throws if violated. 
This duplicates safety at both UI and session layers.\n\n## End-to-end lifecycle\n\n### 1) Start handoff generation\n\n`AgentSession.handoff(customInstructions?)`:\n\n- Reads current branch entries (`sessionManager.getBranch()`)\n- Validates minimum message count (`>= 2`)\n- Creates `#handoffAbortController`\n- Builds a fixed, inline prompt requesting a structured handoff document (`Goal`, `Constraints & Preferences`, `Progress`, `Key Decisions`, `Critical Context`, `Next Steps`)\n- Appends `Additional focus: ...` if custom instructions are provided\n\nPrompt is sent via:\n\n```ts\nawait this.prompt(handoffPrompt, { expandPromptTemplates: false });\n```\n\n`expandPromptTemplates: false` prevents slash/prompt-template expansion of this internal instruction payload.\n\n### 2) Capture completion\n\nBefore sending prompt, `handoff()` subscribes to session events and waits for `agent_end`.\n\nOn `agent_end`, it extracts handoff text from agent state by scanning backward for the most recent `assistant` message, then concatenating all `content` blocks where `type === \"text\"` with `\\n`.\n\nImportant extraction assumptions:\n\n- Only text blocks are used; non-text content is ignored.\n- It assumes the latest assistant message corresponds to handoff generation.\n- It does not parse markdown sections or validate format compliance.\n- If assistant output has no text blocks, handoff is treated as missing.\n\n### 3) Cancellation checks\n\n`handoff()` returns `undefined` when either condition holds:\n\n- no captured handoff text, or\n- `#handoffAbortController.signal.aborted` is true\n\nIt always clears `#handoffAbortController` in `finally`.\n\n### 4) New session creation\n\nIf text was captured and not aborted:\n\n1. Flush current session writer (`sessionManager.flush()`)\n2. Start a brand-new session (`sessionManager.newSession()`)\n3. Reset in-memory agent state (`agent.reset()`)\n4. Rebind `agent.sessionId` to new session id\n5. 
Clear queued context arrays (`#steeringMessages`, `#followUpMessages`, `#pendingNextTurnMessages`)\n6. Reset todo reminder counter\n\n`newSession()` creates a fresh header and empty entry list (leaf reset to `null`). In the handoff path, no `parentSession` is passed.\n\n### 5) Handoff-context injection\n\nThe generated handoff document is wrapped and appended to the new session as a `custom_message` entry:\n\n```text\n<handoff-context>\n...handoff text...\n</handoff-context>\n\nThe above is a handoff document from a previous session. Use this context to continue the work seamlessly.\n```\n\nInsertion call:\n\n```ts\nthis.sessionManager.appendCustomMessageEntry(\"handoff\", handoffContent, true);\n```\n\nSemantics:\n\n- `customType`: `\"handoff\"`\n- `display`: `true` (visible in TUI rebuild)\n- Entry type: `custom_message` (participates in LLM context)\n\n### 6) Rebuild active agent context\n\nAfter injection:\n\n1. `sessionManager.buildSessionContext()` resolves message list for current leaf\n2. `agent.replaceMessages(sessionContext.messages)` makes the injected handoff message active context\n3. Method returns `{ document: handoffText }`\n\nAt this point, the active LLM context in the new session contains the injected handoff message, not the old transcript.\n\n## Persistence model: old session vs new session\n\n### Old session\n\nDuring generation, normal message persistence remains active. 
The assistant handoff response is persisted as a regular `message` entry on `message_end`.\n\nResult: the original session contains the visible generated handoff as part of historical transcript.\n\n### New session\n\nAfter session reset, handoff is persisted as `custom_message` with `customType: \"handoff\"`.\n\n`buildSessionContext()` converts this entry into a runtime custom/user-context message via `createCustomMessage(...)`, so it is included in future prompts from the new session.\n\n## Controller/UI behavior\n\n`CommandController.handleHandoffCommand` behavior:\n\n- Calls `await session.handoff(customInstructions)`\n- If result is `undefined`: `showError(\"Handoff cancelled\")`\n- On success:\n - `rebuildChatFromMessages()` (loads new session context, including injected handoff)\n - invalidates status line and editor top border\n - reloads todos\n - appends success chat line: `New session started with handoff context`\n- On exception:\n - if message is `\"Handoff cancelled\"` or error name is `AbortError`: `showError(\"Handoff cancelled\")`\n - otherwise: `showError(\"Handoff failed: <message>\")`\n- Requests render at end\n\n## Cancellation semantics (current behavior)\n\n### Session-level cancellation primitive\n\n`AgentSession` exposes:\n\n- `abortHandoff()` → aborts `#handoffAbortController`\n- `isGeneratingHandoff` → true while controller exists\n\nWhen this abort path is used, the handoff subscriber rejects with `Error(\"Handoff cancelled\")`, and command controller maps it to cancellation UI.\n\n### Interactive `/handoff` path limitation\n\nIn current interactive controller wiring, `/handoff` does not install a dedicated Escape handler that calls `abortHandoff()` (unlike compaction/branch-summary paths that temporarily override `editor.onEscape`).\n\nPractical impact:\n\n- There is session-level cancellation support, but no handoff-specific keybinding hook in the `/handoff` command path.\n- User interruption may still occur through broader agent abort 
paths, but that is not the same explicit cancellation channel used by `abortHandoff()`.\n\n## Aborted vs failed handoff\n\nCurrent UI classification:\n\n- **Aborted/cancelled**\n - `abortHandoff()` path triggers `\"Handoff cancelled\"`, or\n - thrown `AbortError`\n - UI shows `Handoff cancelled`\n\n- **Failed**\n - any other thrown error from `handoff()` / prompt pipeline (model/API validation errors, runtime exceptions, etc.)\n - UI shows `Handoff failed: ...`\n\nAdditional nuance: if generation completes but no text is extracted, `handoff()` returns `undefined` and controller currently reports **cancelled**, not **failed**.\n\n## Short-session and minimum-content guardrails\n\nTwo guards prevent low-signal handoffs:\n\n- UI layer (`handleHandoffCommand`): warns and returns early for `< 2` message entries\n- Session layer (`handoff()`): throws the same condition as an error\n\nThis avoids creating a new session with empty/near-empty handoff context.\n\n## State transition summary\n\nHigh-level state flow:\n\n1. Interactive slash command intercepted\n2. Preflight message-count guard\n3. `#handoffAbortController` created (`isGeneratingHandoff = true`)\n4. Internal handoff prompt submitted (visible in chat as normal assistant generation)\n5. On `agent_end`, last assistant text extracted\n6. If missing/aborted → return `undefined` or cancellation error path\n7. If present:\n - flush old session\n - create new empty session\n - reset runtime queues/counters\n - append `custom_message(handoff)`\n - rebuild and replace active agent messages\n8. Controller rebuilds chat UI and announces success\n9. 
`#handoffAbortController` cleared (`isGeneratingHandoff = false`)\n\n## Known assumptions and limitations\n\n- Handoff extraction is heuristic: \"last assistant text blocks\"; no structural validation.\n- No hard check that generated markdown follows requested section format.\n- Missing extracted text is reported as cancellation in controller UX.\n- `/handoff` interactive flow currently lacks a dedicated Escape→`abortHandoff()` binding.\n- New session lineage metadata (`parentSession`) is not set by this path.",
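The "last assistant text blocks" heuristic can be sketched as below. `extractHandoffText` and the `Message`/`ContentBlock` shapes are illustrative stand-ins for the agent-state types, chosen to match the documented behavior: scan backward for the most recent assistant message, concatenate its text blocks with `\n`, and treat an all-non-text result as missing.

```typescript
// Illustrative shapes; the real agent state types differ.
interface ContentBlock {
	type: string;
	text?: string;
}

interface Message {
	role: string;
	content: ContentBlock[];
}

// Sketch of the handoff capture heuristic described above.
function extractHandoffText(messages: Message[]): string | undefined {
	for (let i = messages.length - 1; i >= 0; i--) {
		const msg = messages[i];
		if (msg.role !== "assistant") continue;
		const text = msg.content
			.flatMap(block => (block.type === "text" && typeof block.text === "string" ? [block.text] : []))
			.join("\n");
		// An assistant message with no text blocks means handoff is missing,
		// which the controller currently reports as cancellation.
		return text.length > 0 ? text : undefined;
	}
	return undefined;
}
```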
"hooks.md": "# Hooks\n\nThis document describes the **current hook subsystem code** in `src/extensibility/hooks/*`.\n\n## Current status in runtime\n\nThe hook package (`src/extensibility/hooks/`) is still exported and usable as an API surface, but the default CLI runtime now initializes the **extension runner** path. In current startup flow:\n\n- `--hook` is treated as an alias for `--extension` (CLI paths are merged into `additionalExtensionPaths`)\n- tools are wrapped by `ExtensionToolWrapper`, not `HookToolWrapper`\n- context transforms and lifecycle emissions go through `ExtensionRunner`\n\nSo this file documents the hook subsystem implementation itself (types/loader/runner/wrapper), including legacy behavior and constraints.\n\n## Key files\n\n- `src/extensibility/hooks/types.ts` — hook context, event types, and result contracts\n- `src/extensibility/hooks/loader.ts` — module loading and hook discovery bridge\n- `src/extensibility/hooks/runner.ts` — event dispatch, command lookup, error signaling\n- `src/extensibility/hooks/tool-wrapper.ts` — pre/post tool interception wrapper\n- `src/extensibility/hooks/index.ts` — exports/re-exports\n\n## What a hook module is\n\nA hook module must default-export a factory:\n\n```ts\nimport type { HookAPI } from \"@oh-my-pi/pi-coding-agent/hooks\";\n\nexport default function hook(pi: HookAPI): void {\n\tpi.on(\"tool_call\", async (event, ctx) => {\n\t\tif (event.toolName === \"bash\" && String(event.input.command ?? 
\"\").includes(\"rm -rf\")) {\n\t\t\treturn { block: true, reason: \"blocked by policy\" };\n\t\t}\n\t});\n}\n```\n\nThe factory can:\n\n- register event handlers with `pi.on(...)`\n- send persistent custom messages with `pi.sendMessage(...)`\n- persist non-LLM state with `pi.appendEntry(...)`\n- register slash commands via `pi.registerCommand(...)`\n- register custom message renderers via `pi.registerMessageRenderer(...)`\n- run shell commands via `pi.exec(...)`\n\n## Discovery and loading\n\n`discoverAndLoadHooks(configuredPaths, cwd)` does:\n\n1. Load discovered hooks from capability registry (`loadCapability(\"hooks\")`)\n2. Append explicitly configured paths (deduped by absolute path)\n3. Call `loadHooks(allPaths, cwd)`\n\n`loadHooks` then imports each path and expects a `default` function.\n\n### Path resolution\n\n`loader.ts` resolves hook paths as:\n\n- absolute path: used as-is\n- `~` path: expanded\n- relative path: resolved against `cwd`\n\n### Important legacy mismatch\n\nDiscovery providers for `hookCapability` still model pre/post shell-style hook files (for example `.claude/hooks/pre/*`, `.omp/.../hooks/pre/*`).\n\nThe hook loader here uses dynamic module import and requires a default JS/TS hook factory. 
If a discovered hook path is not importable as a module, load fails and is reported in `LoadHooksResult.errors`.\n\n## Event surfaces\n\nHook events are strongly typed in `types.ts`.\n\n### Session events\n\n- `session_start`\n- `session_before_switch` → can return `{ cancel?: boolean }`\n- `session_switch`\n- `session_before_branch` → can return `{ cancel?: boolean; skipConversationRestore?: boolean }`\n- `session_branch`\n- `session_before_compact` → can return `{ cancel?: boolean; compaction?: CompactionResult }`\n- `session.compacting` → can return `{ context?: string[]; prompt?: string; preserveData?: Record<string, unknown> }`\n- `session_compact`\n- `session_before_tree` → can return `{ cancel?: boolean; summary?: { summary: string; details?: unknown } }`\n- `session_tree`\n- `session_shutdown`\n\n### Agent/context events\n\n- `context` → can return `{ messages?: Message[] }`\n- `before_agent_start` → can return `{ message?: { customType; content; display; details } }`\n- `agent_start`\n- `agent_end`\n- `turn_start`\n- `turn_end`\n- `auto_compaction_start`\n- `auto_compaction_end`\n- `auto_retry_start`\n- `auto_retry_end`\n- `ttsr_triggered`\n- `todo_reminder`\n\n### Tool events (pre/post model)\n\n- `tool_call` (pre-execution) → can return `{ block?: boolean; reason?: string }`\n- `tool_result` (post-execution) → can return `{ content?; details?; isError? }`\n\nThis is the hook subsystem’s core pre/post interception model.\n\n```text\nHook tool interception flow\n\ntool_call handlers\n │\n ├─ any { block: true }? 
── yes ──> throw (tool blocked)\n │\n └─ no\n │\n ▼\n execute underlying tool\n │\n ├─ success ──> tool_result handlers can override { content, details }\n │\n └─ error ──> emit tool_result(isError=true) then rethrow original error\n```\n\n\n## Execution model and mutation semantics\n\n### 1) Pre-execution: `tool_call`\n\n`HookToolWrapper.execute()` emits `tool_call` before tool execution.\n\n- if any handler returns `{ block: true }`, execution stops\n- if handler throws, wrapper fails closed and blocks execution\n- returned `reason` becomes the thrown error text\n\n### 2) Tool execution\n\nUnderlying tool executes normally if not blocked.\n\n### 3) Post-execution: `tool_result`\n\nAfter success, wrapper emits `tool_result` with:\n\n- `toolName`, `toolCallId`, `input`\n- `content`\n- `details`\n- `isError: false`\n\nIf handler returns overrides:\n\n- `content` can replace result content\n- `details` can replace result details\n\nOn tool failure, wrapper emits `tool_result` with `isError: true` and error text content, then rethrows original error.\n\n### What hooks can mutate\n\n- LLM context for a single call via `context` (`messages` replacement chain)\n- tool output content/details on successful tool calls (`tool_result` path)\n- pre-agent injected message via `before_agent_start`\n- cancellation/custom compaction/tree behavior via `session_before_*` and `session.compacting`\n\n### What hooks cannot mutate in this implementation\n\n- raw tool input parameters in-place (only block/allow on `tool_call`)\n- execution continuation after thrown tool errors (error path rethrows)\n- final success/error status in wrapper behavior (returned `isError` is typed but not applied by `HookToolWrapper`)\n\n## Ordering and conflict behavior\n\n### Discovery-level ordering\n\nCapability providers are priority-sorted (higher first). Dedupe is by capability key, first wins.\n\nFor `hooks`, capability key is `${type}:${tool}:${name}`. 
Shadowed duplicates from lower-priority providers are marked and excluded from effective discovered list.\n\n### Load order\n\n`discoverAndLoadHooks` builds a flat `allPaths` list, deduped by resolved absolute path, then `loadHooks` iterates in that order.\nFile order within each discovered directory depends on `readdir` output; the hook loader does not perform an additional sort.\n\n### Runtime handler order\n\nInside `HookRunner`, order is deterministic by registration sequence:\n\n1. hooks array order\n2. handler registration order per hook/event\n\nConflict behavior by event type:\n\n- `tool_call`: last returned result wins unless a handler blocks; first block short-circuits\n- `tool_result`: last returned override wins (no short-circuit)\n- `context`: chained; each handler receives prior handler’s message output\n- `before_agent_start`: first returned message is kept; later messages ignored\n- `session_before_*`: latest returned result is tracked; `cancel: true` short-circuits immediately\n- `session.compacting`: latest returned result wins\n\nCommand/renderer conflicts:\n\n- `getCommand(name)` returns first match across hooks (first loaded wins)\n- `getMessageRenderer(customType)` returns first match\n- `getRegisteredCommands()` returns all commands (no dedupe)\n\n## UI interactions (`HookContext.ui`)\n\n`HookUIContext` includes:\n\n- `select`, `confirm`, `input`, `editor`\n- `notify`\n- `setStatus`\n- `custom`\n- `setEditorText`, `getEditorText`\n- `theme` getter\n\n`ctx.hasUI` indicates whether interactive UI is available.\n\nWhen running with no UI, the default no-op context behavior is:\n\n- `select/input/editor` return `undefined`\n- `confirm` returns `false`\n- `notify`, `setStatus`, `setEditorText` are no-ops\n- `getEditorText` returns `\"\"`\n\n### Status line behavior\n\nHook status text set via `ctx.ui.setStatus(key, text)` is:\n\n- stored per key\n- sorted by key name\n- sanitized (`\\r`, `\\n`, `\\t` → spaces; repeated spaces collapsed)\n- joined 
and width-truncated for display\n\n## Error propagation and fallback\n\n### Load-time\n\n- invalid module or missing default export → captured in `LoadHooksResult.errors`\n- loading continues for other hooks\n\n### Event-time\n\n`HookRunner.emit(...)` catches handler errors for most events and emits `HookError` to listeners (`hookPath`, `event`, `error`), then continues.\n\n`emitToolCall(...)` is stricter: handler errors are not swallowed there; they propagate to caller. In `HookToolWrapper`, this blocks the tool call (fail-safe).\n\n## Realistic API examples\n\n### Block unsafe bash commands\n\n```ts\nimport type { HookAPI } from \"@oh-my-pi/pi-coding-agent/hooks\";\n\nexport default function (pi: HookAPI): void {\n\tpi.on(\"tool_call\", async (event, ctx) => {\n\t\tif (event.toolName !== \"bash\") return;\n\t\tconst cmd = String(event.input.command ?? \"\");\n\t\tif (!cmd.includes(\"rm -rf\")) return;\n\n\t\tif (!ctx.hasUI) return { block: true, reason: \"rm -rf blocked (no UI)\" };\n\t\tconst ok = await ctx.ui.confirm(\"Dangerous command\", `Allow: ${cmd}`);\n\t\tif (!ok) return { block: true, reason: \"user denied command\" };\n\t});\n}\n```\n\n### Redact tool output on post-execution\n\n```ts\nimport type { HookAPI } from \"@oh-my-pi/pi-coding-agent/hooks\";\n\nexport default function (pi: HookAPI): void {\n\tpi.on(\"tool_result\", async event => {\n\t\tif (event.toolName !== \"read\" || event.isError) return;\n\n\t\tconst redacted = event.content.map(chunk => {\n\t\t\tif (chunk.type !== \"text\") return chunk;\n\t\t\treturn { ...chunk, text: chunk.text.replaceAll(/API_KEY=\\S+/g, \"API_KEY=[REDACTED]\") };\n\t\t});\n\n\t\treturn { content: redacted };\n\t});\n}\n```\n\n### Modify model context per LLM call\n\n```ts\nimport type { HookAPI } from \"@oh-my-pi/pi-coding-agent/hooks\";\n\nexport default function (pi: HookAPI): void {\n\tpi.on(\"context\", async event => {\n\t\tconst filtered = event.messages.filter(msg => !(msg.role === \"custom\" && 
msg.customType === \"debug-only\"));\n\t\treturn { messages: filtered };\n\t});\n}\n```\n\n### Register slash command with command-safe context methods\n\n```ts\nimport type { HookAPI } from \"@oh-my-pi/pi-coding-agent/hooks\";\n\nexport default function (pi: HookAPI): void {\n\tpi.registerCommand(\"handoff\", {\n\t\tdescription: \"Create a new session with setup message\",\n\t\thandler: async (_args, ctx) => {\n\t\t\tawait ctx.waitForIdle();\n\t\t\tawait ctx.newSession({\n\t\t\t\tparentSession: ctx.sessionManager.getSessionFile(),\n\t\t\t\tsetup: async sm => {\n\t\t\t\t\tsm.appendMessage({\n\t\t\t\t\t\trole: \"user\",\n\t\t\t\t\t\tcontent: [{ type: \"text\", text: \"Continue from prior session summary.\" }],\n\t\t\t\t\t\ttimestamp: Date.now(),\n\t\t\t\t\t});\n\t\t\t\t},\n\t\t\t});\n\t\t},\n\t});\n}\n```\n\n## Export surface\n\n`src/extensibility/hooks/index.ts` exports:\n\n- loading APIs (`discoverAndLoadHooks`, `loadHooks`)\n- runner and wrapper (`HookRunner`, `HookToolWrapper`)\n- all hook types\n- `execCommand` re-export\n\nAnd package root (`src/index.ts`) re-exports hook **types** as a legacy compatibility surface.\n",
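The `tool_call` conflict rules above can be sketched as a small dispatcher. This `emitToolCall` is a hypothetical reduction of `HookRunner.emitToolCall`, showing only the documented semantics: the first `{ block: true }` short-circuits, otherwise the last returned result wins, and handler errors propagate to the caller (fail closed in `HookToolWrapper`).

```typescript
type ToolCallResult = { block?: boolean; reason?: string };
type ToolCallHandler = () => Promise<ToolCallResult | undefined>;

// Sketch of tool_call dispatch semantics (illustrative, not the real runner).
async function emitToolCall(handlers: ToolCallHandler[]): Promise<ToolCallResult | undefined> {
	let last: ToolCallResult | undefined;
	for (const handler of handlers) {
		const result = await handler(); // errors are NOT swallowed on this path
		if (result?.block) return result; // first block short-circuits remaining handlers
		if (result !== undefined) last = result; // otherwise last returned result wins
	}
	return last;
}
```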
"marketplace.md": "# Marketplace plugin system\n\nThe marketplace system lets you discover, install, and manage plugins from Git-hosted catalogs. It is compatible with the Claude Code plugin registry format.\n\n## Quick start\n\n```\n/marketplace add anthropics/claude-plugins-official\n/marketplace install wordpress.com@claude-plugins-official\n```\n\nOr just type `/marketplace` with no arguments to open the interactive plugin browser.\n\n## Concepts\n\nA **marketplace** is a Git repository (or local directory) containing a catalog file at `.claude-plugin/marketplace.json`. The catalog lists available plugins with their sources, descriptions, and metadata.\n\nA **plugin** is a directory containing skills, commands, hooks, MCP servers, or LSP servers. Plugins are identified by `name@marketplace` (e.g. `code-review@claude-plugins-official`).\n\n**Scopes**: plugins can be installed at two scopes:\n\n- **user** (default) -- available in all projects, stored in `~/.omp/plugins/installed_plugins.json`\n- **project** -- available only in the current project, stored in `.omp/installed_plugins.json`\n\nProject-scoped installs shadow user-scoped installs of the same plugin.\n\n## Commands\n\n### Interactive mode\n\n| Command | Effect |\n|---|---|\n| `/marketplace` | Open interactive plugin browser (install) |\n\n### Marketplace management\n\n| Command | Effect |\n|---|---|\n| `/marketplace add <source>` | Add a marketplace source |\n| `/marketplace remove <name>` | Remove a marketplace |\n| `/marketplace update [name]` | Re-fetch catalog(s); omit name to update all |\n| `/marketplace list` | List configured marketplaces |\n\n### Plugin operations\n\n| Command | Effect |\n|---|---|\n| `/marketplace discover [marketplace]` | Browse available plugins |\n| `/marketplace install [--force] [--scope user\\|project] name@marketplace` | Install a plugin |\n| `/marketplace uninstall [--scope user\\|project] name@marketplace` | Uninstall a plugin |\n| `/marketplace installed` | List 
installed marketplace plugins |\n| `/marketplace upgrade [--scope user\\|project] [name@marketplace]` | Upgrade one or all plugins |\n\n### CLI equivalents\n\nThe same operations are available from the command line:\n\n```\nomp plugin marketplace add <source>\nomp plugin marketplace remove <name>\nomp plugin marketplace update [name]\nomp plugin marketplace list\nomp plugin discover [marketplace]\nomp plugin install --scope project name@marketplace\n```\n\n## Marketplace sources\n\nWhen you run `/marketplace add <source>`, the system classifies the source:\n\n| Source format | Type | Example |\n|---|---|---|\n| `owner/repo` | GitHub shorthand | `anthropics/claude-plugins-official` |\n| `https://...*.json` | Direct catalog URL | `https://example.com/marketplace.json` |\n| `https://...*.git` or `git@...` | Git repository | `https://github.com/org/repo.git` |\n| `./path` or `~/path` or `/path` | Local directory | `./my-marketplace` |\n\nThe system clones the repository (or reads the local directory), locates `.claude-plugin/marketplace.json`, validates it, and caches the catalog locally.\n\n## Catalog format (marketplace.json)\n\nA marketplace catalog lives at `.claude-plugin/marketplace.json` in the repository root:\n\n```json\n{\n \"$schema\": \"https://anthropic.com/claude-code/marketplace.schema.json\",\n \"name\": \"my-marketplace\",\n \"owner\": {\n \"name\": \"Your Name\",\n \"email\": \"you@example.com\"\n },\n \"description\": \"A collection of plugins\",\n \"plugins\": [\n {\n \"name\": \"my-plugin\",\n \"description\": \"What this plugin does\",\n \"source\": \"./plugins/my-plugin\",\n \"category\": \"development\",\n \"homepage\": \"https://github.com/you/my-plugin\"\n }\n ]\n}\n```\n\n### Required fields\n\n| Field | Description |\n|---|---|\n| `name` | Marketplace name. Lowercase alphanumeric, hyphens, and dots. Must start and end with alphanumeric. Max 64 chars. 
|\n| `owner.name` | Marketplace owner name |\n| `plugins` | Array of plugin entries |\n\n### Plugin entry fields\n\n| Field | Required | Description |\n|---|---|---|\n| `name` | yes | Plugin name (same rules as marketplace name) |\n| `source` | yes | Where to find the plugin (see below) |\n| `description` | no | Short description |\n| `version` | no | Version string |\n| `author` | no | `{ name, email? }` |\n| `homepage` | no | URL |\n| `category` | no | Category string (e.g. `development`, `productivity`, `security`) |\n| `tags` | no | Array of string tags |\n| `strict` | no | Boolean |\n| `commands` | no | Slash commands provided |\n| `agents` | no | Agents provided |\n| `hooks` | no | Hook definitions |\n| `mcpServers` | no | MCP server definitions |\n| `lspServers` | no | LSP server definitions |\n\n### Plugin source formats\n\nThe `source` field supports several formats:\n\n**Relative path** (within the marketplace repo):\n```json\n\"source\": \"./plugins/my-plugin\"\n```\n\n**Git repository URL**:\n```json\n\"source\": {\n \"source\": \"url\",\n \"url\": \"https://github.com/org/repo.git\",\n \"sha\": \"abc123...\"\n}\n```\n\n**GitHub shorthand**:\n```json\n\"source\": {\n \"source\": \"github\",\n \"repo\": \"org/repo\",\n \"ref\": \"main\",\n \"sha\": \"abc123...\"\n}\n```\n\n**Git subdirectory** (monorepo):\n```json\n\"source\": {\n \"source\": \"git-subdir\",\n \"url\": \"https://github.com/org/monorepo.git\",\n \"path\": \"plugins/my-plugin\",\n \"ref\": \"main\",\n \"sha\": \"abc123...\"\n}\n```\n\n**npm package**:\n```json\n\"source\": {\n \"source\": \"npm\",\n \"package\": \"@scope/my-plugin\",\n \"version\": \"1.0.0\"\n}\n```\n\n## On-disk layout\n\n```\n~/.omp/\n config/\n marketplaces.json # Registry of added marketplaces\n plugins/\n installed_plugins.json # User-scoped installed plugins\n cache/\n marketplaces/ # Cached marketplace catalogs\n plugins/ # Cached plugin directories\n\n<project>/.omp/\n installed_plugins.json # Project-scoped 
installed plugins\n```\n\n## Naming rules\n\nMarketplace and plugin names must:\n\n- Start and end with a lowercase letter or digit\n- Contain only lowercase letters, digits, hyphens, and dots\n- Be at most 64 characters\n\nPlugin IDs (`name@marketplace`) must be at most 128 characters total.\n\nValid examples: `my-plugin`, `code-review`, `wordpress.com`, `ai-firstify`\nInvalid examples: `-bad`, `bad-`, `.bad`, `Bad`, `under_score`\n",
-
"mcp-config.md": "# MCP configuration in OMP\n\nThis guide explains how to add, edit, and validate MCP servers for the OMP coding agent.\n\nSource of truth in code:\n\n- Runtime config types: `packages/coding-agent/src/mcp/types.ts`\n- Config writer: `packages/coding-agent/src/mcp/config-writer.ts`\n- Loader + validation: `packages/coding-agent/src/mcp/config.ts`\n- Standalone `mcp.json` discovery: `packages/coding-agent/src/discovery/mcp-json.ts`\n- Schema: `packages/coding-agent/src/config/mcp-schema.json`\n\n## Preferred config locations\n\nOMP can discover MCP servers from multiple tools (`.claude/`, `.cursor/`, `.vscode/`, `opencode.json`, and more), but for OMP-native configuration you should usually use one of these files:\n\n- Project: `.omp/mcp.json`\n- User: `~/.omp/mcp.json`\n\nOMP also accepts fallback standalone files in the project root:\n\n- `mcp.json`\n- `.mcp.json`\n\nUse `.omp/mcp.json` when you want OMP to own the configuration. Use root `mcp.json` / `.mcp.json` only when you want a portable fallback file that other MCP clients may also read.\n\n## Add a schema reference\n\nAdd this line at the top of the file for editor autocomplete and validation:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {}\n}\n```\n\nOMP now writes this automatically when `/mcp add`, `/mcp enable`, `/mcp disable`, `/mcp reauth`, or other config-writing flows create or update an OMP-managed MCP file.\n\n## File shape\n\nOMP supports this top-level structure:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"server-name\": {\n \"type\": \"stdio\",\n \"command\": \"npx\",\n \"args\": [\"-y\", \"some-mcp-server\"]\n }\n },\n \"disabledServers\": [\"server-name\"]\n}\n```\n\nTop-level keys:\n\n- `$schema` — optional JSON Schema URL for tooling\n- 
`mcpServers` — map of server name to server config\n- `disabledServers` — user-level denylist used to turn off discovered servers by name\n\nServer names must match `^[a-zA-Z0-9_.-]{1,100}$`.\n\n## Supported server fields\n\nShared fields for every transport:\n\n- `enabled?: boolean` — skip this server when `false`\n- `timeout?: number` — connection timeout in milliseconds\n- `auth?: { ... }` — auth metadata used by OMP for OAuth/API-key flows\n- `oauth?: { ... }` — explicit OAuth client settings used during auth/reauth\n\n### `stdio` transport\n\n`stdio` is the default when `type` is omitted.\n\nRequired:\n\n- `command: string`\n\nOptional:\n\n- `type?: \"stdio\"`\n- `args?: string[]`\n- `env?: Record<string, string>`\n- `cwd?: string`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-filesystem\",\n \"/Users/alice/projects\",\n \"/Users/alice/Documents\"\n ]\n }\n }\n}\n```\n\nThis follows the official Filesystem MCP server package (`@modelcontextprotocol/server-filesystem`).\n\n### `http` transport\n\nRequired:\n\n- `type: \"http\"`\n- `url: string`\n\nOptional:\n\n- `headers?: Record<string, string>`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\"\n }\n }\n}\n```\n\nThis matches GitHub's hosted GitHub MCP server endpoint.\n\n### `sse` transport\n\nRequired:\n\n- `type: \"sse\"`\n- `url: string`\n\nOptional:\n\n- `headers?: Record<string, string>`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"legacy-remote\": 
{\n \"type\": \"sse\",\n \"url\": \"https://example.com/mcp/sse\"\n }\n }\n}\n```\n\n`sse` is still supported for compatibility, but the MCP spec now prefers Streamable HTTP (`type: \"http\"`) for new servers.\n\n## Auth fields\n\nOMP understands two auth-related objects.\n\n### `auth`\n\n```json\n{\n \"type\": \"oauth\" | \"apikey\",\n \"credentialId\": \"optional-stored-credential-id\",\n \"tokenUrl\": \"optional-token-endpoint\",\n \"clientId\": \"optional-client-id\",\n \"clientSecret\": \"optional-client-secret\"\n}\n```\n\nUse this when OMP should remember how to rehydrate credentials for a server.\n\n### `oauth`\n\n```json\n{\n \"clientId\": \"...\",\n \"clientSecret\": \"...\",\n \"redirectUri\": \"...\",\n \"callbackPort\": 3334,\n \"callbackPath\": \"/oauth/callback\"\n}\n```\n\nUse this when the MCP server requires explicit OAuth client settings.\n\nSlack is the clearest current example. Slack's MCP server is hosted at `https://mcp.slack.com/mcp`, uses Streamable HTTP, and requires confidential OAuth with your Slack app's client credentials.\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"slack\": {\n \"type\": \"http\",\n \"url\": \"https://mcp.slack.com/mcp\",\n \"oauth\": {\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n },\n \"auth\": {\n \"type\": \"oauth\",\n \"tokenUrl\": \"https://slack.com/api/oauth.v2.user.access\",\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n }\n }\n }\n}\n```\n\nRelevant Slack endpoints from Slack's docs:\n\n- MCP endpoint: `https://mcp.slack.com/mcp`\n- Authorization endpoint: `https://slack.com/oauth/v2_user/authorize`\n- Token endpoint: `https://slack.com/api/oauth.v2.user.access`\n\n## Common copy-paste examples\n\n### Filesystem server via stdio\n\n```json\n{\n \"$schema\": 
\"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-filesystem\",\n \"/absolute/path/one\",\n \"/absolute/path/two\"\n ]\n }\n }\n}\n```\n\n### GitHub hosted server via HTTP\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\"\n }\n }\n}\n```\n\n### GitHub local server via Docker\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"command\": \"docker\",\n \"args\": [\n \"run\",\n \"-i\",\n \"--rm\",\n \"-e\",\n \"GITHUB_PERSONAL_ACCESS_TOKEN\",\n \"ghcr.io/github/github-mcp-server\"\n ],\n \"env\": {\n \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n }\n }\n }\n}\n```\n\nThis matches GitHub's official local Docker image `ghcr.io/github/github-mcp-server`.\n\n### Slack hosted server via OAuth\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"slack\": {\n \"type\": \"http\",\n \"url\": \"https://mcp.slack.com/mcp\",\n \"oauth\": {\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n },\n \"auth\": {\n \"type\": \"oauth\",\n \"tokenUrl\": \"https://slack.com/api/oauth.v2.user.access\",\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n }\n }\n }\n}\n```\n\n## Secrets and variable resolution\n\nThis is the part that usually trips people up.\n\n### In `.omp/mcp.json` and `~/.omp/mcp.json`\n\nBefore OMP launches a server or makes an HTTP request, it resolves `env` and 
`headers` values like this:\n\n1. If a value starts with `!`, OMP runs it as a shell command and uses trimmed stdout.\n2. Otherwise OMP first checks whether the value matches an environment variable name.\n3. If that environment variable is not set, OMP uses the string literally.\n\nExamples:\n\n```json\n{\n \"env\": {\n \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n },\n \"headers\": {\n \"X-MCP-Insiders\": \"true\"\n }\n}\n```\n\nThat means this is valid and convenient for local secrets:\n\n- `\"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"` → copy from the current shell environment\n- `\"Authorization\": \"Bearer hardcoded-token\"` → use the literal value\n- `\"Authorization\": \"!printf 'Bearer %s' \\\"$GITHUB_TOKEN\\\"\"` → build the header from a command\n\n### In root `mcp.json` and `.mcp.json`\n\nThe standalone fallback loader also expands `${VAR}` and `${VAR:-default}` inside strings during discovery.\n\nExample:\n\n```json\n{\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\",\n \"headers\": {\n \"Authorization\": \"Bearer ${GITHUB_TOKEN}\"\n }\n }\n }\n}\n```\n\nIf you want the least surprising OMP behavior, prefer `.omp/mcp.json` and use explicit env/header values.\n\n## `disabledServers`\n\n`disabledServers` is mainly useful in the user config file (`~/.omp/mcp.json`) when a server is discovered from some other source and you want OMP to ignore it without editing that other tool's config.\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"disabledServers\": [\"github\", \"slack\"]\n}\n```\n\n## `/mcp add` vs editing JSON directly\n\nUse `/mcp add` when you want guided setup.\n\nUse direct JSON editing when:\n\n- you need a transport or auth option the wizard does not prompt for yet\n- you want to paste a server definition from another MCP client\n- you 
want schema-backed validation in your editor\n\nAfter editing, use:\n\n- `/mcp reload` to rediscover and reconnect servers in the current session\n- `/mcp list` to see which config file a server came from\n- `/mcp test <name>` to test a single server\n\n## Validation rules OMP enforces\n\nFrom `validateServerConfig()` in `packages/coding-agent/src/mcp/config.ts`:\n\n- `stdio` requires `command`\n- `http` and `sse` require `url`\n- a server cannot set both `command` and `url`\n- unknown `type` values are rejected\n\nPractical implications:\n\n- Omitting `type` means `stdio`\n- If you paste a remote server config and forget `\"type\": \"http\"`, OMP will treat it as `stdio` and complain that `command` is missing\n- `sse` remains valid for compatibility, but new hosted servers should usually be configured as `http`\n\n## Discovery and precedence\n\nOMP does not merge duplicate server definitions across files. Discovery providers are prioritized, and the higher-priority definition wins.\n\nIn practice:\n\n- prefer `.omp/mcp.json` or `~/.omp/mcp.json` when you want an OMP-specific override\n- keep server names unique across tools when possible\n- use `disabledServers` in the user config when a third-party config keeps reintroducing a server you do not want\n\n## Troubleshooting\n\n### `Server \"name\": stdio server requires \"command\" field`\n\nYou probably omitted `type: \"http\"` on a remote server.\n\n### `Server \"name\": both \"command\" and \"url\" are set`\n\nPick one transport. OMP treats `command` as stdio and `url` as http/sse.\n\n### `/mcp add` worked but the server still does not connect\n\nThe JSON is valid, but the server may still be unreachable. Use `/mcp test <name>` and check whether:\n\n- the binary or Docker image exists\n- required environment variables are set\n- the remote URL is reachable\n- the OAuth or API token is valid\n\n### The server exists in another tool's config but not in OMP\n\nRun `/mcp list`. 
OMP discovers many third-party MCP files, but project-level loading can also be disabled via the `mcp.enableProjectConfig` setting.\n\n## References\n\n- MCP transport spec: https://modelcontextprotocol.io/specification/2025-03-26/basic/transports\n- Filesystem server package: https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem\n- GitHub MCP server: https://github.com/github/github-mcp-server\n- Slack MCP server docs: https://docs.slack.dev/ai/slack-mcp-server/\n",
+
"mcp-config.md": "# MCP configuration in OMP\n\nThis guide explains how to add, edit, and validate MCP servers for the OMP coding agent.\n\nSource of truth in code:\n\n- Runtime config types: `packages/coding-agent/src/mcp/types.ts`\n- Config writer: `packages/coding-agent/src/mcp/config-writer.ts`\n- Loader + validation: `packages/coding-agent/src/mcp/config.ts`\n- Standalone `mcp.json` discovery: `packages/coding-agent/src/discovery/mcp-json.ts`\n- Schema: `packages/coding-agent/src/config/mcp-schema.json`\n\n## Preferred config locations\n\nOMP can discover MCP servers from multiple tools (`.claude/`, `.cursor/`, `.vscode/`, `opencode.json`, and more), but for OMP-native configuration you should usually use one of these files:\n\n- Project: `.omp/mcp.json`\n- User: `~/.omp/agent/mcp.json`\n\nOMP also accepts fallback standalone files in the project root:\n\n- `mcp.json`\n- `.mcp.json`\n\nUse `.omp/mcp.json` or `~/.omp/agent/mcp.json` when you want OMP to own the configuration. Use root `mcp.json` / `.mcp.json` only when you want a portable fallback file that other MCP clients may also read.\n\n## Add a schema reference\n\nAdd this line at the top of the file for editor autocomplete and validation:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {}\n}\n```\n\nOMP now writes this automatically when `/mcp add`, `/mcp enable`, `/mcp disable`, `/mcp reauth`, or other config-writing flows create or update an OMP-managed MCP file.\n\n## File shape\n\nOMP supports this top-level structure:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"server-name\": {\n \"type\": \"stdio\",\n \"command\": \"npx\",\n \"args\": [\"-y\", \"some-mcp-server\"]\n }\n },\n \"disabledServers\": [\"server-name\"]\n}\n```\n\nTop-level keys:\n\n- `$schema` — optional JSON 
Schema URL for tooling\n- `mcpServers` — map of server name to server config\n- `disabledServers` — user-level denylist used to turn off discovered servers by name\n\nServer names must match `^[a-zA-Z0-9_.-]{1,100}$`.\n\n## Supported server fields\n\nShared fields for every transport:\n\n- `enabled?: boolean` — skip this server when `false`\n- `timeout?: number` — connection timeout in milliseconds\n- `auth?: { ... }` — auth metadata used by OMP for OAuth/API-key flows\n- `oauth?: { ... }` — explicit OAuth client settings used during auth/reauth\n\n### `stdio` transport\n\n`stdio` is the default when `type` is omitted.\n\nRequired:\n\n- `command: string`\n\nOptional:\n\n- `type?: \"stdio\"`\n- `args?: string[]`\n- `env?: Record<string, string>`\n- `cwd?: string`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-filesystem\",\n \"/Users/alice/projects\",\n \"/Users/alice/Documents\"\n ]\n }\n }\n}\n```\n\nThis follows the official Filesystem MCP server package (`@modelcontextprotocol/server-filesystem`).\n\n### `http` transport\n\nRequired:\n\n- `type: \"http\"`\n- `url: string`\n\nOptional:\n\n- `headers?: Record<string, string>`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\"\n }\n }\n}\n```\n\nThis matches GitHub's hosted GitHub MCP server endpoint.\n\n### `sse` transport\n\nRequired:\n\n- `type: \"sse\"`\n- `url: string`\n\nOptional:\n\n- `headers?: Record<string, string>`\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n 
\"mcpServers\": {\n \"legacy-remote\": {\n \"type\": \"sse\",\n \"url\": \"https://example.com/mcp/sse\"\n }\n }\n}\n```\n\n`sse` is still supported for compatibility, but the MCP spec now prefers Streamable HTTP (`type: \"http\"`) for new servers.\n\n## Auth fields\n\nOMP understands two auth-related objects.\n\n### `auth`\n\n```json\n{\n \"type\": \"oauth\" | \"apikey\",\n \"credentialId\": \"optional-stored-credential-id\",\n \"tokenUrl\": \"optional-token-endpoint\",\n \"clientId\": \"optional-client-id\",\n \"clientSecret\": \"optional-client-secret\"\n}\n```\n\nUse this when OMP should remember how to rehydrate credentials for a server.\n\n### `oauth`\n\n```json\n{\n \"clientId\": \"...\",\n \"clientSecret\": \"...\",\n \"redirectUri\": \"...\",\n \"callbackPort\": 3334,\n \"callbackPath\": \"/oauth/callback\"\n}\n```\n\nUse this when the MCP server requires explicit OAuth client settings.\n\nSlack is the clearest current example. Slack's MCP server is hosted at `https://mcp.slack.com/mcp`, uses Streamable HTTP, and requires confidential OAuth with your Slack app's client credentials.\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"slack\": {\n \"type\": \"http\",\n \"url\": \"https://mcp.slack.com/mcp\",\n \"oauth\": {\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n },\n \"auth\": {\n \"type\": \"oauth\",\n \"tokenUrl\": \"https://slack.com/api/oauth.v2.user.access\",\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n }\n }\n }\n}\n```\n\nRelevant Slack endpoints from Slack's docs:\n\n- MCP endpoint: `https://mcp.slack.com/mcp`\n- Authorization endpoint: `https://slack.com/oauth/v2_user/authorize`\n- Token endpoint: `https://slack.com/api/oauth.v2.user.access`\n\n## Common copy-paste examples\n\n### Filesystem server via stdio\n\n```json\n{\n 
\"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"filesystem\": {\n \"command\": \"npx\",\n \"args\": [\n \"-y\",\n \"@modelcontextprotocol/server-filesystem\",\n \"/absolute/path/one\",\n \"/absolute/path/two\"\n ]\n }\n }\n}\n```\n\n### GitHub hosted server via HTTP\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\"\n }\n }\n}\n```\n\n### GitHub local server via Docker\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"github\": {\n \"command\": \"docker\",\n \"args\": [\n \"run\",\n \"-i\",\n \"--rm\",\n \"-e\",\n \"GITHUB_PERSONAL_ACCESS_TOKEN\",\n \"ghcr.io/github/github-mcp-server\"\n ],\n \"env\": {\n \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n }\n }\n }\n}\n```\n\nThis matches GitHub's official local Docker image `ghcr.io/github/github-mcp-server`.\n\n### Slack hosted server via OAuth\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"mcpServers\": {\n \"slack\": {\n \"type\": \"http\",\n \"url\": \"https://mcp.slack.com/mcp\",\n \"oauth\": {\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n },\n \"auth\": {\n \"type\": \"oauth\",\n \"tokenUrl\": \"https://slack.com/api/oauth.v2.user.access\",\n \"clientId\": \"YOUR_SLACK_CLIENT_ID\",\n \"clientSecret\": \"YOUR_SLACK_CLIENT_SECRET\"\n }\n }\n }\n}\n```\n\n## Secrets and variable resolution\n\nThis is the part that usually trips people up.\n\n### In `.omp/mcp.json` and `~/.omp/agent/mcp.json`\n\nBefore OMP launches a server or makes an HTTP request, it 
resolves `env` and `headers` values like this:\n\n1. If a value starts with `!`, OMP runs it as a shell command and uses trimmed stdout.\n2. Otherwise OMP first checks whether the value matches an environment variable name.\n3. If that environment variable is not set, OMP uses the string literally.\n\nExamples:\n\n```json\n{\n \"env\": {\n \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"\n },\n \"headers\": {\n \"X-MCP-Insiders\": \"true\"\n }\n}\n```\n\nThat means this is valid and convenient for local secrets:\n\n- `\"GITHUB_PERSONAL_ACCESS_TOKEN\": \"GITHUB_PERSONAL_ACCESS_TOKEN\"` → copy from the current shell environment\n- `\"Authorization\": \"Bearer hardcoded-token\"` → use the literal value\n- `\"Authorization\": \"!printf 'Bearer %s' \\\"$GITHUB_TOKEN\\\"\"` → build the header from a command\n\n### In root `mcp.json` and `.mcp.json`\n\nThe standalone fallback loader also expands `${VAR}` and `${VAR:-default}` inside strings during discovery.\n\nExample:\n\n```json\n{\n \"mcpServers\": {\n \"github\": {\n \"type\": \"http\",\n \"url\": \"https://api.githubcopilot.com/mcp/\",\n \"headers\": {\n \"Authorization\": \"Bearer ${GITHUB_TOKEN}\"\n }\n }\n }\n}\n```\n\nIf you want the least surprising OMP behavior, prefer `.omp/mcp.json` or `~/.omp/agent/mcp.json` and use explicit env/header values.\n\n## `disabledServers`\n\n`disabledServers` is mainly useful in the user config file (`~/.omp/agent/mcp.json`) when a server is discovered from some other source and you want OMP to ignore it without editing that other tool's config.\n\nExample:\n\n```json\n{\n \"$schema\": \"https://raw.githubusercontent.com/can1357/oh-my-pi/main/packages/coding-agent/src/config/mcp-schema.json\",\n \"disabledServers\": [\"github\", \"slack\"]\n}\n```\n\n## `/mcp add` vs editing JSON directly\n\nUse `/mcp add` when you want guided setup.\n\nUse direct JSON editing when:\n\n- you need a transport or auth option the wizard does not prompt for yet\n- you want to paste 
a server definition from another MCP client\n- you want schema-backed validation in your editor\n\nAfter editing, use:\n\n- `/mcp reload` to rediscover and reconnect servers in the current session\n- `/mcp list` to see which config file a server came from\n- `/mcp test <name>` to test a single server\n\n## Validation rules OMP enforces\n\nFrom `validateServerConfig()` in `packages/coding-agent/src/mcp/config.ts`:\n\n- `stdio` requires `command`\n- `http` and `sse` require `url`\n- a server cannot set both `command` and `url`\n- unknown `type` values are rejected\n\nPractical implications:\n\n- Omitting `type` means `stdio`\n- If you paste a remote server config and forget `\"type\": \"http\"`, OMP will treat it as `stdio` and complain that `command` is missing\n- `sse` remains valid for compatibility, but new hosted servers should usually be configured as `http`\n\n## Discovery and precedence\n\nOMP does not merge duplicate server definitions across files. Discovery providers are prioritized, and the higher-priority definition wins.\n\nIn practice:\n\n- prefer `.omp/mcp.json` or `~/.omp/agent/mcp.json` when you want an OMP-specific override\n- keep server names unique across tools when possible\n- use `disabledServers` in the user config when a third-party config keeps reintroducing a server you do not want\n\n## Troubleshooting\n\n### `Server \"name\": stdio server requires \"command\" field`\n\nYou probably omitted `type: \"http\"` on a remote server.\n\n### `Server \"name\": both \"command\" and \"url\" are set`\n\nPick one transport. OMP treats `command` as stdio and `url` as http/sse.\n\n### `/mcp add` worked but the server still does not connect\n\nThe JSON is valid, but the server may still be unreachable. 
Use `/mcp test <name>` and check whether:\n\n- the binary or Docker image exists\n- required environment variables are set\n- the remote URL is reachable\n- the OAuth or API token is valid\n\n### The server exists in another tool's config but not in OMP\n\nRun `/mcp list`. OMP discovers many third-party MCP files, but project-level loading can also be disabled via the `mcp.enableProjectConfig` setting.\n\n## References\n\n- MCP transport spec: https://modelcontextprotocol.io/specification/2025-03-26/basic/transports\n- Filesystem server package: https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem\n- GitHub MCP server: https://github.com/github/github-mcp-server\n- Slack MCP server docs: https://docs.slack.dev/ai/slack-mcp-server/\n",
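The three-step resolution order for `env` and `headers` values described under "Secrets and variable resolution" can be sketched roughly as follows. This is an illustrative approximation, not the actual loader; `resolveValue` is a hypothetical helper and the real logic in `packages/coding-agent/src/mcp/config.ts` may differ in detail:

```typescript
import { execSync } from "node:child_process";

// Rough sketch of the documented resolution order for env/header values.
function resolveValue(
  value: string,
  env: Record<string, string | undefined> = process.env,
): string {
  // 1. "!cmd" → run as a shell command and use trimmed stdout.
  if (value.startsWith("!")) {
    return execSync(value.slice(1), { encoding: "utf8" }).trim();
  }
  // 2. A bare environment variable name copies that variable's value.
  const fromEnv = env[value];
  if (fromEnv !== undefined) {
    return fromEnv;
  }
  // 3. Anything else (or an unset variable name) is used literally.
  return value;
}
```

So `resolveValue("GITHUB_PERSONAL_ACCESS_TOKEN")` copies the token from the current shell if it is set, while `resolveValue("true")` stays the literal string `"true"` unless a variable named `true` happens to exist.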
"mcp-protocol-transports.md": "# MCP Protocol and Transport Internals\n\nThis document describes how coding-agent implements MCP JSON-RPC messaging and how protocol concerns are split from transport concerns.\n\n## Scope\n\nCovers:\n\n- JSON-RPC request/response and notification flow\n- Request correlation and lifecycle for stdio and HTTP/SSE transports\n- Timeout and cancellation behavior\n- Error propagation and malformed payload handling\n- Transport selection boundaries (`stdio` vs `http`/`sse`)\n- Which reconnect/retry responsibilities are transport-level vs manager-level\n\nDoes not cover extension authoring UX or command UI.\n\n## Implementation files\n\n- [`src/mcp/types.ts`](../packages/coding-agent/src/mcp/types.ts)\n- [`src/mcp/transports/stdio.ts`](../packages/coding-agent/src/mcp/transports/stdio.ts)\n- [`src/mcp/transports/http.ts`](../packages/coding-agent/src/mcp/transports/http.ts)\n- [`src/mcp/transports/index.ts`](../packages/coding-agent/src/mcp/transports/index.ts)\n- [`src/mcp/json-rpc.ts`](../packages/coding-agent/src/mcp/json-rpc.ts)\n- [`src/mcp/client.ts`](../packages/coding-agent/src/mcp/client.ts)\n- [`src/mcp/manager.ts`](../packages/coding-agent/src/mcp/manager.ts)\n\n## Layer boundaries\n\n### Protocol layer (JSON-RPC + MCP methods)\n\n- Message shapes are defined in `types.ts` (`JsonRpcRequest`, `JsonRpcNotification`, `JsonRpcResponse`, `JsonRpcMessage`).\n- MCP client logic (`client.ts`) decides method order and session handshake:\n 1. `initialize` request\n 2. `notifications/initialized` notification\n 3. method calls like `tools/list`, `tools/call`\n\n### Transport layer (`MCPTransport`)\n\n`MCPTransport` abstracts delivery and lifecycle:\n\n- `request(method, params, options?) -> Promise<T>`\n- `notify(method, params?) 
-> Promise<void>`\n- `close()`\n- `connected`\n- optional callbacks: `onClose`, `onError`, `onNotification`\n\nTransport implementations own framing and I/O details:\n\n- `StdioTransport`: newline-delimited JSON over subprocess stdio\n- `HttpTransport`: JSON-RPC over HTTP POST, with optional SSE responses/listening\n\n### Important current caveat\n\nTransport callbacks (`onClose`, `onError`, `onNotification`) are implemented, but current `MCPClient`/`MCPManager` flows do not wire reconnection logic to these callbacks. Notifications are only consumed if caller registers handlers.\n\n## Transport selection\n\n`client.ts:createTransport()` chooses transport from config:\n\n- `type` omitted or `\"stdio\"` -> `createStdioTransport`\n- `\"http\"` or `\"sse\"` -> `createHttpTransport`\n\n`\"sse\"` is treated as an HTTP transport variant (same class), not a separate transport implementation.\n\n## JSON-RPC message flow and correlation\n\n## Request IDs\n\nEach transport generates per-request IDs (`Math.random` + timestamp string). 
IDs are transport-local correlation tokens.\n\n## Stdio correlation path\n\n- Outbound request is serialized as one JSON object + `\\n`.\n- `#pendingRequests: Map<id, {resolve,reject}>` stores in-flight requests.\n- Read loop parses JSONL from stdout and calls `#handleMessage`.\n- If inbound message has matching `id`, request resolves/rejects.\n- If inbound message has `method` and no `id`, treated as notification and sent to `onNotification`.\n\nUnknown IDs are ignored (no rejection, no error callback).\n\n## HTTP correlation path\n\n- Outbound request is HTTP `POST` with JSON body and generated `id`.\n- Non-SSE response path: parse one JSON-RPC response and return `result`/throw on `error`.\n- SSE response path (`Content-Type: text/event-stream`): stream events, return first message whose `id` matches expected request ID and has `result` or `error`.\n- SSE messages with `method` and no `id` are treated as notifications.\n\nIf SSE stream ends before matching response, request fails with `No response received for request ID ...`.\n\n## Notifications\n\nClient emits JSON-RPC notifications via `transport.notify(...)`.\n\n- Stdio: writes notification frame to stdin (`jsonrpc`, `method`, optional `params`) plus newline.\n- HTTP: sends POST body without `id`; success accepts `2xx` or `202 Accepted`.\n\nServer-initiated notifications are only surfaced through transport `onNotification`; there is no default global subscriber in manager/client.\n\n## Stdio transport internals\n\n## Lifecycle and state transitions\n\n- Initial: `connected=false`, `process=null`, pending map empty\n- `connect()`:\n - spawn subprocess with configured command/args/env/cwd\n - mark connected\n - start stdout read loop (`readJsonl`)\n - start stderr loop (read/discard; currently silent)\n- `close()`:\n - mark disconnected\n - reject all pending requests (`Transport closed`)\n - kill subprocess\n - await read loop shutdown\n - emit `onClose`\n\nIf read loop exits unexpectedly, `finally` triggers 
`#handleClose()` which performs the same pending-request rejection and close callback.\n\n### Timeout and cancellation\n\nPer request:\n\n- timeout defaults to `config.timeout ?? 30000`\n- optional `AbortSignal` from caller\n- abort and timeout both reject the pending promise and clean up the map entry\n\nCancellation is local only: transport does not send protocol-level cancellation notification to the server.\n\n### Malformed payload handling\n\nIn read loop:\n\n- each parsed JSONL line is passed to `#handleMessage` in `try/catch`\n- exceptions from malformed/invalid messages are dropped (`Skip malformed lines` comment)\n- loop continues, so one bad message does not kill the connection\n\nIf the underlying stream parser throws, `onError` is invoked (when still connected), then connection closes.\n\n### Disconnect/failure behavior\n\nWhen process exits or stream closes:\n\n- all in-flight requests are rejected with `Transport closed`\n- no automatic restart or reconnect\n- higher layers must reconnect by creating a new transport\n\n### Backpressure/streaming notes\n\n- Outbound writes use `stdin.write()` + `flush()` without awaiting drain semantics.\n- There is no explicit queue or high-watermark management in transport.\n- Inbound processing is stream-driven (`for await` over `readJsonl`), one parsed message at a time.\n\n## HTTP/SSE transport internals\n\n### Lifecycle and connection semantics\n\nHTTP transport has logical connection state, but request path is stateless per HTTP call:\n\n- `connect()` sets `connected=true` (no socket/session handshake)\n- optional server session tracking via `Mcp-Session-Id` header\n- `close()` optionally sends `DELETE` with `Mcp-Session-Id`, aborts SSE listener, emits `onClose`\n\nSo `connected` means \"transport usable\", not \"persistent stream established\".\n\n### Session header behavior\n\n- On POST response, if `Mcp-Session-Id` header is present, transport stores it.\n- Subsequent requests/notifications include `Mcp-Session-Id`.\n- 
`close()` tries to terminate server session with HTTP DELETE; termination failures are ignored.\n\n### Timeout and cancellation\n\nFor both `request()` and `notify()`:\n\n- timeout uses `AbortController` (`config.timeout ?? 30000`)\n- external signal, if provided, is merged via `AbortSignal.any([...])`\n- AbortError handling distinguishes caller abort vs timeout\n\nErrors thrown:\n\n- timeout: `Request timeout after ...ms` (or `SSE response timeout ...`, `Notify timeout ...`)\n- caller abort: original AbortError is rethrown when external signal is already aborted\n\n### HTTP error propagation\n\nOn non-OK response:\n\n- response text is included in thrown error (`HTTP <status>: <text>`)\n- if present, auth hints from `WWW-Authenticate` and `Mcp-Auth-Server` are appended\n\nOn JSON-RPC error object:\n\n- throws `MCP error <code>: <message>`\n\nMalformed JSON body (`response.json()` failure) propagates as parse exception.\n\n### SSE behavior and modes\n\nTwo SSE paths exist:\n\n1. **Per-request SSE response** (`#parseSSEResponse`)\n - used when POST response content type is `text/event-stream`\n - consumes stream until matching response id found\n - can process interleaved notifications during same stream\n\n2. 
**Background SSE listener** (`startSSEListener()`)\n - optional GET listener for server-initiated notifications\n - currently not automatically started by MCP manager/client\n - if GET returns `405`, listener silently disables itself (server does not support this mode)\n\n### Malformed payload and disconnect handling\n\nSSE JSON parsing errors bubble out of `readSseJson` and reject request/listener.\n\n- Request SSE parse errors reject the active request.\n- Background listener errors trigger `onError` (except AbortError).\n- No auto-reconnect for background listener.\n\n## `json-rpc.ts` utility vs transport abstraction\n\n`src/mcp/json-rpc.ts` provides `callMCP()` and `parseSSE()` helpers for direct HTTP MCP calls (used by Exa integration), not the `MCPTransport` abstraction used by `MCPClient`/`MCPManager`.\n\nNotable differences from `HttpTransport`:\n\n- parses entire response text first, then extracts first `data: ` line (`parseSSE`), with JSON fallback\n- no request timeout management, no abort API, no session-id handling, no transport lifecycle\n- returns raw JSON-RPC envelope object\n\nThis path is lightweight but less robust than the full transport implementation.\n\n## Retry/reconnect responsibilities\n\n### Transport-level\n\nCurrent transport implementations do **not**:\n\n- retry failed requests\n- reconnect after stdio process exit\n- reconnect SSE listeners\n- resend in-flight requests after disconnect\n\nThey fail fast and propagate errors.\n\n### Manager/client-level\n\n`MCPManager` handles discovery/initial connection orchestration and can reconnect only by running connect flows again (`connectToServer`/`discoverAndConnect` paths). 
It does not auto-heal an already connected transport on runtime failure callbacks.\n\n`MCPManager` does have startup fallback behavior for slow servers (deferred tools from cache), but that is tool availability fallback, not transport retry.\n\n## Failure scenarios summary\n\n- **Malformed stdio message line**: dropped; stream continues.\n- **Stdio stream/process ends**: transport closes; pending requests rejected as `Transport closed`.\n- **HTTP non-2xx**: request/notify throws HTTP error.\n- **Invalid JSON response**: parse exception propagated.\n- **SSE ends without matching id**: request fails with `No response received for request ID ...`.\n- **Timeout**: transport-specific timeout error.\n- **Caller abort**: AbortError/reason propagated from caller signal.\n\n## Practical boundary rule\n\nIf the concern is message shape, id correlation, or MCP method ordering, it belongs to protocol/client logic.\n\nIf the concern is framing (JSONL vs HTTP/SSE), stream parsing, fetch/spawn lifecycle, timeout clocks, or connection teardown, it belongs to transport implementation.",
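The id-correlation mechanics described above (pending-request map, per-request timeout, reject-all on close, silently ignored unknown ids) can be sketched as follows. This is a hypothetical illustration, not the real transport code; `CorrelationTable` and its method names are invented here, and the notification path is not modeled.

```typescript
// Sketch of the stdio correlation pattern: a pending-request map keyed by
// request id, with a per-request timeout and local-only cancellation.
type Pending = {
  resolve: (value: unknown) => void;
  reject: (error: Error) => void;
};

class CorrelationTable {
  #pending = new Map<number, Pending>();
  #nextId = 0;

  // Register an outbound request; returns the id to serialize and a promise
  // that settles when a matching inbound message arrives (or the timeout fires).
  register(timeoutMs = 30000): { id: number; promise: Promise<unknown> } {
    const id = this.#nextId++;
    const promise = new Promise<unknown>((resolve, reject) => {
      const timer = setTimeout(() => {
        this.#pending.delete(id);
        reject(new Error(`Request timeout after ${timeoutMs}ms`));
      }, timeoutMs);
      this.#pending.set(id, {
        resolve: (value) => {
          clearTimeout(timer);
          this.#pending.delete(id);
          resolve(value);
        },
        reject: (error) => {
          clearTimeout(timer);
          this.#pending.delete(id);
          reject(error);
        },
      });
    });
    return { id, promise };
  }

  // An inbound message with a matching id resolves/rejects the pending promise;
  // unknown ids are silently ignored, mirroring the transport behavior above.
  handleMessage(msg: { id?: number; result?: unknown; error?: { code: number; message: string } }): void {
    if (msg.id === undefined) return; // notification path, not modeled here
    const entry = this.#pending.get(msg.id);
    if (!entry) return;
    if (msg.error) entry.reject(new Error(`MCP error ${msg.error.code}: ${msg.error.message}`));
    else entry.resolve(msg.result);
  }

  // On close, every in-flight request is rejected with `Transport closed`.
  close(): void {
    for (const { reject } of this.#pending.values()) reject(new Error("Transport closed"));
    this.#pending.clear();
  }
}
```

The key design property this captures: resolve/reject/timeout all converge on deleting the map entry, so a request can settle at most once regardless of which path fires first.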
"mcp-runtime-lifecycle.md": "# MCP runtime lifecycle\n\nThis document describes how MCP servers are discovered, connected, exposed as tools, refreshed, and torn down in the coding-agent runtime.\n\n## Lifecycle at a glance\n\n1. **SDK startup** calls `discoverAndLoadMCPTools()` (unless MCP is disabled).\n2. **Discovery** (`loadAllMCPConfigs`) resolves MCP server configs from capability sources, filters disabled/project/Exa entries, and preserves source metadata.\n3. **Manager connect phase** (`MCPManager.connectServers`) starts per-server connect + `tools/list` in parallel.\n4. **Fast startup gate** waits up to 250ms, then may return:\n - fully loaded `MCPTool`s,\n - failures per server,\n - or cached `DeferredMCPTool`s for still-pending servers.\n5. **SDK wiring** merges MCP tools into runtime tool registry for the session.\n6. **Live session** can refresh MCP tools via `/mcp` flows (`disconnectAll` + rediscover + `session.refreshMCPTools`).\n7. **Teardown** happens when callers invoke `disconnectServer`/`disconnectAll`; manager also clears MCP tool registrations for disconnected servers.\n\n## Discovery and load phase\n\n### Entry path from SDK\n\n`createAgentSession()` in `src/sdk.ts` performs MCP startup when `enableMCP` is true (default):\n\n- calls `discoverAndLoadMCPTools(cwd, { ... 
})`,\n- passes `authStorage`, cache storage, and `mcp.enableProjectConfig` setting,\n- always sets `filterExa: true`,\n- logs per-server load/connect errors,\n- stores returned manager in `toolSession.mcpManager` and session result.\n\nIf `enableMCP` is false, MCP discovery is skipped entirely.\n\n### Config discovery and filtering\n\n`loadAllMCPConfigs()` (`src/mcp/config.ts`) loads canonical MCP server items through capability discovery, then converts to legacy `MCPServerConfig`.\n\nFiltering behavior:\n\n- `enableProjectConfig: false` removes project-level entries (`_source.level === \"project\"`).\n- `enabled: false` servers are skipped before connect attempts.\n- Exa servers are filtered out by default and API keys are extracted for native Exa tool integration.\n\nResult includes both `configs` and `sources` (metadata used later for provider labeling).\n\n### Discovery-level failure behavior\n\n`discoverAndLoadMCPTools()` distinguishes two failure classes:\n\n- **Discovery hard failure** (exception from `manager.discoverAndConnect`, typically from config discovery): returns an empty tool set and one synthetic error `{ path: \".mcp.json\", error }`.\n- **Per-server runtime/connect failure**: manager returns partial success with `errors` map; other servers continue.\n\nSo startup does not fail the whole agent session when individual MCP servers fail.\n\n## Manager state model\n\n`MCPManager` tracks runtime lifecycle with separate registries:\n\n- `#connections: Map<string, MCPServerConnection>` — fully connected servers.\n- `#pendingConnections: Map<string, Promise<MCPServerConnection>>` — handshake in progress.\n- `#pendingToolLoads: Map<string, Promise<{ connection, serverTools }>>` — connected but tools still loading.\n- `#tools: CustomTool[]` — current MCP tool view exposed to callers.\n- `#sources: Map<string, SourceMeta>` — provider/source metadata even before connect completes.\n\n`getConnectionStatus(name)` derives status from these maps:\n\n- 
`connected` if in `#connections`,\n- `connecting` if pending connect or pending tool load,\n- `disconnected` otherwise.\n\n## Connection establishment and startup timing\n\n### Per-server connect pipeline\n\nFor each discovered server in `connectServers()`:\n\n1. store/update source metadata,\n2. skip if already connected/pending,\n3. validate transport fields (`validateServerConfig`),\n4. resolve auth/shell substitutions (`#resolveAuthConfig`),\n5. call `connectToServer(name, resolvedConfig)`,\n6. call `listTools(connection)`,\n7. cache tool definitions (`MCPToolCache.set`) best-effort.\n\n`connectToServer()` behavior (`src/mcp/client.ts`):\n\n- creates stdio or HTTP/SSE transport,\n- performs MCP `initialize` + `notifications/initialized`,\n- uses timeout (`config.timeout` or 30s default),\n- closes transport on init failure.\n\n### Fast startup gate + deferred fallback\n\n`connectServers()` waits on a race between:\n\n- all connect/tool-load tasks settled, and\n- `STARTUP_TIMEOUT_MS = 250`.\n\nAfter 250ms:\n\n- fulfilled tasks become live `MCPTool`s,\n- rejected tasks produce per-server errors,\n- still-pending tasks:\n - use cached tool definitions if available (`MCPToolCache.get`) to create `DeferredMCPTool`s,\n - otherwise block until those pending tasks settle.\n\nThis is a hybrid startup model: fast return when cache is available, correctness wait when cache is not.\n\n### Background completion behavior\n\nEach pending `toolsPromise` also has a background continuation that eventually:\n\n- replaces that server’s tool slice in manager state via `#replaceServerTools`,\n- writes cache,\n- logs late failures only after startup (`allowBackgroundLogging`).\n\n## Tool exposure and live-session availability\n\n### Startup registration\n\n`discoverAndLoadMCPTools()` converts manager tools into `LoadedCustomTool[]` and decorates paths (`mcp:<server> via <providerName>` when known).\n\n`createAgentSession()` then pushes these tools into `customTools`, which are wrapped 
and added to the runtime tool registry with names like `mcp_<server>_<tool>`.\n\n### Tool calls\n\n- `MCPTool` calls tools through an already connected `MCPServerConnection`.\n- `DeferredMCPTool` waits for `waitForConnection(server)` before calling; this allows cached tools to exist before connection is ready.\n\nBoth return structured tool output and convert transport/tool errors into `MCP error: ...` tool content (abort remains abort).\n\n## Refresh/reload paths (startup vs live reload)\n\n### Initial startup path\n\n- one-time discovery/load in `sdk.ts`,\n- tools are registered in initial session tool registry.\n\n### Interactive reload path\n\n`/mcp reload` path (`src/modes/controllers/mcp-command-controller.ts`) does:\n\n1. `mcpManager.disconnectAll()`,\n2. `mcpManager.discoverAndConnect()`,\n3. `session.refreshMCPTools(mcpManager.getTools())`.\n\n`session.refreshMCPTools()` (`src/session/agent-session.ts`) removes all `mcp_` tools, re-wraps latest MCP tools, and re-activates tool set so MCP changes apply without restarting session.\n\nThere is also a follow-up path for late connections: after waiting for a specific server, if status becomes `connected`, it re-runs `session.refreshMCPTools(...)` so newly available tools are rebound in-session.\n\n## Health, reconnect, and partial failure behavior\n\nCurrent runtime behavior is intentionally minimal:\n\n- **No autonomous health monitor** in manager/client.\n- **No automatic reconnect loop** when a transport drops.\n- Manager does not subscribe to transport `onClose`/`onError`; status is registry-driven.\n- Reconnect is explicit: reload flow or direct `connectServers()` invocation.\n\nOperationally:\n\n- one server failing does not remove tools from healthy servers,\n- connect/list failures are isolated per server,\n- tool cache and background updates are best-effort (warnings/errors logged, no hard stop).\n\n## Teardown semantics\n\n### Server-level teardown\n\n`disconnectServer(name)`:\n\n- removes pending 
entries/source metadata,\n- closes transport if connected,\n- removes that server’s `mcp_` tools from manager state.\n\n### Global teardown\n\n`disconnectAll()`:\n\n- closes all active transports with `Promise.allSettled`,\n- clears pending maps, sources, connections, and manager tool list.\n\nIn current wiring, explicit teardown is used in MCP command flows (for reload/remove/disable). There is no separate automatic manager disposal hook in the startup path itself; callers are responsible for invoking manager disconnect methods when they need deterministic MCP shutdown.\n\n## Failure modes and guarantees\n\n| Scenario | Behavior | Hard fail vs best-effort |\n| --- | --- | --- |\n| Discovery throws (capability/config load path) | Loader returns empty tools + synthetic `.mcp.json` error | Best-effort session startup |\n| Invalid server config | Server skipped with validation error entry | Best-effort per server |\n| Connect timeout/init failure | Server error recorded; others continue | Best-effort per server |\n| `tools/list` still pending at startup with cache hit | Deferred tools returned immediately | Best-effort fast startup |\n| `tools/list` still pending at startup without cache | Startup waits for pending to settle | Hard wait for correctness |\n| Late background tool-load failure | Logged after startup gate | Best-effort logging |\n| Runtime dropped transport | No automatic reconnect; future calls fail until reconnect/reload | Best-effort recovery via manual action |\n\n## Public API surface\n\n`src/mcp/index.ts` re-exports loader/manager/client APIs for external callers. 
`src/sdk.ts` exposes `discoverMCPServers()` as a convenience wrapper returning the same loader result shape.\n\n## Implementation files\n\n- [`src/mcp/loader.ts`](../packages/coding-agent/src/mcp/loader.ts) — loader facade, discovery error normalization, `LoadedCustomTool` conversion.\n- [`src/mcp/manager.ts`](../packages/coding-agent/src/mcp/manager.ts) — lifecycle state registries, parallel connect/list flow, refresh/disconnect.\n- [`src/mcp/client.ts`](../packages/coding-agent/src/mcp/client.ts) — transport setup, initialize handshake, list/call/disconnect.\n- [`src/mcp/index.ts`](../packages/coding-agent/src/mcp/index.ts) — MCP module API exports.\n- [`src/sdk.ts`](../packages/coding-agent/src/sdk.ts) — startup wiring into session/tool registry.\n- [`src/mcp/config.ts`](../packages/coding-agent/src/mcp/config.ts) — config discovery/filtering/validation used by manager.\n- [`src/mcp/tool-bridge.ts`](../packages/coding-agent/src/mcp/tool-bridge.ts) — `MCPTool` and `DeferredMCPTool` runtime behavior.\n- [`src/session/agent-session.ts`](../packages/coding-agent/src/session/agent-session.ts) — `refreshMCPTools` live rebinding.\n- [`src/modes/controllers/mcp-command-controller.ts`](../packages/coding-agent/src/modes/controllers/mcp-command-controller.ts) — interactive reload/reconnect flows.\n- [`src/task/executor.ts`](../packages/coding-agent/src/task/executor.ts) — subagent MCP proxying via parent manager connections.\n",
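The fast-startup gate described above (race all connect/tool-load tasks against a 250ms deadline, then fall back to cached tool definitions for still-pending servers) can be sketched as follows. This is an illustrative model, not the real `MCPManager`: `startupGate`, the `ToolDefs` shape, and the plain `Map` cache are all invented stand-ins, and per-server error recording is elided.

```typescript
// Hybrid startup model: fast return when cache is available,
// correctness wait when cache is not.
const STARTUP_TIMEOUT_MS = 250;

type ToolDefs = string[];
type GateResult = Map<string, { tools: ToolDefs; deferred: boolean }>;

async function startupGate(
  tasks: Map<string, Promise<ToolDefs>>,
  cache: Map<string, ToolDefs>, // stand-in for MCPToolCache lookups
): Promise<GateResult> {
  const fulfilled = new Map<string, ToolDefs>();
  // Mirror each task into a side table; swallow rejections here (the real
  // manager records them as per-server errors instead).
  const watched = [...tasks].map(([name, task]) =>
    task.then((tools) => void fulfilled.set(name, tools), () => {}),
  );
  const deadline = new Promise<void>((resolve) => setTimeout(resolve, STARTUP_TIMEOUT_MS));
  await Promise.race([Promise.all(watched), deadline]);

  const result: GateResult = new Map();
  const mustWait: Promise<void>[] = [];
  for (const [name, task] of tasks) {
    const live = fulfilled.get(name);
    if (live) {
      // Settled before the gate: expose live tools.
      result.set(name, { tools: live, deferred: false });
    } else if (cache.has(name)) {
      // Cache hit: expose deferred tools now; real connect finishes in background.
      result.set(name, { tools: cache.get(name)!, deferred: true });
    } else {
      // No cache: block until this server settles.
      mustWait.push(task.then((tools) => void result.set(name, { tools, deferred: false }), () => {}));
    }
  }
  await Promise.all(mustWait);
  return result;
}
```

A server that connects within the 250ms window comes back as live tools; a slow server with a cache entry comes back immediately as deferred tools; a slow server without a cache entry delays the gate until it settles.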
-
"mcp-server-tool-authoring.md": "# MCP server and tool authoring\n\nThis document explains how MCP server definitions become callable `mcp_*` tools in coding-agent, and what operators should expect when configs are invalid, duplicated, disabled, or auth-gated.\n\n## Architecture at a glance\n\n```text\nConfig sources (.omp/.claude/.cursor/.vscode/mcp.json, mcp.json, etc.)\n -> discovery providers normalize to canonical MCPServer\n -> capability loader dedupes by server name (higher provider priority wins)\n -> loadAllMCPConfigs converts to MCPServerConfig + skips enabled:false\n -> MCPManager connects/listTools (with auth/header/env resolution)\n -> MCPTool/DeferredMCPTool bridge exposes tools as mcp_<server>_<tool>\n -> AgentSession.refreshMCPTools replaces live MCP tools immediately\n```\n\n## 1) Server config model and validation\n\n`src/mcp/types.ts` defines the authoring shape used by MCP config writers and runtime:\n\n- `stdio` (default when `type` missing): requires `command`, optional `args`, `env`, `cwd`\n- `http`: requires `url`, optional `headers`\n- `sse`: requires `url`, optional `headers` (kept for compatibility)\n- shared fields: `enabled`, `timeout`, `auth`\n\n`validateServerConfig()` (`src/mcp/config.ts`) enforces transport basics:\n\n- rejects configs that set both `command` and `url`\n- requires `command` for stdio\n- requires `url` for http/sse\n- rejects unknown `type`\n\n`config-writer.ts` applies this validation for add/update operations and also validates server names:\n\n- non-empty\n- max 100 chars\n- only `[a-zA-Z0-9_.-]`\n\n### Transport pitfalls\n\n- `type` omitted means stdio. 
If you intended HTTP/SSE but omitted `type`, `command` becomes mandatory.\n- `sse` is still accepted but treated as HTTP transport internally (`createHttpTransport`).\n- Validation is structural, not reachability: a syntactically valid URL can still fail at connect time.\n\n## 2) Discovery, normalization, and precedence\n\n### Capability-based discovery\n\n`loadAllMCPConfigs()` (`src/mcp/config.ts`) loads canonical `MCPServer` items via `loadCapability(mcpCapability.id)`.\n\nThe capability layer (`src/capability/index.ts`) then:\n\n1. loads providers in priority order\n2. dedupes by `server.name` (first win = highest priority)\n3. validates deduped items\n\nResult: duplicate server names across sources are not merged. One definition wins; lower-priority duplicates are shadowed.\n\n### `.mcp.json` and related files\n\nThe dedicated fallback provider in `src/discovery/mcp-json.ts` reads project-root `mcp.json` and `.mcp.json` (low priority).\n\nIn practice MCP servers also come from higher-priority providers (for example native `.omp/...` and tool-specific config dirs). Authoring guidance:\n\n- Prefer `.omp/mcp.json` (project) or `~/.omp/mcp.json` (user) for explicit control.\n- Use root `mcp.json` / `.mcp.json` when you need fallback compatibility.\n- Reusing the same server name in multiple sources causes precedence shadowing, not merge.\n\n### Normalization behavior\n\n`convertToLegacyConfig()` (`src/mcp/config.ts`) maps canonical `MCPServer` to runtime `MCPServerConfig`.\n\nKey behavior:\n\n- transport inferred as `server.transport ?? (command ? \"stdio\" : url ? 
\"http\" : \"stdio\")`\n- disabled servers (`enabled === false`) are dropped before connection\n- optional fields are preserved when present\n\n### Environment expansion during discovery\n\n`mcp-json.ts` expands env placeholders in string fields with `expandEnvVarsDeep()`:\n\n- supports `${VAR}` and `${VAR:-default}`\n- unresolved values remain literal `${VAR}` strings\n\n`mcp-json.ts` also performs runtime type checks for user JSON and logs warnings for invalid `enabled`/`timeout` values instead of hard-failing the whole file.\n\n## 3) Auth and runtime value resolution\n\n`MCPManager.prepareConfig()`/`#resolveAuthConfig()` (`src/mcp/manager.ts`) is the final pre-connect pass.\n\n### OAuth credential injection\n\nIf config has:\n\n```ts\nauth: { type: \"oauth\", credentialId: \"...\" }\n```\n\nand credential exists in auth storage:\n\n- `http`/`sse`: injects `Authorization: Bearer <access_token>` header\n- `stdio`: injects `OAUTH_ACCESS_TOKEN` env var\n\nIf credential lookup fails, manager logs a warning and continues with unresolved auth.\n\n### Header/env value resolution\n\nBefore connect, manager resolves each header/env value via `resolveConfigValue()` (`src/config/resolve-config-value.ts`):\n\n- value starting with `!` => execute shell command, use trimmed stdout (cached)\n- otherwise, treat value as environment variable name first (`process.env[name]`), fallback to literal value\n- unresolved command/env values are omitted from final headers/env map\n\nOperational caveat: this means a mistyped secret command/env key can silently remove that header/env entry, producing downstream 401/403 or server startup failures.\n\n## 4) Tool bridge: MCP -> agent-callable tools\n\n`src/mcp/tool-bridge.ts` converts MCP tool definitions into `CustomTool`s.\n\n### Naming and collision domain\n\nTool names are generated as:\n\n```text\nmcp_<sanitized_server_name>_<sanitized_tool_name>\n```\n\nRules:\n\n- lowercases\n- non-`[a-z_]` chars become `_`\n- repeated underscores 
collapse\n- redundant `<server>_` prefix in tool name is stripped once\n\nThis avoids many collisions, but not all. Different raw names can still sanitize to the same identifier (for example `my-server` and `my.server` both sanitize similarly), and registry insertion is last-write-wins.\n\n### Schema mapping\n\n`convertSchema()` keeps MCP JSON Schema mostly as-is but patches object schemas missing `properties` with `{}` for provider compatibility.\n\n### Execution mapping\n\n`MCPTool.execute()` / `DeferredMCPTool.execute()`:\n\n- calls MCP `tools/call`\n- flattens MCP content into displayable text\n- returns structured details (`serverName`, `mcpToolName`, provider metadata)\n- maps server-reported `isError` to `Error: ...` text result\n- maps thrown transport/runtime failures to `MCP error: ...`\n- preserves abort semantics by translating AbortError into `ToolAbortError`\n\n## 5) Operator lifecycle: add/edit/remove and live updates\n\nInteractive mode exposes `/mcp` in `src/modes/controllers/mcp-command-controller.ts`.\n\nSupported operations:\n\n- `add` (wizard or quick-add)\n- `remove` / `rm`\n- `enable` / `disable`\n- `test`\n- `reauth` / `unauth`\n- `reload`\n\nConfig writes are atomic (`writeMCPConfigFile`: temp file + rename).\n\nAfter changes, controller calls `#reloadMCP()`:\n\n1. `mcpManager.disconnectAll()`\n2. `mcpManager.discoverAndConnect()`\n3. 
`session.refreshMCPTools(mcpManager.getTools())`\n\n`refreshMCPTools()` replaces all `mcp_` registry entries and immediately re-activates the latest MCP tool set, so changes take effect without restarting the session.\n\n### Mode differences\n\n- **Interactive/TUI mode**: `/mcp` gives in-app UX (wizard, OAuth flow, connection status text, immediate runtime rebinding).\n- **SDK/headless integration**: `discoverAndLoadMCPTools()` (`src/mcp/loader.ts`) returns loaded tools + per-server errors; no `/mcp` command UX.\n\n## 6) User-visible error surfaces\n\nCommon error strings users/operators see:\n\n- add/update validation failures:\n - `Invalid server config: ...`\n - `Server \"<name>\" already exists in <path>`\n- quick-add argument issues:\n - `Use either --url or -- <command...>, not both.`\n - `--token requires --url (HTTP/SSE transport).`\n- connect/test failures:\n - `Failed to connect to \"<name>\": <message>`\n - timeout help text suggests increasing timeout\n - auth help text for `401/403`\n- auth/OAuth flows:\n - `Authentication required ... OAuth endpoints could not be discovered`\n - `OAuth flow timed out. Please try again.`\n - `OAuth authentication failed: ...`\n- disabled server usage:\n - `Server \"<name>\" is disabled. Run /mcp enable <name> first.`\n\nBad source JSON in discovery is generally handled as warnings/logs; config-writer paths throw explicit errors.\n\n## 7) Practical authoring guidance\n\nFor robust MCP authoring in this codebase:\n\n1. Keep server names globally unique across all MCP-capable config sources.\n2. Prefer alphanumeric/underscore names to avoid sanitized-name collisions in generated `mcp_*` tool names.\n3. Use explicit `type` to avoid accidental stdio defaults.\n4. Treat `enabled: false` as hard-off: server is omitted from runtime connect set.\n5. For OAuth configs, store a valid `credentialId`; otherwise auth injection is skipped.\n6. 
If using command-based secret resolution (`!cmd`), verify command output is stable and non-empty.\n\n## Implementation files\n\n- [`src/mcp/types.ts`](../packages/coding-agent/src/mcp/types.ts)\n- [`src/mcp/config.ts`](../packages/coding-agent/src/mcp/config.ts)\n- [`src/mcp/config-writer.ts`](../packages/coding-agent/src/mcp/config-writer.ts)\n- [`src/mcp/tool-bridge.ts`](../packages/coding-agent/src/mcp/tool-bridge.ts)\n- [`src/discovery/mcp-json.ts`](../packages/coding-agent/src/discovery/mcp-json.ts)\n- [`src/modes/controllers/mcp-command-controller.ts`](../packages/coding-agent/src/modes/controllers/mcp-command-controller.ts)\n- [`src/mcp/manager.ts`](../packages/coding-agent/src/mcp/manager.ts)\n- [`src/capability/index.ts`](../packages/coding-agent/src/capability/index.ts)\n- [`src/config/resolve-config-value.ts`](../packages/coding-agent/src/config/resolve-config-value.ts)\n- [`src/mcp/loader.ts`](../packages/coding-agent/src/mcp/loader.ts)\n",
+
"mcp-server-tool-authoring.md": "# MCP server and tool authoring\n\nThis document explains how MCP server definitions become callable `mcp_*` tools in coding-agent, and what operators should expect when configs are invalid, duplicated, disabled, or auth-gated.\n\n## Architecture at a glance\n\n```text\nConfig sources (.omp/.claude/.cursor/.vscode/mcp.json, mcp.json, etc.)\n -> discovery providers normalize to canonical MCPServer\n -> capability loader dedupes by server name (higher provider priority wins)\n -> loadAllMCPConfigs converts to MCPServerConfig + skips enabled:false\n -> MCPManager connects/listTools (with auth/header/env resolution)\n -> MCPTool/DeferredMCPTool bridge exposes tools as mcp_<server>_<tool>\n -> AgentSession.refreshMCPTools replaces live MCP tools immediately\n```\n\n## 1) Server config model and validation\n\n`src/mcp/types.ts` defines the authoring shape used by MCP config writers and runtime:\n\n- `stdio` (default when `type` missing): requires `command`, optional `args`, `env`, `cwd`\n- `http`: requires `url`, optional `headers`\n- `sse`: requires `url`, optional `headers` (kept for compatibility)\n- shared fields: `enabled`, `timeout`, `auth`\n\n`validateServerConfig()` (`src/mcp/config.ts`) enforces transport basics:\n\n- rejects configs that set both `command` and `url`\n- requires `command` for stdio\n- requires `url` for http/sse\n- rejects unknown `type`\n\n`config-writer.ts` applies this validation for add/update operations and also validates server names:\n\n- non-empty\n- max 100 chars\n- only `[a-zA-Z0-9_.-]`\n\n### Transport pitfalls\n\n- `type` omitted means stdio. 
If you intended HTTP/SSE but omitted `type`, `command` becomes mandatory.\n- `sse` is still accepted but treated as HTTP transport internally (`createHttpTransport`).\n- Validation is structural, not reachability: a syntactically valid URL can still fail at connect time.\n\n## 2) Discovery, normalization, and precedence\n\n### Capability-based discovery\n\n`loadAllMCPConfigs()` (`src/mcp/config.ts`) loads canonical `MCPServer` items via `loadCapability(mcpCapability.id)`.\n\nThe capability layer (`src/capability/index.ts`) then:\n\n1. loads providers in priority order\n2. dedupes by `server.name` (first win = highest priority)\n3. validates deduped items\n\nResult: duplicate server names across sources are not merged. One definition wins; lower-priority duplicates are shadowed.\n\n### `.mcp.json` and related files\n\nThe dedicated fallback provider in `src/discovery/mcp-json.ts` reads project-root `mcp.json` and `.mcp.json` (low priority).\n\nIn practice MCP servers also come from higher-priority providers (for example native `.omp/...` and tool-specific config dirs). Authoring guidance:\n\n- Prefer `.omp/mcp.json` (project) or `~/.omp/agent/mcp.json` (user) for explicit control.\n- Use root `mcp.json` / `.mcp.json` when you need fallback compatibility.\n- Reusing the same server name in multiple sources causes precedence shadowing, not merge.\n\n### Normalization behavior\n\n`convertToLegacyConfig()` (`src/mcp/config.ts`) maps canonical `MCPServer` to runtime `MCPServerConfig`.\n\nKey behavior:\n\n- transport inferred as `server.transport ?? (command ? \"stdio\" : url ? 
\"http\" : \"stdio\")`\n- disabled servers (`enabled === false`) are dropped before connection\n- optional fields are preserved when present\n\n### Environment expansion during discovery\n\n`mcp-json.ts` expands env placeholders in string fields with `expandEnvVarsDeep()`:\n\n- supports `${VAR}` and `${VAR:-default}`\n- unresolved values remain literal `${VAR}` strings\n\n`mcp-json.ts` also performs runtime type checks for user JSON and logs warnings for invalid `enabled`/`timeout` values instead of hard-failing the whole file.\n\n## 3) Auth and runtime value resolution\n\n`MCPManager.prepareConfig()`/`#resolveAuthConfig()` (`src/mcp/manager.ts`) is the final pre-connect pass.\n\n### OAuth credential injection\n\nIf config has:\n\n```ts\nauth: { type: \"oauth\", credentialId: \"...\" }\n```\n\nand credential exists in auth storage:\n\n- `http`/`sse`: injects `Authorization: Bearer <access_token>` header\n- `stdio`: injects `OAUTH_ACCESS_TOKEN` env var\n\nIf credential lookup fails, manager logs a warning and continues with unresolved auth.\n\n### Header/env value resolution\n\nBefore connect, manager resolves each header/env value via `resolveConfigValue()` (`src/config/resolve-config-value.ts`):\n\n- value starting with `!` => execute shell command, use trimmed stdout (cached)\n- otherwise, treat value as environment variable name first (`process.env[name]`), fallback to literal value\n- unresolved command/env values are omitted from final headers/env map\n\nOperational caveat: this means a mistyped secret command/env key can silently remove that header/env entry, producing downstream 401/403 or server startup failures.\n\n## 4) Tool bridge: MCP -> agent-callable tools\n\n`src/mcp/tool-bridge.ts` converts MCP tool definitions into `CustomTool`s.\n\n### Naming and collision domain\n\nTool names are generated as:\n\n```text\nmcp_<sanitized_server_name>_<sanitized_tool_name>\n```\n\nRules:\n\n- lowercases\n- non-`[a-z_]` chars become `_`\n- repeated underscores 
collapse\n- redundant `<server>_` prefix in tool name is stripped once\n\nThis avoids many collisions, but not all. Different raw names can still sanitize to the same identifier (for example `my-server` and `my.server` both sanitize similarly), and registry insertion is last-write-wins.\n\n### Schema mapping\n\n`convertSchema()` keeps MCP JSON Schema mostly as-is but patches object schemas missing `properties` with `{}` for provider compatibility.\n\n### Execution mapping\n\n`MCPTool.execute()` / `DeferredMCPTool.execute()`:\n\n- calls MCP `tools/call`\n- flattens MCP content into displayable text\n- returns structured details (`serverName`, `mcpToolName`, provider metadata)\n- maps server-reported `isError` to `Error: ...` text result\n- maps thrown transport/runtime failures to `MCP error: ...`\n- preserves abort semantics by translating AbortError into `ToolAbortError`\n\n## 5) Operator lifecycle: add/edit/remove and live updates\n\nInteractive mode exposes `/mcp` in `src/modes/controllers/mcp-command-controller.ts`.\n\nSupported operations:\n\n- `add` (wizard or quick-add)\n- `remove` / `rm`\n- `enable` / `disable`\n- `test`\n- `reauth` / `unauth`\n- `reload`\n\nConfig writes are atomic (`writeMCPConfigFile`: temp file + rename).\n\nAfter changes, controller calls `#reloadMCP()`:\n\n1. `mcpManager.disconnectAll()`\n2. `mcpManager.discoverAndConnect()`\n3. 
`session.refreshMCPTools(mcpManager.getTools())`\n\n`refreshMCPTools()` replaces all `mcp_` registry entries and immediately re-activates the latest MCP tool set, so changes take effect without restarting the session.\n\n### Mode differences\n\n- **Interactive/TUI mode**: `/mcp` gives in-app UX (wizard, OAuth flow, connection status text, immediate runtime rebinding).\n- **SDK/headless integration**: `discoverAndLoadMCPTools()` (`src/mcp/loader.ts`) returns loaded tools + per-server errors; no `/mcp` command UX.\n\n## 6) User-visible error surfaces\n\nCommon error strings users/operators see:\n\n- add/update validation failures:\n - `Invalid server config: ...`\n - `Server \"<name>\" already exists in <path>`\n- quick-add argument issues:\n - `Use either --url or -- <command...>, not both.`\n - `--token requires --url (HTTP/SSE transport).`\n- connect/test failures:\n - `Failed to connect to \"<name>\": <message>`\n - timeout help text suggests increasing timeout\n - auth help text for `401/403`\n- auth/OAuth flows:\n - `Authentication required ... OAuth endpoints could not be discovered`\n - `OAuth flow timed out. Please try again.`\n - `OAuth authentication failed: ...`\n- disabled server usage:\n - `Server \"<name>\" is disabled. Run /mcp enable <name> first.`\n\nBad source JSON in discovery is generally handled as warnings/logs; config-writer paths throw explicit errors.\n\n## 7) Practical authoring guidance\n\nFor robust MCP authoring in this codebase:\n\n1. Keep server names globally unique across all MCP-capable config sources.\n2. Prefer alphanumeric/underscore names to avoid sanitized-name collisions in generated `mcp_*` tool names.\n3. Use explicit `type` to avoid accidental stdio defaults.\n4. Treat `enabled: false` as hard-off: server is omitted from runtime connect set.\n5. For OAuth configs, store a valid `credentialId`; otherwise auth injection is skipped.\n6. 
If using command-based secret resolution (`!cmd`), verify command output is stable and non-empty.\n\n## Implementation files\n\n- [`src/mcp/types.ts`](../packages/coding-agent/src/mcp/types.ts)\n- [`src/mcp/config.ts`](../packages/coding-agent/src/mcp/config.ts)\n- [`src/mcp/config-writer.ts`](../packages/coding-agent/src/mcp/config-writer.ts)\n- [`src/mcp/tool-bridge.ts`](../packages/coding-agent/src/mcp/tool-bridge.ts)\n- [`src/discovery/mcp-json.ts`](../packages/coding-agent/src/discovery/mcp-json.ts)\n- [`src/modes/controllers/mcp-command-controller.ts`](../packages/coding-agent/src/modes/controllers/mcp-command-controller.ts)\n- [`src/mcp/manager.ts`](../packages/coding-agent/src/mcp/manager.ts)\n- [`src/capability/index.ts`](../packages/coding-agent/src/capability/index.ts)\n- [`src/config/resolve-config-value.ts`](../packages/coding-agent/src/config/resolve-config-value.ts)\n- [`src/mcp/loader.ts`](../packages/coding-agent/src/mcp/loader.ts)\n",
"memory.md": "# Autonomous Memory\n\nWhen enabled, the agent automatically extracts durable knowledge from past sessions and injects a compact summary into each new session. Over time it builds a project-scoped memory store — technical decisions, recurring workflows, pitfalls — that carries forward without manual effort.\n\nDisabled by default. Enable via `/settings` or `config.yml`:\n\n```yaml\nmemories:\n enabled: true\n```\n\n## Usage\n\n### What gets injected\n\nAt session start, if a memory summary exists for the current project, it is injected into the system prompt as a **Memory Guidance** block. The agent is instructed to:\n\n- Treat memory as heuristic context — useful for process and prior decisions, not authoritative on current repo state.\n- Cite the memory artifact path when memory changes the plan, and pair it with current-repo evidence before acting.\n- Prefer repo state and user instruction when they conflict with memory; treat conflicting memory as stale.\n\n### Reading memory artifacts\n\nThe agent can read memory files directly using `memory://` URLs with the `read` tool:\n\n| URL | Content |\n|---|---|\n| `memory://root` | Compact summary injected at startup |\n| `memory://root/MEMORY.md` | Full long-term memory document |\n| `memory://root/skills/<name>/SKILL.md` | A generated skill playbook |\n\n### `/memory` slash command\n\n| Subcommand | Effect |\n|---|---|\n| `view` | Show the current memory injection payload |\n| `clear` / `reset` | Delete all memory data and generated artifacts |\n| `enqueue` / `rebuild` | Force consolidation to run at next startup |\n\n## How it works\n\nMemories are built by a background pipeline that at startup or manually triggered via slash command.\n\n**Phase 1 — per-session extraction:** For each past session that has changed since it was last processed, a model reads the session history and extracts durable signal: technical decisions, constraints, resolved failures, recurring workflows. 
Sessions that are too recent, too old, or currently active are skipped. Each extraction produces a raw memory block and a short synopsis for that session.\n\n**Phase 2 — consolidation:** After extraction, a second model pass reads all per-session extractions and produces three outputs written to disk:\n\n- `MEMORY.md` — a curated long-term memory document\n- `memory_summary.md` — the compact text injected at session start\n- `skills/` — reusable procedural playbooks, each in its own subdirectory\n\nPhase 2 uses a lease to prevent double-running when multiple processes start simultaneously. Stale skill directories from prior runs are pruned automatically.\n\nAll output is scanned for secrets before being written to disk.\n\n### Extraction behavior\n\nMemory extraction and consolidation behavior is driven entirely by static prompt files in `src/prompts/memories/`.\n\n| File | Purpose | Variables |\n|---|---|---|\n| `stage_one_system.md` | System prompt for per-session extraction | — |\n| `stage_one_input.md` | User-turn template wrapping session content | `{{thread_id}}`, `{{response_items_json}}` |\n| `consolidation.md` | Prompt for cross-session consolidation | `{{raw_memories}}`, `{{rollout_summaries}}` |\n| `read_path.md` | Memory guidance injected into live sessions | `{{memory_summary}}` |\n\n### Model selection\n\nMemory piggybacks on the model role system.\n\n| Phase | Role | Purpose |\n|---|---|---|\n| Phase 1 (extraction) | `default` | Per-session knowledge extraction |\n| Phase 2 (consolidation) | `smol` | Cross-session synthesis |\n\nIf `smol` is not configured, Phase 2 falls back to the `default` role.\n\n## Configuration\n\n| Setting | Default | Description |\n|---|---|---|\n| `memories.enabled` | `false` | Master switch |\n| `memories.maxRolloutAgeDays` | `30` | Sessions older than this are not processed |\n| `memories.minRolloutIdleHours` | `12` | Sessions active more recently than this are skipped |\n| `memories.maxRolloutsPerStartup` | `64` | Cap on 
sessions processed in a single startup |\n| `memories.summaryInjectionTokenLimit` | `5000` | Max tokens of the summary injected into the system prompt |\n\nAdditional tuning knobs (concurrency, lease durations, token budgets) are available in config for advanced use.\n\n## Key files\n\n- `src/memories/index.ts` — pipeline orchestration, injection, slash command handling\n- `src/memories/storage.ts` — SQLite-backed job queue and thread registry\n- `src/prompts/memories/` — memory prompt templates\n- `src/internal-urls/memory-protocol.ts` — `memory://` URL handler\n",
"models.md": "# Model and Provider Configuration (`models.yml`)\n\nThis document describes how the coding-agent currently loads models, applies overrides, resolves credentials, and chooses models at runtime.\n\n## What controls model behavior\n\nPrimary implementation files:\n\n- `src/config/model-registry.ts` — loads built-in + custom models, provider overrides, runtime discovery, auth integration\n- `src/config/model-resolver.ts` — parses model patterns and selects initial/smol/slow models\n- `src/config/settings-schema.ts` — model-related settings (`modelRoles`, provider transport preferences)\n- `src/session/auth-storage.ts` — API key + OAuth resolution order\n- `packages/ai/src/models.ts` and `packages/ai/src/types.ts` — built-in providers/models and `Model`/`compat` types\n\n## Config file location and legacy behavior\n\nDefault config path:\n\n- `~/.omp/agent/models.yml`\n\nLegacy behavior still present:\n\n- If `models.yml` is missing and `models.json` exists at the same location, it is migrated to `models.yml`.\n- Explicit `.json` / `.jsonc` config paths are still supported when passed programmatically to `ModelRegistry`.\n\n## `models.yml` shape\n\n```yaml\nproviders:\n <provider-id>:\n # provider-level config\nequivalence:\n overrides:\n <provider-id>/<model-id>: <canonical-model-id>\n exclude:\n - <provider-id>/<model-id>\n```\n\n`provider-id` is the canonical provider key used across selection and auth lookup.\n\n`equivalence` is optional and configures canonical model grouping on top of concrete provider models:\n\n- `overrides` maps an exact concrete selector (`provider/modelId`) to an official upstream canonical id\n- `exclude` opts a concrete selector out of canonical grouping\n\n## Provider-level fields\n\n```yaml\nproviders:\n my-provider:\n baseUrl: https://api.example.com/v1\n apiKey: MY_PROVIDER_API_KEY\n api: openai-completions\n headers:\n X-Team: platform\n authHeader: true\n auth: apiKey\n discovery:\n type: ollama\n modelOverrides:\n 
some-model-id:\n name: Renamed model\n models:\n - id: some-model-id\n name: Some Model\n api: openai-completions\n reasoning: false\n input: [text]\n cost:\n input: 0\n output: 0\n cacheRead: 0\n cacheWrite: 0\n contextWindow: 128000\n maxTokens: 16384\n headers:\n X-Model: value\n compat:\n supportsStore: true\n supportsDeveloperRole: true\n supportsReasoningEffort: true\n maxTokensField: max_completion_tokens\n openRouterRouting:\n only: [anthropic]\n vercelGatewayRouting:\n order: [anthropic, openai]\n extraBody:\n gateway: m1-01\n controller: mlx\n```\n\n### Allowed provider/model `api` values\n\n- `openai-completions`\n- `openai-responses`\n- `openai-codex-responses`\n- `azure-openai-responses`\n- `anthropic-messages`\n- `google-generative-ai`\n- `google-vertex`\n\n### Allowed auth/discovery values\n\n- `auth`: `apiKey` (default) or `none`\n- `discovery.type`: `ollama`\n\n## Validation rules (current)\n\n### Full custom provider (`models` is non-empty)\n\nRequired:\n\n- `baseUrl`\n- `apiKey` unless `auth: none`\n- `api` at provider level or each model\n\n### Override-only provider (`models` missing or empty)\n\nMust define at least one of:\n\n- `baseUrl`\n- `modelOverrides`\n- `discovery`\n\n### Discovery\n\n- `discovery` requires provider-level `api`.\n\n### Model value checks\n\n- `id` required\n- `contextWindow` and `maxTokens` must be positive if provided\n\n## Merge and override order\n\nModelRegistry pipeline (on refresh):\n\n1. Load built-in providers/models from `@oh-my-pi/pi-ai`.\n2. Load `models.yml` custom config.\n3. Apply provider overrides (`baseUrl`, `headers`) to built-in models.\n4. Apply `modelOverrides` (per provider + model id).\n5. Merge custom `models`:\n - same `provider + id` replaces existing\n - otherwise append\n6. 
Apply runtime-discovered models (currently Ollama, llama.cpp, and LM Studio), then re-apply model overrides.\n\n## Canonical model equivalence and coalescing\n\nThe registry keeps every concrete provider model and then builds a canonical layer above them.\n\nCanonical ids are official upstream ids only, for example:\n\n- `claude-opus-4-6`\n- `claude-haiku-4-5`\n- `gpt-5.3-codex`\n\n### `models.yml` equivalence config\n\nExample:\n\n```yaml\nproviders:\n zenmux:\n baseUrl: https://api.zenmux.example/v1\n apiKey: ZENMUX_API_KEY\n api: openai-codex-responses\n models:\n - id: codex\n name: Zenmux Codex\n reasoning: true\n input: [text]\n cost:\n input: 0\n output: 0\n cacheRead: 0\n cacheWrite: 0\n contextWindow: 200000\n maxTokens: 32768\n\nequivalence:\n overrides:\n zenmux/codex: gpt-5.3-codex\n p-codex/codex: gpt-5.3-codex\n exclude:\n - demo/codex-preview\n```\n\nBuild order for canonical grouping:\n\n1. exact user override from `equivalence.overrides`\n2. bundled official-id matches from built-in model metadata\n3. conservative heuristic normalization for gateway/provider variants\n4. fallback to the concrete model's own id\n\nCurrent heuristics are intentionally narrow:\n\n- embedded upstream prefixes can be stripped when present, for example `anthropic/...` or `openai/...`\n- dotted and dashed version variants can normalize only when they map to an existing official id, for example `4.6 -> 4-6`\n- ambiguous families or versions are not merged without a bundled match or explicit override\n\n### Canonical resolution behavior\n\nWhen multiple concrete variants share a canonical id, resolution uses:\n\n1. availability and auth\n2. `config.yml` `modelProviderOrder`\n3. 
existing registry/provider order if `modelProviderOrder` is unset\n\nDisabled or unauthenticated providers are skipped.\n\nSession state and transcripts continue to record the concrete provider/model that actually executed the turn.\n\nProvider defaults vs per-model overrides:\n\n- Provider `headers` are baseline.\n- Model `headers` override provider header keys.\n- `modelOverrides` can override model metadata (`name`, `reasoning`, `input`, `cost`, `contextWindow`, `maxTokens`, `headers`, `compat`, `contextPromotionTarget`).\n- `compat` is deep-merged for nested routing blocks (`openRouterRouting`, `vercelGatewayRouting`, `extraBody`).\n\n## Runtime discovery integration\n\n### Implicit Ollama discovery\n\nIf `ollama` is not explicitly configured, registry adds an implicit discoverable provider:\n\n- provider: `ollama`\n- api: `openai-completions`\n- base URL: `OLLAMA_BASE_URL` or `http://127.0.0.1:11434`\n- auth mode: keyless (`auth: none` behavior)\n\nRuntime discovery calls `GET /api/tags` on Ollama and synthesizes model entries with local defaults.\n\n### Implicit llama.cpp discovery\n\nIf `llama.cpp` is not explicitly configured, registry adds an implicit discoverable provider (note: unlike Ollama and LM Studio, it uses the newer `openai-responses` API instead of `openai-completions`):\n\n- provider: `llama.cpp`\n- api: `openai-responses`\n- base URL: `LLAMA_CPP_BASE_URL` or `http://127.0.0.1:8080`\n- auth mode: keyless (`auth: none` behavior)\n\nRuntime discovery calls `GET /models` on llama.cpp and synthesizes model entries with local defaults.\n\n### Implicit LM Studio discovery\n\nIf `lm-studio` is not explicitly configured, registry adds an implicit discoverable provider:\n\n- provider: `lm-studio`\n- api: `openai-completions`\n- base URL: `LM_STUDIO_BASE_URL` or `http://127.0.0.1:1234/v1`\n- auth mode: keyless (`auth: none` behavior)\n\nRuntime discovery fetches models (`GET /models`) and synthesizes model entries with local defaults.\n\n### Explicit provider discovery\n\nYou can 
configure discovery yourself:\n\n```yaml\nproviders:\n ollama:\n baseUrl: http://127.0.0.1:11434\n api: openai-completions\n auth: none\n discovery:\n type: ollama\n \n llama.cpp:\n baseUrl: http://127.0.0.1:8080\n api: openai-responses\n auth: none\n discovery:\n type: llama.cpp\n```\n\n### Extension provider registration\n\nExtensions can register providers at runtime (`pi.registerProvider(...)`), including:\n\n- model replacement/append for a provider\n- custom stream handler registration for new API IDs\n- custom OAuth provider registration\n\n## Auth and API key resolution order\n\nWhen requesting a key for a provider, effective order is:\n\n1. Runtime override (CLI `--api-key`)\n2. Stored API key credential in `agent.db`\n3. Stored OAuth credential in `agent.db` (with refresh)\n4. Environment variable mapping (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.)\n5. ModelRegistry fallback resolver (provider `apiKey` from `models.yml`, env-name-or-literal semantics)\n\n`models.yml` `apiKey` behavior:\n\n- Value is first treated as an environment variable name.\n- If no env var exists, the literal string is used as the token.\n\nIf `authHeader: true` and provider `apiKey` is set, models get:\n\n- `Authorization: Bearer <resolved-key>` header injected.\n\nKeyless providers:\n\n- Providers marked `auth: none` are treated as available without credentials.\n- `getApiKey*` returns `kNoAuth` for them.\n\n## Model availability vs all models\n\n- `getAll()` returns the loaded model registry (built-in + merged custom + discovered).\n- `getAvailable()` filters to models that are keyless or have resolvable auth.\n\nSo a model can exist in registry but not be selectable until auth is available.\n\n## Runtime model resolution\n\n### CLI and pattern parsing\n\n`model-resolver.ts` supports:\n\n- exact `provider/modelId`\n- exact canonical model id\n- exact model id (provider inferred)\n- fuzzy/substring matching\n- glob scope patterns in `--models` (e.g. 
`openai/*`, `*sonnet*`)\n- optional `:thinkingLevel` suffix (`off|minimal|low|medium|high|xhigh`)\n\n`--provider` is legacy; `--model` is preferred.\n\nResolution precedence for exact selectors:\n\n1. exact `provider/modelId` bypasses coalescing\n2. exact canonical id resolves through the canonical index\n3. exact bare concrete id still works\n4. fuzzy and glob matching run after the exact paths\n\n### Initial model selection priority\n\n`findInitialModel(...)` uses this order:\n\n1. explicit CLI provider+model\n2. first scoped model (if not resuming)\n3. saved default provider/model\n4. known provider defaults (e.g. OpenAI/Anthropic/etc.) among available models\n5. first available model\n\n### Role aliases and settings\n\nSupported model roles:\n\n- `default`, `smol`, `slow`, `plan`, `commit`\n\nRole aliases like `pi/smol` expand through `settings.modelRoles`. Each role value can also append a thinking selector such as `:minimal`, `:low`, `:medium`, or `:high`.\n\nIf a role points at another role, the target model still inherits normally and any explicit suffix on the referring role wins for that role-specific use.\n\nRelated settings:\n\n- `modelRoles` (record)\n- `enabledModels` (scoped pattern list)\n- `modelProviderOrder` (global canonical-provider precedence)\n- `providers.kimiApiFormat` (`openai` or `anthropic` request format)\n- `providers.openaiWebsockets` (`auto|off|on` websocket preference for OpenAI Codex transport)\n\n`modelRoles` may store either:\n\n- `provider/modelId` to pin a concrete provider variant\n- a canonical id such as `gpt-5.3-codex` to allow provider coalescing\n\nFor `enabledModels` and CLI `--models`:\n\n- exact canonical ids expand to all concrete variants in that canonical group\n- explicit `provider/modelId` entries stay exact\n- globs and fuzzy matches still operate on concrete models\n\n## `/model` and `--list-models`\n\nBoth surfaces keep provider-prefixed models visible and selectable.\n\nThey now also expose canonical/coalesced 
models:\n\n- `/model` includes a canonical view alongside provider tabs\n- `--list-models` prints a canonical section plus the concrete provider rows\n\nSelecting a canonical entry stores the canonical selector. Selecting a provider row stores the explicit `provider/modelId`.\n\n## Context promotion (model-level fallback chains)\n\nContext promotion is an overflow recovery mechanism for small-context variants (for example `*-spark`) that automatically promotes to a larger-context sibling when the API rejects a request with a context length error.\n\n### Trigger and order\n\nWhen a turn fails with a context overflow error (e.g. `context_length_exceeded`), `AgentSession` attempts promotion **before** falling back to compaction:\n\n1. If `contextPromotion.enabled` is true, resolve a promotion target (see below).\n2. If a target is found, switch to it and retry the request — no compaction needed.\n3. If no target is available, fall through to auto-compaction on the current model.\n\n### Target selection\n\nSelection is model-driven, not role-driven:\n\n1. `currentModel.contextPromotionTarget` (if configured)\n2. smallest larger-context model on the same provider + API\n\nCandidates are ignored unless credentials resolve (`ModelRegistry.getApiKey(...)`).\n\n### OpenAI Codex websocket handoff\n\nIf switching from/to `openai-codex-responses`, session provider state key `openai-codex-responses` is closed before model switch. 
This drops websocket transport state so the next turn starts clean on the promoted model.\n\n### Persistence behavior\n\nPromotion uses temporary switching (`setModelTemporary`):\n\n- recorded as a temporary `model_change` in session history\n- does not rewrite saved role mapping\n\n### Configuring explicit fallback chains\n\nConfigure fallback directly in model metadata via `contextPromotionTarget`.\n\n`contextPromotionTarget` accepts either:\n\n- `provider/model-id` (explicit)\n- `model-id` (resolved within current provider)\n\nExample (`models.yml`) for Spark -> non-Spark on the same provider:\n\n```yaml\nproviders:\n openai-codex:\n modelOverrides:\n gpt-5.3-codex-spark:\n contextPromotionTarget: openai-codex/gpt-5.3-codex\n```\n\nThe built-in model generator also assigns this automatically for `*-spark` models when a same-provider base model exists.\n\n## Compatibility and routing fields\n\n`models.yml` supports this `compat` subset:\n\n- `supportsStore`\n- `supportsDeveloperRole`\n- `supportsReasoningEffort`\n- `maxTokensField` (`max_completion_tokens` or `max_tokens`)\n- `openRouterRouting.only` / `openRouterRouting.order`\n- `vercelGatewayRouting.only` / `vercelGatewayRouting.order`\n\nThese are consumed by the OpenAI-completions transport logic and combined with URL-based auto-detection.\n\n## Practical examples\n\n### Local OpenAI-compatible endpoint (no auth)\n\n```yaml\nproviders:\n local-openai:\n baseUrl: http://127.0.0.1:8000/v1\n auth: none\n api: openai-completions\n models:\n - id: Qwen/Qwen2.5-Coder-32B-Instruct\n name: Qwen 2.5 Coder 32B (local)\n```\n\n### Hosted proxy with env-based key\n\n```yaml\nproviders:\n anthropic-proxy:\n baseUrl: https://proxy.example.com/anthropic\n apiKey: ANTHROPIC_PROXY_API_KEY\n api: anthropic-messages\n authHeader: true\n models:\n - id: claude-sonnet-4-20250514\n name: Claude Sonnet 4 (Proxy)\n reasoning: true\n input: [text, image]\n```\n\n### Override built-in provider route + model 
metadata\n\n```yaml\nproviders:\n openrouter:\n baseUrl: https://my-proxy.example.com/v1\n headers:\n X-Team: platform\n modelOverrides:\n anthropic/claude-sonnet-4:\n name: Sonnet 4 (Corp)\n compat:\n openRouterRouting:\n only: [anthropic]\n```\n\n## Legacy consumer caveat\n\nMost model configuration now flows through `models.yml` via `ModelRegistry`.\n\nOne notable legacy path remains: web-search Anthropic auth resolution still reads `~/.omp/agent/models.json` directly in `src/web/search/auth.ts`.\n\nIf you rely on that specific path, keep JSON compatibility in mind until that module is migrated.\n\n## Failure mode\n\nIf `models.yml` fails schema or validation checks:\n\n- registry keeps operating with built-in models\n- error is exposed via `ModelRegistry.getError()` and surfaced in UI/notifications\n",
"natives-addon-loader-runtime.md": "# Natives Addon Loader Runtime\n\nThis document deep-dives the addon loading/validation layer in `@oh-my-pi/pi-natives`: how `native.ts` decides which `.node` file to load, when embedded payload extraction runs, and how startup failures are reported.\n\n## Implementation files\n\n- `packages/natives/src/native.ts`\n- `packages/natives/src/embedded-addon.ts`\n- `packages/natives/src/bindings.ts`\n- `packages/natives/package.json`\n\n## Scope and responsibility\n\nLoader/runtime responsibilities are intentionally narrow:\n\n- Build a platform/CPU-aware candidate list for addon filenames and directories.\n- Optionally materialize an embedded addon into a versioned per-user cache directory.\n- Attempt candidates in deterministic order.\n- Reject stale or incompatible addons via `validateNative` before exposing bindings.\n\nOut of scope here: module-specific grep/text/highlight behavior.\n\n## Runtime inputs and derived state\n\nAt module initialization (`export const native = loadNative();`), `native.ts` computes static context:\n\n- **Platform tag**: ``${process.platform}-${process.arch}`` (for example `darwin-arm64`).\n- **Package version**: from `packages/natives/package.json` (`version` field).\n- **Core directories**:\n - `nativeDir`: package-local `packages/natives/native`.\n - `execDir`: directory containing `process.execPath`.\n - `versionedDir`: `<getNativesDir()>/<packageVersion>`.\n - `userDataDir` fallback:\n - Windows: `%LOCALAPPDATA%/omp` (or `%USERPROFILE%/AppData/Local/omp`).\n - Non-Windows: `~/.local/bin`.\n- **Compiled-binary mode** (`isCompiledBinary`): true if any of:\n - `PI_COMPILED` env var is set, or\n - `import.meta.url` contains Bun-embedded markers (`$bunfs`, `~BUN`, `%7EBUN`).\n- **Variant override**: `PI_NATIVE_VARIANT` (`modern`/`baseline` only; invalid values ignored).\n- **Selected variant**: explicit override, otherwise runtime AVX2 detection on x64 (`modern` if AVX2, else `baseline`).\n\n## Platform 
support and tag resolution\n\n`SUPPORTED_PLATFORMS` is fixed to:\n\n- `linux-x64`\n- `linux-arm64`\n- `darwin-x64`\n- `darwin-arm64`\n- `win32-x64`\n\nBehavior detail:\n\n- Unsupported platforms are not rejected up-front.\n- Loader still tries all computed candidates first.\n- If nothing loads, it throws an explicit unsupported-platform error listing supported tags.\n\nThis preserves useful diagnostics for near-miss cases while still failing hard for truly unsupported targets.\n\n## Variant selection (`modern` / `baseline` / default)\n\n### x64 behavior\n\n1. If `PI_NATIVE_VARIANT` is `modern` or `baseline`, that value wins.\n2. Else detect AVX2 support:\n - Linux: scan `/proc/cpuinfo` for `avx2`.\n - macOS: query `sysctl` (`machdep.cpu.leaf7_features`, fallback `machdep.cpu.features`).\n - Windows: run PowerShell `[System.Runtime.Intrinsics.X86.Avx2]::IsSupported`.\n3. Result:\n - AVX2 available -> `modern`\n - AVX2 unavailable/undetectable -> `baseline`\n\n### Non-x64 behavior\n\n- No variant is used; loader stays on the default filename (`pi_natives.<platform>-<arch>.node`).\n\n### Filename construction\n\nGiven `tag = <platform>-<arch>`:\n\n- Non-x64 or no variant: `pi_natives.<tag>.node`\n- x64 + `modern`: try in order\n 1. `pi_natives.<tag>-modern.node`\n 2. `pi_natives.<tag>-baseline.node` (intentional fallback)\n- x64 + `baseline`: only `pi_natives.<tag>-baseline.node`\n\nThe `addonLabel` used in final error messages is either `<tag>` or `<tag> (<variant>)`.\n\n## Candidate path construction and fallback ordering\n\n`native.ts` builds candidate pools before any `require(...)` call.\n\n### Release candidates\n\nBuilt from variant-resolved filename list and searched in this order:\n\n- **Non-compiled runtime**:\n 1. `<nativeDir>/<filename>`\n 2. `<execDir>/<filename>`\n\n- **Compiled runtime** (`PI_COMPILED` or Bun embedded markers):\n 1. `<versionedDir>/<filename>`\n 2. `<userDataDir>/<filename>`\n 3. `<nativeDir>/<filename>`\n 4. 
`<execDir>/<filename>`\n\n`dedupedCandidates` removes duplicates while preserving first occurrence order.\n\n### Final runtime sequence\n\nAt load time:\n\n1. Optional embedded extraction candidate (if produced) is inserted at the front.\n2. Remaining deduplicated candidates are tried in order.\n3. First candidate that both `require(...)`s and passes `validateNative(...)` wins.\n\n## Embedded addon extraction lifecycle\n\n`embedded-addon.ts` defines a generated manifest shape:\n\n- `platformTag`\n- `version`\n- `files[]` where each entry has `variant`, `filename`, `filePath`\n\nCurrent checked-in default is `embeddedAddon: null`; compiled artifacts may replace this with real metadata.\n\n### Extraction state machine\n\nExtraction (`maybeExtractEmbeddedAddon`) runs only when all gates pass:\n\n1. `isCompiledBinary === true`\n2. `embeddedAddon !== null`\n3. `embeddedAddon.platformTag === platformTag`\n4. `embeddedAddon.version === packageVersion`\n5. A variant-appropriate embedded file is found\n\nVariant file selection mirrors runtime variant intent:\n\n- Non-x64: prefer `default`, then first available file.\n- x64 + `modern`: prefer `modern`, fallback to `baseline`.\n- x64 + `baseline`: require `baseline`.\n\nMaterialization behavior:\n\n1. Ensure `<versionedDir>` exists (`mkdirSync(..., { recursive: true })`).\n2. If `<versionedDir>/<selected filename>` already exists, reuse it (no rewrite).\n3. Else read embedded source `filePath` and write target file.\n4. 
Return target path for highest-priority load attempt.\n\nOn failure, extraction does not crash immediately; it appends an error entry (directory creation or write failure) and loader proceeds to normal candidate probing.\n\n## Lifecycle and state transitions\n\n```text\nInit\n -> Compute platform/version/variant/candidate lists\n -> (Compiled + embedded manifest matches?)\n yes -> Try extract embedded to versionedDir (record errors, continue)\n no -> Skip extraction\n -> For each runtime candidate in order:\n require(candidate)\n -> success: validateNative\n -> pass: return bindings (READY)\n -> fail: record error, continue\n -> failure: record error, continue\n -> none loaded:\n if unsupported platform tag -> throw Unsupported platform\n else -> throw Failed to load (full tried-path diagnostics + hints)\n```\n\n## `validateNative` contract checks\n\n`validateNative(bindings, source)` enforces a function-only contract over `NativeBindings` at startup.\n\nMechanics:\n\n- For each required export name, it checks `typeof bindings[name] === \"function\"`.\n- Missing names are aggregated.\n- If any are missing, loader throws:\n - source addon path,\n - missing export list,\n - rebuild command hint.\n\nThis is a hard compatibility gate against stale binaries, partial builds, and symbol/name drift.\n\n### JS API ↔ native export mapping (validation gate)\n\n| JS binding name checked in `validateNative` | Expected native export name |\n| --- | --- |\n| `grep` | `grep` |\n| `glob` | `glob` |\n| `highlightCode` | `highlightCode` |\n| `executeShell` | `executeShell` |\n| `PtySession` | `PtySession` |\n| `Shell` | `Shell` |\n| `visibleWidth` | `visibleWidth` |\n| `getSystemInfo` | `getSystemInfo` |\n| `getWorkProfile` | `getWorkProfile` |\n| `invalidateFsScanCache` | `invalidateFsScanCache` |\n\nNote: `bindings.ts` declares only the base `cancelWork(id)` member; module `types.ts` files declaration-merge additional symbols that `validateNative` enforces.\n\n## Failure behavior 
and diagnostics\n\n### Unsupported platform\n\nIf all candidates fail and `platformTag` is not in `SUPPORTED_PLATFORMS`, loader throws:\n\n- `Unsupported platform: <tag>`\n- Full supported-platform list\n- Explicit issue-reporting guidance\n\n### Stale binary / mismatch symptoms\n\nTypical stale mismatch signal:\n\n- `Native addon missing exports (<candidate>). Missing: ...`\n\nCommon causes:\n\n- Old `.node` binary from previous package version/API shape.\n- Wrong variant artifact selected (for x64).\n- New Rust export not present in loaded artifact.\n\nLoader behavior:\n\n- Records per-candidate missing-export failures.\n- Continues probing remaining candidates.\n- If no candidate validates, final error includes every attempted path with each failure message.\n\n### Compiled-binary startup failures\n\nIn compiled mode final diagnostics include:\n\n- expected versioned cache target paths (`<versionedDir>/<filename>`),\n- remediation to delete stale `<versionedDir>` and rerun,\n- direct release download `curl` commands for each expected filename.\n\n### Non-compiled startup failures\n\nIn normal package/runtime mode final diagnostics include:\n\n- reinstall hint (`bun install @oh-my-pi/pi-natives`),\n- local rebuild command (`bun --cwd=packages/natives run build`),\n- optional x64 variant build hint (`TARGET_VARIANT=baseline|modern ...`).\n\n## Runtime behavior\n\n- The loader always uses the release candidate chain.\n- Setting `PI_DEV` only enables per-candidate console diagnostics (`Loaded native addon...` and load errors).\n",