opencode-task-router 0.1.0

package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Doug Ritter
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,172 @@
+ # OpenCode Task Router
+
+ An [OpenCode](https://opencode.ai) plugin that classifies a development task with a local Ollama model and recommends the right cost tier and agent to use.
+
+ It is designed to keep simple work on free local models and reserve paid models for tasks that actually need them.
+
+ ## What You Get
+
+ - A `route_task` tool for task-to-model routing recommendations
+ - Cost-tier recommendations: `free`, `cheap`, `moderate`, `expensive`
+ - Agent guidance based on the recommended tier
+ - Lightweight learning from your past routing decisions
+
+ ## Prerequisites
+
+ 1. [OpenCode](https://opencode.ai) installed
+ 2. [Ollama](https://ollama.ai) installed and running
+ 3. The classifier model pulled locally:
+
+ ```bash
+ ollama serve
+ ollama pull qwen3:8b
+ ```
+
+ If your global OpenCode config uses `enabled_providers`, include `ollama` there too:
+
+ ```jsonc
+ {
+ "enabled_providers": ["github-copilot", "ollama"]
+ }
+ ```
+
+ ## Install From NPM
+
+ Add the plugin package to your OpenCode config:
+
+ ```json
+ {
+ "$schema": "https://opencode.ai/config.json",
+ "plugin": ["opencode-task-router"]
+ }
+ ```
+
+ Then restart OpenCode. OpenCode installs npm plugins automatically with Bun at startup.
+
+ ## Minimal Usage
+
+ Ask OpenCode to call the tool directly:
+
+ ```text
+ Use the route_task tool to analyze this task: refactor the authentication module
+ ```
+
+ Typical output:
+
+ ```text
+ ## Task Routing Recommendation
+
+ | Factor | Assessment |
+ |--------|-----------|
+ | Complexity | **moderate** |
+ | Context needs | **medium** |
+ | Cost tier | **moderate** |
+
+ **Reasoning:** Multi-file refactoring usually needs broader codebase context.
+ ```
+
+ ## Optional Full Setup
+
+ The npm package guarantees the `route_task` tool.
+
+ If you also want the `/route` shortcut, a local free-tier agent, and an example Ollama provider config, copy the example files from `examples/opencode/` into your project:
+
+ ```text
+ examples/opencode/
+ ├── opencode.json
+ └── .opencode/
+ ├── agents/
+ │ └── local-worker.md
+ └── commands/
+ └── route.md
+ ```
+
+ Suggested mapping:
+
+ - `examples/opencode/.opencode/commands/route.md` -> `.opencode/commands/route.md`
+ - `examples/opencode/.opencode/agents/local-worker.md` -> `.opencode/agents/local-worker.md`
+ - `examples/opencode/opencode.json` -> merge into your project `opencode.json`
+
+ ## How It Works
+
+ 1. `route_task` sends your task description to a local Ollama classifier
+ 2. The classifier returns task complexity, context estimate, and cost tier
+ 3. The plugin recommends an agent and model tier
+ 4. When the session goes idle, the plugin logs what was recommended to `.opencode/router-history.jsonl`
+ 5. Recent history is reused as calibration on future routing decisions
+
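To make steps 4 and 5 concrete, here is a hedged sketch of what one `.opencode/router-history.jsonl` line can look like, using the `HistoryEntry` shape declared in `dist/index.d.ts`. The timestamp and `actualModel` id are illustrative values, not output captured from a real session.

```typescript
// Sketch only: one JSONL history line, per the HistoryEntry declaration.
interface HistoryEntry {
  ts: string;
  prompt: string;
  recommendedTier: string;
  accepted?: boolean;
  actualTier?: string;
  actualModel?: string;
}

const entry: HistoryEntry = {
  ts: "2026-01-15T12:00:00.000Z",           // illustrative timestamp
  prompt: "refactor the authentication module",
  recommendedTier: "moderate",
  accepted: true,
  actualTier: "moderate",
  actualModel: "github-copilot/claude-sonnet-4", // hypothetical model id
};

// JSONL: each history record is one JSON object per line.
const line = JSON.stringify(entry);
console.log(line);
```

Because the file is append-only JSONL, the router can read it back one line at a time without parsing the whole file as a single document.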
+ ## Configuration
+
+ The plugin currently defaults to:
+
+ ```ts
+ const OLLAMA_BASE_URL = "http://localhost:11434"
+ const CLASSIFIER_MODEL = "qwen3:8b"
+ const MAX_HISTORY_EXAMPLES = 20
+ ```
+
+ These live in `src/index.ts`.
+
+ ## Local Development
+
+ Install dependencies and build the package:
+
+ ```bash
+ npm install
+ npm run build
+ npm test
+ ```
+
+ For local OpenCode development in this repo, the project plugin entrypoint at `.opencode/plugins/task-router.ts` re-exports the package source from `src/index.ts`.
+
+ The smoke test automates what is practical without depending on a real OpenCode session:
+
+ - builds the package
+ - creates a tarball
+ - installs the tarball into a temporary directory
+ - imports the installed package
+ - executes `route_task` with a mocked Ollama response
+ - verifies the history file is written
+
+ The unit tests cover the internal routing logic directly:
+
+ - cost-tier inference from provider/model ids
+ - classifier response parsing and normalization
+ - prompt construction with calibration history
+ - history file read/write helpers
+ - plugin recommendation, retry, and idle logging behavior
+
+ ## CI And Publishing
+
+ This repo includes GitHub Actions for:
+
+ - CI on pushes to `main` and pull requests
+ - npm publishing from version tags like `v0.1.0`
+ - manual publish dry runs through `workflow_dispatch`
+
+ Publishing can be fully automated through GitHub Actions using npm trusted publishing with GitHub OIDC.
+
+ See `RELEASE.md` for the release checklist and tag-based publishing flow.
+
+ ## Troubleshooting
+
+ ### Cannot connect to Ollama
+
+ Make sure Ollama is running and the model is installed:
+
+ ```bash
+ ollama serve
+ ollama pull qwen3:8b
+ ```
+
+ ### Plugin loads but routing falls back to moderate
+
+ That usually means the classifier returned invalid output. The plugin retries once, then falls back to a safe default.
+
+ ### No history file created yet
+
+ The history file is only written after a routing recommendation and when the session goes idle.
+
+ ## License
+
+ MIT
package/dist/index.d.ts ADDED
@@ -0,0 +1,37 @@
+ import { type Plugin } from "@opencode-ai/plugin";
+ interface ClassifierResponse {
+ complexity: "trivial" | "simple" | "moderate" | "complex";
+ contextEstimate: "small" | "medium" | "large";
+ costTier: "free" | "cheap" | "moderate" | "expensive";
+ reasoning: string;
+ }
+ interface HistoryEntry {
+ ts: string;
+ prompt: string;
+ recommendedTier: string;
+ accepted?: boolean;
+ actualTier?: string;
+ actualModel?: string;
+ }
+ declare function inferTierFromModel(providerID: string, modelID: string): string;
+ declare function getHistoryDir(worktree: string): string;
+ declare function getHistoryPath(worktree: string): string;
+ declare function readHistory(worktree: string): HistoryEntry[];
+ declare function appendHistory(worktree: string, entry: HistoryEntry): void;
+ declare function buildClassifierPrompt(taskPrompt: string, history: HistoryEntry[]): string;
+ declare function truncate(value: string, maxLength: number): string;
+ declare function callOllama(prompt: string): Promise<string>;
+ declare function parseClassifierResponse(raw: string): ClassifierResponse;
+ export declare const __testUtils: {
+ buildClassifierPrompt: typeof buildClassifierPrompt;
+ callOllama: typeof callOllama;
+ getHistoryDir: typeof getHistoryDir;
+ getHistoryPath: typeof getHistoryPath;
+ inferTierFromModel: typeof inferTierFromModel;
+ parseClassifierResponse: typeof parseClassifierResponse;
+ readHistory: typeof readHistory;
+ appendHistory: typeof appendHistory;
+ truncate: typeof truncate;
+ };
+ export declare const TaskRouterPlugin: Plugin;
+ export default TaskRouterPlugin;
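For reference, a value conforming to the `ClassifierResponse` union types above looks like this; the field values are illustrative, echoing the README's sample output:

```typescript
// Restates the ClassifierResponse shape from the declarations above.
type ClassifierResponse = {
  complexity: "trivial" | "simple" | "moderate" | "complex";
  contextEstimate: "small" | "medium" | "large";
  costTier: "free" | "cheap" | "moderate" | "expensive";
  reasoning: string;
};

const example: ClassifierResponse = {
  complexity: "moderate",
  contextEstimate: "medium",
  costTier: "moderate",
  reasoning: "Multi-file refactoring usually needs broader codebase context.",
};
console.log(example.costTier); // moderate
```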
package/dist/index.js ADDED
@@ -0,0 +1,355 @@
+ import { tool } from "@opencode-ai/plugin";
+ import { appendFileSync, existsSync, mkdirSync, readFileSync } from "node:fs";
+ import { join } from "node:path";
+ const OLLAMA_BASE_URL = "http://localhost:11434";
+ const CLASSIFIER_MODEL = "qwen3:8b";
+ const HISTORY_DIR = ".opencode";
+ const HISTORY_FILE = "router-history.jsonl";
+ const MAX_HISTORY_EXAMPLES = 20;
+ const TIER_INFO = {
+ free: {
+ description: "Local model (Ollama) -- zero cost, good for trivial/simple tasks",
+ agent: "local-worker",
+ },
+ cheap: {
+ description: "Fast paid model (e.g. Haiku, GPT-4o Mini, Gemini Flash)",
+ agent: "build",
+ },
+ moderate: {
+ description: "Capable paid model (e.g. Sonnet, GPT-4o, Codex)",
+ agent: "build",
+ },
+ expensive: {
+ description: "Premium paid model (e.g. Opus, GPT-5, o1-pro)",
+ agent: "build",
+ },
+ };
+ function inferTierFromModel(providerID, modelID) {
+ const provider = providerID.toLowerCase();
+ const model = modelID.toLowerCase();
+ if (provider === "ollama" ||
+ provider === "lmstudio" ||
+ provider === "llama.cpp" ||
+ provider === "llamacpp") {
+ return "free";
+ }
+ if (/opus|o1-pro|gpt-?5|gemini[.-_]?ultra/.test(model)) {
+ return "expensive";
+ }
+ if (/haiku|gpt-?4o-?mini|gemini[.-_]?flash|nano|mini|small/.test(model) &&
+ !/sonnet/.test(model)) {
+ return "cheap";
+ }
+ return "moderate";
+ }
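The tier-inference rules above can be exercised in isolation. This is a standalone restatement of `inferTierFromModel` for illustration, mirroring the provider checks and regexes in the compiled source; the model ids in the calls are hypothetical examples:

```typescript
// Restatement of the tier rules above: local providers are free,
// premium-model names are expensive, small/fast-model names are cheap,
// and anything unmatched defaults to moderate.
function inferTier(providerID: string, modelID: string): string {
  const provider = providerID.toLowerCase();
  const model = modelID.toLowerCase();
  if (["ollama", "lmstudio", "llama.cpp", "llamacpp"].includes(provider)) {
    return "free"; // local providers cost nothing regardless of model
  }
  if (/opus|o1-pro|gpt-?5|gemini[.-_]?ultra/.test(model)) {
    return "expensive";
  }
  if (/haiku|gpt-?4o-?mini|gemini[.-_]?flash|nano|mini|small/.test(model) &&
      !/sonnet/.test(model)) {
    return "cheap";
  }
  return "moderate"; // safe default when no pattern matches
}

console.log(inferTier("ollama", "qwen3:8b"));           // free
console.log(inferTier("openai", "gpt-4o-mini"));        // cheap
console.log(inferTier("anthropic", "claude-opus-4"));   // expensive
console.log(inferTier("anthropic", "claude-sonnet-4")); // moderate
```

Note the explicit `!/sonnet/` guard: without it, any "sonnet"-family id that also happened to contain "mini" or "small" would be misfiled as cheap.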
+ function getHistoryDir(worktree) {
+ return join(worktree, HISTORY_DIR);
+ }
+ function getHistoryPath(worktree) {
+ return join(getHistoryDir(worktree), HISTORY_FILE);
+ }
+ function readHistory(worktree) {
+ const historyPath = getHistoryPath(worktree);
+ if (!existsSync(historyPath)) {
+ return [];
+ }
+ try {
+ const content = readFileSync(historyPath, "utf-8");
+ const trimmed = content.trim();
+ if (!trimmed) {
+ return [];
+ }
+ return trimmed
+ .split("\n")
+ .filter(Boolean)
+ .slice(-MAX_HISTORY_EXAMPLES)
+ .map((line) => JSON.parse(line));
+ }
+ catch {
+ return [];
+ }
+ }
+ function appendHistory(worktree, entry) {
+ const historyDir = getHistoryDir(worktree);
+ if (!existsSync(historyDir)) {
+ mkdirSync(historyDir, { recursive: true });
+ }
+ appendFileSync(getHistoryPath(worktree), JSON.stringify(entry) + "\n", "utf-8");
+ }
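The interesting part of `readHistory` above is the `slice(-MAX_HISTORY_EXAMPLES)` step: however large the JSONL file grows, only the most recent entries feed the calibration prompt. A minimal sketch of just that trimming behavior, with the file I/O stripped away:

```typescript
// Sketch of the JSONL trimming in readHistory: keep only the last
// MAX_HISTORY_EXAMPLES lines, tolerating blank lines and empty input.
const MAX_HISTORY_EXAMPLES = 20;

function parseRecent(content: string): Array<{ recommendedTier: string }> {
  const trimmed = content.trim();
  if (!trimmed) return [];
  return trimmed
    .split("\n")
    .filter(Boolean)                 // drop accidental blank lines
    .slice(-MAX_HISTORY_EXAMPLES)    // newest 20 entries only
    .map((line) => JSON.parse(line));
}

// 25 entries written; only the last 20 are used for calibration.
const lines = Array.from({ length: 25 }, (_, i) =>
  JSON.stringify({ recommendedTier: i < 5 ? "free" : "moderate" }),
).join("\n");
console.log(parseRecent(lines).length); // 20
```

Because the oldest five entries fall off, every entry that survives the slice in this example recommends `moderate`.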
+ function buildClassifierPrompt(taskPrompt, history) {
+ let prompt = `You are a task classifier for software development work.
+ Your job is to analyze a task description and classify it so the developer
+ can pick the right AI model (local/free vs. paid/premium).
+
+ Respond with ONLY a valid JSON object. No markdown fences, no explanation
+ outside the JSON. Do not use any thinking tags.
+
+ The JSON must have exactly these fields:
+ {
+ "complexity": "trivial" | "simple" | "moderate" | "complex",
+ "contextEstimate": "small" | "medium" | "large",
+ "costTier": "free" | "cheap" | "moderate" | "expensive",
+ "reasoning": "one sentence explaining your classification"
+ }
+
+ Classification rules:
+ - trivial: typos, small text edits, renaming, formatting
+ - simple: single-file fixes, small scripts, doc edits, straightforward questions
+ - moderate: new features, multi-file refactors, bug fixes requiring investigation
+ - complex: architecture changes, security-sensitive work, multi-system integration
+
+ Context size rules:
+ - small (<2K tokens of codebase context needed): simple lookups, isolated changes
+ - medium (2K-20K tokens): moderate features, understanding a module
+ - large (20K+ tokens): cross-cutting changes, architectural understanding
+
+ Cost tier mapping:
+ - free: trivial + simple tasks -> use a local model (zero cost)
+ - cheap: moderate tasks with small context -> use a fast paid model
+ - moderate: moderate tasks with medium/large context -> use a capable paid model
+ - expensive: complex tasks -> use a premium paid model
+ `;
+ if (history.length > 0) {
+ prompt += `
+ Here are recent routing decisions for calibration.
+ When "accepted" is false, the developer disagreed with the recommendation
+ and chose a different tier -- adjust your future classifications accordingly:
+
+ `;
+ for (const entry of history) {
+ if (entry.accepted === false && entry.actualTier) {
+ prompt += `- "${truncate(entry.prompt, 80)}" -> recommended: ${entry.recommendedTier}, OVERRIDDEN -> developer used: ${entry.actualTier}\n`;
+ continue;
+ }
+ if (entry.accepted === true) {
+ prompt += `- "${truncate(entry.prompt, 80)}" -> recommended: ${entry.recommendedTier}, accepted\n`;
+ continue;
+ }
+ prompt += `- "${truncate(entry.prompt, 80)}" -> recommended: ${entry.recommendedTier}\n`;
+ }
+ prompt += "\n";
+ }
+ prompt += `\nNow classify this task:\n${taskPrompt}`;
+ return prompt;
+ }
+ function truncate(value, maxLength) {
+ return value.length > maxLength ? `${value.slice(0, maxLength)}...` : value;
+ }
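Each calibration bullet in `buildClassifierPrompt` is one formatted line per history entry, with prompts clipped to 80 characters by `truncate`. A small sketch of the overridden-decision case, reusing the same format string as above; the entry values are illustrative:

```typescript
// Same truncate helper as in the compiled source above.
function truncate(value: string, maxLength: number): string {
  return value.length > maxLength ? `${value.slice(0, maxLength)}...` : value;
}

// Illustrative history entry where the developer overrode the router.
const entry = {
  prompt: "refactor the authentication module",
  recommendedTier: "moderate",
  accepted: false,
  actualTier: "cheap",
};

// Calibration line format for an overridden recommendation.
const line =
  `- "${truncate(entry.prompt, 80)}" -> recommended: ${entry.recommendedTier}, ` +
  `OVERRIDDEN -> developer used: ${entry.actualTier}`;
console.log(line);
```

The "OVERRIDDEN" marker is what lets the classifier treat developer disagreement as a stronger training signal than silent acceptance.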
+ async function callOllama(prompt) {
+ const response = await fetch(`${OLLAMA_BASE_URL}/api/generate`, {
+ method: "POST",
+ headers: { "Content-Type": "application/json" },
+ body: JSON.stringify({
+ model: CLASSIFIER_MODEL,
+ prompt,
+ stream: false,
+ options: {
+ temperature: 0.1,
+ num_predict: 256,
+ },
+ }),
+ });
+ if (!response.ok) {
+ throw new Error(`Ollama returned ${response.status}: ${response.statusText}`);
+ }
+ const data = (await response.json());
+ return data.response;
+ }
+ function parseClassifierResponse(raw) {
+ let cleaned = raw
+ .replace(/```json\s*/g, "")
+ .replace(/```\s*/g, "")
+ .replace(/<think>[\s\S]*?<\/think>/g, "")
+ .trim();
+ const jsonMatch = cleaned.match(/\{[\s\S]*\}/);
+ if (jsonMatch) {
+ cleaned = jsonMatch[0];
+ }
+ const parsed = JSON.parse(cleaned);
+ const validComplexity = ["trivial", "simple", "moderate", "complex"];
+ const validContext = ["small", "medium", "large"];
+ const validCost = ["free", "cheap", "moderate", "expensive"];
+ if (!validComplexity.includes(parsed.complexity)) {
+ parsed.complexity = "moderate";
+ }
+ if (!validContext.includes(parsed.contextEstimate)) {
+ parsed.contextEstimate = "medium";
+ }
+ if (!validCost.includes(parsed.costTier)) {
+ parsed.costTier = "moderate";
+ }
+ return {
+ complexity: parsed.complexity,
+ contextEstimate: parsed.contextEstimate,
+ costTier: parsed.costTier,
+ reasoning: parsed.reasoning || "No reasoning provided",
+ };
+ }
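The cleaning pipeline in `parseClassifierResponse` exists because small local models often wrap their JSON in markdown fences or `<think>` blocks despite being told not to. A minimal sketch of just the extraction steps, using the same regexes as above on a hypothetical messy response:

```typescript
// Sketch of the response-cleaning steps above: strip markdown fences and
// <think> blocks, then grab the first {...} span and parse it.
function extractJson(raw: string): any {
  let cleaned = raw
    .replace(/```json\s*/g, "")                 // opening ```json fence
    .replace(/```\s*/g, "")                     // any remaining fences
    .replace(/<think>[\s\S]*?<\/think>/g, "")   // reasoning-model chatter
    .trim();
  const match = cleaned.match(/\{[\s\S]*\}/);   // isolate the JSON object
  if (match) cleaned = match[0];
  return JSON.parse(cleaned);
}

// Hypothetical messy classifier output.
const raw =
  '<think>easy one</think>```json\n{"costTier": "free", "reasoning": "typo fix"}\n```';
console.log(extractJson(raw).costTier); // free
```

Anything that still fails `JSON.parse` after this cleanup throws, which is what triggers the plugin's single retry and the moderate-tier fallback described in the README's troubleshooting section.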
+ export const __testUtils = {
+ buildClassifierPrompt,
+ callOllama,
+ getHistoryDir,
+ getHistoryPath,
+ inferTierFromModel,
+ parseClassifierResponse,
+ readHistory,
+ appendHistory,
+ truncate,
+ };
+ export const TaskRouterPlugin = async ({ directory, worktree }) => {
+ let pending = null;
+ const sessionModels = new Map();
+ return {
+ tool: {
+ route_task: tool({
+ description: "Analyze a development task and recommend the best cost tier " +
+ "(free/cheap/moderate/expensive) and agent to use. " +
+ "Evaluates task complexity, context size needs, and cost implications. " +
+ "Uses a local Ollama model for classification (zero cost).",
+ args: {
+ prompt: tool.schema
+ .string()
+ .describe("The task description or prompt to analyze for routing"),
+ },
+ async execute(args, context) {
+ const projectRoot = context.worktree || context.directory;
+ const history = readHistory(projectRoot);
+ const classifierPrompt = buildClassifierPrompt(args.prompt, history);
+ let classification;
+ try {
+ const rawResponse = await callOllama(classifierPrompt);
+ classification = parseClassifierResponse(rawResponse);
+ }
+ catch (error) {
+ try {
+ const retryPrompt = `Respond with ONLY valid JSON, no other text.\n\n${classifierPrompt}`;
+ const rawRetry = await callOllama(retryPrompt);
+ classification = parseClassifierResponse(rawRetry);
+ }
+ catch {
+ try {
+ await fetch(`${OLLAMA_BASE_URL}/api/tags`);
+ }
+ catch {
+ return [
+ "## Router Error",
+ "",
+ "Cannot connect to Ollama. Make sure it is running:",
+ "```",
+ "ollama serve",
+ "```",
+ "",
+ `And that the model \`${CLASSIFIER_MODEL}\` is pulled:`,
+ "```",
+ `ollama pull ${CLASSIFIER_MODEL}`,
+ "```",
+ ].join("\n");
+ }
+ return [
+ "## Router Warning",
+ "",
+ `Classification failed (${error}). Defaulting to **moderate** tier.`,
+ "",
+ "Run **`/models`** to pick a capable paid model, or just continue.",
+ ].join("\n");
+ }
+ }
+ const tierInfo = TIER_INFO[classification.costTier];
+ pending = {
+ ts: new Date().toISOString(),
+ prompt: args.prompt,
+ tier: classification.costTier,
+ sessionID: context.sessionID,
+ };
+ const historyNote = history.length > 0
+ ? `*Calibrated from ${history.length} past routing decisions.*`
+ : "*No routing history yet -- recommendations will improve as you use the router.*";
+ const tierLines = ["", "**Cost tiers:**", ""];
+ for (const [tier, info] of Object.entries(TIER_INFO)) {
+ const marker = tier === classification.costTier ? " **<-- recommended**" : "";
+ tierLines.push(`- **${tier}**: ${info.description}${marker}`);
+ }
+ return [
+ "## Task Routing Recommendation",
+ "",
+ "| Factor | Assessment |",
+ "|--------|-----------|",
+ `| Complexity | **${classification.complexity}** |`,
+ `| Context needs | **${classification.contextEstimate}** |`,
+ `| Cost tier | **${classification.costTier}** |`,
+ "",
+ `**Reasoning:** ${classification.reasoning}`,
+ "",
+ historyNote,
+ ...tierLines,
+ "",
+ "---",
+ "",
+ "### How to proceed",
+ "",
+ `1. **Switch agent** -- press \`Tab\` and select **\`${tierInfo.agent}\`**`,
+ `2. **Switch model** -- run **\`/models\`** and pick a **${classification.costTier}**-tier model`,
+ "3. **Ignore** -- just keep working with your current setup if you disagree",
+ "",
+ "*Your choice will be observed and used to improve future recommendations.*",
+ ].join("\n");
+ },
+ }),
+ },
+ event: async ({ event }) => {
+ if (event.type === "message.updated") {
+ const msg = event.properties?.info;
+ if (!msg) {
+ return;
+ }
+ if (msg.role === "user" && msg.sessionID) {
+ sessionModels.set(msg.sessionID, {
+ providerID: msg.model?.providerID || "unknown",
+ modelID: msg.model?.modelID || "unknown",
+ agent: msg.agent || "unknown",
+ });
+ }
+ if (msg.role === "assistant" && msg.sessionID) {
+ sessionModels.set(msg.sessionID, {
+ providerID: msg.providerID || "unknown",
+ modelID: msg.modelID || "unknown",
+ agent: msg.mode || "unknown",
+ });
+ }
+ }
+ if (event.type === "session.idle" && pending) {
+ try {
+ const projectRoot = worktree || directory;
+ const sessionID = event.properties?.sessionID;
+ const actualInfo = sessionID ? sessionModels.get(sessionID) : null;
+ let accepted;
+ let actualTier;
+ let actualModel;
+ if (actualInfo && actualInfo.providerID !== "unknown") {
+ actualTier = inferTierFromModel(actualInfo.providerID, actualInfo.modelID);
+ actualModel = `${actualInfo.providerID}/${actualInfo.modelID}`;
+ accepted = actualTier === pending.tier;
+ }
+ appendHistory(projectRoot, {
+ ts: pending.ts,
+ prompt: truncate(pending.prompt, 200),
+ recommendedTier: pending.tier,
+ accepted,
+ actualTier,
+ actualModel,
+ });
+ if (sessionID) {
+ sessionModels.delete(sessionID);
+ }
+ }
+ catch {
+ // Ignore history logging failures.
+ }
+ finally {
+ pending = null;
+ }
+ }
+ },
+ };
+ };
+ export default TaskRouterPlugin;
package/examples/opencode/.opencode/agents/local-worker.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ description: Lightweight agent for simple tasks -- switch to this for trivial work
+ mode: primary
+ model: ollama/qwen3:8b
+ temperature: 0.2
+ ---
+
+ You are a lightweight coding assistant running on a local model.
+ You handle simple, well-defined tasks efficiently without incurring API costs.
+
+ Focus on:
+ - Small code fixes, typos, and formatting
+ - Simple scripts and utility snippets
+ - File renaming and reorganization
+ - Documentation edits and updates
+ - Straightforward questions about code
+ - Generating boilerplate and templates
+
+ Guidelines:
+ - Be concise and direct in your responses.
+ - If a task seems too complex for your capabilities, explicitly suggest
+ the user switch to the build agent with a more capable model by pressing Tab.
+ - Prefer simple, working solutions over clever ones.
package/examples/opencode/.opencode/commands/route.md ADDED
@@ -0,0 +1,7 @@
+ ---
+ description: Analyze a task and recommend the best model/agent to use
+ ---
+
+ Use the route_task tool to analyze the following task and recommend which model and agent I should use for it:
+
+ $ARGUMENTS
package/examples/opencode/opencode.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "$schema": "https://opencode.ai/config.json",
+ "provider": {
+ "ollama": {
+ "npm": "@ai-sdk/openai-compatible",
+ "name": "Ollama (local)",
+ "options": {
+ "baseURL": "http://localhost:11434/v1"
+ },
+ "models": {
+ "qwen3:8b": {
+ "name": "Qwen3 8B (local/free)",
+ "limit": {
+ "context": 32768,
+ "output": 8192
+ }
+ }
+ }
+ }
+ }
+ }
package/package.json ADDED
@@ -0,0 +1,54 @@
+ {
+ "$schema": "https://json.schemastore.org/package.json",
+ "name": "opencode-task-router",
+ "version": "0.1.0",
+ "description": "OpenCode plugin that recommends the right model cost tier for a development task using a local Ollama classifier.",
+ "type": "module",
+ "main": "dist/index.js",
+ "types": "dist/index.d.ts",
+ "exports": {
+ ".": {
+ "import": "./dist/index.js",
+ "types": "./dist/index.d.ts"
+ }
+ },
+ "files": [
+ "dist",
+ "examples",
+ "README.md",
+ "LICENSE"
+ ],
+ "scripts": {
+ "build": "tsc -p tsconfig.json",
+ "test": "npm run test:unit && npm run test:smoke",
+ "test:smoke": "node scripts/smoke-test.mjs",
+ "test:unit": "npm run build && node --test tests/*.test.mjs",
+ "typecheck": "tsc --noEmit",
+ "prepublishOnly": "npm run build"
+ },
+ "keywords": [
+ "opencode",
+ "plugin",
+ "ollama",
+ "routing",
+ "agent",
+ "task-router"
+ ],
+ "repository": {
+ "type": "git",
+ "url": "git+https://github.com/dougritter/opencode-task-router.git"
+ },
+ "bugs": {
+ "url": "https://github.com/dougritter/opencode-task-router/issues"
+ },
+ "homepage": "https://github.com/dougritter/opencode-task-router#readme",
+ "license": "MIT",
+ "peerDependencies": {
+ "@opencode-ai/plugin": ">=1.2.27"
+ },
+ "devDependencies": {
+ "@opencode-ai/plugin": "1.2.27",
+ "@types/node": "^24.3.0",
+ "typescript": "^5.9.2"
+ }
+ }