@kodrunhq/opencode-autopilot 1.1.1 → 1.1.3

@@ -5,41 +5,84 @@ agent: autopilot
  Help the user configure opencode-autopilot by walking through the model
  assignment process interactively.
 
- Start by calling the oc_configure tool with subcommand "start". This returns:
- - Available models grouped by provider (from the user's OpenCode config)
- - The 8 agent groups with descriptions and recommendations
- - Current assignments if reconfiguring
- - Adversarial diversity rules
+ ## Step 1: Discover models and show the list
 
- Then walk through each of the 8 agent groups in order (architects first,
- utilities last), explaining for each:
+ Call the oc_configure tool with subcommand "start". The response contains:
+ - `displayText`: a pre-formatted numbered list of ALL available models.
+ **Show this to the user VERBATIM. Do not summarize, truncate, or reformat it.**
+ - `modelIndex`: a map of number -> model ID (e.g. {"1": "anthropic/claude-opus-4-6"})
+ - `groups`: the 8 agent groups with descriptions and recommendations
+ - `currentConfig`: existing assignments if reconfiguring
+ - `diversityRules`: adversarial diversity constraints
 
- 1. What the group does and which agents belong to it
- 2. The model tier recommendation
+ Print `displayText` exactly as returned. This is the complete model list
+ with instructions. Do not add, remove, or reorder entries.
+
+ ## Step 2: Walk through each group
+
+ For each of the 8 groups (architects first, utilities last):
+
+ 1. Explain what the group does and which agents belong to it
+ 2. Show the tier recommendation
  3. For adversarial groups (challengers, reviewers, red-team): explain WHY
- model diversity matters and which group they're adversarial to
- 4. List available models from the user's providers
+ model diversity matters and which group they are adversarial to
+ 4. Re-print `displayText` so the user can see the numbered list
+
+ Then ask:
+
+ ```
+ Enter model numbers for [Group Name], separated by commas (e.g. 1,4,7):
+ First = primary, rest = fallbacks in order.
+ ```
+
+ ### Parsing the user's response
 
- For each group, present the available models as a numbered list:
+ - Numbers like "1,4,7": look up each in `modelIndex` to get model IDs
+ - Model IDs typed directly (e.g. "anthropic/claude-opus-4-6"): use as-is
+ - Single number (e.g. "1"): primary only, no fallbacks
 
- Example format:
- Primary model for Architects:
- 1. anthropic/claude-opus-4-6
- 2. openai/gpt-5.4
- 3. google/gemini-3.1-pro
+ The FIRST model is the primary. All subsequent models are fallbacks,
+ tried in sequence when the primary is rate-limited or fails.
 
- Pick a number (or type a model ID):
+ Call oc_configure with subcommand "assign":
+ - `group`: the group ID
+ - `primary`: first model from the user's list
+ - `fallbacks`: remaining models as comma-separated string
 
- If the user sends just a number, map it to the corresponding model.
- Then ask for optional fallbacks the same way (1-3 fallback models).
- Call oc_configure with subcommand "assign" for each group.
+ ### Diversity warnings
 
- If the assign response contains diversityWarnings, explain them
+ If the assign response contains `diversityWarnings`, explain them
  conversationally. Strong warnings should be highlighted — the user can
  still proceed, but make the quality trade-off clear.
 
- After all 8 groups are assigned, call oc_configure with subcommand "commit"
- to persist the configuration.
+ Example: "Heads up: Architects and Challengers both use Claude models.
+ Challengers are supposed to critique Architect decisions — using the same
+ model family means you get confirmation bias instead of genuine challenge.
+ Consider picking a different family for one of them. Continue anyway?"
+
+ ## Step 3: Commit and verify
+
+ After all 8 groups are assigned, call oc_configure with subcommand "commit".
+
+ Then call oc_configure with subcommand "doctor" to verify health.
+
+ Show a final summary table:
+
+ ```
+ Group          | Primary                      | Fallbacks
+ ───────────────┼──────────────────────────────┼──────────────────────────
+ Architects     | anthropic/claude-opus-4-6    | openai/gpt-5.4
+ Challengers    | openai/gpt-5.4               | google/gemini-3.1-pro
+ ...
+ ```
+
+ ## Rules
 
- End by showing the final summary table and running the "doctor" subcommand
- to verify everything is healthy.
+ - ALWAYS show `displayText` VERBATIM; never summarize or truncate the model list.
+ - ALWAYS re-print `displayText` before asking for each group's selection.
+ - ALWAYS ask for comma-separated numbers (ordered list, not just one pick).
+ - NEVER pre-select models for the user. They choose from the full list.
+ - NEVER skip fallback collection. Emphasize: more fallbacks = more resilience.
+ - If the user says "pick for me" or "use defaults", THEN you may suggest
+ assignments based on the tier recommendations and diversity rules, but
+ still show what you picked and ask for confirmation.
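
The selection-parsing rules above can be sketched in TypeScript. `parseSelection` below is a hypothetical helper, not part of the package; it only assumes the `modelIndex` shape (`{"1": "provider/model", ...}`) described in the "start" response.

```typescript
// Hypothetical helper (not part of the package) implementing the parsing
// rules above; assumes modelIndex has the { "1": "provider/model" } shape
// returned by the "start" subcommand.
type Selection = { primary: string; fallbacks: string[] };

function parseSelection(
  input: string,
  modelIndex: Record<string, string>,
): Selection | null {
  const tokens = input
    .split(",")
    .map((t) => t.trim())
    .filter((t) => t.length > 0);
  if (tokens.length === 0) return null;

  const resolved: string[] = [];
  for (const token of tokens) {
    if (/^\d+$/.test(token)) {
      const id = modelIndex[token];
      if (id === undefined) return null; // number outside the listed range
      resolved.push(id);
    } else {
      resolved.push(token); // a model ID typed directly is used as-is
    }
  }
  // First entry is the primary; the rest are ordered fallbacks
  return { primary: resolved[0], fallbacks: resolved.slice(1) };
}
```

The result maps directly onto the "assign" payload sketched above: `primary` as-is, and `fallbacks.join(",")` for the comma-separated fallback string.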
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@kodrunhq/opencode-autopilot",
- "version": "1.1.1",
+ "version": "1.1.3",
  "description": "Curated agents, skills, and commands for the OpenCode AI coding CLI — autonomous orchestrator, multi-agent code review, model fallback, and in-session asset creation tools.",
  "main": "src/index.ts",
  "keywords": [
@@ -117,6 +117,37 @@ function serializeDiversityWarnings(warnings: readonly DiversityWarning[]): read
   }));
 }
 
+ /**
+  * Build a flat numbered list of all available models and an index map.
+  * Returns { numberedList: "1. provider/model\n2. ...", indexMap: { "1": "provider/model", ... } }
+  */
+ function buildNumberedModelList(modelsByProvider: Map<string, string[]>): {
+   numberedList: string;
+   indexMap: Record<string, string>;
+   totalCount: number;
+ } {
+   const allModels: string[] = [];
+   for (const models of modelsByProvider.values()) {
+     allModels.push(...models);
+   }
+   // Sort alphabetically for stable ordering
+   allModels.sort();
+
+   const indexMap: Record<string, string> = {};
+   const lines: string[] = [];
+   for (let i = 0; i < allModels.length; i++) {
+     const num = String(i + 1);
+     indexMap[num] = allModels[i];
+     lines.push(` ${num}. ${allModels[i]}`);
+   }
+
+   return {
+     numberedList: lines.join("\n"),
+     indexMap,
+     totalCount: allModels.length,
+   };
+ }
+
 async function handleStart(configPath?: string): Promise<string> {
   // Wait for background provider discovery (up to 5s) before building model list
   await Promise.race([
@@ -125,6 +156,7 @@ async function handleStart(configPath?: string): Promise<string> {
   ]);
 
   const modelsByProvider = discoverAvailableModels();
+  const { numberedList, indexMap, totalCount } = buildNumberedModelList(modelsByProvider);
 
   // Load current plugin config to show existing assignments
   const currentConfig = await loadConfig(configPath);
@@ -148,10 +180,29 @@ async function handleStart(configPath?: string): Promise<string> {
     };
   });
 
+  // Pre-formatted text the LLM should show verbatim — avoids summarization
+  const displayText =
+    totalCount > 0
+      ? [
+          `Available models (${totalCount} total):`,
+          numberedList,
+          "",
+          "For each group below, enter model numbers separated by commas (e.g. 1,4,7).",
+          "First number = primary model. Remaining = fallbacks tried in order.",
+          "More fallbacks = more resilience when a model is rate-limited.",
+        ].join("\n")
+      : [
+          "No models were discovered from your providers.",
+          "Run `opencode models` in your terminal to see available models,",
+          "then type model IDs manually (e.g. anthropic/claude-opus-4-6).",
+        ].join("\n");
+
   return JSON.stringify({
     action: "configure",
     stage: "start",
     availableModels: Object.fromEntries(modelsByProvider),
+    modelIndex: indexMap,
+    displayText,
     groups,
     currentConfig: currentConfig
       ? { configured: currentConfig.configured, groups: currentConfig.groups }
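
For reference, the `buildNumberedModelList` logic added in this hunk can be exercised standalone. The snippet below restates the diff's function outside the module, only to make the numbering behavior easy to see: sorting is global across all providers, not per provider, so entry numbers are stable regardless of provider iteration order.

```typescript
// Standalone restatement of buildNumberedModelList from the diff above,
// included here only to illustrate the output shape; the real function
// lives in the package source.
function buildNumberedModelList(modelsByProvider: Map<string, string[]>): {
  numberedList: string;
  indexMap: Record<string, string>;
  totalCount: number;
} {
  const allModels: string[] = [];
  for (const models of modelsByProvider.values()) {
    allModels.push(...models);
  }
  // Sort alphabetically for stable ordering across providers
  allModels.sort();

  const indexMap: Record<string, string> = {};
  const lines: string[] = [];
  for (let i = 0; i < allModels.length; i++) {
    const num = String(i + 1);
    indexMap[num] = allModels[i];
    lines.push(` ${num}. ${allModels[i]}`);
  }

  return { numberedList: lines.join("\n"), indexMap, totalCount: allModels.length };
}

const result = buildNumberedModelList(
  new Map([
    ["openai", ["openai/gpt-5.4"]],
    ["anthropic", ["anthropic/claude-opus-4-6"]],
  ]),
);
// "anthropic/..." sorts before "openai/...", so it becomes entry "1"
// even though the openai provider was inserted into the Map first.
```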