pi-prompt-template-model 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
# Changelog

## 2025-01-12

**Print Mode Support**

- Commands now work with `pi -p "/command args"` for scripting
- Handler waits for the agent to complete before returning

**Thinking Level Control**

- Added `thinking` frontmatter field to set thinking level per prompt
- Valid levels: `off`, `minimal`, `low`, `medium`, `high`, `xhigh`
- Previous thinking level restored after response (when `restore: true`)
- Thinking level shown in autocomplete: `[sonnet high]`

**Skill Injection**

- Added `skill` frontmatter field to inject skill content into system prompt
- Skills resolved from project (`.pi/skills/`) first, then user (`~/.pi/agent/skills/`)
- Skill content wrapped in `<skill name="...">` tags for clear context
- Fancy TUI display: expandable box shows skill name, path, and truncated content preview

**Subdirectory Support**

- Prompts directory now scanned recursively
- Subdirectories create namespaced commands shown as `(user:subdir)` or `(project:subdir)`
- Example: `~/.pi/agent/prompts/frontend/component.md` → `/component (user:frontend)`

**Documentation**

- Expanded Model Format section with explicit provider selection examples
- Added OpenAI vs OpenAI-Codex distinction (API key vs OAuth)
- Documented auto-selection priority for models on multiple providers
- Updated examples to use latest frontier models

**Initial Release**

- Model switching via `model` frontmatter in prompt templates
- Auto-restore previous model after response (configurable via `restore: false`)
- Provider resolution with priority fallback (anthropic → github-copilot → openrouter)
- Support for explicit `provider/model-id` format
package/LICENSE ADDED
MIT License

Copyright (c) 2026 Nico Bailon

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED
# Prompt Template Model Extension

**Pi prompt templates on steroids.** Adds `model`, `skill`, and `thinking` frontmatter support. Create specialized agent modes that switch to the right model, set thinking level, and inject the right skill, then auto-restore when done.

```
You're using Opus
       │
       ▼
/debug-python ──► Extension detects model + skill
       │
       ▼
Switches to Sonnet ──► Injects tmux skill into system prompt
       │
       ▼
Agent responds with Sonnet + tmux expertise
       │
       ▼
agent_end fires ──► Restores Opus
       │
       ▼
You're back on Opus
```

## Why?

Create switchable agent "modes" with a single slash command. Each mode bundles:

- **The right model** for the task complexity and cost tradeoff
- **The right skill** so the agent knows exactly how to approach it
- **Auto-restore** to your daily driver when done

Instead of manually switching models and hoping the agent picks up on the right skill, you define prompt templates that configure both. `/quick-debug` spins up a cheap fast agent with REPL skills. `/deep-analysis` brings in the heavy hitter with refactoring expertise. Then you're back to your normal setup.

## Installation

```bash
git clone https://github.com/nicobailon/pi-prompt-template-model.git ~/.pi/agent/extensions/pi-prompt-template-model
```

Pi auto-discovers extensions from `~/.pi/agent/extensions/*/index.ts`. Just restart pi.

## Quick Start

Add `model` and optionally `skill` to any prompt template:

```markdown
---
description: Debug Python in tmux REPL
model: claude-sonnet-4-20250514
skill: tmux
---
Start a Python REPL session and help me debug: $@
```

Run `/debug-python some issue` and the agent has:

- Sonnet as the active model
- Full tmux skill instructions already loaded
- Your task ready to go
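Beyond `$@`, templates support positional placeholders: `$1`, `$2`, etc. expand to individual arguments, and `$ARGUMENTS` is an alias for all of them. A hypothetical two-argument template:

```markdown
---
description: Compare two files
model: claude-haiku-4-5
---
Compare $1 with $2 and summarize the differences.
```

Running `/compare a.py b.py` sends "Compare a.py with b.py and summarize the differences." as the prompt.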

## Skills as a Cheat Code

Normally, skills work like this: pi lists available skills in the system prompt, the agent sees your task, decides it needs a skill, and uses the read tool to load it. That's an extra round-trip, and the agent might not always pick the right one.

With the `skill` field, you're forcing it:

```markdown
---
description: Browser testing mode
model: claude-sonnet-4-20250514
skill: surf
---
$@
```

Here `skill: surf` loads `~/.pi/agent/skills/surf/SKILL.md` and injects its content directly into the system prompt before the agent even sees your task. No decision-making, no read tool, just immediate expertise. It's a forcing function for when you know exactly what workflow the agent needs.

## Frontmatter Fields

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| `model` | Yes | - | Model ID or `provider/model-id` |
| `skill` | No | - | Skill name to inject into system prompt |
| `thinking` | No | - | Thinking level: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
| `description` | No | - | Shown in autocomplete |
| `restore` | No | `true` | Restore previous model and thinking level after response |
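Putting the fields together (values illustrative; this assumes a `tmux` skill is installed):

```markdown
---
description: Deep review with tmux expertise
model: anthropic/claude-opus-4-5
skill: tmux
thinking: high
restore: true
---
Review this change set: $@
```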

## Model Format

```yaml
model: claude-sonnet-4-20250514            # Model ID only - auto-selects provider
model: anthropic/claude-sonnet-4-20250514  # Explicit provider/model
```

When you specify just the model ID, the extension picks a provider automatically based on where you have auth configured, preferring: `anthropic` → `github-copilot` → `openrouter`.
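That selection rule can be sketched as a small standalone function. This is illustrative only: `Candidate` and `pickCandidate` are simplified stand-ins, not the extension's actual types.

```typescript
// Simplified stand-in for a model registry entry (illustrative, not pi's real type).
interface Candidate {
  provider: string;
  id: string;
  hasAuth: boolean;
}

// Provider priority used when a bare model ID matches several providers.
const PROVIDER_PRIORITY = ["anthropic", "github-copilot", "openrouter"];

function pickCandidate(matches: Candidate[]): Candidate | undefined {
  // Only providers with auth configured are eligible.
  const available = matches.filter((m) => m.hasAuth);
  for (const provider of PROVIDER_PRIORITY) {
    const hit = available.find((m) => m.provider === provider);
    if (hit) return hit;
  }
  // Authed but no preferred provider: take the first available.
  return available[0];
}
```

In the real extension, a bare ID with no authenticated provider produces an "ambiguous model" error listing the options; the sketch simply returns `undefined`.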

For explicit control:

```yaml
model: anthropic/claude-opus-4-5       # Direct Anthropic API
model: github-copilot/claude-opus-4-5  # Via Copilot subscription
model: openrouter/claude-opus-4-5      # Via OpenRouter
model: openai/gpt-5.2                  # Direct OpenAI API
model: openai-codex/gpt-5.2            # Via Codex subscription (OAuth)
```

## Skill Resolution

The `skill` field matches the skill's directory name:

```yaml
skill: tmux
```

Resolves to (checked in order):

1. `<cwd>/.pi/skills/tmux/SKILL.md` (project)
2. `~/.pi/agent/skills/tmux/SKILL.md` (user)

This matches pi's precedence: project skills override user skills.
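The lookup order can be expressed as a pure function. This sketch mirrors the documented order; the `exists` predicate is injected here so the rule is visible without touching the filesystem (the extension itself checks the real paths with `existsSync`).

```typescript
import { join, resolve } from "node:path";

// Sketch of the skill lookup order: project wins, user is the fallback.
// `home` and `exists` are parameters here purely for illustration/testing.
function resolveSkillPath(
  skillName: string,
  cwd: string,
  home: string,
  exists: (path: string) => boolean,
): string | undefined {
  const projectPath = resolve(cwd, ".pi", "skills", skillName, "SKILL.md");
  if (exists(projectPath)) return projectPath; // project skill wins
  const userPath = join(home, ".pi", "agent", "skills", skillName, "SKILL.md");
  if (exists(userPath)) return userPath; // user skill as fallback
  return undefined;
}
```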

## Subdirectories

Organize prompts in subdirectories for namespacing:

```
~/.pi/agent/prompts/
├── quick.md            → /quick (user)
├── debug-python.md     → /debug-python (user)
└── frontend/
    ├── component.md    → /component (user:frontend)
    └── hook.md         → /hook (user:frontend)
```

The subdirectory shows in autocomplete as the source label. Note: command names are based on the filename only, so avoid duplicate filenames across subdirectories (e.g., `quick.md` and `frontend/quick.md` would collide).

## Examples

**Cost optimization** - use Haiku for simple summarization:

```markdown
---
description: Save progress doc for handoff
model: claude-haiku-4-5
---
Create a progress document that captures everything needed for another
engineer to continue this work. Save to ~/Documents/docs/...
```

**Skill injection** - guarantee the agent has REPL expertise:

```markdown
---
description: Python debugging session
model: claude-sonnet-4-20250514
skill: tmux
---
Start a Python REPL and help me debug: $@
```

**Browser automation** - pair the surf skill with a capable model:

```markdown
---
description: Test user flow in browser
model: claude-sonnet-4-20250514
skill: surf
---
Test this user flow: $@
```

**Deep thinking** - max thinking for complex analysis:

```markdown
---
description: Deep code analysis with extended thinking
model: claude-sonnet-4-20250514
thinking: high
---
Analyze this code thoroughly, considering edge cases and potential issues: $@
```

**Mode switching** - stay on the new model:

```markdown
---
description: Switch to Haiku for this session
model: claude-haiku-4-5
restore: false
---
Switched to Haiku. How can I help?
```

## Autocomplete Display

Commands show model, thinking level, and skill in the description:

```
/debug-python     Debug Python session    [sonnet +tmux]  (user)
/deep-analysis    Deep code analysis      [sonnet high]   (user)
/component        Create React component  [sonnet]        (user:frontend)
/quick            Quick answer            [haiku]         (user)
```

## Print Mode (`pi -p`)

These commands work in print mode too:

```bash
pi -p "/debug-python my code crashes on line 42"
```

The model switches, the skill injects, the agent responds, and the output prints to stdout. Useful for scripting or piping to other tools.

## Limitations

- Templates are discovered at startup; restart pi after adding or modifying them.
- Model restore state is held in memory; closing pi mid-response loses it.
package/index.ts ADDED
/**
 * Prompt Model Extension
 *
 * Adds support for `model`, `skill`, and `thinking` frontmatter in prompt template .md files.
 * Create specialized agent modes that switch to the right model, set thinking level,
 * and inject the right skill, then auto-restore when done.
 *
 *   You're using Opus
 *        │
 *        ▼
 *   /debug-python ──► Extension detects model + skill frontmatter
 *        │
 *        ▼
 *   Switches to Sonnet ──► Stores "Opus" as previous model
 *        │
 *        ▼
 *   before_agent_start ──► Injects tmux skill into system prompt
 *        │
 *        ▼
 *   Agent responds with Sonnet + tmux expertise
 *        │
 *        ▼
 *   agent_end fires ──► Restores Opus ──► Shows "Restored to opus" notif
 *        │
 *        ▼
 *   You're back on Opus
 *
 * Prompt template locations:
 * - ~/.pi/agent/prompts/**\/*.md (global, recursive)
 * - <cwd>/.pi/prompts/**\/*.md (project-local, recursive)
 *
 * Skill locations (checked in order):
 * - <cwd>/.pi/skills/{name}/SKILL.md (project)
 * - ~/.pi/agent/skills/{name}/SKILL.md (user)
 *
 * Example prompt file (e.g., ~/.pi/agent/prompts/debug-python.md):
 * ```markdown
 * ---
 * description: Debug Python in tmux REPL
 * model: claude-sonnet-4-20250514
 * skill: tmux
 * ---
 * Start a Python REPL session and help me debug: $@
 * ```
 *
 * Frontmatter fields:
 * - `description`: Description shown in autocomplete (standard)
 * - `model`: Model ID (e.g., "claude-sonnet-4-20250514") or full "provider/model-id"
 * - `skill`: Skill name to inject into system prompt (e.g., "tmux")
 * - `thinking`: Thinking level (off, minimal, low, medium, high, xhigh)
 * - `restore`: Whether to restore the previous model/thinking after response (default: true)
 *
 * Usage:
 * - `/debug-python my code is broken` - switches model, injects skill, runs prompt
 *
 * Notes:
 * - Templates without `model` frontmatter work normally (handled by pi core)
 * - Skills are injected via the before_agent_start hook into the system prompt
 * - Subdirectories create namespaced commands shown as (user:subdir) or (project:subdir)
 */

import { existsSync, readdirSync, readFileSync, statSync } from "node:fs";
import { homedir } from "node:os";
import { join, resolve } from "node:path";
import type { Model } from "@mariozechner/pi-ai";
import type { ExtensionAPI, ExtensionContext, MessageRenderOptions } from "@mariozechner/pi-coding-agent";
import type { ThinkingLevel } from "@mariozechner/pi-agent-core";
import { Box, Text, Spacer, Container } from "@mariozechner/pi-tui";
import type { Theme } from "@mariozechner/pi-coding-agent";

const VALID_THINKING_LEVELS = ["off", "minimal", "low", "medium", "high", "xhigh"] as const;

interface PromptWithModel {
	name: string;
	description: string;
	content: string;
	model: string;
	restore: boolean;
	skill?: string;
	thinking?: ThinkingLevel;
	source: "user" | "project";
	subdir?: string;
}

/**
 * Parse YAML frontmatter from markdown content.
 */
function parseFrontmatter(content: string): { frontmatter: Record<string, string>; content: string } {
	const frontmatter: Record<string, string> = {};
	const normalized = content.replace(/\r\n/g, "\n");

	// Require the opening delimiter to be exactly "---" on its own line.
	if (!normalized.startsWith("---\n")) {
		return { frontmatter, content: normalized };
	}

	const endIndex = normalized.indexOf("\n---", 3);
	if (endIndex === -1) {
		return { frontmatter, content: normalized };
	}

	const frontmatterBlock = normalized.slice(4, endIndex);
	const body = normalized.slice(endIndex + 4).trim();

	for (const line of frontmatterBlock.split("\n")) {
		const match = line.match(/^([\w-]+):\s*(.*)$/);
		if (match) {
			let value = match[2].trim();
			if ((value.startsWith('"') && value.endsWith('"')) || (value.startsWith("'") && value.endsWith("'"))) {
				value = value.slice(1, -1);
			}
			frontmatter[match[1]] = value;
		}
	}

	return { frontmatter, content: body };
}

/**
 * Parse command arguments respecting quoted strings.
 */
function parseCommandArgs(argsString: string): string[] {
	const args: string[] = [];
	let current = "";
	let inQuote: string | null = null;

	for (let i = 0; i < argsString.length; i++) {
		const char = argsString[i];

		if (inQuote) {
			if (char === inQuote) {
				inQuote = null;
			} else {
				current += char;
			}
		} else if (char === '"' || char === "'") {
			inQuote = char;
		} else if (char === " " || char === "\t") {
			if (current) {
				args.push(current);
				current = "";
			}
		} else {
			current += char;
		}
	}

	if (current) {
		args.push(current);
	}

	return args;
}

/**
 * Substitute argument placeholders in template content.
 */
function substituteArgs(content: string, args: string[]): string {
	let result = content;

	// Replace $1, $2, etc. with positional args
	result = result.replace(/\$(\d+)/g, (_, num) => {
		const index = parseInt(num, 10) - 1;
		return args[index] ?? "";
	});

	const allArgs = args.join(" ");

	// Replace $ARGUMENTS and $@ with all args. Use replacer functions so `$`
	// sequences inside the arguments aren't interpreted as replacement patterns.
	result = result.replace(/\$ARGUMENTS/g, () => allArgs);
	result = result.replace(/\$@/g, () => allArgs);

	return result;
}

/**
 * Resolve skill path from name. Checks project first, then user.
 */
function resolveSkillPath(skillName: string, cwd: string): string | undefined {
	// Project skills first
	const projectPath = resolve(cwd, ".pi", "skills", skillName, "SKILL.md");
	if (existsSync(projectPath)) return projectPath;

	// Fall back to user skills
	const userPath = join(homedir(), ".pi", "agent", "skills", skillName, "SKILL.md");
	if (existsSync(userPath)) return userPath;

	return undefined;
}

/**
 * Read skill content, stripping frontmatter.
 */
function readSkillContent(skillPath: string): string | undefined {
	try {
		const raw = readFileSync(skillPath, "utf-8");
		const { content } = parseFrontmatter(raw);
		return content;
	} catch {
		return undefined;
	}
}

/**
 * Load prompt templates that have model frontmatter from a directory.
 * Recursively scans subdirectories.
 */
function loadPromptsWithModelFromDir(
	dir: string,
	source: "user" | "project",
	subdir = ""
): PromptWithModel[] {
	const prompts: PromptWithModel[] = [];

	if (!existsSync(dir)) {
		return prompts;
	}

	try {
		const entries = readdirSync(dir, { withFileTypes: true });

		for (const entry of entries) {
			const fullPath = join(dir, entry.name);

			// Handle symlinks
			let isFile = entry.isFile();
			let isDirectory = entry.isDirectory();
			if (entry.isSymbolicLink()) {
				try {
					const stats = statSync(fullPath);
					isFile = stats.isFile();
					isDirectory = stats.isDirectory();
				} catch {
					continue;
				}
			}

			// Recurse into subdirectories
			if (isDirectory) {
				const newSubdir = subdir ? `${subdir}:${entry.name}` : entry.name;
				prompts.push(...loadPromptsWithModelFromDir(fullPath, source, newSubdir));
				continue;
			}

			if (!isFile || !entry.name.endsWith(".md")) continue;

			try {
				const rawContent = readFileSync(fullPath, "utf-8");
				const { frontmatter, content: body } = parseFrontmatter(rawContent);

				// Only include templates that have a model field
				if (!frontmatter.model) continue;

				const name = entry.name.slice(0, -3); // Remove .md

				// Parse restore field (default: true)
				const restore = frontmatter.restore?.toLowerCase() !== "false";

				// Parse thinking level if valid
				const thinkingRaw = frontmatter.thinking?.toLowerCase();
				const validThinking = thinkingRaw && (VALID_THINKING_LEVELS as readonly string[]).includes(thinkingRaw)
					? thinkingRaw as ThinkingLevel
					: undefined;

				prompts.push({
					name,
					description: frontmatter.description || "",
					content: body,
					model: frontmatter.model,
					restore,
					skill: frontmatter.skill || undefined,
					thinking: validThinking,
					source,
					subdir: subdir || undefined,
				});
			} catch {
				// Skip files that can't be read or parsed
			}
		}
	} catch {
		// Skip directories that can't be read
	}

	return prompts;
}

/**
 * Load all prompt templates with model frontmatter.
 * Project templates override global templates with the same name.
 */
function loadPromptsWithModel(cwd: string): Map<string, PromptWithModel> {
	const globalDir = join(homedir(), ".pi", "agent", "prompts");
	const projectDir = resolve(cwd, ".pi", "prompts");

	const promptMap = new Map<string, PromptWithModel>();

	// Load global first
	for (const prompt of loadPromptsWithModelFromDir(globalDir, "user")) {
		promptMap.set(prompt.name, prompt);
	}

	// Project overrides global
	for (const prompt of loadPromptsWithModelFromDir(projectDir, "project")) {
		promptMap.set(prompt.name, prompt);
	}

	return promptMap;
}

/** Details for skill-loaded custom message */
interface SkillLoadedDetails {
	skillName: string;
	skillContent: string;
	skillPath: string;
}

/** Max lines to show when collapsed */
const SKILL_PREVIEW_LINES = 5;

/**
 * Render the skill-loaded message with expandable content
 */
function renderSkillLoaded(
	message: { details?: SkillLoadedDetails },
	options: MessageRenderOptions,
	theme: Theme
) {
	const { skillName, skillContent, skillPath } = message.details!;
	const container = new Container();

	container.addChild(new Spacer(1));

	const box = new Box(1, 1, (t: string) => theme.bg("toolSuccessBg", t));

	// Header with skill name
	const header = theme.fg("toolTitle", theme.bold(`⚡ Skill loaded: ${skillName}`));
	box.addChild(new Text(header, 0, 0));

	// Show path in muted color
	const pathLine = theme.fg("toolOutput", `  ${skillPath}`);
	box.addChild(new Text(pathLine, 0, 0));
	box.addChild(new Spacer(1));

	// Content preview or full content
	const lines = skillContent.split("\n");

	if (options.expanded) {
		// Show full content
		const content = lines.map(line => theme.fg("toolOutput", line)).join("\n");
		box.addChild(new Text(content, 0, 0));
	} else {
		// Show truncated preview
		const previewLines = lines.slice(0, SKILL_PREVIEW_LINES);
		const remaining = lines.length - SKILL_PREVIEW_LINES;

		const preview = previewLines.map(line => theme.fg("toolOutput", line)).join("\n");
		box.addChild(new Text(preview, 0, 0));

		if (remaining > 0) {
			box.addChild(new Text(theme.fg("warning", `\n... (${remaining} more lines)`), 0, 0));
		}
	}

	container.addChild(box);
	return container;
}

export default function promptModelExtension(pi: ExtensionAPI) {
	let prompts = new Map<string, PromptWithModel>();
	let previousModel: Model<any> | undefined;
	let previousThinking: ThinkingLevel | undefined;
	let pendingSkill: { name: string; cwd: string } | undefined;

	// Register custom message renderer for skill-loaded messages
	pi.registerMessageRenderer<SkillLoadedDetails>("skill-loaded", renderSkillLoaded);

	/**
	 * Find and resolve a model from "provider/model-id" or just "model-id".
	 * If no provider is specified, searches all models by ID.
	 * Prefers models with auth, then by provider priority: anthropic > github-copilot > openrouter.
	 */
	function resolveModel(modelSpec: string, ctx: ExtensionContext): Model<any> | undefined {
		const slashIndex = modelSpec.indexOf("/");

		if (slashIndex !== -1) {
			// Has provider: use exact match
			const provider = modelSpec.slice(0, slashIndex);
			const modelId = modelSpec.slice(slashIndex + 1);

			if (!provider || !modelId) {
				ctx.ui.notify(`Invalid model format "${modelSpec}". Expected "provider/model-id"`, "error");
				return undefined;
			}

			const model = ctx.modelRegistry.find(provider, modelId);
			if (!model) {
				ctx.ui.notify(`Model "${modelSpec}" not found`, "error");
				return undefined;
			}
			return model;
		}

		// No provider: search all models by ID
		const allMatches = ctx.modelRegistry.getAll().filter((m) => m.id === modelSpec);

		if (allMatches.length === 0) {
			ctx.ui.notify(`Model "${modelSpec}" not found`, "error");
			return undefined;
		}

		if (allMatches.length === 1) {
			return allMatches[0];
		}

		// Multiple matches - prefer models with auth configured
		const availableMatches = ctx.modelRegistry.getAvailable().filter((m) => m.id === modelSpec);

		if (availableMatches.length === 1) {
			return availableMatches[0];
		}

		if (availableMatches.length > 1) {
			// Multiple with auth - prefer by provider priority
			const preferredProviders = ["anthropic", "github-copilot", "openrouter"];
			for (const provider of preferredProviders) {
				const preferred = availableMatches.find((m) => m.provider === provider);
				if (preferred) {
					return preferred;
				}
			}
			// No preferred provider found, use first available
			return availableMatches[0];
		}

		// No matches with auth - show all options
		const options = allMatches.map((m) => `${m.provider}/${m.id}`).join(", ");
		ctx.ui.notify(`Ambiguous model "${modelSpec}". Options: ${options}`, "error");
		return undefined;
	}

	// Reload prompts on session start (in case cwd changed)
	pi.on("session_start", async (_event, ctx) => {
		prompts = loadPromptsWithModel(ctx.cwd);
	});

	// Inject skill into system prompt before agent starts
	pi.on("before_agent_start", async (event, ctx) => {
		if (!pendingSkill) {
			return;
		}

		const { name: skillName, cwd } = pendingSkill;
		pendingSkill = undefined;

		const skillPath = resolveSkillPath(skillName, cwd);
		if (!skillPath) {
			ctx.ui.notify(`Skill "${skillName}" not found`, "error");
			return;
		}

		const skillContent = readSkillContent(skillPath);
		if (skillContent === undefined) {
			ctx.ui.notify(`Failed to read skill "${skillName}"`, "error");
			return;
		}

		// Send a custom message to display the skill loaded notification
		pi.sendMessage<SkillLoadedDetails>({
			customType: "skill-loaded",
			content: `Loaded skill: ${skillName}`,
			display: true,
			details: {
				skillName,
				skillContent,
				skillPath,
			},
		});

		// Append skill to system prompt wrapped in <skill> tags
		return {
			systemPrompt: `${event.systemPrompt}\n\n<skill name="${skillName}">\n${skillContent}\n</skill>`,
		};
	});

	// Restore model and thinking level after the agent finishes responding
	pi.on("agent_end", async (_event, ctx) => {
		const restoredParts: string[] = [];

		if (previousModel) {
			restoredParts.push(previousModel.id);
			await pi.setModel(previousModel);
			previousModel = undefined;
		}

		if (previousThinking !== undefined) {
			restoredParts.push(`thinking:${previousThinking}`);
			pi.setThinkingLevel(previousThinking);
			previousThinking = undefined;
		}

		if (restoredParts.length > 0) {
			ctx.ui.notify(`Restored to ${restoredParts.join(", ")}`, "info");
		}
	});

	// Initialize: register commands for prompts with model frontmatter
	const initialCwd = process.cwd();
	const initialPrompts = loadPromptsWithModel(initialCwd);

	for (const [name, prompt] of initialPrompts) {
		// Build source label with subdir namespace
		let sourceLabel: string;
		if (prompt.subdir) {
			sourceLabel = `(${prompt.source}:${prompt.subdir})`;
		} else {
			sourceLabel = `(${prompt.source})`;
		}

		// Build model label (short form)
		const modelLabel = prompt.model.split("/").pop() || prompt.model;

		// Build skill label if present
		const skillLabel = prompt.skill ? ` +${prompt.skill}` : "";

		// Build thinking label if present
		const thinkingLabel = prompt.thinking ? ` ${prompt.thinking}` : "";

		pi.registerCommand(name, {
			description: prompt.description
				? `${prompt.description} [${modelLabel}${thinkingLabel}${skillLabel}] ${sourceLabel}`
				: `[${modelLabel}${thinkingLabel}${skillLabel}] ${sourceLabel}`,

			handler: async (args, ctx) => {
				// Re-fetch the prompt in case it was updated
				const currentPrompt = prompts.get(name);
				if (!currentPrompt) {
					ctx.ui.notify(`Prompt "${name}" no longer exists`, "error");
					return;
				}

				// Resolve the model
				const model = resolveModel(currentPrompt.model, ctx);
				if (!model) return;

				// Check if we're already on the target model (skip switch)
				const alreadyOnTargetModel = ctx.model?.provider === model.provider && ctx.model?.id === model.id;

				if (!alreadyOnTargetModel) {
					// Store previous model to restore after response (only if restore is enabled)
					if (currentPrompt.restore) {
						previousModel = ctx.model;
					}

					// Switch to the specified model
					const success = await pi.setModel(model);
					if (!success) {
						ctx.ui.notify(`No API key for model "${currentPrompt.model}"`, "error");
						previousModel = undefined;
						return;
					}
				}

				// Set thinking level if specified
				if (currentPrompt.thinking) {
					const currentThinking = pi.getThinkingLevel();
					if (currentThinking !== currentPrompt.thinking) {
						if (currentPrompt.restore) {
							previousThinking = currentThinking;
						}
						pi.setThinkingLevel(currentPrompt.thinking);
					}
				}

				// Set pending skill for before_agent_start handler
				if (currentPrompt.skill) {
					pendingSkill = { name: currentPrompt.skill, cwd: ctx.cwd };
				}

				// Expand the template with arguments
				const parsedArgs = parseCommandArgs(args);
				const expandedContent = substituteArgs(currentPrompt.content, parsedArgs);

				// Send the expanded prompt as a user message
				pi.sendUserMessage(expandedContent);

				// Wait for agent to start processing, then wait for it to finish
				// (required for print mode, harmless in interactive). `r` avoids
				// shadowing the `resolve` import from node:path.
				while (ctx.isIdle()) {
					await new Promise((r) => setTimeout(r, 10));
				}
				await ctx.waitForIdle();
			},
		});
	}
}
package/package.json ADDED
{
	"name": "pi-prompt-template-model",
	"version": "0.1.0",
	"description": "Prompt template model selector extension for pi coding agent",
	"author": "Nico Bailon",
	"license": "MIT",
	"repository": {
		"type": "git",
		"url": "git+https://github.com/nicobailon/pi-prompt-template-model.git"
	},
	"keywords": [
		"pi-package",
		"pi",
		"coding-agent",
		"prompt-template",
		"model-selector",
		"extension"
	],
	"files": [
		"index.ts",
		"README.md",
		"CHANGELOG.md",
		"LICENSE"
	],
	"pi": {
		"extensions": [
			"./index.ts"
		]
	}
}