pi-prompt-template-model 0.6.1 → 0.6.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,14 @@
2
2
 
3
3
  ## [Unreleased]
4
4
 
5
+ ## [0.6.2] - 2026-03-20
6
+
7
+ ### Added
8
+ - Added delegated-subprocess working directory controls via `cwd` frontmatter (for `subagent` prompts and chain-template defaults) plus runtime `--cwd=<path>` overrides.
9
+
10
+ ### Fixed
11
+ - Rewrote README for clarity: fixed default subagent name (`delegate`, not `worker`), corrected provider priority to include `openai-codex`, merged broken frontmatter table into grouped sections with readable descriptions, cut redundant examples, and tightened prose throughout.
12
+
5
13
  ## [0.6.1] - 2026-03-20
6
14
 
7
15
  ### Added
package/README.md CHANGED
@@ -4,40 +4,19 @@
4
4
 
5
5
  # Prompt Template Model Extension
6
6
 
7
- **Pi prompt templates on steroids.** Adds `model`, `skill`, and `thinking` frontmatter support. Create specialized agent modes that switch to the right model, set thinking level, and inject the right skill, then auto-restore when done.
8
-
9
- ```
10
- ┌─────────────────────────────────────────────────────────────────────────────┐
11
- │ │
12
- You're using Opus │
13
- │ │ │
14
- │ ▼ │
15
- │ /debug-python ──► Extension detects model + skill │
16
- │ │ │
17
- │ ▼ │
18
- │ Switches to Sonnet ──► Queues tmux skill context for next turn │
19
- │ │ │
20
- │ ▼ │
21
- │ Agent responds with Sonnet + tmux expertise │
22
- │ │ │
23
- │ ▼ │
24
- │ agent_end fires ──► Restores Opus │
25
- │ │ │
26
- │ ▼ │
27
- │ You're back on Opus │
28
- │ │
29
- └─────────────────────────────────────────────────────────────────────────────┘
7
+ Adds `model`, `skill`, and `thinking` frontmatter to pi prompt templates. Define slash commands that switch to the right model, set a thinking level, inject skill context, and auto-restore your session when done.
8
+
9
+ ```
10
+ /debug-python my code crashes
11
+ → switches to Sonnet, loads tmux skill, agent responds
12
+ restores your previous model when finished
30
13
  ```
31
14
 
32
15
  ## Why?
33
16
 
34
- Create switchable agent "modes" with a single slash command. Each mode bundles:
17
+ Each prompt template becomes a self-contained agent mode. `/quick-debug` spins up a cheap model with REPL skills. `/deep-analysis` brings in extended thinking with refactoring expertise. When the command finishes, you're back to your daily driver without touching anything.
35
18
 
36
- - **The right model** for the task complexity and cost tradeoff
37
- - **The right skill** so the agent knows exactly how to approach it
38
- - **Auto-restore** to your daily driver when done
39
-
40
- Instead of manually switching models and hoping the agent picks up on the right skill, you define prompt templates that configure both. `/quick-debug` spins up a cheap fast agent with REPL skills. `/deep-analysis` brings in the heavy hitter with refactoring expertise. Then you're back to your normal setup.
19
+ No more manually switching models, no hoping the agent picks up on the right skill. You define the configuration once, and the slash command handles the rest.
41
20
 
42
21
  ## Installation
43
22
 
@@ -53,11 +32,11 @@ For delegated subagent execution (`subagent` and `inheritContext` frontmatter),
53
32
  pi install npm:pi-subagents
54
33
  ```
55
34
 
56
- pi-subagents is optional — everything else works without it. If you use `subagent: true` in a prompt template without pi-subagents installed, execution fails fast with a clear error.
35
+ pi-subagents is optional — everything else works without it. Using `subagent: true` without it installed fails fast with a clear error.
57
36
 
58
37
  ## Quick Start
59
38
 
60
- Add `model` (or omit it to inherit the current session model) and optionally `skill` to any prompt template:
39
+ Add `model` and optionally `skill` to any prompt template:
61
40
 
62
41
  ```markdown
63
42
  ---
@@ -68,124 +47,111 @@ skill: tmux
68
47
  Start a Python REPL session and help me debug: $@
69
48
  ```
70
49
 
71
- Run `/debug-python some issue` and the agent has:
72
- - Sonnet as the active model
73
- - Full tmux skill instructions already loaded
74
- - Your task ready to go
75
-
76
- ## Skills as a Cheat Code
50
+ Run `/debug-python some issue` and the agent switches to Sonnet, receives the tmux skill as context, and starts working. When it finishes, your previous model is restored.
77
51
 
78
- Normally, skills work like this: pi lists available skills in the system prompt, the agent sees your task, decides it needs a skill, and uses the read tool to load it. That's an extra round-trip, and the agent might not always pick the right one.
52
+ ## Frontmatter Reference
79
53
 
80
- With the `skill` field, you're forcing it:
81
-
82
- ```markdown
83
- ---
84
- description: Browser testing mode
85
- model: claude-sonnet-4-20250514
86
- skill: surf
87
- ---
88
- $@
89
- ```
54
+ All fields are optional. Templates that don't use any extension features (no `model`, `skill`, `thinking`, etc.) are left to pi's default prompt loader.
90
55
 
91
- Here `skill: surf` loads `~/.pi/agent/skills/surf/SKILL.md` and injects its content as a context message on the next turn before the agent handles your task. No decision-making, no read tool, just immediate expertise. It's a forcing function for when you know exactly what workflow the agent needs.
56
+ ### Core Fields
92
57
 
93
- ## Delegated Subagent Execution
58
+ | Field | Default | What it does |
59
+ |-------|---------|--------------|
60
+ | `model` | current session model | Which model to use. Accepts a single model, a `provider/model-id` pair, or a comma-separated fallback list (see [Model Format](#model-format)). Ignored when `chain` is set. |
61
+ | `skill` | — | Injects a skill's content as a context message before the agent handles your task. No extra round-trip — the agent gets the expertise immediately. See [Skill Resolution](#skill-resolution). |
62
+ | `thinking` | — | Thinking level for the model: `off`, `minimal`, `low`, `medium`, `high`, or `xhigh`. |
63
+ | `description` | — | Short text shown next to the command in autocomplete. |
64
+ | `chain` | — | Declares a reusable pipeline of templates (`step -> step`). When set, the body is ignored. See [Chain Templates](#chain-templates). |
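+
+ For instance, a template pairing `model` with a `thinking` level:
+
+ ```markdown
+ ---
+ description: Deep code analysis with extended thinking
+ model: claude-sonnet-4-20250514
+ thinking: high
+ ---
+ Analyze this code thoroughly, considering edge cases and potential issues: $@
+ ```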
94
65
 
95
- You can delegate a prompt template directly to the `subagent` extension without metaprompted tool-call instructions.
66
+ ### Execution Control
96
67
 
97
- ```markdown
98
- ---
99
- model: anthropic/claude-sonnet-4-20250514
100
- subagent: true
101
- ---
102
- Review and simplify this code: $@
103
- ```
104
-
105
- `subagent: true` uses the default `worker` agent. To target a specific agent, set a string value:
106
-
107
- ```markdown
108
- ---
109
- model: anthropic/claude-sonnet-4-20250514
110
- subagent: reviewer
111
- inheritContext: true
112
- ---
113
- Audit this diff for correctness and edge cases: $@
114
- ```
68
+ | Field | Default | What it does |
69
+ |-------|---------|--------------|
70
+ | `restore` | `true` | After the command finishes, switch back to whatever model and thinking level were active before. Set `false` to stay on the new model. |
71
+ | `loop` | — | Run this template multiple times by default (1–999). CLI `--loop` overrides this. See [Loop Execution](#loop-execution). |
72
+ | `fresh` | `false` | When looping, collapse the conversation between iterations to a brief summary instead of carrying the full context forward. Saves tokens on long loops. |
73
+ | `converge` | `true` | When looping, stop early if an iteration makes no file changes. Set `false` to always run every iteration. |
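+
+ For example, a template that defaults to five fresh-context iterations and never stops early:
+
+ ```markdown
+ ---
+ description: Remove AI slop from code
+ model: claude-sonnet-4-20250514
+ loop: 5
+ fresh: true
+ converge: false
+ ---
+ Review the codebase and improve code quality. $@
+ ```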
115
74
 
116
- `inheritContext: true` maps to delegated `context: "fork"`. It is valid only when `subagent` is configured.
75
+ ### Delegation
117
76
 
118
- Forked subagents receive a default preamble (from the subagent extension's `DEFAULT_FORK_PREAMBLE`) that anchors them to the task and prevents them from continuing the parent conversation.
119
-
120
- During execution, a live progress widget appears above the editor showing elapsed time, tool count, token usage, and the current/last tool — matching the native subagent tool card layout. The widget updates in real-time and clears when the run completes, replaced by a styled completion card with task preview, tool call history, output, and usage stats.
121
-
122
- You can override delegation at runtime per invocation:
123
-
124
- - `--subagent`
125
- - `--subagent=<name>`
126
- - `--subagent:<name>`
127
-
128
- Runtime flags take precedence for that invocation only. Bare `--subagent` keeps template agent when present, otherwise defaults to `worker`.
129
-
130
- ## Frontmatter Fields
131
-
132
- | Field | Required | Default | Description |
133
- |-------|----------|---------|-------------|
134
- | `model` | No | `current` | Target model(s). If omitted on a non-chain template, execution inherits the current session model. Ignored when `chain` is set. |
135
- | `chain` | Conditional | - | Chain declaration (`step -> step --loop 2`) for orchestration templates; body is ignored |
136
- | `skill` | No | - | Skill name to inject as next-turn context message |
137
- | `thinking` | No | - | Thinking level: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
138
- | `subagent` | No | - | Delegate execution to subagent mode (`true` for default `worker`, or explicit agent name string) |
139
- | `inheritContext` | No | `false` | Only with `subagent`; when `true`, delegates with subagent `context: "fork"` |
140
-
141
- | `description` | No | - | Shown in autocomplete |
142
- | `restore` | No | `true` | Restore previous model and thinking level after response |
143
- | `fresh` | No | `false` | Collapse context between loop iterations (applies when looping via `--loop` or frontmatter `loop`) |
144
- | `loop` | No | - | Default loop count for this template (`1`-`999`) |
145
- | `converge` | No | `true` | Loop convergence behavior; set `false` to always run all iterations |
77
+ | Field | Default | What it does |
78
+ |-------|---------|--------------|
79
+ | `subagent` | — | Delegate execution to a subagent instead of running in the current session. `true` uses the default `delegate` agent; a string value like `reviewer` targets that specific agent. Requires [pi-subagents](https://github.com/nicobailon/pi-subagents/). |
80
+ | `inheritContext` | `false` | Only meaningful with `subagent`. When `true`, the subagent receives a fork of the current conversation context instead of starting fresh. |
81
+ | `cwd` | — | Working directory for delegated subagent subprocesses. Must be an absolute path (`~/...` is expanded). Valid with `subagent`, and also on chain templates as the default cwd for delegated steps. |
146
82
 
147
83
  ## Model Format
148
84
 
149
85
  ```yaml
150
- model: claude-sonnet-4-20250514 # Model ID only - auto-selects provider
151
- model: anthropic/claude-sonnet-4-20250514 # Explicit provider/model
86
+ model: claude-sonnet-4-20250514 # bare model ID auto-selects provider
87
+ model: anthropic/claude-sonnet-4-20250514 # explicit provider/model
152
88
  ```
153
89
 
154
- When you specify just the model ID, the extension picks a provider automatically based on where you have auth configured, preferring: `anthropic` → `github-copilot` → `openrouter`.
90
+ Bare model IDs resolve through a provider priority list: `openai-codex` → `anthropic` → `github-copilot` → `openrouter`. The first provider with valid auth wins.
155
91
 
156
92
  For explicit control:
157
93
 
158
94
  ```yaml
159
95
  model: anthropic/claude-opus-4-5 # Direct Anthropic API
96
+ model: openai-codex/gpt-5.2 # Via Codex subscription (OAuth)
160
97
  model: github-copilot/claude-opus-4-5 # Via Copilot subscription
161
98
  model: openrouter/claude-opus-4-5 # Via OpenRouter
162
99
  model: openai/gpt-5.2 # Direct OpenAI API
163
- model: openai-codex/gpt-5.2 # Via Codex subscription (OAuth)
164
100
  ```
165
101
 
166
- ## Model Fallback
102
+ ### Model Fallback
167
103
 
168
- Specify multiple models as a comma-separated list. The extension tries each one in order and uses the first that resolves and has auth configured.
104
+ Comma-separated lists try each model in order:
169
105
 
170
106
  ```yaml
171
107
  model: claude-haiku-4-5, claude-sonnet-4-20250514
172
108
  ```
173
109
 
174
- This tries Haiku first. If it can't be found or has no API key, falls back to Sonnet. Useful when you have multiple provider accounts with different availability, or want a cost-optimized primary with a guaranteed fallback.
110
+ Haiku is tried first. If it can't be found or has no API key, Sonnet is used instead. If the session is already on one of the listed models, that one is kept without switching. When every candidate fails, you get a single error listing what was tried.
175
111
 
176
- You can mix bare model IDs and explicit provider/model specs:
112
+ You can mix bare IDs and explicit provider specs:
177
113
 
178
114
  ```yaml
179
115
  model: anthropic/claude-haiku-4-5, openrouter/claude-haiku-4-5, claude-sonnet-4-20250514
180
116
  ```
181
117
 
182
- Here the extension tries Haiku on Anthropic first, then Haiku on OpenRouter, then Sonnet on whatever provider has auth. If you're already on one of the listed models when the command runs, it uses that without switching.
118
+ ## Skills
119
+
120
+ Normally, pi lists available skills in the system prompt; the agent reads your task, decides which skill it needs, and loads it with the read tool. That's an extra round-trip, and the agent might not pick the right one.
121
+
122
+ The `skill` field bypasses all of that:
123
+
124
+ ```markdown
125
+ ---
126
+ description: Browser testing mode
127
+ model: claude-sonnet-4-20250514
128
+ skill: surf
129
+ ---
130
+ $@
131
+ ```
132
+
133
+ The skill content is injected as a context message before the agent processes your task. No decision-making, no tool call — immediate expertise. If the skill file can't be found, the command fails fast instead of running without it.
183
134
 
184
- When all candidates fail, a single error notification lists everything that was tried.
135
+ ### Skill Resolution
136
+
137
+ The `skill` field accepts a bare name or a `skill:` prefix:
138
+
139
+ ```yaml
140
+ skill: tmux
141
+ skill: skill:tmux # equivalent
142
+ ```
143
+
144
+ Resolution order:
145
+
146
+ 1. Registered skill commands from `pi.getCommands()` (source: `"skill"`)
147
+ 2. `<cwd>/.pi/skills/<name>/SKILL.md` or `<cwd>/.pi/skills/<name>.md`
148
+ 3. `.agents/skills` in the current directory and ancestors (up to the git root)
149
+ 4. `~/.pi/agent/skills/<name>/SKILL.md` or `~/.pi/agent/skills/<name>.md`
150
+ 5. `~/.agents/skills/<name>/SKILL.md` or `~/.agents/skills/<name>.md`
185
151
 
186
152
  ## Inline Model Conditionals
187
153
 
188
- Prompt bodies can embed model-specific instructions directly in the markdown:
154
+ Prompt bodies can include sections that only render for specific models:
189
155
 
190
156
  ```markdown
191
157
  ---
@@ -201,43 +167,28 @@ Do a deeper pass and call out subtle risks.
201
167
  </if-model>
202
168
  ```
203
169
 
204
- Conditionals are evaluated against the model that actually runs the command. For fallback prompts, that means after candidate resolution; for prompts without `model`, that means the current session model. The same template can render differently depending on which model is active.
205
-
206
- Supported matches inside `is="..."`:
207
-
208
- - Exact `provider/model-id`
209
- - Exact bare `model-id`
210
- - Provider wildcard like `anthropic/*`
211
- - Comma-separated lists combining any of the above
170
+ Conditionals evaluate against whichever model actually runs after fallback resolution for multi-model templates, or against the session model when `model` is omitted.
212
171
 
213
- Examples:
172
+ The `is` attribute supports exact model IDs, `provider/model-id` pairs, provider wildcards like `anthropic/*`, and comma-separated combinations:
214
173
 
215
174
  ```markdown
216
- <if-model is="anthropic/claude-sonnet-4-20250514">...</if-model>
217
- <if-model is="claude-sonnet-4-20250514">...</if-model>
218
- <if-model is="anthropic/*">...</if-model>
219
- <if-model is="openai/gpt-5.2, anthropic/*">...</if-model>
175
+ <if-model is="anthropic/*">Anthropic-specific instructions</if-model>
176
+ <if-model is="openai/gpt-5.2, anthropic/*">Either OpenAI or Anthropic</if-model>
220
177
  ```
221
178
 
222
- `<else>` is the fallback branch for the current `<if-model>` block. Nested blocks are supported.
223
-
224
- Conditionals are a raw text preprocessing step, not markdown-aware syntax. If you want to show the directive literally inside a prompt, escape it in the source text, for example with `&lt;if-model is="anthropic/*"&gt;`.
179
+ `<else>` is the fallback branch. Nested `<if-model>` blocks work.
225
180
 
226
181
  ## Argument Substitution
227
182
 
228
- Prompt bodies support argument placeholders that expand to command arguments:
183
+ Prompt bodies support placeholders that expand to the arguments passed after the command name:
229
184
 
230
- | Placeholder | Description |
231
- |-------------|-------------|
232
- | `$1`, `$2`, ... | Positional argument (1-indexed) |
233
- | `$@` | All arguments joined with spaces |
234
- | `@$` | Alias for `$@` |
235
- | `$ARGUMENTS` | Same as `$@` |
185
+ | Placeholder | Expands to |
186
+ |-------------|------------|
187
+ | `$1`, `$2`, ... | The Nth argument (1-indexed) |
188
+ | `$@` or `@$` or `$ARGUMENTS` | All arguments joined with spaces |
236
189
  | `${@:N}` | All arguments from position N onward |
237
190
  | `${@:N:L}` | L arguments starting from position N |
238
191
 
239
- Example:
240
-
241
192
  ```markdown
242
193
  ---
243
194
  model: claude-sonnet-4-20250514
@@ -245,168 +196,114 @@ model: claude-sonnet-4-20250514
245
196
  Analyze $1 focusing on $2. Additional context: ${@:3}
246
197
  ```
247
198
 
248
- Running `/analyze src/main.ts performance edge cases error handling` expands to:
249
- - `$1` → `src/main.ts`
250
- - `$2` → `performance`
251
- - `${@:3}` → `edge cases error handling`
252
-
253
- ## Skill Resolution
254
-
255
- The `skill` field accepts either a bare skill name or a slash-command style name:
256
-
257
- ```yaml
258
- skill: tmux
259
- # also valid
260
- skill: skill:tmux
261
- ```
262
-
263
- Resolution order:
264
- 1. Registered skill commands from `pi.getCommands()` (`source: "skill"`), matched by `skill:name` or `name`
265
- 2. `<cwd>/.pi/skills/<name>/SKILL.md` or `<cwd>/.pi/skills/<name>.md`
266
- 3. `.agents/skills` in `cwd` and ancestor directories (up to git repo root)
267
- 4. `~/.pi/agent/skills/<name>/SKILL.md` or `~/.pi/agent/skills/<name>.md`
268
- 5. `~/.agents/skills/<name>/SKILL.md` or `~/.agents/skills/<name>.md`
269
-
270
- If the configured skill file is missing or unreadable, the command fails fast and does not send the prompt body to the model.
271
-
272
- ## Subdirectories
273
-
274
- Organize prompts in subdirectories for namespacing:
275
-
276
- ```
277
- ~/.pi/agent/prompts/
278
- ├── quick.md → /quick (user)
279
- ├── debug-python.md → /debug-python (user)
280
- └── frontend/
281
- ├── component.md → /component (user:frontend)
282
- └── hook.md → /hook (user:frontend)
283
- ```
284
-
285
- The subdirectory shows in autocomplete as the source label. Command names are based on filename only. If duplicates exist within the same source layer, the first one found after lexical sorting wins and later duplicates are skipped with a warning. Reserved command names like `model`, `reload`, and `chain-prompts` are also skipped with a warning.
199
+ `/analyze src/main.ts performance edge cases error handling` expands `$1` to `src/main.ts`, `$2` to `performance`, and `${@:3}` to `edge cases error handling`.
286
200
 
287
- ## Examples
201
+ ## Delegated Subagent Execution
288
202
 
289
- **Cost optimization** - use Haiku for simple summarization:
203
+ Instead of running a prompt in the current session, you can hand it off to a subagent:
290
204
 
291
205
  ```markdown
292
206
  ---
293
- description: Save progress doc for handoff
294
- model: claude-haiku-4-5
207
+ model: anthropic/claude-sonnet-4-20250514
208
+ subagent: true
295
209
  ---
296
- Create a progress document that captures everything needed for another
297
- engineer to continue this work. Save to ~/Documents/docs/...
210
+ Review and simplify this code: $@
298
211
  ```
299
212
 
300
- **Skill injection** - guarantee the agent has REPL expertise:
213
+ `subagent: true` delegates to the default `delegate` agent. To target a specific agent:
301
214
 
302
215
  ```markdown
303
216
  ---
304
- description: Python debugging session
305
- model: claude-sonnet-4-20250514
306
- skill: tmux
217
+ model: anthropic/claude-sonnet-4-20250514
218
+ subagent: reviewer
219
+ inheritContext: true
307
220
  ---
308
- Start a Python REPL and help me debug: $@
221
+ Audit this diff for correctness and edge cases: $@
309
222
  ```
310
223
 
311
- **Browser automation** - pair surf skill with a capable model:
224
+ `inheritContext: true` forks the current conversation so the subagent has full context. Without it, the subagent starts fresh.
225
+
226
+ To force a subagent into a specific working directory, add `cwd`:
312
227
 
313
228
  ```markdown
314
229
  ---
315
- description: Test user flow in browser
316
230
  model: claude-sonnet-4-20250514
317
- skill: surf
231
+ subagent: browser-screenshoter
232
+ cwd: /tmp/screenshots
318
233
  ---
319
- Test this user flow: $@
234
+ Take a screenshot of the URL in the prompt: $@
320
235
  ```
321
236
 
322
- **Deep thinking** - max thinking for complex analysis:
237
+ The subagent process runs with `/tmp/screenshots` as its working directory. Paths must be absolute (`~/...` is expanded). The directory is validated at execution time.
323
238
 
324
- ```markdown
325
- ---
326
- description: Deep code analysis with extended thinking
327
- model: claude-sonnet-4-20250514
328
- thinking: high
329
- ---
330
- Analyze this code thoroughly, considering edge cases and potential issues: $@
331
- ```
239
+ During execution, a live progress widget appears above the editor showing elapsed time, tool count, token usage, and the current tool. When the run finishes, it's replaced by a completion card with the task preview, tool call history, output, and usage stats.
332
240
 
333
- **Model fallback** - prefer cheap, fall back to reliable:
241
+ You can override delegation at runtime per invocation with `--subagent`, `--subagent=<name>`, `--subagent:<name>`, or `--cwd=<path>`. `--cwd=<path>` must be absolute after optional `~/...` expansion. Runtime flags take precedence for that invocation only.
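+
+ For example (the command name, `reviewer` agent, and path are illustrative):
+
+ ```
+ /review-code this diff --subagent=reviewer
+ /review-code this diff --cwd=/tmp/review
+ ```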
334
242
 
335
- ```markdown
336
- ---
337
- description: Save progress doc for handoff
338
- model: claude-haiku-4-5, claude-sonnet-4-20250514
339
- ---
340
- Create a progress document that captures everything needed for another
341
- engineer to continue this work. Save to ~/Documents/docs/...
342
- ```
243
+ ## Loop Execution
343
244
 
344
- **Cross-provider fallback** - same model, different providers:
245
+ Run a template multiple times with `--loop`:
345
246
 
346
- ```markdown
347
- ---
348
- description: Quick analysis
349
- model: anthropic/claude-haiku-4-5, openrouter/claude-haiku-4-5
350
- ---
351
- $@
247
+ ```
248
+ /deslop --loop 5
249
+ /deslop --loop=5
250
+ /deslop --loop # unlimited — runs until convergence (50-iteration cap)
352
251
  ```
353
252
 
354
- **Mode switching** - stay on the new model:
253
+ You can also set a default in frontmatter. CLI `--loop` always overrides:
355
254
 
356
255
  ```markdown
357
256
  ---
358
- description: Switch to Haiku for this session
359
- model: claude-haiku-4-5
360
- restore: false
257
+ loop: 5
361
258
  ---
362
- Switched to Haiku. How can I help?
363
259
  ```
364
260
 
365
- ## Chaining Templates
261
+ ### How looping works
366
262
 
367
- The `/chain-prompts` command runs multiple templates sequentially. Each step switches to its own model (or, if the step has no `model`, to the chain-start model snapshot), renders inline model conditionals against that resolved step model, injects its own skill context message, and conversation context carries forward between steps.
263
+ Each iteration runs the same prompt. By default, context accumulates: iteration 3 sees the full conversation from iterations 1 and 2 and builds on that work.
368
264
 
369
- ```
370
- /chain-prompts analyze-code -> fix-plan -> summarize -- src/main.ts
371
- ```
265
+ **Convergence**: If an iteration makes no file changes (no `write` or `edit` tool calls), the loop stops early. This is on by default. Use `--no-converge` or `converge: false` to always run every iteration. Bare `--loop` (unlimited) always forces convergence on, since its whole purpose is "run until nothing changes."
266
+
267
+ **Fresh context**: Add `--fresh` (or `fresh: true` in frontmatter) to collapse the conversation between iterations. Each iteration gets a clean slate with only brief summaries of what previous iterations did. Good for long loops where accumulated context would blow up the token count.
268
+
269
+ **Status**: The TUI status bar shows `loop 2/5` during execution.
372
270
 
373
- This runs `analyze-code` first, then `fix-plan` (which sees the analysis in conversation context), then `summarize`. The ` -- src/main.ts` part is optional. The literal ` -- ` separator means "shared args start here": everything after it is passed to each step as `$@`, unless that step already has its own inline args.
271
+ Model, thinking level, and skill are maintained throughout. If `restore: true` (the default), everything is restored after the final iteration.
374
272
 
375
- Each step can also receive its own args, overriding the shared args for that step:
273
+ ## Chaining Templates
274
+
275
+ `/chain-prompts` runs multiple templates in sequence. Each step uses its own model, skill, and thinking level, while conversation context flows between them:
376
276
 
377
277
  ```
378
- /chain-prompts analyze-code "look at error handling" -> fix-plan "focus on perf" -> summarize
278
+ /chain-prompts analyze-code -> fix-plan -> summarize -- src/main.ts
379
279
  ```
380
280
 
381
- Here `analyze-code` gets `$@ = "look at error handling"`, `fix-plan` gets `$@ = "focus on perf"`, and `summarize` has no per-step args so it falls back to the shared args (empty in this case, but conversation context from prior steps is usually enough).
382
-
383
- You can mix both:
281
+ This runs `analyze-code`, then `fix-plan` (which sees the analysis), then `summarize`. The ` -- ` separator marks shared args: everything after it is passed to each step as `$@`, unless a step has its own inline args:
384
282
 
385
283
  ```
386
284
  /chain-prompts analyze-code "error handling" -> fix-plan -> summarize -- src/main.ts
387
285
  ```
388
286
 
389
- Step 1 uses its per-step args (`"error handling"`), steps 2 and 3 fall back to the shared args (`"src/main.ts"`).
287
+ Step 1 gets `"error handling"` as its args. Steps 2 and 3 fall back to the shared `"src/main.ts"`.
390
288
 
391
- The chain captures your current model and thinking level before starting, and restores them when the chain finishes (or if any step fails mid-chain). Individual template `restore` settings are ignored during chain execution.
289
+ The chain captures your model and thinking level before starting and restores them when finished (or if any step fails).
392
290
 
393
291
  ### Chain Templates
394
292
 
395
- For reusable pipelines, define a chain in frontmatter instead of typing `/chain-prompts` every time:
293
+ For reusable pipelines, put the chain in frontmatter:
396
294
 
397
295
  ```markdown
398
296
  ---
399
297
  description: Review then clean up
400
298
  chain: double-check --loop 2 -> deslop --loop 2
401
299
  ---
402
- ignored — chain templates don't use the body
403
300
  ```
404
301
 
405
- This registers `/review-then-clean` as a command that runs `double-check` twice, then `deslop` twice. Each step references a separate prompt template. Steps with `model` use their configured model; steps without `model` inherit the chain-start model snapshot (the model active when the chain command began), so behavior stays deterministic even if earlier steps switch models.
302
+ This registers a command named after the file that runs `double-check` twice, then `deslop` twice. Per-step `--loop N` repeats that step before moving to the next, with per-step convergence (stops early if no changes, unless the step's template has `converge: false`).
406
303
 
407
- Per-step `--loop N` repeats that step N times before moving to the next. Per-step convergence applies: if a step makes no file changes on an iteration, its inner loop stops early (unless the step's template has `converge: false`).
304
+ Steps with a `model` field use their own model. Steps without one inherit a snapshot of whatever model was active when the chain started, not the previous step's model. This keeps behavior deterministic regardless of what earlier steps do.
408
305
 
409
- Chain templates support `loop`, `fresh`, `converge`, and `restore` in their frontmatter for overall execution control:
306
+ Chain templates support `loop`, `fresh`, `converge`, `restore`, and `cwd` in their frontmatter for controlling the overall execution:
410
307
 
411
308
  ```markdown
412
309
  ---
@@ -417,152 +314,125 @@ converge: false
417
314
  ---
418
315
  ```
419
316
 
420
- This runs the full analyze → fix chain 3 times, with fresh context between iterations and no early stopping. CLI `--loop` overrides frontmatter `loop` when invoking the command.
317
+ This runs the full analyze → fix chain 3 times, with fresh context between iterations and no early stopping. Chain nesting is not supported: steps can't reference other chain templates.
421
318
 
422
- Chain nesting is not supported a chain template's steps cannot reference other chain templates.
319
+ When a chain template sets `cwd`, it becomes the default delegated subprocess working directory for all delegated steps in that chain. Runtime `--cwd=<path>` overrides the chain template value.
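+
+ A sketch of a chain template with a default cwd (the step names and path are illustrative):
+
+ ```markdown
+ ---
+ description: Capture then summarize
+ chain: capture -> summarize
+ cwd: /tmp/work
+ ---
+ ```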
423
320
 
424
- ## Loop Execution
425
-
426
- Looping uses the `--loop` flag:
427
-
428
- ```
429
- /deslop --loop 5
430
- /deslop --loop=5
431
- /deslop "focus on performance" --loop 3
432
- /deslop --loop
433
- ```
434
-
435
- `--loop` without a number means unlimited looping until convergence, with a built-in safety cap of 50 iterations.
436
-
437
- You can also set a default loop count in frontmatter:
438
-
439
- ```markdown
440
- ---
441
- model: claude-sonnet-4-20250514
442
- loop: 5
443
- ---
444
- ...
445
- ```
446
-
447
- With that template, `/deslop` runs 5 iterations by default. CLI `--loop` overrides frontmatter (`/deslop --loop 3` runs 3 iterations).
448
-
449
- The agent runs the same prompt N times. Context accumulates across iterations — by iteration 3, the agent sees the full conversation from iterations 1 and 2 and builds on that work. Use `--fresh` to collapse context between iterations instead (see below).
450
-
451
- By default, the loop stops early if an iteration makes no file changes (no `write` or `edit` tool calls), since there's nothing left to improve. Add `--no-converge` to always run all iterations for bounded loops, or set `converge: false` in frontmatter:
452
-
453
- ```
454
- /deslop --loop 5 --no-converge
455
- ```
456
-
457
- ```markdown
458
- ---
459
- model: claude-sonnet-4-20250514
460
- loop: 5
461
- converge: false
462
- ---
463
- ...
464
- ```
465
-
466
- Bare `--loop` always forces convergence on (even with `--no-converge` or `converge: false`) because its intent is "run until no changes." `--loop N` and `--loop=N` support range 1-999. Quoted `"--loop"` is treated as a regular argument.
467
-
468
- Model, thinking level, and skill are maintained throughout the loop. If the template has `restore: true` (the default), the original model and thinking level are restored after the final iteration (or if any iteration fails). If `restore: false`, the switched model persists after the loop ends.
469
-
470
- ### Fresh Context
471
-
472
- Add `--fresh` to collapse context between iterations:
473
-
474
- ```
475
- /deslop --loop 5 --fresh
476
- /deslop --fresh # when frontmatter sets loop: N
477
- ```
478
-
479
- Each iteration's conversation is collapsed to a brief summary (files read, files modified, outcome) before the next iteration starts. The agent sees accumulated summaries from all previous iterations but not the full conversation. This saves tokens on long loops and gives each iteration a clean slate for reasoning.
480
-
481
- You can also set `fresh: true` in the template frontmatter to make it the default when looped:
482
-
483
- ```markdown
484
- ---
485
- description: Remove AI slop from code
486
- model: claude-sonnet-4-20250514
487
- fresh: true
488
- ---
489
- Review the codebase and improve code quality. $@
490
- ```
491
-
492
- ### Loop with Chains
493
-
494
- Chains support the same looping forms:
321
+ ### Looping Chains from the CLI
495
322
 
496
323
  ```
497
324
  /chain-prompts analyze -> fix --loop 3
498
- /chain-prompts analyze -> fix --loop=3
499
- /chain-prompts analyze -> fix --loop
500
325
  /chain-prompts analyze -> fix --loop 3 --fresh
501
326
  /chain-prompts analyze -> fix --loop 3 --no-converge
502
- /chain-prompts analyze -> fix --loop 3 -- src/main.ts
327
+ /chain-prompts analyze -> fix --loop
503
328
  ```
504
329
 
505
- This runs the full chain (analyze → fix) three times. The final example adds optional shared args: ` -- src/main.ts` means "pass `src/main.ts` to any step that doesn't already have its own args." If you don't need shared args, leave that part out entirely. Convergence detection applies across all steps in each iteration — if no step made file changes, the loop stops. Each iteration re-reads prompts from disk, so template edits take effect between iterations. The status bar shows `loop 2/3` during execution. Chain frontmatter declarations also support per-step `--loop N` inside the `chain:` value (for example `chain: double-check --loop 3 -> simplify -> deslop`).
330
+ Convergence applies across all steps in each iteration — if no step made file changes, the loop stops. Templates are re-read from disk between iterations, so edits take effect live.
506
331
 
507
332
  ## Agent Tool
508
333
 
509
- The agent can run prompt templates on its own via the `run-prompt` tool. Disabled by default — enable it with:
334
+ The agent can invoke prompt templates itself via a `run-prompt` tool. It's off by default:
510
335
 
511
336
  ```
512
337
  /prompt-tool on
513
338
  ```
514
339
 
515
- Once enabled, the agent sees `run-prompt` in its tool list and can call it with any template command:
340
+ Once enabled, the agent sees `run-prompt` in its tool list:
516
341
 
517
342
  ```
518
343
  run-prompt({ command: "deslop --loop 5 --fresh" })
519
- run-prompt({ command: "deslop --loop" })
520
- run-prompt({ command: "deslop --subagent" })
521
- run-prompt({ command: "deslop --subagent:reviewer" })
522
344
  run-prompt({ command: "chain-prompts analyze -> fix --loop 3" })
345
+ run-prompt({ command: "deslop --subagent" })
523
346
  ```
524
347
 
525
- The tool queues the command for execution when the agent's current turn ends. All loop, fresh context, and convergence features work the same as when invoked via slash commands.
348
+ The tool queues the command for execution after the agent's current turn ends. All loop, chain, and convergence features work the same as slash commands.
526
349
 
527
- Add guidance to steer when the agent uses it:
350
+ You can add guidance to steer when the agent reaches for it:
528
351
 
529
352
  ```
530
353
  /prompt-tool on Use run-prompt for iterative code improvement tasks
531
354
  /prompt-tool guidance Use sparingly, only for multi-pass refinement
532
355
  /prompt-tool guidance clear
533
356
  /prompt-tool off
534
- /prompt-tool
357
+ /prompt-tool # show current status
535
358
  ```
536
359
 
537
360
  Config persists across sessions in `~/.pi/agent/prompt-template-model.json`.
538
361
 
539
- ## Autocomplete Display
362
+ ## Autocomplete
540
363
 
541
- Commands show model, thinking level, and skill in the description:
364
+ Commands show their configuration in the autocomplete description:
542
365
 
543
366
  ```
544
367
  /debug-python Debug Python session [sonnet +tmux] (user)
545
368
  /deep-analysis Deep code analysis [sonnet high] (user)
546
369
  /save-progress Save progress doc [haiku|sonnet] (user)
547
370
  /component Create React component [sonnet] (user:frontend)
548
- /quick Quick answer [haiku] (user)
549
371
  ```
550
372
 
551
- ## Print Mode (`pi -p`)
373
+ ## Subdirectories
552
374
 
553
- These commands work in print mode too:
375
+ Organize prompts in subdirectories for namespacing:
376
+
377
+ ```
378
+ ~/.pi/agent/prompts/
379
+ ├── quick.md → /quick (user)
380
+ ├── debug-python.md → /debug-python (user)
381
+ └── frontend/
382
+ ├── component.md → /component (user:frontend)
383
+ └── hook.md → /hook (user:frontend)
384
+ ```
385
+
386
+ The subdirectory shows as the source label in autocomplete. Command names are based on filename only. Duplicates within the same source layer are skipped with a warning, as are reserved names like `model`, `reload`, and `chain-prompts`.
387
+
388
+ ## Print Mode
389
+
390
+ These commands work in `pi -p` too:
554
391
 
555
392
  ```bash
556
393
  pi -p "/debug-python my code crashes on line 42"
557
394
  ```
558
395
 
559
- The model switches, a skill context message is injected, the agent responds, and output prints to stdout. Useful for scripting or piping to other tools.
396
+ The model switches, skill is injected, the agent responds, and output goes to stdout. Useful for scripting or piping.
397
+
398
+ ## Examples
399
+
400
+ **Thinking levels** — max thinking for thorny analysis:
401
+
402
+ ```markdown
403
+ ---
404
+ description: Deep code analysis with extended thinking
405
+ model: claude-sonnet-4-20250514
406
+ thinking: high
407
+ ---
408
+ Analyze this code thoroughly, considering edge cases and potential issues: $@
409
+ ```
410
+
411
+ **Sticky mode switch** — switch models for the rest of the session:
412
+
413
+ ```markdown
414
+ ---
415
+ description: Switch to Haiku for this session
416
+ model: claude-haiku-4-5
417
+ restore: false
418
+ ---
419
+ Switched to Haiku. How can I help?
420
+ ```
421
+
422
+ **Cross-provider fallback** — try the same model on different providers:
423
+
424
+ ```markdown
425
+ ---
426
+ description: Quick analysis
427
+ model: anthropic/claude-haiku-4-5, openrouter/claude-haiku-4-5
428
+ ---
429
+ $@
430
+ ```
560
431
 
561
432
  ## Limitations
562
433
 
563
- - Prompt files are reloaded on session start and whenever an extension-owned prompt command runs. If you add a brand-new prompt file while already inside a session, run another extension-owned command such as `/chain-prompts`, start a new session, or reload pi so the new slash command is registered.
564
- - Model restore state is in-memory. Closing pi mid-response loses restore state.
565
- - Model-less templates are only managed by this extension when they use extension features (for example `skill`, `thinking`, loop flags, or inline `<if-model ...>`). Plain prompt templates without extension features stay with pi's default prompt loader to avoid command conflicts.
566
- - In chains, model-less steps inherit the chain-start model snapshot, not the immediately previous step model. This is intentional for deterministic behavior.
567
- - Delegated `subagent` prompts require [pi-subagents](https://github.com/nicobailon/pi-subagents/) (`pi install npm:pi-subagents`).
568
- - The `run-prompt` tool must be explicitly enabled with `/prompt-tool on` before the agent can use it.
434
+ - Prompt files are reloaded on session start and whenever an extension-owned command runs. If you add a new prompt file mid-session, run any extension command (like `/chain-prompts`), start a new session, or reload pi to pick it up.
435
+ - Model restore state is in-memory. Closing pi mid-response loses it.
436
+ - In chains, model-less steps inherit the chain-start model snapshot, not the previous step's model. This is intentional for deterministic behavior.
437
+ - Delegated `subagent` prompts require [pi-subagents](https://github.com/nicobailon/pi-subagents/).
438
+ - `run-prompt` must be explicitly enabled with `/prompt-tool on`.
package/args.ts CHANGED
@@ -19,6 +19,7 @@ export interface SubagentOverride {
19
19
  export interface SubagentOverrideExtraction {
20
20
  args: string;
21
21
  override?: SubagentOverride;
22
+ cwd?: string;
22
23
  }
23
24
 
24
25
  export function extractLoopCount(argsString: string): LoopExtraction | null {
@@ -164,6 +165,7 @@ export function extractLoopFlags(argsString: string): LoopFlags {
164
165
 
165
166
  export function extractSubagentOverride(argsString: string): SubagentOverrideExtraction {
166
167
  let override: SubagentOverride | undefined;
168
+ let cwdRaw: string | undefined;
167
169
  const tokensToRemove: Array<{ start: number; end: number }> = [];
168
170
 
169
171
  let i = 0;
@@ -197,10 +199,17 @@ export function extractSubagentOverride(argsString: string): SubagentOverrideExt
197
199
  tokensToRemove.push({ start: tokenStart, end: i });
198
200
  const value = token.includes("=") ? token.slice("--subagent=".length) : token.slice("--subagent:".length);
199
201
  override = value ? { enabled: true, agent: value } : { enabled: true };
202
+ continue;
203
+ }
204
+
205
+ if (token.startsWith("--cwd=")) {
206
+ tokensToRemove.push({ start: tokenStart, end: i });
207
+ const value = token.slice("--cwd=".length);
208
+ cwdRaw = value || undefined;
200
209
  }
201
210
  }
202
211
 
203
- if (!override) return { args: argsString.trim() };
212
+ if (tokensToRemove.length === 0) return { args: argsString.trim() };
204
213
 
205
214
  tokensToRemove.sort((a, b) => b.start - a.start);
206
215
  let cleaned = argsString;
@@ -210,7 +219,8 @@ export function extractSubagentOverride(argsString: string): SubagentOverrideExt
210
219
 
211
220
  return {
212
221
  args: cleaned.trim(),
213
- override,
222
+ ...(override ? { override } : {}),
223
+ ...(cwdRaw !== undefined ? { cwd: cwdRaw } : {}),
214
224
  };
215
225
  }
216
226
 
package/index.ts CHANGED
@@ -6,7 +6,7 @@ import { parseChainSteps, parseChainDeclaration, type ChainStep } from "./chain-
6
6
  import { generateIterationSummary, didIterationMakeChanges, getIterationEntries } from "./loop-utils.js";
7
7
  import { notify, summarizePromptDiagnostics, diagnosticsFingerprint } from "./notifications.js";
8
8
  import { preparePromptExecution } from "./prompt-execution.js";
9
- import { buildPromptCommandDescription, loadPromptsWithModel, readSkillContent, resolveSkillPath, type PromptWithModel } from "./prompt-loader.js";
9
+ import { buildPromptCommandDescription, expandCwdPath, loadPromptsWithModel, readSkillContent, resolveSkillPath, type PromptWithModel } from "./prompt-loader.js";
10
10
  import { renderSkillLoaded, type SkillLoadedDetails } from "./skill-loaded-renderer.js";
11
11
  import { createToolManager } from "./tool-manager.js";
12
12
  import { executeSubagentPromptStep } from "./subagent-step.js";
@@ -372,6 +372,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
372
372
  converge: boolean,
373
373
  ctx: ExtensionCommandContext,
374
374
  subagentOverride?: SubagentOverride,
375
+ cwdOverride?: string,
375
376
  ) {
376
377
  refreshPrompts(ctx.cwd, ctx);
377
378
  const initialPrompt = prompts.get(name);
@@ -411,10 +412,11 @@ export default function promptModelExtension(pi: ExtensionAPI) {
411
412
  notify(ctx, `Prompt "${name}" no longer exists`, "error");
412
413
  break;
413
414
  }
415
+ const effectivePrompt = cwdOverride ? { ...prompt, cwd: cwdOverride } : prompt;
414
416
 
415
417
  const iterationStartId = ctx.sessionManager.getLeafId();
416
418
  const stepResult = await executePromptStep(
417
- prompt,
419
+ effectivePrompt,
418
420
  parseCommandArgs(cleanedArgs),
419
421
  ctx,
420
422
  currentModel,
@@ -426,7 +428,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
426
428
  currentThinking = pi.getThinkingLevel();
427
429
  completedIterations++;
428
430
 
429
- const iterationChanged = shouldDelegatePrompt(prompt, subagentOverride)
431
+ const iterationChanged = shouldDelegatePrompt(effectivePrompt, subagentOverride)
430
432
  ? stepResult.changed
431
433
  : didIterationMakeChanges(getIterationEntries(ctx, iterationStartId));
432
434
  if (useConverge && (isUnlimited || effectiveMax > 1) && !iterationChanged) {
@@ -483,6 +485,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
483
485
  shouldRestore: boolean,
484
486
  ctx: ExtensionCommandContext,
485
487
  subagentOverride?: SubagentOverride,
488
+ cwdOverride?: string,
486
489
  ) {
487
490
  const validateChainSteps = (): boolean => {
488
491
  const missingTemplates = steps.filter((step) => !prompts.has(step.name));
@@ -538,6 +541,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
538
541
 
539
542
  const templates = steps.map((step) => ({
540
543
  ...prompts.get(step.name)!,
544
+ ...(cwdOverride ? { cwd: cwdOverride } : {}),
541
545
  stepArgs: step.args,
542
546
  stepLoop: step.loopCount ?? 1,
543
547
  }));
@@ -665,6 +669,11 @@ export default function promptModelExtension(pi: ExtensionAPI) {
665
669
  }
666
670
 
667
671
  const subagent = extractSubagentOverride(args);
672
+ const runtimeCwd = subagent.cwd ? expandCwdPath(subagent.cwd) : undefined;
673
+ if (subagent.cwd && !runtimeCwd) {
674
+ notify(ctx, `Invalid --cwd path: must be absolute`, "error");
675
+ return;
676
+ }
668
677
  const argsWithoutSubagent = subagent.args;
669
678
 
670
679
  if (prompt.chain) {
@@ -696,6 +705,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
696
705
  return;
697
706
  }
698
707
 
708
+ const cwdOverride = runtimeCwd ?? prompt.cwd;
699
709
  await runSharedChainExecution(
700
710
  steps,
701
711
  parseCommandArgs(cleanedArgs),
@@ -705,26 +715,28 @@ export default function promptModelExtension(pi: ExtensionAPI) {
705
715
  prompt.restore,
706
716
  ctx,
707
717
  subagent.override,
718
+ cwdOverride,
708
719
  );
709
720
  return;
710
721
  }
711
722
 
712
723
  const loop = extractLoopCount(argsWithoutSubagent);
713
724
  if (loop) {
714
- await runPromptLoop(name, loop.args, loop.loopCount, loop.fresh, loop.converge, ctx, subagent.override);
725
+ await runPromptLoop(name, loop.args, loop.loopCount, loop.fresh, loop.converge, ctx, subagent.override, runtimeCwd);
715
726
  return;
716
727
  }
717
728
 
718
729
  if (prompt.loop !== undefined) {
719
730
  const flags = extractLoopFlags(argsWithoutSubagent);
720
- await runPromptLoop(name, flags.args, prompt.loop, flags.fresh, flags.converge, ctx, subagent.override);
731
+ await runPromptLoop(name, flags.args, prompt.loop, flags.fresh, flags.converge, ctx, subagent.override, runtimeCwd);
721
732
  return;
722
733
  }
723
734
 
735
+ const effectivePrompt = runtimeCwd ? { ...prompt, cwd: runtimeCwd } : prompt;
724
736
  const savedModel = getCurrentModel(ctx);
725
737
  const savedThinking = pi.getThinkingLevel();
726
738
  const stepResult = await executePromptStep(
727
- prompt,
739
+ effectivePrompt,
728
740
  parseCommandArgs(argsWithoutSubagent),
729
741
  ctx,
730
742
  savedModel,
@@ -732,13 +744,13 @@ export default function promptModelExtension(pi: ExtensionAPI) {
732
744
  );
733
745
  if (stepResult === "aborted") return;
734
746
 
735
- if (!shouldDelegatePrompt(prompt, subagent.override) && prompt.restore) {
747
+ if (!shouldDelegatePrompt(effectivePrompt, subagent.override) && prompt.restore) {
736
748
  const currentModel = getCurrentModel(ctx);
737
749
  if (savedModel && currentModel && !sameModel(savedModel, currentModel)) {
738
750
  previousModel = savedModel;
739
751
  previousThinking = savedThinking;
740
752
  }
741
- if (prompt.thinking && previousThinking === undefined && prompt.thinking !== savedThinking) {
753
+ if (effectivePrompt.thinking && previousThinking === undefined && effectivePrompt.thinking !== savedThinking) {
742
754
  previousThinking = savedThinking;
743
755
  }
744
756
  }
@@ -840,6 +852,11 @@ export default function promptModelExtension(pi: ExtensionAPI) {
840
852
  refreshPrompts(ctx.cwd, ctx);
841
853
 
842
854
  const subagent = extractSubagentOverride(args);
855
+ const runtimeCwd = subagent.cwd ? expandCwdPath(subagent.cwd) : undefined;
856
+ if (subagent.cwd && !runtimeCwd) {
857
+ notify(ctx, `Invalid --cwd path: must be absolute`, "error");
858
+ return;
859
+ }
843
860
  const loop = extractLoopCount(subagent.args);
844
861
  const cleanedArgs = loop ? loop.args : subagent.args;
845
862
 
@@ -862,6 +879,7 @@ export default function promptModelExtension(pi: ExtensionAPI) {
862
879
  true,
863
880
  ctx,
864
881
  subagent.override,
882
+ runtimeCwd,
865
883
  );
866
884
  }
867
885
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "pi-prompt-template-model",
3
- "version": "0.6.1",
3
+ "version": "0.6.2",
4
4
  "type": "module",
5
5
  "description": "Prompt template model selector extension for pi coding agent",
6
6
  "author": "Nico Bailon",
package/prompt-loader.ts CHANGED
@@ -1,6 +1,6 @@
1
1
  import { existsSync, readdirSync, readFileSync, realpathSync, statSync } from "node:fs";
2
2
  import { homedir } from "node:os";
3
- import { dirname, join, resolve } from "node:path";
3
+ import { dirname, isAbsolute, join, resolve } from "node:path";
4
4
  import type { ThinkingLevel } from "@mariozechner/pi-agent-core";
5
5
  import { parseFrontmatter } from "@mariozechner/pi-coding-agent";
6
6
 
@@ -45,6 +45,7 @@ export interface PromptWithModel {
45
45
  converge?: boolean;
46
46
  subagent?: true | string;
47
47
  inheritContext?: boolean;
48
+ cwd?: string;
48
49
  source: PromptSource;
49
50
  subdir?: string;
50
51
  filePath: string;
@@ -331,6 +332,47 @@ function normalizeSubagent(
331
332
  return normalized;
332
333
  }
333
334
 
335
+ export function expandCwdPath(raw: string): string | undefined {
336
+ const expanded = raw.startsWith("~/") ? join(homedir(), raw.slice(2)) : raw;
337
+ return isAbsolute(expanded) ? expanded : undefined;
338
+ }
339
+
340
+ function normalizeCwd(
341
+ value: unknown,
342
+ filePath: string,
343
+ source: PromptSource,
344
+ diagnostics: PromptLoaderDiagnostic[],
345
+ ): string | undefined {
346
+ if (value === undefined) return undefined;
347
+ if (typeof value !== "string") {
348
+ diagnostics.push(
349
+ createDiagnostic(
350
+ "invalid-cwd",
351
+ filePath,
352
+ source,
353
+ `Ignoring invalid cwd in ${filePath}: expected a string.`,
354
+ ),
355
+ );
356
+ return undefined;
357
+ }
358
+
359
+ const trimmed = value.trim();
360
+ if (!trimmed) return undefined;
361
+ const expanded = expandCwdPath(trimmed);
362
+ if (!expanded) {
363
+ diagnostics.push(
364
+ createDiagnostic(
365
+ "invalid-cwd",
366
+ filePath,
367
+ source,
368
+ `Ignoring cwd in ${filePath}: must be an absolute path.`,
369
+ ),
370
+ );
371
+ return undefined;
372
+ }
373
+ return expanded;
374
+ }
375
+
334
376
  function normalizeInheritContext(
335
377
  value: unknown,
336
378
  filePath: string,
@@ -500,6 +542,7 @@ function loadPromptsWithModelFromDir(
500
542
  const { body } = parsed;
501
543
  const chain = normalizeChain(frontmatter.chain, fullPath, source, diagnostics);
502
544
  let subagent = normalizeSubagent(frontmatter.subagent, fullPath, source, diagnostics);
545
+ const cwd = normalizeCwd(frontmatter.cwd, fullPath, source, diagnostics);
503
546
  const inheritContext = normalizeInheritContext(frontmatter.inheritContext, fullPath, source, diagnostics);
504
547
  if (chain && subagent !== undefined) {
505
548
  diagnostics.push(
@@ -522,6 +565,16 @@ function loadPromptsWithModelFromDir(
522
565
  ),
523
566
  );
524
567
  }
568
+ if (!chain && subagent === undefined && cwd) {
569
+ diagnostics.push(
570
+ createDiagnostic(
571
+ "invalid-cwd",
572
+ fullPath,
573
+ source,
574
+ `Ignoring cwd in ${fullPath}: frontmatter field "cwd" requires "subagent".`,
575
+ ),
576
+ );
577
+ }
525
578
  const hasModelField = Object.hasOwn(frontmatter, "model");
526
579
  const parsedModels = chain ? [] : normalizeModelSpecs(frontmatter.model, fullPath, source, diagnostics);
527
580
  if (!chain && hasModelField && !parsedModels) continue;
@@ -541,6 +594,7 @@ function loadPromptsWithModelFromDir(
541
594
  }
542
595
 
543
596
  const safeInheritContext = subagent !== undefined && inheritContext;
597
+ const safeCwd = (chain || subagent !== undefined) ? cwd : undefined;
544
598
  const description = normalizeStringField("description", frontmatter.description, fullPath, source, diagnostics) ?? "";
545
599
  const skill = chain ? undefined : normalizeStringField("skill", frontmatter.skill, fullPath, source, diagnostics);
546
600
  const thinking = chain ? undefined : normalizeThinking(frontmatter.thinking, fullPath, source, diagnostics);
@@ -576,6 +630,7 @@ function loadPromptsWithModelFromDir(
576
630
  converge: converge === false ? false : undefined,
577
631
  subagent,
578
632
  inheritContext: safeInheritContext || undefined,
633
+ cwd: safeCwd || undefined,
579
634
  source,
580
635
  subdir: subdir || undefined,
581
636
  filePath: fullPath,
@@ -651,7 +706,8 @@ export function loadPromptsWithModel(cwd: string): LoadPromptsWithModelResult {
651
706
  export function buildPromptCommandDescription(prompt: PromptWithModel): string {
652
707
  const sourceLabel = prompt.subdir ? `(${prompt.source}:${prompt.subdir})` : `(${prompt.source})`;
653
708
  if (prompt.chain) {
654
- const details = `[chain: ${prompt.chain}] ${sourceLabel}`;
709
+ const cwdLabel = prompt.cwd ? ` cwd:${prompt.cwd}` : "";
710
+ const details = `[chain: ${prompt.chain}${cwdLabel}] ${sourceLabel}`;
655
711
  return prompt.description ? `${prompt.description} ${details}` : details;
656
712
  }
657
713
  const modelLabel = prompt.models.length > 0 ? prompt.models.map((model) => model.split("/").pop() || model).join("|") : "current";
@@ -659,8 +715,9 @@ export function buildPromptCommandDescription(prompt: PromptWithModel): string {
659
715
  const thinkingLabel = prompt.thinking ? ` ${prompt.thinking}` : "";
660
716
  const loopLabel = prompt.loop ? ` loop:${prompt.loop}` : "";
661
717
  const subagentLabel = prompt.subagent ? ` subagent:${prompt.subagent === true ? "delegate" : prompt.subagent}` : "";
718
+ const cwdLabel = prompt.cwd ? ` cwd:${prompt.cwd}` : "";
662
719
  const inheritContextLabel = prompt.inheritContext ? " fork" : "";
663
- const details = `[${modelLabel}${thinkingLabel}${skillLabel}${loopLabel}${subagentLabel}${inheritContextLabel}] ${sourceLabel}`;
720
+ const details = `[${modelLabel}${thinkingLabel}${skillLabel}${loopLabel}${subagentLabel}${cwdLabel}${inheritContextLabel}] ${sourceLabel}`;
664
721
  return prompt.description ? `${prompt.description} ${details}` : details;
665
722
  }
666
723
 
package/subagent-step.ts CHANGED
@@ -1,3 +1,4 @@
1
+ import { existsSync } from "node:fs";
1
2
  import { randomUUID } from "node:crypto";
2
3
  import type { AssistantMessage, Message } from "@mariozechner/pi-ai";
3
4
  import type { ExtensionAPI, ExtensionContext, ModelRegistry } from "@mariozechner/pi-coding-agent";
@@ -254,6 +255,10 @@ export async function executeSubagentPromptStep(options: DelegatedPromptOptions)
254
255
  throw new Error(prepared.message);
255
256
  }
256
257
  if (prepared.warning) notify(ctx, prepared.warning, "warning");
258
+ const effectiveCwd = prompt.cwd ?? ctx.cwd;
259
+ if (effectiveCwd !== ctx.cwd && !existsSync(effectiveCwd)) {
260
+ throw new Error(`cwd directory does not exist: ${effectiveCwd}`);
261
+ }
257
262
 
258
263
  const request: DelegatedSubagentRequest = {
259
264
  requestId: randomUUID(),
@@ -261,7 +266,7 @@ export async function executeSubagentPromptStep(options: DelegatedPromptOptions)
261
266
  task: prepared.content,
262
267
  context: prompt.inheritContext ? "fork" : "fresh",
263
268
  model: `${prepared.selectedModel.model.provider}/${prepared.selectedModel.model.id}`,
264
- cwd: ctx.cwd,
269
+ cwd: effectiveCwd,
265
270
  };
266
271
 
267
272
  if (ctx.hasUI) {
@@ -320,4 +325,3 @@ export async function executeSubagentPromptStep(options: DelegatedPromptOptions)
320
325
  }
321
326
  }
322
327
 
323
-