@davidorex/pi-behavior-monitors 0.1.4 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,12 +1,124 @@
  ---
  name: pi-behavior-monitors
  description: >
- Behavior monitors that watch agent activity and steer corrections when issues are detected.
- Monitors are JSON files (.monitor.json) in .pi/monitors/ with classify, patterns, actions,
- and scope blocks. Patterns and instructions are JSON arrays. Use when creating, editing,
- debugging, or understanding behavior monitors.
+ Behavior monitors that watch agent activity and steer corrections when issues
+ are detected. Monitors are JSON files (.monitor.json) in .pi/monitors/ with
+ classify, patterns, actions, and scope blocks. Patterns and instructions are
+ JSON arrays. Use when creating, editing, debugging, or understanding behavior
+ monitors.
  ---

+ <tools_reference>
+ <tool name="monitors-status">
+ List all behavior monitors with their current state.
+ </tool>
+
+ <tool name="monitors-inspect">
+ Inspect a monitor — config, state, pattern count, rule count.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `monitor` | string | yes | Monitor name |
+ </tool>
+
+ <tool name="monitors-control">
+ Control monitors — enable, disable, dismiss, or reset.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `action` | unknown | yes | |
+ | `monitor` | string | no | Monitor name (required for dismiss/reset) |
+ </tool>
+
+ <tool name="monitors-rules">
+ Manage monitor rules — list, add, remove, or replace calibration rules.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `monitor` | string | yes | Monitor name |
+ | `action` | unknown | yes | |
+ | `text` | string | no | Rule text (for add/replace) |
+ | `index` | number | no | Rule index, 1-based (for remove/replace) |
+ </tool>
+
+ <tool name="monitors-patterns">
+ List patterns for a behavior monitor.
+
+ | Parameter | Type | Required | Description |
+ |-----------|------|----------|-------------|
+ | `monitor` | string | yes | Monitor name |
+ </tool>
+
+ </tools_reference>
+
+ <commands_reference>
+ <command name="/monitors">
+ Manage behavior monitors
+
+ Subcommands: `on`, `off`, `fragility`, `response-style`
+ </command>
+
+ </commands_reference>
+
+ <events>
+ `session_start`, `session_switch`, `agent_end`, `turn_start`, `message_end`
+ </events>
+
+ <bundled_resources>
+ 2 schemas, 20 examples bundled.
+ See references/bundled-resources.md for full inventory.
+ </bundled_resources>
+
+ <monitor_vocabulary>
+
+ **Context Collectors:**
+
+ | Collector | Placeholder | Description | Limits |
+ |-----------|-------------|-------------|--------|
+ | `user_text` | `{user_text}` / `{{ user_text }}` | Most recent user message text | — |
+ | `assistant_text` | `{assistant_text}` / `{{ assistant_text }}` | Most recent assistant message text | — |
+ | `tool_results` | `{tool_results}` / `{{ tool_results }}` | Tool results with tool name and error status | Last 5, truncated to 2000 chars |
+ | `tool_calls` | `{tool_calls}` / `{{ tool_calls }}` | Tool calls and results interleaved | Last 20, truncated to 2000 chars |
+ | `custom_messages` | `{custom_messages}` / `{{ custom_messages }}` | Custom extension messages since last user message | — |
+ | `project_vision` | `{project_vision}` / `{{ project_vision }}` | .project/project.json vision, core_value, name | — |
+ | `project_conventions` | `{project_conventions}` / `{{ project_conventions }}` | .project/conformance-reference.json principle names | — |
+ | `git_status` | `{git_status}` / `{{ git_status }}` | Output of `git status --porcelain` | 5s timeout |
+
+ Any string is accepted in `classify.context`. Unknown collector names produce an empty string.
+
+ Built-in placeholders (always available, not listed in `classify.context`):
+ - `{{ patterns }}` — patterns JSON as a numbered list
+ - `{{ instructions }}` — instructions JSON as a bulleted list with a "follow strictly" preamble
+ - `{{ iteration }}` — consecutive steer count (0-indexed)
+
+ **When Conditions:**
+
+ - `always` — Fire every time the event occurs
+ - `has_tool_results` — Fire only if tool results are present since the last user message
+ - `has_file_writes` — Fire only if a write or edit tool was called since the last user message
+ - `has_bash` — Fire only if the bash tool was called since the last user message
+ - `every(N)` — Fire every Nth activation (counter resets when the user text changes)
+ - `tool(name)` — Fire only if the named tool was called since the last user message
+
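The trigger vocabulary above is declared in a monitor definition's top-level `event` and `when` fields. As a sketch (the `progress` name is hypothetical), a classifier that should run only on every third message-end activation would declare:

```json
{
  "name": "progress",
  "event": "message_end",
  "when": "every(3)"
}
```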
+ **Events:** `message_end`, `turn_end`, `agent_end`, `command`
+
+ **Verdict Types:** `clean`, `flag`, `new`
+
+ **Scope Targets:** `main`, `subagent`, `all`, `workflow`
+
+ </monitor_vocabulary>
+
  <objective>
  Monitors are autonomous watchdogs that observe agent activity, classify it against a
  JSON pattern library using a side-channel LLM call, and either steer corrections or
@@ -42,17 +154,24 @@ bundled monitor, delete its three files (`.monitor.json`, `.patterns.json`,
  </seeding>

  <file_structure>
- Each monitor is a triad of JSON files sharing a name prefix:
+ Each monitor is a set of files sharing a name prefix:

  ```
  .pi/monitors/
  ├── fragility.monitor.json      # Monitor definition (classify + patterns + actions + scope)
  ├── fragility.patterns.json     # Known patterns (JSON array, grows automatically)
  ├── fragility.instructions.json # User corrections (JSON array, optional)
+ ├── fragility/
+ │   └── classify.md             # Nunjucks template for classification prompt (optional)
  ```

  The instructions file is optional. If omitted, the extension defaults the path to
  `${name}.instructions.json` and treats a missing file as an empty array.
+
+ The classify template is optional. When `classify.promptTemplate` is set in the monitor
+ definition, the template is resolved through a three-tier search: `.pi/monitors/` (project),
+ `~/.pi/agent/monitors/` (user), then the package `examples/` directory. A user overrides a
+ bundled template by placing a file at the same relative path in `.pi/monitors/`.
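For example, a project-level override for a bundled `fragility` template is just a file at the matching relative path (a sketch — the prompt body here is purely illustrative):

```shell
# Project-level override: resolved before ~/.pi/agent/monitors/ and the
# package examples/ directory in the three-tier search.
mkdir -p .pi/monitors/fragility
cat > .pi/monitors/fragility/classify.md <<'EOF'
Project-specific classification prompt.
Patterns to check:
{{ patterns }}
EOF
```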
  </file_structure>

  <monitor_definition>
@@ -72,7 +191,8 @@ A `.monitor.json` file conforms to `schemas/monitor.schema.json`:
      "model": "claude-sonnet-4-20250514",
      "context": ["tool_results", "assistant_text"],
      "excludes": ["other-monitor"],
-     "prompt": "Classification prompt with {tool_results} {assistant_text} {patterns} {instructions} placeholders.\n\nReply CLEAN, FLAG:<desc>, or NEW:<pattern>|<desc>."
+     "promptTemplate": "my-monitor/classify.md",
+     "prompt": "Inline fallback if template not found. {tool_results} {assistant_text} {patterns} {instructions}\n\nReply CLEAN, FLAG:<desc>, or NEW:<pattern>|<desc>."
    },
    "patterns": {
      "path": "my-monitor.patterns.json",
@@ -140,9 +260,10 @@ Non-main scopes can still write findings to JSON files.
  | Field | Default | Description |
  |-------|---------|-------------|
  | `classify.model` | `claude-sonnet-4-20250514` | Model for classification. Plain model ID uses `anthropic` provider. Use `provider/model` for other providers. |
- | `classify.context` | `["tool_results", "assistant_text"]` | Conversation parts to collect. |
+ | `classify.context` | `["tool_results", "assistant_text"]` | Context collector names. Any string accepted — unknown collectors produce an empty string. |
  | `classify.excludes` | `[]` | Monitor names — skip activation if any of these already steered this turn. |
- | `classify.prompt` | (required) | Classification prompt template with `{placeholders}`. |
+ | `classify.promptTemplate` | — | Path to a `.md` Nunjucks template file. Searched in `.pi/monitors/`, `~/.pi/agent/monitors/`, then package `examples/`. Takes precedence over `prompt`. |
+ | `classify.prompt` | — | Inline classification prompt with `{placeholder}` substitution. Used when `promptTemplate` is absent. One of `promptTemplate` or `prompt` is required. |

  **Actions block** — per verdict (`on_flag`, `on_new`, `on_clean`):

@@ -160,29 +281,7 @@ Non-main scopes can still write findings to JSON files.
  `null` means no action on clean (the default behavior).
  </fields>

- <when_conditions>
- - `always` — fire every time the event occurs
- - `has_tool_results` — fire only if tool results are present since last user message
- - `has_file_writes` — fire only if `write` or `edit` tool was called since last user message
- - `has_bash` — fire only if `bash` tool was called since last user message
- - `tool(name)` — fire only if a specific named tool was called since last user message
- - `every(N)` — fire every Nth activation within the same user prompt (counter resets when user text changes)
- </when_conditions>
-
- <context_collectors>
- | Collector | Placeholder | What it collects | Limits |
- |-----------|-------------|------------------|--------|
- | `user_text` | `{user_text}` | Most recent user message text (walks back past assistant to find preceding user message) | — |
- | `assistant_text` | `{assistant_text}` | Most recent assistant message text | — |
- | `tool_results` | `{tool_results}` | Tool results with tool name and error status | Last 5, each truncated to 2000 chars |
- | `tool_calls` | `{tool_calls}` | Tool calls and their results interleaved | Last 20, each truncated to 2000 chars |
- | `custom_messages` | `{custom_messages}` | Custom extension messages since last user message | — |
-
- Built-in placeholders (always available, not listed in `classify.context`):
- - `{patterns}` — formatted from patterns JSON as numbered list: `1. [severity] description`
- - `{instructions}` — formatted from instructions JSON as bulleted list with preamble "Operating instructions from the user (follow these strictly):" — empty string if no instructions
- - `{iteration}` — current consecutive steer count (0-indexed)
- </context_collectors>
+ <!-- when_conditions and context_collectors tables are generated from code registries — see Monitor Vocabulary section in SKILL.md -->

  <patterns_file>
  JSON array conforming to `schemas/monitor-pattern.schema.json`:
@@ -243,6 +342,52 @@ Rules are injected into the classification prompt under a preamble
  non-empty. An empty array or missing file produces no rules block in the prompt.
  </instructions_file>

+ <prompt_templates>
+ Monitors support two prompt rendering modes:
+
+ **Inline prompts** (`classify.prompt`) — simple `{placeholder}` string replacement. Good for
+ single-paragraph classifiers. All context collectors and built-in placeholders are available
+ as `{name}`.
+
+ **Nunjucks templates** (`classify.promptTemplate`) — `.md` files with full Nunjucks syntax:
+ conditionals (`{% if %}`), loops (`{% for %}`), template inheritance, filters. Used when
+ the classify prompt needs conditional sections (e.g., iteration-aware acknowledgment).
+
+ Template variables use `{{ name }}` syntax. All context collectors and built-in placeholders
+ are available: `{{ patterns }}`, `{{ instructions }}`, `{{ iteration }}`, plus any collectors
+ listed in `classify.context`.
+
+ When both `promptTemplate` and `prompt` are set, the template is tried first. If the template
+ file is not found or fails to render, the inline prompt is used as a fallback.
+
+ **Iteration-aware acknowledgment pattern** — templates should include this block to support
+ monitor-agent dialogue (the agent acknowledging a steer and stating a plan):
+
+ ```markdown
+ {% if iteration > 0 %}
+ NOTE: You have steered {{ iteration }} time(s) already this session.
+ The agent's latest response is below. If the agent explicitly acknowledged
+ the issue and stated a concrete plan to address it (not just "noted" but
+ a specific action), reply CLEAN to allow the agent to follow through.
+ Re-flag only if the agent ignored or deflected the steer.
+
+ Agent response:
+ {{ assistant_text }}
+ {% endif %}
+ ```
+
+ This requires `assistant_text` in the `classify.context` array. When the classifier sees
+ genuine acknowledgment, it replies CLEAN, which resets `whileCount` to 0 and gives the agent
+ a fresh turn without re-flagging.
+
+ **Template search order** (first match wins):
+ 1. `.pi/monitors/<template-path>` — project-level override
+ 2. `~/.pi/agent/monitors/<template-path>` — user-level
+ 3. Package `examples/<template-path>` — builtin
+
+ All four bundled monitors ship with Nunjucks templates in `examples/<name>/classify.md`.
+ </prompt_templates>
+
  <verdict_format>
  The classification LLM must respond with one of:

@@ -310,7 +455,9 @@ for other extensions or workflows to invoke classification directly.
  </commands>

  <bundled_monitors>
- Three example monitors ship in `examples/` and are seeded on first run:
+ Four example monitors ship in `examples/` and are seeded on first run. Each has a
+ Nunjucks classify template in `examples/<name>/classify.md` with iteration-aware
+ acknowledgment support:

  **fragility** (`message_end`, `when: has_tool_results`)
  Watches for unaddressed fragilities after tool use — errors, warnings, or broken state the
@@ -338,6 +485,14 @@ under `category: "work-quality"`. Ceiling: 3.
  11 bundled patterns across categories: methodology (trial-and-error, symptom-fix,
  double-edit, edit-without-read, insanity-retry, no-plan), verification (no-verify),
  scope (excessive-changes, wrong-problem), quality (copy-paste), cleanup (debug-artifacts).
+
+ **commit-hygiene** (`agent_end`, `when: has_file_writes`)
+ Fires when the agent finishes a turn that included file writes. Checks tool call history
+ for git commit commands. If no commit occurred, steers to commit. If committed with a
+ generic or certainty-language message, steers to improve. Does not write findings — commits
+ are their own artifact. Ceiling: 3.
+ 6 bundled patterns across categories: missing-commit (no-commit), message-quality
+ (generic-message, certainty-language, no-context), commit-safety (amend-not-new, force-push).
  </bundled_monitors>

  <disabling_monitors>
@@ -356,27 +511,75 @@ Monitors also auto-silence at their ceiling. With `escalate: "ask"`, the user is
  to continue or dismiss. With `escalate: "dismiss"`, the monitor silences automatically.
  </disabling_monitors>

- <example_creating>
- 1. Create `.pi/monitors/naming.monitor.json`:
+ <creating_monitors>
+ When the user asks to create a monitor — either from a described behavior ("flag responses
+ that end with questions") or from a discovered need during conversation ("that response
+ did X wrong, make a monitor for it") — follow this workflow:
+
+ **Step 1: Determine the detection target.** What specific behavior in the assistant's output
+ should trigger a flag? Translate the user's description into concrete, observable patterns.
+
+ **Step 2: Choose event and when.** Match the detection target to the right trigger:
+ - Response content issues (trailing questions, lazy options, tone) → `turn_end`, `when: always`
+ - Tool use issues (no commit, no test, bad edits) → `agent_end`, `when: has_file_writes` or `has_tool_results`
+ - Post-action fragility (ignoring errors) → `message_end`, `when: has_tool_results`
+ - On-demand analysis → `command`, `when: always`
+
+ **Step 3: Choose context collectors.** What data does the classifier need to see?
+ - Checking the assistant's final response text → `assistant_text`
+ - Checking what the user asked (to compare against the response) → `user_text`
+ - Checking what tools were called → `tool_calls`
+ - Checking tool outputs for errors/warnings → `tool_results`
+ - Checking git state → `git_status`
+ - Include `assistant_text` if you want iteration-aware acknowledgment (recommended).
+
+ **Step 4: Write the patterns file.** Each pattern is a specific, observable anti-pattern.
+ Write descriptions that a classifier LLM can match against the collected context. Start with
+ 3-8 seed patterns. Set `learn: true` so the monitor grows its pattern library from `NEW:`
+ verdicts at runtime.
+
+ **Step 5: Write the classify template.** Use a Nunjucks `.md` file for anything beyond
+ trivial classification. The template must:
+ - Present the collected context to the classifier
+ - List the patterns to check against
+ - Include the verdict format instructions (CLEAN/FLAG/NEW)
+ - Include the iteration-aware acknowledgment block if `assistant_text` is collected
+
+ **Step 6: Write the monitor definition.** Wire everything together in the `.monitor.json`.
+
+ **Step 7: Create an empty instructions file.** Write `[]` so the user can add calibration
+ rules via `/monitors <name> rules add <text>`.
+
+ **Step 8: Activate.** After creating the files, tell the user to run `/reload 3` to
+ reload extensions and activate the new monitor without restarting the session.
+
+ ### Example: response-mandates monitor
+
+ User says: "create a monitor that flags responses ending with questions and responses
+ that present lazy deferral options."
+
+ **Files to create:**
+
+ 1. `.pi/monitors/response-mandates.monitor.json`:

565
  ```json
363
566
  {
364
- "name": "naming",
365
- "description": "Detects poor naming choices in code changes",
567
+ "name": "response-mandates",
568
+ "description": "Flags responses that violate communication mandates: trailing questions, lazy deferral options, non-optimal solutions",
366
569
  "event": "turn_end",
367
- "when": "has_file_writes",
570
+ "when": "always",
368
571
  "scope": { "target": "main" },
369
572
  "classify": {
370
573
  "model": "claude-sonnet-4-20250514",
371
- "context": ["tool_calls"],
372
- "excludes": [],
373
- "prompt": "An agent made code changes. Check if any new identifiers have poor names.\n\nActions taken:\n{tool_calls}\n\n{instructions}\n\nNaming patterns to check:\n{patterns}\n\nReply CLEAN if all names are clear.\nReply FLAG:<description> if a known naming pattern matched.\nReply NEW:<pattern>|<description> if a naming issue not covered by existing patterns."
574
+ "context": ["assistant_text", "user_text"],
575
+ "excludes": ["fragility"],
576
+ "promptTemplate": "response-mandates/classify.md"
374
577
  },
375
- "patterns": { "path": "naming.patterns.json", "learn": true },
376
- "instructions": { "path": "naming.instructions.json" },
578
+ "patterns": { "path": "response-mandates.patterns.json", "learn": true },
579
+ "instructions": { "path": "response-mandates.instructions.json" },
377
580
  "actions": {
378
- "on_flag": { "steer": "Rename the poorly named identifier." },
379
- "on_new": { "steer": "Rename the poorly named identifier.", "learn_pattern": true },
581
+ "on_flag": { "steer": "Rewrite your response: report findings and state actions — do not end with a question or present options that defer proper work." },
582
+ "on_new": { "steer": "Rewrite your response: report findings and state actions — do not end with a question or present options that defer proper work.", "learn_pattern": true },
380
583
  "on_clean": null
381
584
  },
382
585
  "ceiling": 3,
@@ -384,30 +587,106 @@ to continue or dismiss. With `escalate: "dismiss"`, the monitor silences automat
384
587
  }
385
588
  ```
386
589
 
387
- 2. Create `.pi/monitors/naming.patterns.json`:
590
+ 2. `.pi/monitors/response-mandates/classify.md`:
591
+
592
+ ```markdown
593
+ The user said:
594
+ "{{ user_text }}"
595
+
596
+ The assistant's response:
597
+ "{{ assistant_text }}"
598
+
599
+ {{ instructions }}
600
+
601
+ Check the assistant's response against these anti-patterns:
602
+ {{ patterns }}
603
+
604
+ Specifically check:
605
+ 1. Does the response end with a question to the user? The final sentence or paragraph
606
+ should not be a question unless the user explicitly asked to be consulted. Rhetorical
607
+ questions, permission-seeking ("shall I...?", "would you like...?"), and steering
608
+ questions ("what do you think?") are all violations.
609
+ 2. Does the response present options where one or more options leave known issues
610
+ unaddressed? If a problem has been identified, every option presented must address it.
611
+ Options that defer proper work to a vague future ("we could address this later",
612
+ "for now we can...") are violations.
613
+ 3. Does the response propose a non-durable solution when a durable one is known? Workarounds,
614
+ temporary fixes, and partial solutions when the root cause is understood are violations.
615
+
616
+ {% if iteration > 0 %}
617
+ NOTE: You have steered {{ iteration }} time(s) already this session.
618
+ If the agent explicitly acknowledged the mandate violation and rewrote its response
619
+ without the violation, reply CLEAN. Re-flag only if the violation persists.
620
+
621
+ Agent response:
622
+ {{ assistant_text }}
623
+ {% endif %}
624
+
625
+ Reply CLEAN if the response follows all mandates.
626
+ Reply FLAG:<description> if a known pattern was matched.
627
+ Reply NEW:<pattern>|<description> if a violation not covered by existing patterns was detected.
628
+ ```
629
+
630
+ 3. `.pi/monitors/response-mandates.patterns.json`:
388
631
 
389
632
  ```json
390
633
  [
391
- { "id": "single-letter", "description": "Single-letter variable names outside of loop counters", "severity": "warning", "source": "bundled" },
392
- { "id": "generic-names", "description": "Generic names like data, info, result, value, temp without context", "severity": "warning", "source": "bundled" },
393
- { "id": "bool-not-question", "description": "Boolean variables not phrased as questions (is, has, can, should)", "severity": "info", "source": "bundled" }
634
+ { "id": "trailing-question", "description": "Response ends with a question to the user instead of reporting and acting", "severity": "error", "category": "communication", "source": "bundled" },
635
+ { "id": "permission-seeking", "description": "Asks permission before acting when the user has already given direction", "severity": "warning", "category": "communication", "source": "bundled" },
636
+ { "id": "steering-question", "description": "Ends with 'what do you think?', 'does that sound right?', or similar steering questions", "severity": "error", "category": "communication", "source": "bundled" },
637
+ { "id": "lazy-deferral", "description": "Presents options that defer known issues to a vague future ('we can address later', 'for now')", "severity": "error", "category": "anti-laziness", "source": "bundled" },
638
+ { "id": "fragility-tolerant-option", "description": "Offers an option that leaves identified fragility unaddressed", "severity": "error", "category": "anti-laziness", "source": "bundled" },
639
+ { "id": "workaround-over-fix", "description": "Proposes workaround when root cause is understood and fixable", "severity": "warning", "category": "anti-laziness", "source": "bundled" }
394
640
  ]
395
641
  ```
396
642
 
397
- 3. Create `.pi/monitors/naming.instructions.json`:
643
+ 4. `.pi/monitors/response-mandates.instructions.json`:
398
644
 
399
645
  ```json
400
646
  []
401
647
  ```
402
- </example_creating>
648
+
649
+ After creating all files, tell the user: "Monitor created. Run `/reload 3` to activate
650
+ it in this session."
651
+ </creating_monitors>
652
+
653
+ <modifying_monitors>
654
+ **Adding patterns** — When the user identifies a new anti-pattern during conversation
655
+ ("that kind of response should also be flagged"), add it to the patterns JSON file.
656
+ Each pattern needs `id`, `description`, `severity`, and `source: "user"`.
657
+
658
+ **Adding rules** — Use the `monitors-rules` tool or `/monitors <name> rules add <text>`
659
+ to add calibration rules. Rules fine-tune the classifier without changing patterns.
660
+ Example: "responses that end with 'let me know' are not questions."
661
+
662
+ **Changing the classify prompt** — Edit the Nunjucks template file or the inline prompt.
663
+ For template-based monitors, edit the `.md` file. For inline monitors, edit the `prompt`
664
+ field in the `.monitor.json`.
665
+
666
+ **Upgrading inline to template** — When a monitor needs conditionals (iteration-aware
667
+ acknowledgment, optional context sections), create a `<name>/classify.md` template file
668
+ in `.pi/monitors/` and add `"promptTemplate": "<name>/classify.md"` to the classify block.
669
+ The inline `prompt` remains as fallback.
670
+
671
+ **Adjusting sensitivity** — Lower the `ceiling` to escalate sooner if the monitor is
672
+ over-firing. Raise it to give the agent more chances. Set `escalate: "dismiss"` to
673
+ auto-silence without prompting.
674
+
675
+ After any file changes, tell the user to run `/reload 3` to pick up the changes.
676
+ </modifying_monitors>
403
677
 
404
678
  <success_criteria>
405
679
  - Monitor `.monitor.json` validates against `schemas/monitor.schema.json`
406
680
  - Patterns `.patterns.json` validates against `schemas/monitor-pattern.schema.json`
407
681
  - Patterns array is non-empty (empty patterns = monitor does nothing)
408
- - Classification prompt includes `{patterns}` placeholder and verdict format instructions (CLEAN/FLAG/NEW)
682
+ - Classification prompt (template or inline) includes `{{ patterns }}` / `{patterns}` and verdict format instructions (CLEAN/FLAG/NEW)
683
+ - If using `promptTemplate`, the `.md` file exists at the declared path relative to one of the template search directories
684
+ - If using templates, `assistant_text` is in `classify.context` for iteration-aware acknowledgment
409
685
  - Actions specify `steer` for `scope.target: "main"` monitors, `write` for findings output
410
686
  - `write.path` is set relative to project cwd, not monitor directory
411
687
  - `excludes` lists monitors that should not double-steer in the same turn
412
688
  - Instructions file exists (even if empty `[]`) to enable `/monitors <name> rules add <text>` calibration
689
+ - After creating or modifying monitor files, remind user to run `/reload 3`
413
690
  </success_criteria>
691
+
692
+ *Generated from source by `scripts/generate-skills.js` — do not edit by hand.*
@@ -0,0 +1,29 @@
+ # Bundled Resources
+
+ ## schemas/ (2 files)
+
+ - `schemas/monitor-pattern.schema.json`
+ - `schemas/monitor.schema.json`
+
+ ## examples/ (20 files)
+
+ - `examples/commit-hygiene/classify.md`
+ - `examples/commit-hygiene.instructions.json`
+ - `examples/commit-hygiene.monitor.json`
+ - `examples/commit-hygiene.patterns.json`
+ - `examples/fragility/classify.md`
+ - `examples/fragility.instructions.json`
+ - `examples/fragility.monitor.json`
+ - `examples/fragility.patterns.json`
+ - `examples/hedge/classify.md`
+ - `examples/hedge.instructions.json`
+ - `examples/hedge.monitor.json`
+ - `examples/hedge.patterns.json`
+ - `examples/unauthorized-action/classify.md`
+ - `examples/unauthorized-action.instructions.json`
+ - `examples/unauthorized-action.monitor.json`
+ - `examples/unauthorized-action.patterns.json`
+ - `examples/work-quality/classify.md`
+ - `examples/work-quality.instructions.json`
+ - `examples/work-quality.monitor.json`
+ - `examples/work-quality.patterns.json`