supipowers 1.5.0 → 1.5.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -23,7 +23,7 @@ Run the interactive installer:
  bunx supipowers
  ```
 
- The installer detects your agent, registers the extension, and optionally sets up LSP servers, MCP tools, and the context-mode integration.
+ The installer detects your agent, registers the extension, removes legacy external context-mode MCP registrations, and can install missing optional tooling such as LSP servers, `mcpc`, and Playwright CLI.
 
  > [!TIP]
  > Run `/supi:update` at any time to upgrade to the latest version, or `/supi:doctor` to check your setup.
@@ -34,16 +34,15 @@ The installer detects your agent, registers the extension, and optionally sets u
  | ----------------------------------------------------- | --------------------------------------------------------------------- |
  | [Oh My Pi (OMP)](https://github.com/can1357/oh-my-pi) | The coding agent that supipowers extends |
  | [Bun](https://bun.sh) | Runtime — required for installation and the built-in SQLite FTS index |
- | [Git](https://git-scm.com) | Used by the installer and context-mode setup |
+ | [Git](https://git-scm.com) | Used by the installer and git-based workflows |
 
  ### Optional dependencies
 
- The installer scans for these and offers to install any that are missing. Everything works without them, but each one unlocks additional capabilities.
+ The installer scans for these and offers to install missing tooling where it can. Everything works without them, but each one unlocks additional capabilities.
 
  | Dependency | What it enables |
  | ------------------------------------- | --------------------------------------------------------------------- |
  | [mcpc](https://github.com/apify/mcpc) | MCP server management via `/supi:mcp` |
- | supi-context-mode | Context window protection — large outputs are sandboxed automatically |
  | `typescript-language-server` | TypeScript/JS diagnostics and references in review gates |
  | `pyright` | Python type checking |
  | `rust-analyzer` | Rust language server |
@@ -52,7 +51,8 @@ The installer scans for these and offers to install any that are missing. Everyt
 
  > [!NOTE]
  > LSP servers are language-specific — install only the ones that match your project's stack.
- > supi-context-mode is heavily inspired at [context-mode](https://github.com/mksglu/context-mode)
+ > Context protection is built into supipowers. No external `context-mode` or `supi-context-mode` dependency is required.
+ > The design is inspired by [context-mode](https://github.com/mksglu/context-mode).
 
  ## Commands
 
@@ -71,7 +71,7 @@ The installer scans for these and offers to install any that are missing. Everyt
  | `/supi:optimize-context` | Analyze loaded prompt/context usage and suggest reductions |
  | `/supi:mcp` | Manage MCP servers (connect, disconnect, migrate) |
  | `/supi:config` | Interactive settings TUI |
- | `/supi:status` | Check running sub-agents and progress |
+ | `/supi:status` | Show project plans and configuration summary |
  | `/supi:doctor` | Diagnose extension health and missing dependencies |
  | `/supi:generate` | Documentation drift detection |
  | `/supi:update` | Update supipowers to the latest version |
@@ -87,9 +87,20 @@ Most commands steer the AI session. These are TUI-only — they open native dial
 
  **AI code review.** `/supi:review` runs a programmatic AI review pipeline with configurable depth (quick, deep, or multi-agent). It uses headless agent sessions with structured JSON validation, always validates findings before user action, writes the current validated findings to a session `findings.md` document, and then presents three next-step choices: `Fix now`, `Document only`, or `Discuss before fixing`.
 
+ **Review agents.** Multi-agent review loads agents from two scopes: global and project.
+
+ - Global defaults and global custom agents live under `~/.omp/supipowers/review-agents/`.
+ - Project configuration lives under `.omp/supipowers/review-agents/config.yml`.
+ - Default built-in agent markdown files are installed globally, not per-project.
+ - Project custom agent markdown files can still live under `.omp/supipowers/review-agents/`.
+ - Merge precedence is project over global: if the project config mentions an agent name, it shadows the global agent with the same name.
+ - A project entry with `enabled: false` suppresses the global agent with that same name instead of falling back to the global copy.
+
+ Use `/supi:agents` to inspect the merged set that will actually run.
+
  **PR fixing.** `/supi:fix-pr` fetches PR review comments, critically assesses each one, checks for ripple effects, then fixes or rejects with evidence. Bot reviewers are auto-detected and filtered out.
 
- **Context protection.** When [context-mode](https://github.com/mksglu/context-mode) is detected, supipowers injects routing hooks that protect the agent's context window. Large outputs, file reads, and HTTP calls are automatically routed through sandboxed execution so only summaries enter the conversation.
+ **Context protection.** Supipowers always enables built-in context protection through native `ctx_*` tools and routing hooks. Search/find and web-fetch style operations are redirected to sandboxed execution or indexed storage, and oversized tool results are compressed before they reach the conversation.
 
  **Model assignment.** Each action can be assigned a different model and thinking level. `/supi:model` opens a TUI picker backed by OMP's model registry.
 
@@ -177,7 +188,7 @@ Supipowers ships runtime-loaded prompt skills that are also available to the age
  | Skill | Used by |
  | ----------------------- | ----------------------- |
  | `planning` | `/supi:plan` |
- | `code-review` | `/supi:review` |
+ | `code-review` | Manual prompting / reusable review guidance |
  | `qa-strategy` | `/supi:qa` |
  | `fix-pr` | `/supi:fix-pr` |
  | `debugging` | Agent sessions |
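
The project-over-global merge described in the Review agents hunk above can be sketched as a small standalone function. This is an illustrative model only, not the shipped implementation; the names `AgentEntry` and `mergeReviewAgents` are hypothetical:

```typescript
// Hypothetical minimal model of the merge rules described in the README hunk.
interface AgentEntry {
  name: string;
  enabled: boolean;
}

// Project config is authoritative: any name it mentions (enabled or not)
// shadows the global agent; `enabled: false` suppresses it entirely.
function mergeReviewAgents(
  globalAgents: AgentEntry[],
  projectAgents: AgentEntry[],
): AgentEntry[] {
  const projectNames = new Set(projectAgents.map((a) => a.name));
  const survivingGlobals = globalAgents.filter((a) => !projectNames.has(a.name));
  const enabledProject = projectAgents.filter((a) => a.enabled);
  return [...survivingGlobals, ...enabledProject];
}
```

A project entry `{ name: "style", enabled: false }` therefore removes the global `style` agent from the merged set rather than falling back to it.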
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "supipowers",
-   "version": "1.5.0",
+   "version": "1.5.1",
    "description": "Workflow extension for OMP coding agents.",
    "type": "module",
    "scripts": {
@@ -10,9 +10,18 @@
      "build": "tsc -p tsconfig.build.json",
      "prepare": "git config core.hooksPath hooks || true"
    },
+   "engines": {
+     "bun": ">=1.3.10"
+   },
    "keywords": [
+     "omp",
      "omp-extension",
+     "oh-my-pi",
+     "ai-agent",
+     "coding-agent",
      "workflow",
+     "code-review",
+     "planning",
      "agent",
      "supipowers"
    ],
@@ -20,6 +29,11 @@
      "type": "git",
      "url": "https://github.com/ogrodev/supipowers.git"
    },
+   "homepage": "https://supipowers.com",
+   "bugs": {
+     "url": "https://github.com/ogrodev/supipowers/issues"
+   },
+   "author": "ogrodev",
    "license": "MIT",
    "bin": {
      "supipowers": "bin/install.mjs"
@@ -123,7 +123,7 @@ Break into tasks of 2–5 minutes each. Each task must have:
  - **criteria**: acceptance criteria (testable)
  - **complexity**: `small` | `medium` | `large`
 
- Steps use checkbox syntax (`- [ ]`). Include function signatures or pseudocode — not vague descriptions, not full implementations.
+ Steps use checkbox syntax (`- [ ]`). Describe what each step changes in prose. Include function signatures or brief pseudocode only when they clarify a non-obvious interface or algorithm. Do NOT include full function bodies, full test bodies, or file-content dumps: the plan describes the work; the execution session writes the code.
 
  **Plan template:**
 
@@ -161,6 +161,7 @@ tags: [<relevant>, <tags>]
  | Wait for user approval at each gate | Combine or skip phases |
  | Include test files in task file lists | Include git commit/push steps for specs |
  | Name research gaps instead of guessing | Present every explored branch (show finalists only) |
+ | Describe steps in prose with optional signatures | Include full function bodies, test bodies, or file contents in plans |
 
  ## Final Checklist
 
@@ -171,4 +172,5 @@ tags: [<relevant>, <tags>]
  - [ ] Spec review loop completed (sub-agent approved or user overrode)
  - [ ] User approved spec before implementation plan
  - [ ] Every task has files, criteria, complexity, and checkbox steps
- - [ ] Task steps reference signatures/pseudocode, not vague descriptions
+ - [ ] Task steps describe changes in prose; signatures/pseudocode used only when they clarify non-obvious interfaces
+ - [ ] No full function bodies, test bodies, or file-content dumps in the plan
@@ -6,9 +6,15 @@ import type { ReleaseChannel, BumpType, ReviewReport, ResolvedModel } from "../t
  import { loadConfig, updateConfig } from "../config/loader.js";
  import { detectChannels } from "../release/detector.js";
  import type { ChannelStatus } from "../release/channels/types.js";
- import { parseConventionalCommits, buildChangelogMarkdown, summarizeChanges } from "../release/changelog.js";
+ import {
+   parseConventionalCommits,
+   buildChangelogMarkdown,
+   summarizeChanges,
+   filterOnelineGitLogToPaths,
+ } from "../release/changelog.js";
  import {
    getCurrentVersion,
+   getPublishedPackagePaths,
    suggestBump,
    bumpVersion,
    isVersionReleased,
@@ -464,14 +470,20 @@ export async function handleRelease(platform: Platform, ctx: any, args?: string)
        : false;
 
    const sinceArg = lastTag ? `${lastTag}..HEAD` : "HEAD~50..HEAD";
+   const releaseScope = getPublishedPackagePaths(ctx.cwd);
+   const gitLogArgs = releaseScope
+     ? ["log", sinceArg, "--format=%x1e%H%x1f%s", "--name-only"]
+     : ["log", sinceArg, "--oneline"];
    let gitLogOutput: string;
    try {
-     const result = await platform.exec(
-       "git",
-       ["log", sinceArg, "--oneline"],
-       { cwd: ctx.cwd },
-     );
-     gitLogOutput = result.code === 0 ? result.stdout : "";
+     const result = await platform.exec("git", gitLogArgs, { cwd: ctx.cwd });
+     if (result.code !== 0) {
+       gitLogOutput = "";
+     } else if (releaseScope) {
+       gitLogOutput = filterOnelineGitLogToPaths(result.stdout, releaseScope);
+     } else {
+       gitLogOutput = result.stdout;
+     }
    } catch {
      gitLogOutput = "";
    }
@@ -140,15 +140,16 @@ function classifyContent(body: string): "code" | "prose" {
 
  /**
   * Split oversized body at paragraph boundaries (\n\n), never breaking inside code fences.
-  * Returns parts each ≤ MAX_CHUNK_SIZE (best effort: a single paragraph exceeding the limit is kept whole).
+  * Hard-caps every part at MAX_CHUNK_SIZE — if no paragraph boundary fits, force-splits at
+  * line boundaries (or raw character offset as last resort).
   */
  function splitOversized(body: string): string[] {
    // Identify paragraph boundaries that are outside code fences
    const splitPoints = findSafeSplitPoints(body);
 
    if (splitPoints.length === 0) {
-     // No safe split points — return as single part
-     return [body];
+     // No safe split points — force-split at line or character boundaries
+     return hardSplit(body);
    }
 
    const parts: string[] = [];
@@ -193,6 +194,44 @@ function splitOversized(body: string): string[] {
      }
    }
 
+   // Hard-cap: any part still exceeding MAX_CHUNK_SIZE gets force-split
+   return parts.flatMap(hardSplit);
+ }
+
+ /**
+  * Force-split text into parts of at most MAX_CHUNK_SIZE.
+  * Prefers splitting at line boundaries; falls back to raw character offset.
+  */
+ function hardSplit(text: string): string[] {
+   if (text.length <= MAX_CHUNK_SIZE) return [text];
+
+   const parts: string[] = [];
+   const lines = text.split("\n");
+   let current = "";
+
+   for (const line of lines) {
+     // If a single line exceeds the limit, chop it at character boundaries
+     if (line.length > MAX_CHUNK_SIZE) {
+       if (current) {
+         parts.push(current);
+         current = "";
+       }
+       for (let i = 0; i < line.length; i += MAX_CHUNK_SIZE) {
+         parts.push(line.slice(i, i + MAX_CHUNK_SIZE));
+       }
+       continue;
+     }
+
+     const sep = current ? "\n" : "";
+     if (current.length + sep.length + line.length > MAX_CHUNK_SIZE) {
+       if (current) parts.push(current);
+       current = line;
+     } else {
+       current = current + sep + line;
+     }
+   }
+
+   if (current) parts.push(current);
    return parts;
  }
 
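
The force-split behavior added above can be exercised standalone. This sketch mirrors the `hardSplit` logic with a deliberately tiny cap for illustration (the real module's `MAX_CHUNK_SIZE` is much larger):

```typescript
const MAX_CHUNK_SIZE = 10; // illustrative only; the shipped constant is larger

// Mirrors the hardSplit added in the diff above: prefer line boundaries,
// fall back to raw character offsets for single lines that exceed the cap.
function hardSplit(text: string): string[] {
  if (text.length <= MAX_CHUNK_SIZE) return [text];
  const parts: string[] = [];
  let current = "";
  for (const line of text.split("\n")) {
    // Oversized single line: flush the accumulator, then chop by characters.
    if (line.length > MAX_CHUNK_SIZE) {
      if (current) {
        parts.push(current);
        current = "";
      }
      for (let i = 0; i < line.length; i += MAX_CHUNK_SIZE) {
        parts.push(line.slice(i, i + MAX_CHUNK_SIZE));
      }
      continue;
    }
    // Greedily pack whole lines until the next line would overflow the cap.
    const sep = current ? "\n" : "";
    if (current.length + sep.length + line.length > MAX_CHUNK_SIZE) {
      if (current) parts.push(current);
      current = line;
    } else {
      current = current + sep + line;
    }
  }
  if (current) parts.push(current);
  return parts;
}
```

With a cap of 10, `hardSplit("aaaa\nbbbb\ncccccccccccc")` packs the two short lines into one part and chops the 12-character line into a 10-character slice plus a 2-character remainder.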
@@ -15,6 +15,25 @@ import { fetchAndIndex } from "./web/fetcher.js";
  /** Threshold (bytes) above which intent-driven filtering kicks in. */
  const INTENT_THRESHOLD = 5 * 1024;
 
+ /**
+  * Hard cap on tool response text. Prevents oversized responses from
+  * exceeding model API limits (10MB). Leaves generous headroom.
+  */
+ const MAX_RESPONSE_SIZE = 100 * 1024; // 100KB
+
+ /** Truncate tool response text to MAX_RESPONSE_SIZE with a follow-up hint. */
+ function capResponseSize(text: string): string {
+   if (text.length <= MAX_RESPONSE_SIZE) return text;
+   const truncated = text.slice(0, MAX_RESPONSE_SIZE);
+   // Cut at last newline to avoid mid-line truncation
+   const lastNewline = truncated.lastIndexOf("\n");
+   const clean = lastNewline > MAX_RESPONSE_SIZE * 0.8 ? truncated.slice(0, lastNewline) : truncated;
+   return (
+     clean +
+     `\n\n[... output truncated at ${(MAX_RESPONSE_SIZE / 1024).toFixed(0)}KB. Use ctx_search(queries) for targeted follow-up.]`
+   );
+ }
+
  /** Per-session, in-memory stats. Reset on session restart. */
  interface ToolStats {
    calls: Record<string, number>;
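
The capping behavior is easiest to see with a smaller limit. A self-contained sketch mirroring the `capResponseSize` logic above (the shipped constant is `100 * 1024`, and the real hint text is longer):

```typescript
const MAX_RESPONSE_SIZE = 32; // illustrative only; the shipped cap is 100KB

// Mirrors capResponseSize above: truncate to the cap, prefer a clean newline
// cut when one falls in the last 20% of the window, then append a hint.
function capResponseSize(text: string): string {
  if (text.length <= MAX_RESPONSE_SIZE) return text;
  const truncated = text.slice(0, MAX_RESPONSE_SIZE);
  const lastNewline = truncated.lastIndexOf("\n");
  const clean =
    lastNewline > MAX_RESPONSE_SIZE * 0.8 ? truncated.slice(0, lastNewline) : truncated;
  return clean + "\n\n[... output truncated]";
}
```

Short responses pass through untouched; anything over the cap is cut and tagged, which is why every `ctx_*` tool now routes its text through this function before returning.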
@@ -171,7 +190,7 @@ export function registerContextModeTools(platform: Platform, store: KnowledgeSto
      output = maybeFilterByIntent(output, intent, source, store);
 
      trackCall("ctx_execute", output.length);
-     return { content: [{ type: "text", text: output }] };
+     return { content: [{ type: "text", text: capResponseSize(output) }] };
    },
  });
 
@@ -212,7 +231,7 @@ export function registerContextModeTools(platform: Platform, store: KnowledgeSto
      output = maybeFilterByIntent(output, intent, source, store);
 
      trackCall("ctx_execute_file", output.length);
-     return { content: [{ type: "text", text: output }] };
+     return { content: [{ type: "text", text: capResponseSize(output) }] };
    },
  });
 
@@ -285,7 +304,7 @@ export function registerContextModeTools(platform: Platform, store: KnowledgeSto
      }
 
      trackCall("ctx_batch_execute", text.length);
-     return { content: [{ type: "text", text }] };
+     return { content: [{ type: "text", text: capResponseSize(text) }] };
    },
  });
 
@@ -349,7 +368,7 @@ export function registerContextModeTools(platform: Platform, store: KnowledgeSto
      const output = formatSearchResults(results);
 
      trackCall("ctx_search", output.length);
-     return { content: [{ type: "text", text: output }] };
+     return { content: [{ type: "text", text: capResponseSize(output) }] };
    },
  });
 
@@ -381,7 +400,7 @@ export function registerContextModeTools(platform: Platform, store: KnowledgeSto
      }
 
      trackCall("ctx_fetch_and_index", output.length);
-     return { content: [{ type: "text", text: output }] };
+     return { content: [{ type: "text", text: capResponseSize(output) }] };
    },
  });
 
@@ -0,0 +1,78 @@
+ interface PlanReviewCategory {
+   category: string;
+   detail: string;
+   summaryLabel: string;
+ }
+
+ export const PLAN_REVIEW_CATEGORIES: readonly PlanReviewCategory[] = [
+   {
+     category: "Completeness",
+     detail: "TODO markers, placeholders, incomplete tasks, missing steps",
+     summaryLabel: "completeness",
+   },
+   {
+     category: "Spec Alignment",
+     detail: "Chunk covers relevant spec requirements, no scope creep",
+     summaryLabel: "spec alignment",
+   },
+   {
+     category: "Task Decomposition",
+     detail: "Tasks atomic, clear boundaries, steps actionable",
+     summaryLabel: "task decomposition",
+   },
+   {
+     category: "File Structure",
+     detail: "Files have clear single responsibilities, split by responsibility not layer",
+     summaryLabel: "file structure",
+   },
+   {
+     category: "File Size",
+     detail: "Would any new or modified file likely grow large enough to be hard to reason about?",
+     summaryLabel: "file size rules",
+   },
+   {
+     category: "Checkbox Syntax",
+     detail: "Steps use checkbox (`- [ ]`) syntax for tracking",
+     summaryLabel: "checkbox syntax",
+   },
+   {
+     category: "Chunk Size",
+     detail: "Each chunk under 1000 lines",
+     summaryLabel: "chunk size",
+   },
+   {
+     category: "Code Content",
+     detail:
+       "Plans describe work in prose; code fences limited to signatures, brief pseudocode, or exact commands. No full function bodies, test bodies, or file-content dumps.",
+     summaryLabel: "code-content rules",
+   },
+ ] as const;
+
+ export const PLAN_CODE_CONTENT_REQUIREMENTS = [
+   "Describe what each step changes — do not dump full implementations",
+   "Use function signatures or brief pseudocode only when they clarify a non-obvious interface or algorithm",
+   "Do NOT include full file contents, full function bodies, or full test bodies",
+   "Code fences are allowed only for short signatures, brief pseudocode, or exact commands",
+ ] as const;
+
+ export const PLAN_CODE_CONTENT_CRITICAL_CHECKS = [
+   "Full function bodies, full test bodies, or file-sized code blocks where prose or a signature would suffice",
+   "Code fences that contain implementation rather than interface descriptions",
+ ] as const;
+
+ export const PLAN_CONTENT_POLICY_SUMMARY =
+   "Plans describe the work — they do not generate it. Use prose to explain what changes, code fences only for signatures, brief pseudocode, or exact commands, and never full function bodies, full test bodies, or file-content dumps.";
+
+ export const QUICK_PLAN_TASK_CONTENT_REQUIREMENT =
+   "Describe what each task changes in prose. Include function signatures or brief pseudocode only when they clarify a non-obvious interface. Do NOT include full function bodies, full test bodies, or file-content dumps.";
+
+ export function formatPlanReviewCategorySummary(): string {
+   return humanJoin(PLAN_REVIEW_CATEGORIES.map((category) => category.summaryLabel));
+ }
+
+ function humanJoin(values: readonly string[]): string {
+   if (values.length === 0) return "";
+   if (values.length === 1) return values[0];
+   if (values.length === 2) return `${values[0]} and ${values[1]}`;
+   return `${values.slice(0, -1).join(", ")}, and ${values.at(-1)}`;
+ }
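
The `humanJoin` helper at the bottom of the new module drives `formatPlanReviewCategorySummary()`; its per-arity behavior (empty, single, pair, Oxford-comma list) can be checked in isolation:

```typescript
// Same logic as the humanJoin helper in the new plan-content-policy module.
function humanJoin(values: readonly string[]): string {
  if (values.length === 0) return "";
  if (values.length === 1) return values[0];
  if (values.length === 2) return `${values[0]} and ${values[1]}`;
  // Three or more: comma-separate all but the last, then "and" before it.
  return `${values.slice(0, -1).join(", ")}, and ${values.at(-1)}`;
}
```

With the eight category labels above this yields "completeness, spec alignment, ..., and code-content rules", which is exactly the sentence the plan-writer prompt interpolates.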
@@ -1,3 +1,6 @@
+ import { PLAN_CODE_CONTENT_CRITICAL_CHECKS, PLAN_REVIEW_CATEGORIES } from "./plan-content-policy.js";
+
+
  /**
   * Build the prompt for dispatching a plan document reviewer sub-agent.
   * Follows the same pattern as supipowers' plan-document-reviewer-prompt.md.
@@ -17,13 +20,9 @@ export function buildPlanReviewerPrompt(
    "",
    "| Category | What to Look For |",
    "|----------|------------------|",
-   "| Completeness | TODO markers, placeholders, incomplete tasks, missing steps |",
-   "| Spec Alignment | Chunk covers relevant spec requirements, no scope creep |",
-   "| Task Decomposition | Tasks atomic, clear boundaries, steps actionable |",
-   "| File Structure | Files have clear single responsibilities, split by responsibility not layer |",
-   "| File Size | Would any new or modified file likely grow large enough to be hard to reason about? |",
-   "| Checkbox Syntax | Steps use checkbox (`- [ ]`) syntax for tracking |",
-   "| Chunk Size | Each chunk under 1000 lines |",
+   ...PLAN_REVIEW_CATEGORIES.map(
+     ({ category, detail }) => `| ${category} | ${detail} |`,
+   ),
    "",
    "## Critical",
    "",
@@ -33,7 +32,8 @@ export function buildPlanReviewerPrompt(
    "- Incomplete task definitions",
    "- Missing verification steps or expected outputs",
    "- Files planned to hold multiple responsibilities or likely to grow unwieldy",
-   "",
+   ...PLAN_CODE_CONTENT_CRITICAL_CHECKS.map((check) => `- ${check}`),
+
    "## Output Format",
    "",
    `## Plan Review — Chunk ${chunkNumber}`,
@@ -1,3 +1,5 @@
+ import { PLAN_CODE_CONTENT_REQUIREMENTS, formatPlanReviewCategorySummary } from "./plan-content-policy.js";
+
 
  export interface PlanWriterOptions {
    specPath: string;
@@ -24,7 +26,7 @@ export function buildPlanWriterPrompt(options: PlanWriterOptions): string {
    `**Spec document:** ${specPath}`,
    "",
    "Write the plan assuming the implementing engineer has zero context for this codebase.",
-   "Document everything they need: which files to touch, complete code, testing, exact commands.",
+   "Document everything they need: which files to touch, what each change does, testing approach, exact commands.",
    "Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD.",
    "Keep the plan local-only: do NOT include git add, git commit, git push, or other VCS steps in the plan.",
    "",
@@ -88,31 +90,18 @@ export function buildPlanWriterPrompt(options: PlanWriterOptions): string {
    "- Test: `tests/exact/path/to/test.ts`",
    "",
    "- [ ] **Step 1: Write the failing test**",
-   "",
-   "```typescript",
-   "test('specific behavior', () => {",
-   "  const result = myFunction(input);",
-   "  expect(result).toBe(expected);",
-   "});",
-   "```",
+   "  Add a test that verifies `myFunction(input)` returns the expected result and covers the empty-input edge case.",
    "",
    "- [ ] **Step 2: Run test to verify it fails**",
-   "",
-   "Run: `bun test tests/path/test.ts`",
-   'Expected: FAIL with "myFunction is not defined"',
+   "  Run: `bun test tests/path/test.ts`",
+   "  Expected: FAIL because `myFunction` does not exist yet.",
    "",
    "- [ ] **Step 3: Write minimal implementation**",
-   "",
-   "```typescript",
-   "export function myFunction(input: string): string {",
-   "  return expected;",
-   "}",
-   "```",
+   "  Implement `myFunction(input: string): string` in `src/path/file.ts`. Validate the input, apply the transformation, and export it.",
    "",
    "- [ ] **Step 4: Run test to verify it passes**",
-   "",
-   "Run: `bun test tests/path/test.ts`",
-   "Expected: PASS",
+   "  Run: `bun test tests/path/test.ts`",
+   "  Expected: PASS",
    "````",
    "",
@@ -120,7 +109,7 @@ export function buildPlanWriterPrompt(options: PlanWriterOptions): string {
    "### Requirements",
    "",
    "- Exact file paths always",
-   "- Complete code in plan (not 'add validation' — show the actual code)",
+   ...PLAN_CODE_CONTENT_REQUIREMENTS.map((rule) => `- ${rule}`),
    "- Exact commands with expected output",
    "- DRY, YAGNI, TDD",
    "- No git commit, push, or other VCS steps in the plan",
@@ -133,8 +122,7 @@ export function buildPlanWriterPrompt(options: PlanWriterOptions): string {
    "Use `## Chunk N: <name>` headings to delimit chunks. Each chunk should be under 1000 lines and logically self-contained.",
    "",
    "1. Dispatch a plan-document-reviewer sub-agent for each chunk.",
-   "   The reviewer checks: completeness, spec alignment, task decomposition,",
-   "   file structure, file size rules, checkbox syntax, and chunk size.",
+   `   The reviewer checks: ${formatPlanReviewCategorySummary()}.`,
    "   Provide the reviewer with: the plan file path, the spec file path, and the chunk number.",
 
    "",
@@ -2,6 +2,7 @@ import type { Platform } from "../platform/types.js";
  import { createDebugLogger } from "../debug/logger.js";
  import { getPlanningDebugLogger, getPlanningPromptOptions, isPlanningActive } from "./approval-flow.js";
  import { buildPlanWriterPrompt } from "./plan-writer-prompt.js";
+ import { PLAN_CONTENT_POLICY_SUMMARY, QUICK_PLAN_TASK_CONTENT_REQUIREMENT } from "./plan-content-policy.js";
  import { buildSpecReviewerPrompt } from "./spec-reviewer.js";
 
  export interface PlanningSystemPromptOptions {
@@ -162,6 +163,7 @@ function buildFullPlanningSection(options: PlanningSystemPromptOptions): string
    "",
    "## Planning Principles",
    "",
+   `- ${PLAN_CONTENT_POLICY_SUMMARY}`,
    "- One question at a time.",
    "- Multiple-choice is preferred when it speeds decisions.",
    "- Diverge broadly, converge tightly.",
@@ -206,6 +208,7 @@ function buildQuickPlanningSection(options: PlanningSystemPromptOptions): string
    "```",
    "",
    "4. Each task must include name, files, criteria, and complexity (small/medium/large).",
+   `   - ${QUICK_PLAN_TASK_CONTENT_REQUIREMENT}`,
    "5. Save the plan under `.omp/supipowers/plans/YYYY-MM-DD-<feature-name>.md`.",
    "6. After saving, tell the user: `Plan saved to <path>. Review it and approve when ready.`",
    "7. Then stop and wait. The approval UI handles execution handoff.",
@@ -13,6 +13,71 @@ const CONVENTIONAL_PREFIX =
  // `BREAKING CHANGE:` and `BREAKING-CHANGE:` anywhere in the message line
  const BREAKING_CHANGE_FOOTER = /BREAKING[- ]CHANGE:/;
 
+ const GIT_LOG_RECORD_SEPARATOR = "\u001e";
+ const GIT_LOG_FIELD_SEPARATOR = "\u001f";
+
+ interface GitCommitWithFiles {
+   hash: string;
+   message: string;
+   files: string[];
+ }
+
+ function normalizeReleasePath(value: string): string {
+   return value.replace(/\\/g, "/").replace(/^\.\/+/, "").replace(/^\/+/, "").replace(/\/+$/, "");
+ }
+
+ function parseGitLogWithFiles(gitLog: string): GitCommitWithFiles[] {
+   return normalizeLineEndings(gitLog)
+     .split(GIT_LOG_RECORD_SEPARATOR)
+     .map((record) => record.trim())
+     .filter(Boolean)
+     .flatMap((record) => {
+       const lines = record
+         .split("\n")
+         .map((line) => line.trim())
+         .filter(Boolean);
+
+       const header = lines.shift();
+       if (!header) {
+         return [];
+       }
+
+       const [hash, message] = header.split(GIT_LOG_FIELD_SEPARATOR);
+       if (!hash || !message) {
+         return [];
+       }
+
+       return [{
+         hash: hash.trim(),
+         message: message.trim(),
+         files: lines.map(normalizeReleasePath).filter(Boolean),
+       }];
+     });
+ }
+
+ function isPathInReleaseScope(filePath: string, releaseScope: string[]): boolean {
+   const normalizedFile = normalizeReleasePath(filePath);
+   return releaseScope.some((scopePath) => normalizedFile === scopePath || normalizedFile.startsWith(`${scopePath}/`));
+ }
+
+ /**
+  * Filter a `git log --format=%x1e%H%x1f%s --name-only` payload down to commits
+  * touching the published package scope, then emit `git log --oneline` text that
+  * can be parsed by `parseConventionalCommits()`.
+  */
+ export function filterOnelineGitLogToPaths(gitLog: string, releaseScope: string[]): string {
+   const normalizedScope = [...new Set(releaseScope.map(normalizeReleasePath).filter(Boolean))];
+   if (normalizedScope.length === 0) {
+     return "";
+   }
+
+   return parseGitLogWithFiles(gitLog)
+     .filter((commit) => commit.files.some((file) => isPathInReleaseScope(file, normalizedScope)))
+     .map((commit) => `${commit.hash.slice(0, 7)} ${commit.message}`)
+     .join("\n");
+ }
+
+
  /**
   * Parse a single `git log --oneline` line into hash + raw message.
   * Returns null for blank lines or lines too short to contain a real hash.
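
The scoped-log filter above can be exercised against a synthetic `git log --format=%x1e%H%x1f%s --name-only` payload. A condensed standalone sketch (assuming `normalizeLineEndings` is a plain CRLF-to-LF pass; `filterLogToScope` is a hypothetical name combining the parse and filter steps):

```typescript
const RS = "\u001e"; // %x1e record separator between commits
const FS = "\u001f"; // %x1f field separator between hash and subject

function normalizePath(value: string): string {
  return value.replace(/\\/g, "/").replace(/^\.\/+/, "").replace(/^\/+/, "").replace(/\/+$/, "");
}

// Condensed mirror of parseGitLogWithFiles + filterOnelineGitLogToPaths above.
function filterLogToScope(gitLog: string, scope: string[]): string {
  const scopeSet = scope.map(normalizePath);
  const inScope = (f: string) => scopeSet.some((s) => f === s || f.startsWith(`${s}/`));
  return gitLog
    .replace(/\r\n/g, "\n") // assumed normalizeLineEndings behavior
    .split(RS)
    .map((r) => r.trim())
    .filter(Boolean)
    .flatMap((record) => {
      // First line is "<hash>\u001f<subject>"; remaining lines are touched files.
      const lines = record.split("\n").map((l) => l.trim()).filter(Boolean);
      const header = lines.shift();
      const [hash, message] = header?.split(FS) ?? [];
      if (!hash || !message) return [];
      return [{ hash, message, files: lines.map(normalizePath) }];
    })
    .filter((c) => c.files.some(inScope))
    .map((c) => `${c.hash.slice(0, 7)} ${c.message}`)
    .join("\n");
}
```

Commits that only touch files outside the published scope (CI config, docs tooling) drop out, and what remains looks like `git log --oneline`, so the existing conventional-commit parser consumes it unchanged.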
@@ -53,18 +53,47 @@ export function bumpVersion(current: string, bump: BumpType): string {
  }
 
  /**
-  * Read the `version` field from `<cwd>/package.json`.
-  * Returns `"0.0.0"` when the file is absent or carries no version field.
+  * Read the parsed package.json object from `<cwd>/package.json`. Returns null
+  * when the file is absent or invalid.
   */
- export function getCurrentVersion(cwd: string): string {
+ function readPackageJson(cwd: string): { version?: string; files?: unknown } | null {
    const pkgPath = path.join(cwd, "package.json");
    try {
      const raw = fs.readFileSync(pkgPath, "utf-8");
-     const pkg = JSON.parse(raw) as { version?: string };
-     return pkg.version ?? "0.0.0";
+     return JSON.parse(raw) as { version?: string; files?: unknown };
    } catch {
-     return "0.0.0";
+     return null;
+   }
+ }
+
+ /**
+  * Read the `version` field from `<cwd>/package.json`.
+  * Returns `"0.0.0"` when the file is absent or carries no version field.
+  */
+ export function getCurrentVersion(cwd: string): string {
+   return readPackageJson(cwd)?.version ?? "0.0.0";
+ }
+
+ /**
+  * Return the publishable package paths used to scope release-note commits.
+  * Returns null when the package manifest does not declare a `files` whitelist.
+  */
+ export function getPublishedPackagePaths(cwd: string): string[] | null {
+   const files = readPackageJson(cwd)?.files;
+   if (!Array.isArray(files)) {
+     return null;
    }
+
+   const normalized = files
+     .filter((entry): entry is string => typeof entry === "string")
+     .map((entry) => entry.replace(/^\.\//, "").replace(/\/+$/, ""))
+     .filter(Boolean);
+
+   if (normalized.length === 0) {
+     return null;
+   }
+
+   return [...new Set(["package.json", ...normalized])];
  }
 
  interface ParsedReleaseTag {
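
The normalization in `getPublishedPackagePaths` can be isolated from the filesystem read. This sketch (with the hypothetical name `publishedPaths`) applies the same rules to an already-parsed `files` value:

```typescript
// Condensed mirror of getPublishedPackagePaths above, operating on an
// already-parsed manifest value instead of reading package.json from disk.
function publishedPaths(files: unknown): string[] | null {
  // No `files` whitelist in the manifest means no scoping is possible.
  if (!Array.isArray(files)) return null;
  const normalized = files
    .filter((entry): entry is string => typeof entry === "string")
    .map((entry) => entry.replace(/^\.\//, "").replace(/\/+$/, "")) // drop "./" prefix and trailing "/"
    .filter(Boolean);
  if (normalized.length === 0) return null;
  // package.json itself is always published, and Set dedupes normalized entries.
  return [...new Set(["package.json", ...normalized])];
}
```

So `["./dist/", "bin", "dist/"]` collapses to `["package.json", "dist", "bin"]`, and a manifest without a usable whitelist yields `null`, which makes the release handler fall back to the unscoped `git log --oneline` path.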
@@ -143,14 +143,12 @@ export function ensureDefaultReviewAgents(paths: PlatformPaths, cwd: string): vo
    const agentsDir = getReviewAgentsDir(paths, cwd);
    fs.mkdirSync(agentsDir, { recursive: true });
 
-   for (const [fileName, content] of Object.entries(DEFAULT_AGENT_TEMPLATES)) {
-     writeIfMissing(path.join(agentsDir, fileName), content);
-   }
-
+   // Default agent markdown files are installed globally only.
    writeIfMissing(getReviewAgentsConfigPath(paths, cwd), buildDefaultConfigText());
  }
 
  export async function loadReviewAgentsConfig(paths: PlatformPaths, cwd: string): Promise<ReviewAgentsConfig> {
+   ensureGlobalDefaultReviewAgents(paths);
    ensureDefaultReviewAgents(paths, cwd);
    return validateReviewAgentsConfig(await importYamlFile(getReviewAgentsConfigPath(paths, cwd)));
  }
@@ -158,20 +156,25 @@ export async function loadReviewAgentsConfig(paths: PlatformPaths, cwd: string):
  export async function loadReviewAgents(paths: PlatformPaths, cwd: string): Promise<LoadedReviewAgents> {
    const agentsDir = getReviewAgentsDir(paths, cwd);
    const configPath = getReviewAgentsConfigPath(paths, cwd);
+   const globalAgentsDir = getGlobalReviewAgentsDir(paths);
    const config = await loadReviewAgentsConfig(paths, cwd);
 
    const agents = config.agents
      .filter((agent) => agent.enabled)
      .map((agent) => {
-       const filePath = path.join(agentsDir, agent.data);
+       const projectFilePath = path.join(agentsDir, agent.data);
+       const globalFilePath = path.join(globalAgentsDir, agent.data);
+       const filePath = fs.existsSync(projectFilePath) ? projectFilePath : globalFilePath;
        if (!fs.existsSync(filePath)) {
-         throw new Error(`Configured review agent file does not exist: ${filePath}`);
+         throw new Error(
+           `Configured review agent file does not exist in project or global scope: ${agent.data}`
+         );
        }
 
        const definition = parseReviewAgentMarkdown(fs.readFileSync(filePath, "utf-8"), filePath);
        if (definition.name !== agent.name) {
          throw new Error(
-           `Configured agent name \"${agent.name}\" does not match frontmatter name \"${definition.name}\" in ${filePath}.`,
+           `Configured agent name "${agent.name}" does not match frontmatter name "${definition.name}" in ${filePath}.`,
          );
        }
 
@@ -263,16 +266,17 @@ export async function loadMergedReviewAgents(
    const globalResult = await loadGlobalReviewAgents(paths);
    const projectResult = await loadReviewAgents(paths, cwd);
 
+   // Project config is authoritative: any agent named in the project config
+   // (enabled or disabled) shadows the global version with the same name.
+   const projectConfigNames = new Set(projectResult.config.agents.map((a) => a.name));
+   const uniqueGlobalAgents = globalResult.agents.filter((a) => !projectConfigNames.has(a.name));
+
    // Tag project agents with scope
    const projectAgents = projectResult.agents.map((agent) => ({
      ...agent,
      scope: "project" as const,
    }));
 
-   // Project agents override global agents with the same name
-   const projectNames = new Set(projectAgents.map((a) => a.name));
-   const uniqueGlobalAgents = globalResult.agents.filter((a) => !projectNames.has(a.name));
-
    return {
      agentsDir: projectResult.agentsDir,
      configPath: projectResult.configPath,