@sienklogic/plan-build-run 2.42.1 → 2.44.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +22 -0
- package/package.json +1 -1
- package/plugins/copilot-pbr/plugin.json +1 -1
- package/plugins/copilot-pbr/skills/build/SKILL.md +34 -9
- package/plugins/cursor-pbr/.cursor-plugin/plugin.json +1 -1
- package/plugins/cursor-pbr/skills/build/SKILL.md +34 -9
- package/plugins/pbr/.claude-plugin/plugin.json +1 -1
- package/plugins/pbr/scripts/check-subagent-output.js +74 -52
- package/plugins/pbr/scripts/lib/spot-check.js +118 -0
- package/plugins/pbr/scripts/pbr-tools.js +19 -1
- package/plugins/pbr/scripts/suggest-compact.js +62 -2
- package/plugins/pbr/skills/build/SKILL.md +34 -9
package/CHANGELOG.md
CHANGED
@@ -5,6 +5,28 @@ All notable changes to Plan-Build-Run will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [2.44.0](https://github.com/SienkLogic/plan-build-run/compare/plan-build-run-v2.43.0...plan-build-run-v2.44.0) (2026-02-28)
+
+
+### Features
+
+* **49-03:** add CRITICAL spot-check CLI marker to build SKILL.md Step 6c ([255ee65](https://github.com/SienkLogic/plan-build-run/commit/255ee65506334a972ceb472b2fa90f1ba94eca80))
+* **49-03:** GREEN - refactor check-subagent-output.js to SKILL_CHECKS lookup table ([fb4b272](https://github.com/SienkLogic/plan-build-run/commit/fb4b272a8a02dd22d27745e5a9d98bb9fc57ef26))
+
+
+### Bug Fixes
+
+* **49-03:** sync spot-check CRITICAL marker to cursor-pbr and copilot-pbr ([994ef46](https://github.com/SienkLogic/plan-build-run/commit/994ef46aa13afde61ffa0e38e0332ab13be8863d))
+
+## [2.43.0](https://github.com/SienkLogic/plan-build-run/compare/plan-build-run-v2.42.1...plan-build-run-v2.43.0) (2026-02-28)
+
+
+### Features
+
+* **49-01:** add spot-check subcommand to pbr-tools.js dispatcher ([8a01f3a](https://github.com/SienkLogic/plan-build-run/commit/8a01f3aaff45b48b5a239e3bdc8e4a0213524bc8))
+* **49-01:** GREEN - implement spotCheck() in lib/spot-check.js ([d0d9d51](https://github.com/SienkLogic/plan-build-run/commit/d0d9d51b5a7467be7156243fbba9f05d1a624a12))
+* **49-02:** GREEN - add tier-aware bridge warnings to suggest-compact ([e145372](https://github.com/SienkLogic/plan-build-run/commit/e145372272ab3d3f6709ec87970f7a4e19226e9e))
+
 ## [2.42.1](https://github.com/SienkLogic/plan-build-run/compare/plan-build-run-v2.42.0...plan-build-run-v2.42.1) (2026-02-28)
 
 
package/package.json
CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "pbr",
   "displayName": "Plan-Build-Run",
-  "version": "2.
+  "version": "2.44.0",
   "description": "Plan-Build-Run — Structured development workflow for GitHub Copilot CLI. Solves context rot through disciplined agent delegation, structured planning, atomic execution, and goal-backward verification.",
   "author": {
     "name": "SienkLogic",
package/plugins/copilot-pbr/skills/build/SKILL.md
CHANGED

@@ -407,15 +407,40 @@ For each completed executor:
 
 **Spot-check executor claims:**
 
-
-
-
-
-
-
-
-
-
+CRITICAL: Before reading results or advancing to the next wave, run the spot-check CLI for each completed plan.
+
+For each completed plan in this wave:
+
+```bash
+node ${PLUGIN_ROOT}/scripts/pbr-tools.js spot-check {phaseSlug} {planId}
+```
+
+Where `{phaseSlug}` is the phase directory name (e.g., `49-build-workflow-hardening`) and `{planId}` is the plan identifier (e.g., `49-01`).
+
+The command returns JSON: `{ ok, summary_exists, key_files_checked, commits_present, detail }`
+
+**If `ok` is `false` for ANY plan: STOP.** Do NOT advance to the next wave. Present the user with:
+
+```
+Spot-check FAILED for plan {planId}: {detail}
+
+Choose an action:
+Retry — Re-spawn executor for this plan
+Continue — Skip this plan and proceed to next wave (may leave phase incomplete)
+Abort — Stop the build entirely
+```
+
+Use AskUserQuestion with the three options. Route:
+
+- Retry: Re-spawn the executor for this plan (go back to Step 6a for this plan only)
+- Continue: Log the failure, skip the plan, proceed
+- Abort: Stop all build work, leave phase in partial state
+
+**If `ok` is `true` for all plans:**
+
+- Also check SUMMARY.md frontmatter for `self_check_failures`: if present, warn the user: "Plan {id} reported self-check failures: {list}. Inspect before continuing?"
+- Also search SUMMARY.md for `## Self-Check: FAILED` marker — if present, warn before next wave
+- Between waves: verify no file conflicts from parallel executors (`git status` for uncommitted changes)
 
 **Additional wave spot-checks:**
 - Check for `## Self-Check: FAILED` in SUMMARY.md — if present, warn user before proceeding to next wave
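The gate described above fails closed: one failing plan blocks the whole wave. A minimal self-contained sketch of that rule, assuming a `results` map keyed by plan id where each value is the JSON object a `pbr-tools.js spot-check` invocation would print (the `gateWave` helper is hypothetical, not part of the package):

```javascript
// Hedged sketch of the wave gate. `results` stands in for collected
// spot-check CLI outputs; shape { ok, detail } is taken from the diff.
function gateWave(results) {
  const failures = Object.entries(results)
    .filter(([, r]) => !r.ok)
    .map(([planId, r]) => `Spot-check FAILED for plan ${planId}: ${r.detail}`);
  // Any failure blocks the next wave; the orchestrator then asks Retry/Continue/Abort.
  return { advance: failures.length === 0, failures };
}

const gate = gateWave({
  '49-01': { ok: true, detail: 'all checks passed' },
  '49-02': { ok: false, detail: 'commits field is empty or missing' }
});
console.log(gate.advance);     // false
console.log(gate.failures[0]); // failure message for plan 49-02
```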
package/plugins/cursor-pbr/.cursor-plugin/plugin.json
CHANGED

@@ -1,7 +1,7 @@
 {
   "name": "pbr",
   "displayName": "Plan-Build-Run",
-  "version": "2.
+  "version": "2.44.0",
   "description": "Plan-Build-Run — Structured development workflow for Cursor. Solves context rot through disciplined subagent delegation, structured planning, atomic execution, and goal-backward verification.",
   "author": {
     "name": "SienkLogic",
package/plugins/cursor-pbr/skills/build/SKILL.md
CHANGED

@@ -408,15 +408,40 @@ For each completed executor:
 
 **Spot-check executor claims:**
 
-
-
-
-
-
-
-
-
-
+CRITICAL: Before reading results or advancing to the next wave, run the spot-check CLI for each completed plan.
+
+For each completed plan in this wave:
+
+```bash
+node ${PLUGIN_ROOT}/scripts/pbr-tools.js spot-check {phaseSlug} {planId}
+```
+
+Where `{phaseSlug}` is the phase directory name (e.g., `49-build-workflow-hardening`) and `{planId}` is the plan identifier (e.g., `49-01`).
+
+The command returns JSON: `{ ok, summary_exists, key_files_checked, commits_present, detail }`
+
+**If `ok` is `false` for ANY plan: STOP.** Do NOT advance to the next wave. Present the user with:
+
+```
+Spot-check FAILED for plan {planId}: {detail}
+
+Choose an action:
+Retry — Re-spawn executor for this plan
+Continue — Skip this plan and proceed to next wave (may leave phase incomplete)
+Abort — Stop the build entirely
+```
+
+Use AskUserQuestion with the three options. Route:
+
+- Retry: Re-spawn the executor for this plan (go back to Step 6a for this plan only)
+- Continue: Log the failure, skip the plan, proceed
+- Abort: Stop all build work, leave phase in partial state
+
+**If `ok` is `true` for all plans:**
+
+- Also check SUMMARY.md frontmatter for `self_check_failures`: if present, warn the user: "Plan {id} reported self-check failures: {list}. Inspect before continuing?"
+- Also search SUMMARY.md for `## Self-Check: FAILED` marker — if present, warn before next wave
+- Between waves: verify no file conflicts from parallel executors (`git status` for uncommitted changes)
 
 **Additional wave spot-checks:**
 - Check for `## Self-Check: FAILED` in SUMMARY.md — if present, warn user before proceeding to next wave
package/plugins/pbr/.claude-plugin/plugin.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "pbr",
-  "version": "2.
+  "version": "2.44.0",
   "description": "Plan-Build-Run — Structured development workflow for Claude Code. Solves context rot through disciplined subagent delegation, structured planning, atomic execution, and goal-backward verification.",
   "author": {
     "name": "SienkLogic",
package/plugins/pbr/scripts/check-subagent-output.js
CHANGED

@@ -322,6 +322,74 @@ function checkSummaryCommits(planningDir, foundFiles, warnings) {
   }
 }
 
+// Skill-specific check lookup table keyed by 'activeSkill:agentType'
+const SKILL_CHECKS = {
+  'begin:pbr:planner': {
+    description: 'begin planner core files',
+    check: (planningDir, _found, warnings) => {
+      const coreFiles = ['REQUIREMENTS.md', 'ROADMAP.md', 'STATE.md'];
+      for (const f of coreFiles) {
+        if (!fs.existsSync(path.join(planningDir, f))) {
+          warnings.push(`Begin planner: ${f} was not created. The project may be in an incomplete state.`);
+        }
+      }
+    }
+  },
+  'plan:pbr:researcher': {
+    description: 'plan researcher phase-level RESEARCH.md',
+    check: (planningDir, found, warnings) => {
+      const phaseResearch = findInPhaseDir(planningDir, /^RESEARCH\.md$/i);
+      if (found.length === 0 && phaseResearch.length === 0) {
+        warnings.push('Plan researcher: No research output found in .planning/research/ or in the phase directory.');
+      }
+    }
+  },
+  'scan:pbr:codebase-mapper': {
+    description: 'scan codebase-mapper 4 focus areas',
+    check: (planningDir, _found, warnings) => {
+      const expectedAreas = ['tech', 'arch', 'quality', 'concerns'];
+      const codebaseDir = path.join(planningDir, 'codebase');
+      if (fs.existsSync(codebaseDir)) {
+        try {
+          const files = fs.readdirSync(codebaseDir).map(f => f.toLowerCase());
+          for (const area of expectedAreas) {
+            if (!files.some(f => f.includes(area))) {
+              warnings.push(`Scan mapper: No output file containing "${area}" found in .planning/codebase/. One of the 4 mappers may have failed.`);
+            }
+          }
+        } catch (_e) { /* best-effort */ }
+      }
+    }
+  },
+  'review:pbr:verifier': {
+    description: 'review verifier VERIFICATION.md status',
+    check: (planningDir, _found, warnings) => {
+      const verFiles = findInPhaseDir(planningDir, /^VERIFICATION\.md$/i);
+      for (const vf of verFiles) {
+        try {
+          const content = fs.readFileSync(path.join(planningDir, vf), 'utf8');
+          const statusMatch = content.match(/^status:\s*(\S+)/mi);
+          if (statusMatch && statusMatch[1] === 'gaps_found') {
+            warnings.push('Review verifier: VERIFICATION.md has status "gaps_found" — ensure gaps are surfaced to the user.');
+          }
+        } catch (_e) { /* best-effort */ }
+      }
+    }
+  },
+  'build:pbr:executor': {
+    description: 'build executor SUMMARY commits',
+    check: (planningDir, found, warnings) => {
+      checkSummaryCommits(planningDir, found, warnings);
+    }
+  },
+  'quick:pbr:executor': {
+    description: 'quick executor SUMMARY commits',
+    check: (planningDir, found, warnings) => {
+      checkSummaryCommits(planningDir, found, warnings);
+    }
+  }
+};
+
 function readStdin() {
   try {
     const input = fs.readFileSync(0, 'utf8').trim();
@@ -406,57 +474,11 @@ async function main() {
     skillWarnings.push(`${label} output may be stale — no recent output files detected.`);
   }
 
-  //
-
-
-
-
-        skillWarnings.push(`Begin planner: ${f} was not created. The project may be in an incomplete state.`);
-      }
-    }
-  }
-
-  // GAP-05: Plan researcher should produce phase-level RESEARCH.md
-  if (activeSkill === 'plan' && agentType === 'pbr:researcher') {
-    const phaseResearch = findInPhaseDir(planningDir, /^RESEARCH\.md$/i);
-    if (found.length === 0 && phaseResearch.length === 0) {
-      skillWarnings.push('Plan researcher: No research output found in .planning/research/ or in the phase directory.');
-    }
-  }
-
-  // GAP-08: Scan codebase-mapper should produce all 4 focus areas
-  if (activeSkill === 'scan' && agentType === 'pbr:codebase-mapper') {
-    const expectedAreas = ['tech', 'arch', 'quality', 'concerns'];
-    const codebaseDir = path.join(planningDir, 'codebase');
-    if (fs.existsSync(codebaseDir)) {
-      try {
-        const files = fs.readdirSync(codebaseDir).map(f => f.toLowerCase());
-        for (const area of expectedAreas) {
-          if (!files.some(f => f.includes(area))) {
-            skillWarnings.push(`Scan mapper: No output file containing "${area}" found in .planning/codebase/. One of the 4 mappers may have failed.`);
-          }
-        }
-      } catch (_e) { /* best-effort */ }
-    }
-  }
-
-  // GAP-07: Review verifier should produce meaningful VERIFICATION.md status
-  if (activeSkill === 'review' && agentType === 'pbr:verifier') {
-    const verFiles = findInPhaseDir(planningDir, /^VERIFICATION\.md$/i);
-    for (const vf of verFiles) {
-      try {
-        const content = fs.readFileSync(path.join(planningDir, vf), 'utf8');
-        const statusMatch = content.match(/^status:\s*(\S+)/mi);
-        if (statusMatch && statusMatch[1] === 'gaps_found') {
-          skillWarnings.push('Review verifier: VERIFICATION.md has status "gaps_found" — ensure gaps are surfaced to the user.');
-        }
-      } catch (_e) { /* best-effort */ }
-    }
-  }
-
-  // GAP-06: Build/quick executor SUMMARY should have commits
-  if ((activeSkill === 'build' || activeSkill === 'quick') && agentType === 'pbr:executor') {
-    checkSummaryCommits(planningDir, found, skillWarnings);
+  // Skill-specific dispatch via SKILL_CHECKS lookup
+  const skillCheckKey = `${activeSkill}:${agentType}`;
+  const skillCheck = SKILL_CHECKS[skillCheckKey];
+  if (skillCheck) {
+    skillCheck.check(planningDir, found, skillWarnings);
   }
 
   // Output logic: avoid duplicating warnings
@@ -526,5 +548,5 @@ async function main() {
   process.exit(0);
 }
 
-module.exports = { AGENT_OUTPUTS, findInPhaseDir, findInQuickDir, checkSummaryCommits, isRecent, getCurrentPhase, checkRoadmapStaleness };
+module.exports = { AGENT_OUTPUTS, SKILL_CHECKS, findInPhaseDir, findInQuickDir, checkSummaryCommits, isRecent, getCurrentPhase, checkRoadmapStaleness };
 if (require.main === module || process.argv[1] === __filename) { main(); }
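The refactor above replaces a chain of `if (activeSkill === … && agentType === …)` blocks with a single key lookup. A minimal self-contained sketch of that dispatch shape (the table entry and its check body here are toy stand-ins, not the package's real filesystem checks):

```javascript
// Toy lookup table in the same shape as SKILL_CHECKS: keys are
// 'activeSkill:agentType', each entry's check pushes warnings rather than throwing.
const CHECKS = {
  'build:pbr:executor': {
    description: 'build executor SUMMARY commits',
    check: (_planningDir, _found, warnings) => warnings.push('missing commits')
  }
};

function runCheck(activeSkill, agentType, planningDir, found) {
  const warnings = [];
  const entry = CHECKS[`${activeSkill}:${agentType}`];
  if (entry) entry.check(planningDir, found, warnings); // unknown pairs are a no-op
  return warnings;
}

console.log(runCheck('build', 'pbr:executor', '.planning', [])); // [ 'missing commits' ]
console.log(runCheck('scan', 'pbr:codebase-mapper', '.planning', [])); // []
```

The design keeps adding a new skill/agent pair to a one-line table edit instead of another branch in `main()`.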
package/plugins/pbr/scripts/lib/spot-check.js
ADDED

@@ -0,0 +1,118 @@
+'use strict';
+
+/**
+ * lib/spot-check.js — Machine-enforced wave gate spot check.
+ *
+ * Verifies that a completed plan's SUMMARY file exists, lists real key_files
+ * that exist on disk, and has a non-empty commits field.
+ *
+ * Used by pbr-tools.js `spot-check <phaseSlug> <planId>` subcommand.
+ */
+
+const fs = require('fs');
+const path = require('path');
+const { parseYamlFrontmatter } = require('./core');
+
+/**
+ * Perform a spot-check on a completed plan's SUMMARY artifact.
+ *
+ * @param {string} planningDir - Absolute path to the .planning/ directory
+ * @param {string} phaseSlug - Phase directory name (e.g. "49-build-workflow-hardening")
+ * @param {string} planId - Plan identifier (e.g. "49-01")
+ * @returns {{
+ *   ok: boolean,
+ *   summary_exists: boolean,
+ *   key_files_checked: Array<{path: string, exists: boolean}>,
+ *   commits_present: boolean,
+ *   detail: string
+ * }}
+ */
+function spotCheck(planningDir, phaseSlug, planId) {
+  const summaryPath = path.join(planningDir, 'phases', phaseSlug, 'SUMMARY-' + planId + '.md');
+
+  // Check 1: SUMMARY file must exist
+  if (!fs.existsSync(summaryPath)) {
+    return {
+      ok: false,
+      summary_exists: false,
+      key_files_checked: [],
+      commits_present: false,
+      detail: 'SUMMARY-' + planId + '.md not found'
+    };
+  }
+
+  // Read and parse SUMMARY frontmatter
+  let content;
+  try {
+    content = fs.readFileSync(summaryPath, 'utf8');
+  } catch (e) {
+    return {
+      ok: false,
+      summary_exists: true,
+      key_files_checked: [],
+      commits_present: false,
+      detail: 'Failed to read SUMMARY: ' + e.message
+    };
+  }
+
+  const fm = parseYamlFrontmatter(content);
+  const failures = [];
+
+  // Check 2: key_files — check first 2 entries for existence on disk
+  // key_files are repo-relative paths; resolve them relative to the repo root (planningDir/..)
+  const repoRoot = path.resolve(path.join(planningDir, '..'));
+  const rawKeyFiles = Array.isArray(fm.key_files) ? fm.key_files : [];
+  const toCheck = rawKeyFiles.slice(0, 2);
+
+  const key_files_checked = toCheck.map(kf => {
+    const absPath = path.resolve(repoRoot, kf);
+    const exists = fs.existsSync(absPath);
+    return { path: kf, exists };
+  });
+
+  const missingFiles = key_files_checked.filter(kf => !kf.exists);
+  if (missingFiles.length > 0) {
+    failures.push('missing key_files: ' + missingFiles.map(kf => kf.path).join(', '));
+  }
+
+  // Check 3: commits field must be non-empty
+  // Re-parse the raw frontmatter block for the commits field to match check-subagent-output.js pattern
+  const fmMatch = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
+  let commits_present = false;
+
+  if (fmMatch) {
+    const fmBlock = fmMatch[1];
+    const commitsMatch = fmBlock.match(/commits:\s*(\[.*?\]|.*)/);
+    if (commitsMatch) {
+      const commitsVal = commitsMatch[1].trim();
+      // Treat these as empty: [], '', ~, null
+      commits_present = !(
+        commitsVal === '[]' ||
+        commitsVal === '' ||
+        commitsVal === '~' ||
+        commitsVal === 'null' ||
+        commitsVal === '""'
+      );
+    }
+  } else {
+    // No frontmatter found at all
+    failures.push('no frontmatter found in SUMMARY');
+  }
+
+  if (!commits_present) {
+    failures.push('commits field is empty or missing');
+  }
+
+  const ok = missingFiles.length === 0 && commits_present && (fmMatch !== null);
+  const detail = failures.length === 0 ? 'all checks passed' : failures.join('; ');
+
+  return {
+    ok,
+    summary_exists: true,
+    key_files_checked,
+    commits_present,
+    detail
+  };
+}
+
+module.exports = { spotCheck };
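The commits check in `spotCheck()` accepts several YAML spellings of "empty". A standalone sketch of that predicate, with the regex and the empty-value list (`[]`, `''`, `~`, `null`, `""`) copied from the diff so it can be exercised without the filesystem plumbing:

```javascript
// Isolated version of spotCheck()'s commits-presence test. Input is the raw
// frontmatter block text; returns false when the field is absent or "empty".
function commitsPresent(frontmatterBlock) {
  const m = frontmatterBlock.match(/commits:\s*(\[.*?\]|.*)/);
  if (!m) return false; // no commits field at all
  const val = m[1].trim();
  return !['[]', '', '~', 'null', '""'].includes(val);
}

console.log(commitsPresent('plan: 49-01\ncommits: [abc1234, def5678]')); // true
console.log(commitsPresent('plan: 49-01\ncommits: []'));                 // false
console.log(commitsPresent('plan: 49-01'));                              // false
```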
package/plugins/pbr/scripts/pbr-tools.js
CHANGED

@@ -40,6 +40,7 @@
  * learnings ingest <json-file> — Ingest a learning entry into global store
  * learnings query [--tags X] [--min-confidence Y] [--stack S] [--type T] — Query learnings
  * learnings check-thresholds — Check deferral trigger conditions
+ * spot-check <phaseSlug> <planId> — Verify SUMMARY, key_files, and commits exist for a plan
  *
  * Environment: PBR_PROJECT_ROOT — Override project root directory (used when hooks fire from subagent cwd)
  */
@@ -132,6 +133,10 @@ const {
   applyMigrations: _applyMigrations
 } = require('./lib/migrate');
 
+const {
+  spotCheck: _spotCheck
+} = require('./lib/spot-check');
+
 const {
   learningsIngest: _learningsIngest,
   learningsQuery: _learningsQuery,
@@ -282,6 +287,10 @@ function migrate(options) {
   return _applyMigrations(planningDir, options);
 }
 
+function spotCheck(phaseDir, planId) {
+  return _spotCheck(planningDir, phaseDir, planId);
+}
+
 // --- validateProject stays here (cross-cutting across modules) ---
 
 /**
@@ -711,6 +720,15 @@ async function main() {
       error('Usage: learnings <ingest|query|check-thresholds>');
       process.exit(1);
     }
+  } else if (command === 'spot-check') {
+    // spot-check <phaseSlug> <planId>
+    // Returns JSON: { ok, summary_exists, key_files_checked, commits_present, detail }
+    const phaseSlug = args[1];
+    const planId = args[2];
+    if (!phaseSlug || !planId) {
+      error('Usage: spot-check <phaseSlug> <planId>');
+    }
+    output(spotCheck(phaseSlug, planId));
   } else if (command === 'validate-project') {
     output(validateProject());
   } else {
@@ -722,6 +740,6 @@ async function main() {
 }
 
 if (require.main === module || process.argv[1] === __filename) { main().catch(err => { process.stderr.write(err.message + '\n'); process.exit(1); }); }
-module.exports = { KNOWN_AGENTS, initExecutePhase, initPlanPhase, initQuick, initVerifyWork, initResume, initProgress, statePatch, stateAdvancePlan, stateRecordMetric, parseStateMd, parseRoadmapMd, parseYamlFrontmatter, parseMustHaves, countMustHaves, stateLoad, stateCheckProgress, configLoad, configClearCache, configValidate, lockedFileUpdate, planIndex, determinePhaseStatus, findFiles, atomicWrite, tailLines, frontmatter, mustHavesCollect, phaseInfo, stateUpdate, roadmapUpdateStatus, roadmapUpdatePlans, updateLegacyStateField, updateFrontmatterField, updateTableRow, findRoadmapRow, resolveDepthProfile, DEPTH_PROFILE_DEFAULTS, historyAppend, historyLoad, VALID_STATUS_TRANSITIONS, validateStatusTransition, writeActiveSkill, validateProject, phaseAdd, phaseRemove, phaseList, loadUserDefaults, saveUserDefaults, mergeUserDefaults, USER_DEFAULTS_PATH, todoList, todoGet, todoAdd, todoDone, migrate };
+module.exports = { KNOWN_AGENTS, initExecutePhase, initPlanPhase, initQuick, initVerifyWork, initResume, initProgress, statePatch, stateAdvancePlan, stateRecordMetric, parseStateMd, parseRoadmapMd, parseYamlFrontmatter, parseMustHaves, countMustHaves, stateLoad, stateCheckProgress, configLoad, configClearCache, configValidate, lockedFileUpdate, planIndex, determinePhaseStatus, findFiles, atomicWrite, tailLines, frontmatter, mustHavesCollect, phaseInfo, stateUpdate, roadmapUpdateStatus, roadmapUpdatePlans, updateLegacyStateField, updateFrontmatterField, updateTableRow, findRoadmapRow, resolveDepthProfile, DEPTH_PROFILE_DEFAULTS, historyAppend, historyLoad, VALID_STATUS_TRANSITIONS, validateStatusTransition, writeActiveSkill, validateProject, phaseAdd, phaseRemove, phaseList, loadUserDefaults, saveUserDefaults, mergeUserDefaults, USER_DEFAULTS_PATH, todoList, todoGet, todoAdd, todoDone, migrate, spotCheck };
 // NOTE: validateProject, phaseAdd, phaseRemove, phaseList were previously CLI-only (not exported).
 // They are now exported for testability. This is additive and backwards-compatible.
package/plugins/pbr/scripts/suggest-compact.js
CHANGED

@@ -4,9 +4,13 @@
  * PostToolUse hook on Write|Edit: Tracks tool call count per session
  * and suggests /compact when approaching context limits.
  *
+ * Primary path: reads .planning/.context-budget.json (written by context-bridge.js)
+ * and emits tier-labeled warnings (DEGRADING/POOR/CRITICAL) when bridge data is
+ * fresh (<60s old). CRITICAL tier always emits; others use REMINDER_INTERVAL debounce.
+ *
+ * Fallback: when bridge is absent or stale, uses call-count threshold.
  * Counter stored in .planning/.compact-counter (JSON).
  * Threshold configurable via config.json hooks.compactThreshold (default: 50).
- * After first suggestion, re-suggests every 25 calls.
  * Counter resets on SessionStart (via progress-tracker.js).
  *
  * Exit codes:
@@ -17,6 +21,7 @@ const fs = require('fs');
 const path = require('path');
 const { logHook } = require('./hook-logger');
 const { configLoad } = require('./pbr-tools');
+const { loadBridge, TIER_MESSAGES } = require('./context-bridge');
 
 const DEFAULT_THRESHOLD = 50;
 const REMINDER_INTERVAL = 25;
@@ -43,8 +48,41 @@ function main() {
   });
 }
 
+/**
+ * Check the context bridge for the current tier.
+ * Returns { tier, message } for actionable tiers (DEGRADING/POOR/CRITICAL),
+ * or null if bridge is absent, stale (>60s), or tier is PEAK/GOOD (<50%).
+ * @param {string} planningDir - Path to .planning/ directory
+ * @returns {{ tier: string, message: string }|null}
+ */
+function checkBridgeTier(planningDir) {
+  const bridgePath = path.join(planningDir, '.context-budget.json');
+  const bridge = loadBridge(bridgePath);
+  if (!bridge) return null;
+
+  // Check staleness: if timestamp is older than 60 seconds, treat as stale
+  if (bridge.timestamp) {
+    const ageMs = Date.now() - new Date(bridge.timestamp).getTime();
+    if (ageMs > 60000) return null;
+  }
+
+  const percent = bridge.estimated_percent || 0;
+
+  if (percent >= 85) {
+    return { tier: 'CRITICAL', message: TIER_MESSAGES.CRITICAL };
+  } else if (percent >= 70) {
+    return { tier: 'POOR', message: TIER_MESSAGES.POOR };
+  } else if (percent >= 50) {
+    return { tier: 'DEGRADING', message: TIER_MESSAGES.DEGRADING };
+  }
+
+  // PEAK tier (<50%) — no tier message needed
+  return null;
+}
+
 /**
  * Increment tool call counter and return a suggestion if threshold is reached.
+ * Checks bridge tier first; falls back to call-count when bridge is absent or stale.
  * @param {string} planningDir - Path to .planning/ directory
  * @param {string} cwd - Current working directory (for config loading)
  * @returns {Object|null} Hook output with additionalContext, or null
@@ -57,6 +95,28 @@ function checkCompaction(planningDir, cwd) {
   counter.count += 1;
   saveCounter(counterPath, counter);
 
+  // Check bridge tier first
+  const bridgeTier = checkBridgeTier(planningDir);
+  if (bridgeTier !== null) {
+    const { tier, message } = bridgeTier;
+    const isFirstSuggestion = !counter.lastSuggested;
+    const callsSinceSuggestion = counter.count - (counter.lastSuggested || 0);
+    const shouldEmit = tier === 'CRITICAL' || isFirstSuggestion || callsSinceSuggestion >= REMINDER_INTERVAL;
+
+    if (shouldEmit) {
+      counter.lastSuggested = counter.count;
+      saveCounter(counterPath, counter);
+
+      logHook('suggest-compact', 'PostToolUse', 'tier-suggest', { tier, count: counter.count });
+
+      return {
+        additionalContext: `[Context Budget - ${tier}] ${message}`
+      };
+    }
+    return null;
+  }
+
+  // Fall back to counter-based suggestion when bridge is absent or stale
   if (counter.count < threshold) return null;
 
   const isFirstSuggestion = !counter.lastSuggested;
@@ -115,5 +175,5 @@ function resetCounter(planningDir) {
   }
 }
 
-module.exports = { checkCompaction, loadCounter, saveCounter, getThreshold, resetCounter, DEFAULT_THRESHOLD, REMINDER_INTERVAL };
+module.exports = { checkCompaction, checkBridgeTier, loadCounter, saveCounter, getThreshold, resetCounter, DEFAULT_THRESHOLD, REMINDER_INTERVAL };
 if (require.main === module || process.argv[1] === __filename) { main(); }
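The tier boundaries in `checkBridgeTier` can be isolated from the bridge-file plumbing. A sketch using the thresholds from the diff (50/70/85 on `estimated_percent`); the returned labels match the tier names, while the real hook pairs each with a message from `TIER_MESSAGES`:

```javascript
// Threshold mapping from checkBridgeTier, without fs/bridge dependencies.
// Below 50% (PEAK/GOOD) the hook stays quiet, so we return null there.
function tierFor(estimatedPercent) {
  if (estimatedPercent >= 85) return 'CRITICAL';  // always emitted, no debounce
  if (estimatedPercent >= 70) return 'POOR';      // debounced via REMINDER_INTERVAL
  if (estimatedPercent >= 50) return 'DEGRADING'; // debounced via REMINDER_INTERVAL
  return null;
}

console.log(tierFor(92)); // CRITICAL
console.log(tierFor(55)); // DEGRADING
console.log(tierFor(10)); // null
```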
package/plugins/pbr/skills/build/SKILL.md
CHANGED

@@ -409,15 +409,40 @@ For each completed executor:
 
 **Spot-check executor claims:**
 
-
-
-
-
-
-
-
-
-
+CRITICAL: Before reading results or advancing to the next wave, run the spot-check CLI for each completed plan.
+
+For each completed plan in this wave:
+
+```bash
+node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js spot-check {phaseSlug} {planId}
+```
+
+Where `{phaseSlug}` is the phase directory name (e.g., `49-build-workflow-hardening`) and `{planId}` is the plan identifier (e.g., `49-01`).
+
+The command returns JSON: `{ ok, summary_exists, key_files_checked, commits_present, detail }`
+
+**If `ok` is `false` for ANY plan: STOP.** Do NOT advance to the next wave. Present the user with:
+
+```
+Spot-check FAILED for plan {planId}: {detail}
+
+Choose an action:
+Retry — Re-spawn executor for this plan
+Continue — Skip this plan and proceed to next wave (may leave phase incomplete)
+Abort — Stop the build entirely
+```
+
+Use AskUserQuestion with the three options. Route:
+
+- Retry: Re-spawn the executor for this plan (go back to Step 6a for this plan only)
+- Continue: Log the failure, skip the plan, proceed
+- Abort: Stop all build work, leave phase in partial state
+
+**If `ok` is `true` for all plans:**
+
+- Also check SUMMARY.md frontmatter for `self_check_failures`: if present, warn the user: "Plan {id} reported self-check failures: {list}. Inspect before continuing?"
+- Also search SUMMARY.md for `## Self-Check: FAILED` marker — if present, warn before next wave
+- Between waves: verify no file conflicts from parallel executors (`git status` for uncommitted changes)
 
 **Additional wave spot-checks:**
 - Check for `## Self-Check: FAILED` in SUMMARY.md — if present, warn user before proceeding to next wave