cclaw-cli 0.48.32 → 0.48.34
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/content/hook-inline-snippets.d.ts +80 -0
- package/dist/content/hook-inline-snippets.js +270 -0
- package/dist/content/node-hooks.js +9 -197
- package/dist/content/opencode-plugin.js +190 -8
- package/dist/content/skills.js +63 -3
- package/dist/content/stage-schema.js +4 -0
- package/dist/content/stages/brainstorm.js +5 -0
- package/dist/content/stages/design.js +5 -0
- package/dist/content/stages/plan.js +5 -0
- package/dist/content/stages/review.js +5 -0
- package/dist/content/stages/schema-types.d.ts +19 -0
- package/dist/content/stages/scope.js +5 -0
- package/dist/content/stages/ship.js +6 -0
- package/dist/content/stages/spec.js +6 -0
- package/dist/content/stages/tdd.js +6 -0
- package/package.json +1 -1
package/dist/content/hook-inline-snippets.d.ts

@@ -0,0 +1,80 @@
+/**
+ * hook-inline-snippets.ts
+ *
+ * Runtime `.cclaw/hooks/run-hook.mjs` is a **standalone Node script** that
+ * cannot import from `cclaw-cli` — it must work inside the end-user's
+ * project even when the CLI is not installed. Two derived computations,
+ * though, must remain 1:1 with the canonical TS implementations:
+ *
+ * 1. `computeCompoundReadinessInline` mirrors
+ *    `src/knowledge-store.ts::computeCompoundReadiness`.
+ * 2. `computeRalphLoopStatusInline` mirrors
+ *    `src/tdd-cycle.ts::computeRalphLoopStatus`.
+ *
+ * Previously those bodies lived inline in `src/content/node-hooks.ts` — a
+ * ~2000-line file — next to unrelated hook-handler code. Any silent drift
+ * only surfaced when someone remembered to update both sides.
+ *
+ * This module centralizes the inline JavaScript snippets so:
+ *
+ * - There is exactly **one place** (this file) that holds each inline
+ *   JS body.
+ * - Each snippet carries an explicit "mirrors X, parity enforced by Y"
+ *   header comment and is emitted into `run-hook.mjs` verbatim.
+ * - `src/content/node-hooks.ts` only interpolates the snippets, it no
+ *   longer owns their source code.
+ *
+ * Parity with the TypeScript canonical implementations is enforced by
+ * `tests/unit/ralph-loop-parity.test.ts`. Any structural change to the
+ * canonical TS code MUST:
+ *
+ * 1. Update the matching snippet below.
+ * 2. Re-run `npm test tests/unit/ralph-loop-parity.test.ts`.
+ *
+ * DO NOT inline tests here — keep the parity check in its dedicated test
+ * file.
+ */
+/**
+ * Inline JS helpers used by both compound-readiness and ralph-loop
+ * snippets. Kept small and locked: they are shared across the two inline
+ * routines and must not grow into a hidden utility namespace.
+ *
+ * - `normalizeCompoundLastUpdatedAt` produces a stable ISO-8601 UTC
+ *   timestamp so the hook-written `compound-readiness.json` is byte-equal
+ *   to the CLI-written version for the same input.
+ * - `countArchivedRunsInline` counts immediate subdirectories of
+ *   `<root>/.cclaw/runs/` so both the hook and the CLI see the same
+ *   `archivedRunsCount` for the small-project relaxation.
+ */
+export declare const HOOK_INLINE_SHARED_HELPERS = "\nfunction normalizeCompoundLastUpdatedAt(date) {\n return date.toISOString().replace(/\\.\\d{3}Z$/u, \"Z\");\n}\n\n// Count archived runs as sub-directories under \`.cclaw/runs/\`. Missing\n// dir returns 0; unexpected errors return undefined so the caller can\n// skip the small-project relaxation rather than guess.\nasync function countArchivedRunsInline(root) {\n const dir = path.join(root, RUNTIME_ROOT, \"runs\");\n try {\n const entries = await fs.readdir(dir, { withFileTypes: true });\n return entries.filter((entry) => entry.isDirectory()).length;\n } catch (error) {\n const code = error && typeof error === \"object\" && \"code\" in error ? error.code : null;\n if (code === \"ENOENT\") return 0;\n return undefined;\n }\n}\n";
+/**
+ * Inline mirror of `src/knowledge-store.ts::computeCompoundReadiness`.
+ *
+ * Parity enforced by
+ * `tests/unit/ralph-loop-parity.test.ts::compound-readiness parity`.
+ *
+ * Signature contract:
+ *   async function computeCompoundReadinessInline(root, options) -> CompoundReadiness
+ *
+ * Accepted options (all optional):
+ * - prereadRaw: string | undefined — pre-read `knowledge.jsonl` contents.
+ * - threshold: integer >= 1 — default recurrence threshold.
+ * - archivedRunsCount: integer >= 0 — enables small-project relaxation.
+ * - maxReady: integer >= 1 — cap on returned `ready` cluster count
+ *   (default 10).
+ *
+ * Depends on: `SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD`,
+ * `SMALL_PROJECT_RECURRENCE_THRESHOLD`, `COMPOUND_RECURRENCE_THRESHOLD`,
+ * and `HOOK_INLINE_SHARED_HELPERS` being in the same runtime scope.
+ */
+export declare const COMPOUND_READINESS_INLINE_SOURCE = "\nasync function computeCompoundReadinessInline(root, options) {\n const filePath = path.join(root, RUNTIME_ROOT, \"knowledge.jsonl\");\n // Caller may supply pre-read raw to avoid double-reading knowledge.jsonl.\n const raw = typeof (options && options.prereadRaw) === \"string\"\n ? options.prereadRaw\n : await readTextFile(filePath, \"\");\n const baseThresholdRaw = options && options.threshold;\n const baseThreshold = Number.isInteger(baseThresholdRaw) && baseThresholdRaw >= 1\n ? baseThresholdRaw\n : COMPOUND_RECURRENCE_THRESHOLD;\n const archivedRunsCount =\n typeof (options && options.archivedRunsCount) === \"number\" &&\n Number.isFinite(options.archivedRunsCount) &&\n options.archivedRunsCount >= 0\n ? Math.floor(options.archivedRunsCount)\n : undefined;\n const smallProjectRelaxationApplied =\n archivedRunsCount !== undefined &&\n archivedRunsCount < SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD &&\n baseThreshold > SMALL_PROJECT_RECURRENCE_THRESHOLD;\n const threshold = smallProjectRelaxationApplied\n ? SMALL_PROJECT_RECURRENCE_THRESHOLD\n : baseThreshold;\n const maxReady = Number.isInteger(options && options.maxReady) && options.maxReady >= 1\n ? options.maxReady\n : 10;\n const normalize = (value) => String(value == null ? \"\" : value).trim().replace(/\\s+/gu, \" \").toLowerCase();\n const severityWeight = (sev) => {\n if (sev === \"critical\") return 3;\n if (sev === \"important\") return 2;\n if (sev === \"suggestion\") return 1;\n return 0;\n };\n const buckets = new Map();\n for (const rawLine of raw.split(/\\r?\\n/gu)) {\n const line = rawLine.trim();\n if (line.length === 0) continue;\n let row;\n try { row = JSON.parse(line); } catch { continue; }\n if (!row || typeof row !== \"object\" || Array.isArray(row)) continue;\n if (row.maturity === \"lifted-to-enforcement\") continue;\n const type = typeof row.type === \"string\" ? row.type : \"\";\n const trigger = typeof row.trigger === \"string\" ? row.trigger : \"\";\n const action = typeof row.action === \"string\" ? row.action : \"\";\n if (type.length === 0 || trigger.length === 0 || action.length === 0) continue;\n const key = type + \"||\" + normalize(trigger) + \"||\" + normalize(action);\n const frequency = Number.isInteger(row.frequency) && row.frequency > 0 ? Math.floor(row.frequency) : 1;\n const lastSeen = typeof row.last_seen_ts === \"string\" ? row.last_seen_ts : \"\";\n let bucket = buckets.get(key);\n if (!bucket) {\n bucket = {\n trigger,\n action,\n recurrence: frequency,\n entryCount: 1,\n severity: typeof row.severity === \"string\" ? row.severity : undefined,\n lastSeenTs: lastSeen,\n types: new Set([type]),\n maturity: new Set([typeof row.maturity === \"string\" ? row.maturity : \"raw\"])\n };\n buckets.set(key, bucket);\n continue;\n }\n bucket.recurrence += frequency;\n bucket.entryCount += 1;\n bucket.types.add(type);\n bucket.maturity.add(typeof row.maturity === \"string\" ? row.maturity : \"raw\");\n if (row.severity === \"critical\") {\n bucket.severity = \"critical\";\n } else if (row.severity === \"important\" && bucket.severity !== \"critical\") {\n bucket.severity = \"important\";\n }\n if (lastSeen && Date.parse(lastSeen) > Date.parse(bucket.lastSeenTs || \"0\")) {\n bucket.lastSeenTs = lastSeen;\n }\n }\n const ready = [];\n for (const bucket of buckets.values()) {\n const criticalOverride = bucket.severity === \"critical\";\n const meetsRecurrence = bucket.recurrence >= threshold;\n if (!criticalOverride && !meetsRecurrence) continue;\n ready.push({\n trigger: bucket.trigger,\n action: bucket.action,\n recurrence: bucket.recurrence,\n entryCount: bucket.entryCount,\n qualification: criticalOverride && !meetsRecurrence ? \"critical_override\" : \"recurrence\",\n ...(bucket.severity ? { severity: bucket.severity } : {}),\n lastSeenTs: bucket.lastSeenTs,\n types: Array.from(bucket.types).sort(),\n maturity: Array.from(bucket.maturity).sort()\n });\n }\n ready.sort((a, b) => {\n const sevDiff = severityWeight(b.severity) - severityWeight(a.severity);\n if (sevDiff !== 0) return sevDiff;\n if (b.recurrence !== a.recurrence) return b.recurrence - a.recurrence;\n const recencyDiff = Date.parse(b.lastSeenTs || \"0\") - Date.parse(a.lastSeenTs || \"0\");\n if (!Number.isNaN(recencyDiff) && recencyDiff !== 0) return recencyDiff;\n return String(a.trigger).localeCompare(String(b.trigger));\n });\n return {\n schemaVersion: 2,\n threshold,\n baseThreshold,\n ...(archivedRunsCount !== undefined ? { archivedRunsCount } : {}),\n smallProjectRelaxationApplied,\n clusterCount: buckets.size,\n readyCount: ready.length,\n ready: ready.slice(0, maxReady),\n lastUpdatedAt: normalizeCompoundLastUpdatedAt(new Date())\n };\n}\n";
+/**
+ * Inline mirror of `src/tdd-cycle.ts::computeRalphLoopStatus`.
+ *
+ * Parity enforced by
+ * `tests/unit/ralph-loop-parity.test.ts::ralph-loop parity`.
+ *
+ * Signature contract:
+ *   async function computeRalphLoopStatusInline(stateDir, runId) -> RalphLoopStatus
+ */
+export declare const RALPH_LOOP_INLINE_SOURCE = "\nasync function computeRalphLoopStatusInline(stateDir, runId) {\n const filePath = path.join(stateDir, \"tdd-cycle-log.jsonl\");\n const raw = await readTextFile(filePath, \"\");\n const sliceMap = new Map();\n const acClosed = new Set();\n const redOpenSlices = [];\n let loopIteration = 0;\n for (const rawLine of raw.split(/\\r?\\n/gu)) {\n const line = rawLine.trim();\n if (line.length === 0) continue;\n let row;\n try { row = JSON.parse(line); } catch { continue; }\n if (!row || typeof row !== \"object\" || Array.isArray(row)) continue;\n const rowRun = typeof row.runId === \"string\" && row.runId.length > 0 ? row.runId : runId;\n if (rowRun !== runId) continue;\n const slice = typeof row.slice === \"string\" && row.slice.length > 0 ? row.slice : \"S-unknown\";\n let state = sliceMap.get(slice);\n if (!state) {\n state = { slice, redCount: 0, greenCount: 0, refactorCount: 0, redOpen: false, acIds: [] };\n sliceMap.set(slice, state);\n }\n const exitCode = typeof row.exitCode === \"number\" ? row.exitCode : undefined;\n if (row.phase === \"red\") {\n state.redCount += 1;\n if (exitCode !== undefined && exitCode !== 0) state.redOpen = true;\n } else if (row.phase === \"green\") {\n state.greenCount += 1;\n state.redOpen = false;\n loopIteration += 1;\n if (Array.isArray(row.acIds)) {\n for (const acId of row.acIds) {\n if (typeof acId !== \"string\" || acId.length === 0) continue;\n acClosed.add(acId);\n if (!state.acIds.includes(acId)) state.acIds.push(acId);\n }\n }\n } else if (row.phase === \"refactor\") {\n state.refactorCount += 1;\n }\n }\n for (const state of sliceMap.values()) {\n if (state.redOpen) redOpenSlices.push(state.slice);\n }\n const slices = Array.from(sliceMap.values()).sort((a, b) => a.slice.localeCompare(b.slice, \"en\"));\n return {\n schemaVersion: 1,\n runId,\n loopIteration,\n redOpen: redOpenSlices.length > 0,\n redOpenSlices,\n acClosed: Array.from(acClosed).sort(),\n sliceCount: slices.length,\n slices,\n lastUpdatedAt: new Date().toISOString()\n };\n}\n";
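The small-project relaxation documented in the `COMPOUND_READINESS_INLINE_SOURCE` header can be exercised in isolation. The sketch below is illustrative only: the two threshold constants are assumed stand-in values, since the real ones live in `src/knowledge-store.ts` and are not part of this diff.

```javascript
// Sketch of the small-project relaxation in computeCompoundReadinessInline.
// The constant values are assumed for illustration; the real ones come
// from src/knowledge-store.ts and are not visible in this diff.
const SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD = 3; // assumed value
const SMALL_PROJECT_RECURRENCE_THRESHOLD = 2;   // assumed value

function effectiveThreshold(baseThreshold, archivedRunsCount) {
    // Relax only when the archive count is known, the project is small,
    // and relaxing would actually lower the threshold.
    const relaxed = archivedRunsCount !== undefined &&
        archivedRunsCount < SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD &&
        baseThreshold > SMALL_PROJECT_RECURRENCE_THRESHOLD;
    return relaxed ? SMALL_PROJECT_RECURRENCE_THRESHOLD : baseThreshold;
}

console.log(effectiveThreshold(5, 1));         // young project: relaxed to 2
console.log(effectiveThreshold(5, 10));        // enough archived runs: stays 5
console.log(effectiveThreshold(5, undefined)); // unknown count: stays 5
```

Note how `countArchivedRunsInline` returning `undefined` on unexpected errors feeds straight into the `archivedRunsCount !== undefined` guard, so an unreadable runs directory simply skips the relaxation.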
package/dist/content/hook-inline-snippets.js

@@ -0,0 +1,270 @@
+/**
+ * hook-inline-snippets.ts
+ *
+ * Runtime `.cclaw/hooks/run-hook.mjs` is a **standalone Node script** that
+ * cannot import from `cclaw-cli` — it must work inside the end-user's
+ * project even when the CLI is not installed. Two derived computations,
+ * though, must remain 1:1 with the canonical TS implementations:
+ *
+ * 1. `computeCompoundReadinessInline` mirrors
+ *    `src/knowledge-store.ts::computeCompoundReadiness`.
+ * 2. `computeRalphLoopStatusInline` mirrors
+ *    `src/tdd-cycle.ts::computeRalphLoopStatus`.
+ *
+ * Previously those bodies lived inline in `src/content/node-hooks.ts` — a
+ * ~2000-line file — next to unrelated hook-handler code. Any silent drift
+ * only surfaced when someone remembered to update both sides.
+ *
+ * This module centralizes the inline JavaScript snippets so:
+ *
+ * - There is exactly **one place** (this file) that holds each inline
+ *   JS body.
+ * - Each snippet carries an explicit "mirrors X, parity enforced by Y"
+ *   header comment and is emitted into `run-hook.mjs` verbatim.
+ * - `src/content/node-hooks.ts` only interpolates the snippets, it no
+ *   longer owns their source code.
+ *
+ * Parity with the TypeScript canonical implementations is enforced by
+ * `tests/unit/ralph-loop-parity.test.ts`. Any structural change to the
+ * canonical TS code MUST:
+ *
+ * 1. Update the matching snippet below.
+ * 2. Re-run `npm test tests/unit/ralph-loop-parity.test.ts`.
+ *
+ * DO NOT inline tests here — keep the parity check in its dedicated test
+ * file.
+ */
+/**
+ * Inline JS helpers used by both compound-readiness and ralph-loop
+ * snippets. Kept small and locked: they are shared across the two inline
+ * routines and must not grow into a hidden utility namespace.
+ *
+ * - `normalizeCompoundLastUpdatedAt` produces a stable ISO-8601 UTC
+ *   timestamp so the hook-written `compound-readiness.json` is byte-equal
+ *   to the CLI-written version for the same input.
+ * - `countArchivedRunsInline` counts immediate subdirectories of
+ *   `<root>/.cclaw/runs/` so both the hook and the CLI see the same
+ *   `archivedRunsCount` for the small-project relaxation.
+ */
+export const HOOK_INLINE_SHARED_HELPERS = `
+function normalizeCompoundLastUpdatedAt(date) {
+    return date.toISOString().replace(/\\.\\d{3}Z$/u, "Z");
+}
+
+// Count archived runs as sub-directories under \`.cclaw/runs/\`. Missing
+// dir returns 0; unexpected errors return undefined so the caller can
+// skip the small-project relaxation rather than guess.
+async function countArchivedRunsInline(root) {
+    const dir = path.join(root, RUNTIME_ROOT, "runs");
+    try {
+        const entries = await fs.readdir(dir, { withFileTypes: true });
+        return entries.filter((entry) => entry.isDirectory()).length;
+    } catch (error) {
+        const code = error && typeof error === "object" && "code" in error ? error.code : null;
+        if (code === "ENOENT") return 0;
+        return undefined;
+    }
+}
+`;
+/**
+ * Inline mirror of `src/knowledge-store.ts::computeCompoundReadiness`.
+ *
+ * Parity enforced by
+ * `tests/unit/ralph-loop-parity.test.ts::compound-readiness parity`.
+ *
+ * Signature contract:
+ *   async function computeCompoundReadinessInline(root, options) -> CompoundReadiness
+ *
+ * Accepted options (all optional):
+ * - prereadRaw: string | undefined — pre-read `knowledge.jsonl` contents.
+ * - threshold: integer >= 1 — default recurrence threshold.
+ * - archivedRunsCount: integer >= 0 — enables small-project relaxation.
+ * - maxReady: integer >= 1 — cap on returned `ready` cluster count
+ *   (default 10).
+ *
+ * Depends on: `SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD`,
+ * `SMALL_PROJECT_RECURRENCE_THRESHOLD`, `COMPOUND_RECURRENCE_THRESHOLD`,
+ * and `HOOK_INLINE_SHARED_HELPERS` being in the same runtime scope.
+ */
+export const COMPOUND_READINESS_INLINE_SOURCE = `
+async function computeCompoundReadinessInline(root, options) {
+    const filePath = path.join(root, RUNTIME_ROOT, "knowledge.jsonl");
+    // Caller may supply pre-read raw to avoid double-reading knowledge.jsonl.
+    const raw = typeof (options && options.prereadRaw) === "string"
+        ? options.prereadRaw
+        : await readTextFile(filePath, "");
+    const baseThresholdRaw = options && options.threshold;
+    const baseThreshold = Number.isInteger(baseThresholdRaw) && baseThresholdRaw >= 1
+        ? baseThresholdRaw
+        : COMPOUND_RECURRENCE_THRESHOLD;
+    const archivedRunsCount =
+        typeof (options && options.archivedRunsCount) === "number" &&
+        Number.isFinite(options.archivedRunsCount) &&
+        options.archivedRunsCount >= 0
+            ? Math.floor(options.archivedRunsCount)
+            : undefined;
+    const smallProjectRelaxationApplied =
+        archivedRunsCount !== undefined &&
+        archivedRunsCount < SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD &&
+        baseThreshold > SMALL_PROJECT_RECURRENCE_THRESHOLD;
+    const threshold = smallProjectRelaxationApplied
+        ? SMALL_PROJECT_RECURRENCE_THRESHOLD
+        : baseThreshold;
+    const maxReady = Number.isInteger(options && options.maxReady) && options.maxReady >= 1
+        ? options.maxReady
+        : 10;
+    const normalize = (value) => String(value == null ? "" : value).trim().replace(/\\s+/gu, " ").toLowerCase();
+    const severityWeight = (sev) => {
+        if (sev === "critical") return 3;
+        if (sev === "important") return 2;
+        if (sev === "suggestion") return 1;
+        return 0;
+    };
+    const buckets = new Map();
+    for (const rawLine of raw.split(/\\r?\\n/gu)) {
+        const line = rawLine.trim();
+        if (line.length === 0) continue;
+        let row;
+        try { row = JSON.parse(line); } catch { continue; }
+        if (!row || typeof row !== "object" || Array.isArray(row)) continue;
+        if (row.maturity === "lifted-to-enforcement") continue;
+        const type = typeof row.type === "string" ? row.type : "";
+        const trigger = typeof row.trigger === "string" ? row.trigger : "";
+        const action = typeof row.action === "string" ? row.action : "";
+        if (type.length === 0 || trigger.length === 0 || action.length === 0) continue;
+        const key = type + "||" + normalize(trigger) + "||" + normalize(action);
+        const frequency = Number.isInteger(row.frequency) && row.frequency > 0 ? Math.floor(row.frequency) : 1;
+        const lastSeen = typeof row.last_seen_ts === "string" ? row.last_seen_ts : "";
+        let bucket = buckets.get(key);
+        if (!bucket) {
+            bucket = {
+                trigger,
+                action,
+                recurrence: frequency,
+                entryCount: 1,
+                severity: typeof row.severity === "string" ? row.severity : undefined,
+                lastSeenTs: lastSeen,
+                types: new Set([type]),
+                maturity: new Set([typeof row.maturity === "string" ? row.maturity : "raw"])
+            };
+            buckets.set(key, bucket);
+            continue;
+        }
+        bucket.recurrence += frequency;
+        bucket.entryCount += 1;
+        bucket.types.add(type);
+        bucket.maturity.add(typeof row.maturity === "string" ? row.maturity : "raw");
+        if (row.severity === "critical") {
+            bucket.severity = "critical";
+        } else if (row.severity === "important" && bucket.severity !== "critical") {
+            bucket.severity = "important";
+        }
+        if (lastSeen && Date.parse(lastSeen) > Date.parse(bucket.lastSeenTs || "0")) {
+            bucket.lastSeenTs = lastSeen;
+        }
+    }
+    const ready = [];
+    for (const bucket of buckets.values()) {
+        const criticalOverride = bucket.severity === "critical";
+        const meetsRecurrence = bucket.recurrence >= threshold;
+        if (!criticalOverride && !meetsRecurrence) continue;
+        ready.push({
+            trigger: bucket.trigger,
+            action: bucket.action,
+            recurrence: bucket.recurrence,
+            entryCount: bucket.entryCount,
+            qualification: criticalOverride && !meetsRecurrence ? "critical_override" : "recurrence",
+            ...(bucket.severity ? { severity: bucket.severity } : {}),
+            lastSeenTs: bucket.lastSeenTs,
+            types: Array.from(bucket.types).sort(),
+            maturity: Array.from(bucket.maturity).sort()
+        });
+    }
+    ready.sort((a, b) => {
+        const sevDiff = severityWeight(b.severity) - severityWeight(a.severity);
+        if (sevDiff !== 0) return sevDiff;
+        if (b.recurrence !== a.recurrence) return b.recurrence - a.recurrence;
+        const recencyDiff = Date.parse(b.lastSeenTs || "0") - Date.parse(a.lastSeenTs || "0");
+        if (!Number.isNaN(recencyDiff) && recencyDiff !== 0) return recencyDiff;
+        return String(a.trigger).localeCompare(String(b.trigger));
+    });
+    return {
+        schemaVersion: 2,
+        threshold,
+        baseThreshold,
+        ...(archivedRunsCount !== undefined ? { archivedRunsCount } : {}),
+        smallProjectRelaxationApplied,
+        clusterCount: buckets.size,
+        readyCount: ready.length,
+        ready: ready.slice(0, maxReady),
+        lastUpdatedAt: normalizeCompoundLastUpdatedAt(new Date())
+    };
+}
+`;
+/**
+ * Inline mirror of `src/tdd-cycle.ts::computeRalphLoopStatus`.
+ *
+ * Parity enforced by
+ * `tests/unit/ralph-loop-parity.test.ts::ralph-loop parity`.
+ *
+ * Signature contract:
+ *   async function computeRalphLoopStatusInline(stateDir, runId) -> RalphLoopStatus
+ */
+export const RALPH_LOOP_INLINE_SOURCE = `
+async function computeRalphLoopStatusInline(stateDir, runId) {
+    const filePath = path.join(stateDir, "tdd-cycle-log.jsonl");
+    const raw = await readTextFile(filePath, "");
+    const sliceMap = new Map();
+    const acClosed = new Set();
+    const redOpenSlices = [];
+    let loopIteration = 0;
+    for (const rawLine of raw.split(/\\r?\\n/gu)) {
+        const line = rawLine.trim();
+        if (line.length === 0) continue;
+        let row;
+        try { row = JSON.parse(line); } catch { continue; }
+        if (!row || typeof row !== "object" || Array.isArray(row)) continue;
+        const rowRun = typeof row.runId === "string" && row.runId.length > 0 ? row.runId : runId;
+        if (rowRun !== runId) continue;
+        const slice = typeof row.slice === "string" && row.slice.length > 0 ? row.slice : "S-unknown";
+        let state = sliceMap.get(slice);
+        if (!state) {
+            state = { slice, redCount: 0, greenCount: 0, refactorCount: 0, redOpen: false, acIds: [] };
+            sliceMap.set(slice, state);
+        }
+        const exitCode = typeof row.exitCode === "number" ? row.exitCode : undefined;
+        if (row.phase === "red") {
+            state.redCount += 1;
+            if (exitCode !== undefined && exitCode !== 0) state.redOpen = true;
+        } else if (row.phase === "green") {
+            state.greenCount += 1;
+            state.redOpen = false;
+            loopIteration += 1;
+            if (Array.isArray(row.acIds)) {
+                for (const acId of row.acIds) {
+                    if (typeof acId !== "string" || acId.length === 0) continue;
+                    acClosed.add(acId);
+                    if (!state.acIds.includes(acId)) state.acIds.push(acId);
+                }
+            }
+        } else if (row.phase === "refactor") {
+            state.refactorCount += 1;
+        }
+    }
+    for (const state of sliceMap.values()) {
+        if (state.redOpen) redOpenSlices.push(state.slice);
+    }
+    const slices = Array.from(sliceMap.values()).sort((a, b) => a.slice.localeCompare(b.slice, "en"));
+    return {
+        schemaVersion: 1,
+        runId,
+        loopIteration,
+        redOpen: redOpenSlices.length > 0,
+        redOpenSlices,
+        acClosed: Array.from(acClosed).sort(),
+        sliceCount: slices.length,
+        slices,
+        lastUpdatedAt: new Date().toISOString()
+    };
+}
+`;
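The red/green bookkeeping inside `RALPH_LOOP_INLINE_SOURCE` is easiest to see on a toy log. The sketch below trims the function to just its phase handling (no file I/O, no run filtering, no `acIds`) purely for illustration:

```javascript
// Trimmed sketch of computeRalphLoopStatusInline's phase handling: a "red"
// entry with a non-zero exitCode leaves its slice open; the next "green"
// entry on that slice closes it and counts one loop iteration.
function ralphPhases(rows) {
    const slices = new Map();
    let loopIteration = 0;
    for (const row of rows) {
        let state = slices.get(row.slice);
        if (!state) {
            state = { redOpen: false };
            slices.set(row.slice, state);
        }
        if (row.phase === "red") {
            if (row.exitCode !== undefined && row.exitCode !== 0) state.redOpen = true;
        } else if (row.phase === "green") {
            state.redOpen = false;
            loopIteration += 1;
        }
    }
    const redOpenSlices = [...slices].filter(([, s]) => s.redOpen).map(([name]) => name);
    return { loopIteration, redOpen: redOpenSlices.length > 0, redOpenSlices };
}

const status = ralphPhases([
    { slice: "S1", phase: "red", exitCode: 1 },   // S1 goes red
    { slice: "S1", phase: "green", exitCode: 0 }, // S1 closed, one iteration
    { slice: "S2", phase: "red", exitCode: 1 },   // S2 never closed
]);
console.log(status); // S2 remains red-open; loopIteration is 1
```

A "red" entry with exit code 0 (the test unexpectedly passed) does not open the slice, which matches the `exitCode !== 0` guard in the full snippet.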
package/dist/content/node-hooks.js

@@ -1,6 +1,7 @@
 import { DEFAULT_COMPOUND_RECURRENCE_THRESHOLD } from "../config.js";
 import { RUNTIME_ROOT } from "../constants.js";
 import { SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD, SMALL_PROJECT_RECURRENCE_THRESHOLD } from "../knowledge-store.js";
+import { HOOK_INLINE_SHARED_HELPERS, COMPOUND_READINESS_INLINE_SOURCE, RALPH_LOOP_INLINE_SOURCE } from "./hook-inline-snippets.js";
 function normalizePatterns(patterns, fallback) {
     if (!patterns || patterns.length === 0)
         return [...fallback];
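Both the deleted inline body in node-hooks.js and its replacement in `hook-inline-snippets.js` cluster knowledge rows on a whitespace-insensitive, case-insensitive key. A minimal sketch of that keying, using hypothetical sample rows:

```javascript
// Sketch of the cluster key built by computeCompoundReadinessInline: the
// type is kept exact, while trigger and action are trimmed, have runs of
// whitespace collapsed, and are lowercased before joining with "||".
const normalize = (value) => String(value == null ? "" : value).trim().replace(/\s+/gu, " ").toLowerCase();
const clusterKey = (row) => row.type + "||" + normalize(row.trigger) + "||" + normalize(row.action);

// Two rows that differ only in spacing and case land in the same bucket,
// so their frequencies accumulate into one recurrence count.
const a = clusterKey({ type: "pitfall", trigger: "Flaky  Test", action: "Retry once" });
const b = clusterKey({ type: "pitfall", trigger: "flaky test", action: "retry ONCE" });
console.log(a === b); // → true
console.log(a);       // → pitfall||flaky test||retry once
```

The `type` field is deliberately left un-normalized, so rows with differently cased types stay in separate clusters.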
@@ -1451,203 +1452,14 @@ async function handlePromptGuard(runtime) {
     return 0;
 }
 
-
-
-
-
-//
-
-
-
-    const dir = path.join(root, RUNTIME_ROOT, "runs");
-    try {
-        const entries = await fs.readdir(dir, { withFileTypes: true });
-        return entries.filter((entry) => entry.isDirectory()).length;
-    } catch (error) {
-        const code = error && typeof error === "object" && "code" in error ? error.code : null;
-        if (code === "ENOENT") return 0;
-        return undefined;
-    }
-}
-
-// Mirrors src/knowledge-store.ts::computeCompoundReadiness — kept inline so
-// SessionStart can refresh compound-readiness.json without the CLI binary.
-// Any schema change must update src/knowledge-store.ts::computeCompoundReadiness
-// and src/internal/compound-readiness.ts in lockstep. Parity is enforced by
-// tests/unit/ralph-loop-parity.test.ts.
-async function computeCompoundReadinessInline(root, options) {
-    const filePath = path.join(root, RUNTIME_ROOT, "knowledge.jsonl");
-    // Caller may supply pre-read raw to avoid double-reading knowledge.jsonl.
-    const raw = typeof (options && options.prereadRaw) === "string"
-        ? options.prereadRaw
-        : await readTextFile(filePath, "");
-    const baseThresholdRaw = options && options.threshold;
-    const baseThreshold = Number.isInteger(baseThresholdRaw) && baseThresholdRaw >= 1
-        ? baseThresholdRaw
-        : COMPOUND_RECURRENCE_THRESHOLD;
-    const archivedRunsCount =
-        typeof (options && options.archivedRunsCount) === "number" &&
-        Number.isFinite(options.archivedRunsCount) &&
-        options.archivedRunsCount >= 0
-            ? Math.floor(options.archivedRunsCount)
-            : undefined;
-    const smallProjectRelaxationApplied =
-        archivedRunsCount !== undefined &&
-        archivedRunsCount < SMALL_PROJECT_ARCHIVE_RUNS_THRESHOLD &&
-        baseThreshold > SMALL_PROJECT_RECURRENCE_THRESHOLD;
-    const threshold = smallProjectRelaxationApplied
-        ? SMALL_PROJECT_RECURRENCE_THRESHOLD
-        : baseThreshold;
-    const maxReady = Number.isInteger(options && options.maxReady) && options.maxReady >= 1
-        ? options.maxReady
-        : 10;
-    const normalize = (value) => String(value == null ? "" : value).trim().replace(/\\s+/gu, " ").toLowerCase();
-    const severityWeight = (sev) => {
-        if (sev === "critical") return 3;
-        if (sev === "important") return 2;
-        if (sev === "suggestion") return 1;
-        return 0;
-    };
-    const buckets = new Map();
-    for (const rawLine of raw.split(/\\r?\\n/gu)) {
-        const line = rawLine.trim();
-        if (line.length === 0) continue;
-        let row;
-        try { row = JSON.parse(line); } catch { continue; }
-        if (!row || typeof row !== "object" || Array.isArray(row)) continue;
-        if (row.maturity === "lifted-to-enforcement") continue;
-        const type = typeof row.type === "string" ? row.type : "";
-        const trigger = typeof row.trigger === "string" ? row.trigger : "";
-        const action = typeof row.action === "string" ? row.action : "";
-        if (type.length === 0 || trigger.length === 0 || action.length === 0) continue;
-        const key = type + "||" + normalize(trigger) + "||" + normalize(action);
-        const frequency = Number.isInteger(row.frequency) && row.frequency > 0 ? Math.floor(row.frequency) : 1;
-        const lastSeen = typeof row.last_seen_ts === "string" ? row.last_seen_ts : "";
-        let bucket = buckets.get(key);
-        if (!bucket) {
-            bucket = {
-                trigger,
-                action,
-                recurrence: frequency,
-                entryCount: 1,
-                severity: typeof row.severity === "string" ? row.severity : undefined,
-                lastSeenTs: lastSeen,
-                types: new Set([type]),
-                maturity: new Set([typeof row.maturity === "string" ? row.maturity : "raw"])
-            };
-            buckets.set(key, bucket);
-            continue;
-        }
-        bucket.recurrence += frequency;
-        bucket.entryCount += 1;
-        bucket.types.add(type);
-        bucket.maturity.add(typeof row.maturity === "string" ? row.maturity : "raw");
-        if (row.severity === "critical") {
-            bucket.severity = "critical";
-        } else if (row.severity === "important" && bucket.severity !== "critical") {
-            bucket.severity = "important";
-        }
-        if (lastSeen && Date.parse(lastSeen) > Date.parse(bucket.lastSeenTs || "0")) {
-            bucket.lastSeenTs = lastSeen;
-        }
-    }
-    const ready = [];
-    for (const bucket of buckets.values()) {
-        const criticalOverride = bucket.severity === "critical";
-        const meetsRecurrence = bucket.recurrence >= threshold;
-        if (!criticalOverride && !meetsRecurrence) continue;
-        ready.push({
-            trigger: bucket.trigger,
-            action: bucket.action,
-            recurrence: bucket.recurrence,
-            entryCount: bucket.entryCount,
-            qualification: criticalOverride && !meetsRecurrence ? "critical_override" : "recurrence",
-            ...(bucket.severity ? { severity: bucket.severity } : {}),
-            lastSeenTs: bucket.lastSeenTs,
-            types: Array.from(bucket.types).sort(),
-            maturity: Array.from(bucket.maturity).sort()
-        });
-    }
-    ready.sort((a, b) => {
-        const sevDiff = severityWeight(b.severity) - severityWeight(a.severity);
-        if (sevDiff !== 0) return sevDiff;
-        if (b.recurrence !== a.recurrence) return b.recurrence - a.recurrence;
-        const recencyDiff = Date.parse(b.lastSeenTs || "0") - Date.parse(a.lastSeenTs || "0");
-        if (!Number.isNaN(recencyDiff) && recencyDiff !== 0) return recencyDiff;
-        return String(a.trigger).localeCompare(String(b.trigger));
-    });
-    return {
-        schemaVersion: 2,
-        threshold,
-        baseThreshold,
-        ...(archivedRunsCount !== undefined ? { archivedRunsCount } : {}),
-        smallProjectRelaxationApplied,
-        clusterCount: buckets.size,
-        readyCount: ready.length,
-        ready: ready.slice(0, maxReady),
-        lastUpdatedAt: normalizeCompoundLastUpdatedAt(new Date())
-    };
-}
-
-// Mirrors src/tdd-cycle.ts::computeRalphLoopStatus — kept inline so the
-// SessionStart hook can write ralph-loop.json without depending on the CLI
-// binary being installed globally. Any schema change must update both copies.
-async function computeRalphLoopStatusInline(stateDir, runId) {
-    const filePath = path.join(stateDir, "tdd-cycle-log.jsonl");
|
|
1597
|
-
const raw = await readTextFile(filePath, "");
|
|
1598
|
-
const sliceMap = new Map();
|
|
1599
|
-
const acClosed = new Set();
|
|
1600
|
-
const redOpenSlices = [];
|
|
1601
|
-
let loopIteration = 0;
|
|
1602
|
-
for (const rawLine of raw.split(/\\r?\\n/gu)) {
|
|
1603
|
-
const line = rawLine.trim();
|
|
1604
|
-
if (line.length === 0) continue;
|
|
1605
|
-
let row;
|
|
1606
|
-
try { row = JSON.parse(line); } catch { continue; }
|
|
1607
|
-
if (!row || typeof row !== "object" || Array.isArray(row)) continue;
|
|
1608
|
-
const rowRun = typeof row.runId === "string" && row.runId.length > 0 ? row.runId : runId;
|
|
1609
|
-
if (rowRun !== runId) continue;
|
|
1610
|
-
const slice = typeof row.slice === "string" && row.slice.length > 0 ? row.slice : "S-unknown";
|
|
1611
|
-
let state = sliceMap.get(slice);
|
|
1612
|
-
if (!state) {
|
|
1613
|
-
state = { slice, redCount: 0, greenCount: 0, refactorCount: 0, redOpen: false, acIds: [] };
|
|
1614
|
-
sliceMap.set(slice, state);
|
|
1615
|
-
}
|
|
1616
|
-
const exitCode = typeof row.exitCode === "number" ? row.exitCode : undefined;
|
|
1617
|
-
if (row.phase === "red") {
|
|
1618
|
-
state.redCount += 1;
|
|
1619
|
-
if (exitCode !== undefined && exitCode !== 0) state.redOpen = true;
|
|
1620
|
-
} else if (row.phase === "green") {
|
|
1621
|
-
state.greenCount += 1;
|
|
1622
|
-
state.redOpen = false;
|
|
1623
|
-
loopIteration += 1;
|
|
1624
|
-
if (Array.isArray(row.acIds)) {
|
|
1625
|
-
for (const acId of row.acIds) {
|
|
1626
|
-
if (typeof acId !== "string" || acId.length === 0) continue;
|
|
1627
|
-
acClosed.add(acId);
|
|
1628
|
-
if (!state.acIds.includes(acId)) state.acIds.push(acId);
|
|
1629
|
-
}
|
|
1630
|
-
}
|
|
1631
|
-
} else if (row.phase === "refactor") {
|
|
1632
|
-
state.refactorCount += 1;
|
|
1633
|
-
}
|
|
1634
|
-
}
|
|
1635
|
-
for (const state of sliceMap.values()) {
|
|
1636
|
-
if (state.redOpen) redOpenSlices.push(state.slice);
|
|
1637
|
-
}
|
|
1638
|
-
const slices = Array.from(sliceMap.values()).sort((a, b) => a.slice.localeCompare(b.slice, "en"));
|
|
1639
|
-
return {
|
|
1640
|
-
schemaVersion: 1,
|
|
1641
|
-
runId,
|
|
1642
|
-
loopIteration,
|
|
1643
|
-
redOpen: redOpenSlices.length > 0,
|
|
1644
|
-
redOpenSlices,
|
|
1645
|
-
acClosed: Array.from(acClosed).sort(),
|
|
1646
|
-
sliceCount: slices.length,
|
|
1647
|
-
slices,
|
|
1648
|
-
lastUpdatedAt: new Date().toISOString()
|
|
1649
|
-
};
|
|
1650
|
-
}
|
|
1455
|
+
// Inline mirrors of canonical CLI computations (compound-readiness,
|
|
1456
|
+
// ralph-loop) are factored into src/content/hook-inline-snippets.ts so
|
|
1457
|
+
// this 2000+-line file no longer owns their bodies. Each snippet carries
|
|
1458
|
+
// an explicit "mirrors X, parity enforced by Y" comment in the snippets
|
|
1459
|
+
// module. Parity is enforced by tests/unit/ralph-loop-parity.test.ts.
|
|
1460
|
+
${HOOK_INLINE_SHARED_HELPERS}
|
|
1461
|
+
${COMPOUND_READINESS_INLINE_SOURCE}
|
|
1462
|
+
${RALPH_LOOP_INLINE_SOURCE}
|
|
1651
1463
|
|
|
1652
1464
|
async function hasFailingRedEvidenceForPath(stateDir, runId, rawPath) {
|
|
1653
1465
|
const cycleRaw = await readTextFile(path.join(stateDir, "tdd-cycle-log.jsonl"), "");
|
|
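For readers skimming the diff: the moved `computeRalphLoopStatusInline` body is, at its core, a red/green fold over `tdd-cycle-log.jsonl`. The following is a minimal, simplified sketch of that shape (not the shipped code; the per-slice counts, AC-ID bookkeeping, and schema fields are trimmed):

```javascript
// Simplified fold mirroring the shape of computeRalphLoopStatusInline:
// a failing "red" entry opens a slice; a "green" entry closes it and
// counts one loop iteration. Malformed JSONL lines are skipped.
function foldRalphLoop(lines, runId) {
  const slices = new Map();
  let loopIteration = 0;
  for (const line of lines) {
    let row;
    try { row = JSON.parse(line); } catch { continue; }
    if (!row || typeof row !== "object") continue;
    const rowRun = typeof row.runId === "string" && row.runId.length > 0 ? row.runId : runId;
    if (rowRun !== runId) continue;
    const slice = typeof row.slice === "string" && row.slice.length > 0 ? row.slice : "S-unknown";
    const state = slices.get(slice) ?? { redOpen: false };
    slices.set(slice, state);
    if (row.phase === "red" && typeof row.exitCode === "number" && row.exitCode !== 0) {
      state.redOpen = true;
    } else if (row.phase === "green") {
      state.redOpen = false;
      loopIteration += 1;
    }
  }
  const redOpenSlices = [...slices.entries()]
    .filter(([, state]) => state.redOpen)
    .map(([name]) => name);
  return { loopIteration, redOpen: redOpenSlices.length > 0, redOpenSlices };
}
```

A slice with a failing RED and no subsequent GREEN stays "open", which is exactly the signal the SessionStart hook persists to `ralph-loop.json`.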
@@ -2,7 +2,7 @@ import { RUNTIME_ROOT } from "../constants.js";
 import { META_SKILL_NAME } from "./meta-skill.js";
 export function opencodePluginJs(_options = {}) {
   return `// cclaw OpenCode plugin — generated by cclaw sync
-import { existsSync, mkdirSync } from "node:fs";
+import { appendFileSync, existsSync, mkdirSync } from "node:fs";
 import { readFile, stat } from "node:fs/promises";
 import { join } from "node:path";

@@ -10,6 +10,8 @@ export default function cclawPlugin(ctx) {
   const root = ctx.directory || process.cwd();
   const runtimeDir = join(root, "${RUNTIME_ROOT}");
   const stateDir = join(runtimeDir, "state");
+  const logsDir = join(runtimeDir, "logs");
+  const pluginLogPath = join(logsDir, "opencode-plugin.log");
   const flowStatePath = join(stateDir, "flow-state.json");
   const checkpointPath = join(stateDir, "checkpoint.json");
   const activityPath = join(stateDir, "stage-activity.jsonl");
@@ -34,6 +36,27 @@ export default function cclawPlugin(ctx) {
     }
   }

+  /**
+   * Diagnostic log used instead of console.error on the hot path.
+   * Writing to a file keeps hook failures and unknown events from
+   * racing OpenCode's TUI renderer (which causes the overlapping-text
+   * artifact users have reported). Best-effort: any I/O failure is
+   * swallowed so logging never itself blocks or throws.
+   */
+  function logToFile(line) {
+    try {
+      mkdirSync(logsDir, { recursive: true });
+    } catch {
+      return;
+    }
+    const timestamp = new Date().toISOString();
+    try {
+      appendFileSync(pluginLogPath, timestamp + " " + line + "\\n", "utf8");
+    } catch {
+      // ignore — never let logging fail the hook
+    }
+  }
+
   async function readFlowState() {
     try {
       const raw = await readFile(flowStatePath, "utf8");
@@ -272,6 +295,17 @@ export default function cclawPlugin(ctx) {
     });
   }

+  const lastHookStderr = new Map();
+  function recordHookStderr(hookName, stderr) {
+    if (typeof hookName !== "string" || hookName.length === 0) return;
+    const trimmed = typeof stderr === "string" ? stderr.trim() : "";
+    if (trimmed.length === 0) {
+      lastHookStderr.delete(hookName);
+      return;
+    }
+    lastHookStderr.set(hookName, trimmed);
+  }
+
   async function runHookScript(hookName, payload = {}) {
     const { spawn } = await import("node:child_process");
     const hookRuntimePath = join(root, "${RUNTIME_ROOT}/hooks/run-hook.mjs");
@@ -282,6 +316,7 @@ export default function cclawPlugin(ctx) {
     const finish = (ok) => {
       if (settled) return;
       settled = true;
+      recordHookStderr(hookName, stderr);
       resolve(ok);
     };

@@ -296,13 +331,19 @@ export default function cclawPlugin(ctx) {
       return;
     }

+    // Tool.execute.before is a user-facing hot path: 20s is far too
+    // long to wait on a guard. 5s gives the hook real breathing room
+    // (typical runtime is well under 500ms) while capping the worst-
+    // case stall at a number the user will still tolerate.
     const timer = setTimeout(() => {
       child.kill("SIGKILL");
       if (stderr.length > 0) {
-
+        logToFile("hook timeout: " + hookName + " stderr=" + stderr.slice(-1200));
+      } else {
+        logToFile("hook timeout: " + hookName + " (no stderr)");
       }
       finish(false);
-    },
+    }, 5_000);

     child.stderr?.on("data", (chunk) => {
       stderr += String(chunk ?? "");
@@ -318,7 +359,7 @@ export default function cclawPlugin(ctx) {
       clearTimeout(timer);
       const ok = code === 0;
       if (!ok && stderr.length > 0) {
-
+        logToFile("hook failed: " + hookName + " exit=" + code + " stderr=" + stderr.slice(-1200));
       }
       finish(ok);
     });
@@ -355,6 +396,118 @@ export default function cclawPlugin(ctx) {
     return { input: input ?? {}, output: output ?? {} };
   }

+  /**
+   * Read-only tools cannot mutate state or execute arbitrary code, so
+   * running prompt/workflow guards on them is pure overhead and — worse —
+   * surfaces a hard block when guards are misconfigured. OpenCode tool
+   * names vary (Claude/Codex use PascalCase; opencode-native often uses
+   * lowercase), so we normalize and match against a tight allow-list.
+   * Anything not in this list (bash, edit, write, patch, task, run, …)
+   * still runs through guards.
+   */
+  const SAFE_READONLY_TOOLS = new Set([
+    "read",
+    "glob",
+    "grep",
+    "list",
+    "ls",
+    "view",
+    "webfetch",
+    "websearch"
+  ]);
+
+  function isSafeReadOnlyTool(payload) {
+    if (!payload || typeof payload !== "object") return false;
+    const candidates = [
+      payload.tool,
+      payload.name,
+      payload.tool_name,
+      payload.toolName
+    ];
+    const inner = payload.input;
+    if (inner && typeof inner === "object") {
+      candidates.push(inner.tool, inner.name, inner.tool_name, inner.toolName);
+    }
+    for (const candidate of candidates) {
+      if (typeof candidate !== "string" || candidate.length === 0) continue;
+      if (SAFE_READONLY_TOOLS.has(candidate.toLowerCase())) return true;
+    }
+    return false;
+  }
+
+  /**
+   * cclaw considers itself "active" in a project when both the state
+   * file and the hook runtime script exist. If either is missing the
+   * plugin behaves as a no-op for guards — this project hasn't been
+   * initialized (or the install is corrupt) and blocking every tool
+   * call would strand the user.
+   */
+  function isCclawInitialized() {
+    try {
+      const hookRuntimePath = join(runtimeDir, "hooks/run-hook.mjs");
+      return existsSync(flowStatePath) && existsSync(hookRuntimePath);
+    } catch {
+      return false;
+    }
+  }
+
+  let notInitializedAdvised = false;
+  function noteNotInitialized() {
+    if (notInitializedAdvised) return;
+    notInitializedAdvised = true;
+    logToFile(
+      "guards skipped: cclaw is not initialized in this project. " +
+      "Run \`cclaw init\` in " + root + " to activate flow enforcement."
+    );
+  }
+
+  /**
+   * Escape hatch for a user stuck behind a misbehaving guard chain.
+   * Reads a small set of env vars that all mean "turn cclaw off for this
+   * session": CCLAW_DISABLE=1 (primary), CCLAW_STRICTNESS=off, or
+   * CCLAW_GUARDS=off. Anything truthy disables both guards and the
+   * advisory path. Logged once so users can confirm the bypass is in
+   * effect without cluttering the TUI.
+   */
+  const DISABLE_ENV_KEYS = ["CCLAW_DISABLE", "CCLAW_GUARDS", "CCLAW_STRICTNESS"];
+  let disabledAdvised = false;
+  function isCclawDisabled() {
+    for (const key of DISABLE_ENV_KEYS) {
+      const raw = process.env[key];
+      if (typeof raw !== "string") continue;
+      const value = raw.trim().toLowerCase();
+      if (value.length === 0) continue;
+      if (key === "CCLAW_STRICTNESS") {
+        if (value === "off" || value === "disabled" || value === "none") {
+          return { disabled: true, key, value };
+        }
+        continue;
+      }
+      if (
+        value === "1" ||
+        value === "true" ||
+        value === "yes" ||
+        value === "on" ||
+        value === "off" ||
+        value === "disabled"
+      ) {
+        if (key === "CCLAW_GUARDS" && (value === "on" || value === "true" || value === "yes" || value === "1")) {
+          continue;
+        }
+        return { disabled: true, key, value };
+      }
+    }
+    return { disabled: false, key: "", value: "" };
+  }
+  function noteDisabled(reason) {
+    if (disabledAdvised) return;
+    disabledAdvised = true;
+    logToFile(
+      "guards disabled by env " + reason.key + "=" + reason.value + ". " +
+      "All tool calls will pass through without prompt/workflow checks."
+    );
+  }
+
   function resolveEventType(payload) {
     if (typeof payload === "string") return payload;
     if (payload && typeof payload === "object") {
@@ -407,7 +560,7 @@ export default function cclawPlugin(ctx) {
       payload && typeof payload === "object"
         ? Object.keys(payload).slice(0, 10).join(", ")
         : typeof payload;
-
+      logToFile("unknown event payload keys: " + keys);
     }
     // session.compacted must run pre-compact BEFORE refreshing the bootstrap
     // cache, otherwise the injected system prompt still shows the pre-compact
@@ -435,12 +588,41 @@ export default function cclawPlugin(ctx) {
     }
   },
   "tool.execute.before": async (input, output) => {
+    const disabled = isCclawDisabled();
+    if (disabled.disabled) {
+      // Explicit user override (CCLAW_DISABLE=1 et al): stay fully out
+      // of the way. Any real problem with the guard chain should not
+      // prevent the user from unblocking themselves.
+      noteDisabled(disabled);
+      return;
+    }
     const payload = normalizeToolPayload(input, output);
-
-
+    if (isSafeReadOnlyTool(payload)) {
+      // Read-only tools bypass guards — they cannot mutate state and
+      // blocking them gives users an unusable session when guards are
+      // misconfigured or cclaw isn't fully initialized.
+      return;
+    }
+    if (!isCclawInitialized()) {
+      // Project has no flow-state or hook runtime: cclaw isn't in use
+      // here. Never block the user's tools because of setup they didn't
+      // ask for. Surface a single advisory so they can notice.
+      noteNotInitialized();
+      return;
+    }
+    const [promptOk, workflowOk] = await Promise.all([
+      runHookScript("prompt-guard", payload),
+      runHookScript("workflow-guard", payload)
+    ]);
     if (!promptOk || !workflowOk) {
+      const failed = !promptOk ? "prompt-guard" : "workflow-guard";
+      const rawDetail = lastHookStderr.get(failed) || "";
+      const detail = rawDetail.length > 0 ? rawDetail.slice(-400) : "(no stderr captured)";
       throw new Error(
-        "cclaw
+        "cclaw " + failed + " blocked tool.execute.before.\\n" +
+        "Reason: " + detail + "\\n" +
+        "Diagnose: run \`cclaw doctor\` in project root.\\n" +
+        "Bypass (temporary): export CCLAW_DISABLE=1 before starting OpenCode."
       );
     }
   },
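The env kill-switch added above has one subtlety worth spelling out: `CCLAW_GUARDS=on` is an explicit enable, not a disable, while `CCLAW_GUARDS=off` and any truthy `CCLAW_DISABLE` turn the plugin off. A condensed sketch of that decision logic (assumption: it mirrors the generated plugin's intent; the real code also reports which key triggered):

```javascript
// Condensed kill-switch check. Returns true when the user has opted
// out for this session via CCLAW_DISABLE / CCLAW_GUARDS / CCLAW_STRICTNESS.
function cclawDisabled(env) {
  const truthy = ["1", "true", "yes", "on", "off", "disabled"];
  for (const key of ["CCLAW_DISABLE", "CCLAW_GUARDS", "CCLAW_STRICTNESS"]) {
    const value = (env[key] ?? "").trim().toLowerCase();
    if (value.length === 0) continue;
    if (key === "CCLAW_STRICTNESS") {
      // Strictness only disables on the explicit "off" spellings.
      if (["off", "disabled", "none"].includes(value)) return true;
      continue;
    }
    if (!truthy.includes(value)) continue;
    // CCLAW_GUARDS=on/true/yes/1 means guards explicitly ON, not off.
    if (key === "CCLAW_GUARDS" && ["on", "true", "yes", "1"].includes(value)) continue;
    return true;
  }
  return false;
}
```

Note that for `CCLAW_DISABLE`, even the value `off` counts as "disable cclaw", matching the generated plugin's accept-anything-truthy-ish stance for the primary key.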
package/dist/content/skills.js
CHANGED
@@ -309,6 +309,59 @@ function normalizedGuidanceKey(value) {
     .trim()
     .toLowerCase();
 }
+function mermaidNodeLabel(raw, index) {
+  const stripped = raw
+    .replace(/`[^`]+`/gu, "")
+    .replace(/\*\*/gu, "")
+    .replace(/[*_]/gu, "")
+    .replace(/\[[^\]]*\]\([^)]*\)/gu, "")
+    .split(/[—:.;]/u)[0]
+    ?.trim() ?? "";
+  const words = stripped.split(/\s+/u).filter((word) => word.length > 0);
+  const short = words.slice(0, 4).join(" ");
+  const label = short.length === 0
+    ? `Step ${index + 1}`
+    : short.replace(/["`]/gu, "");
+  return label.length > 48 ? `${label.slice(0, 45)}...` : label;
+}
+const MERMAID_PROCESS_MAX_NODES = 10;
+function renderProcessFlowMermaid(executionModel) {
+  if (executionModel.processFlow && executionModel.processFlow.trim().length > 0) {
+    return `\`\`\`mermaid\n${executionModel.processFlow.trim()}\n\`\`\``;
+  }
+  const source = executionModel.process.length > 0
+    ? executionModel.process
+    : executionModel.checklist;
+  if (source.length === 0) {
+    return "";
+  }
+  const limited = source.slice(0, MERMAID_PROCESS_MAX_NODES);
+  const nodes = limited.map((item, index) => ({
+    id: `S${index + 1}`,
+    label: mermaidNodeLabel(item, index)
+  }));
+  const lines = ["flowchart TD"];
+  for (const node of nodes) {
+    lines.push(`  ${node.id}["${node.label}"]`);
+  }
+  for (let i = 0; i < nodes.length - 1; i += 1) {
+    lines.push(`  ${nodes[i].id} --> ${nodes[i + 1].id}`);
+  }
+  if (source.length > MERMAID_PROCESS_MAX_NODES) {
+    lines.push(`  S${nodes.length} --> More["...see full Checklist"]`);
+  }
+  return `\`\`\`mermaid\n${lines.join("\n")}\n\`\`\``;
+}
+function renderPlatformNotesBlock(notes) {
+  if (!notes || notes.length === 0) {
+    return "";
+  }
+  const body = notes.map((item) => `- ${item}`).join("\n");
+  return `## Platform Notes
+${body}
+
+`;
+}
 function dedupeGuidance(items, blockedBy) {
   const blocked = new Set(blockedBy
     .map((item) => normalizedGuidanceKey(item))
@@ -345,7 +398,8 @@ export function stageSkillMarkdown(stage, track = "standard") {
     .map((item, i) => `${i + 1}. ${item}`)
     .join("\n");
   const interactionFocus = dedupeGuidance(executionModel.interactionProtocol, [...executionModel.checklist, ...executionModel.process]).slice(0, 5);
-  const
+  const processFlowMermaid = renderProcessFlowMermaid(executionModel);
+  const platformNotesBlock = renderPlatformNotesBlock(executionModel.platformNotes);
   const stageRefs = stageSpecificSeeAlso(stage);
   const reviewLoopSection = reviewLoopBlock(reviewLens.reviewLoop);
   const mandatoryDelegationSummary = mandatoryDelegations.length > 0
@@ -386,7 +440,10 @@ ${philosophy.hardGate}
 ${mergedAntiPatterns(philosophy, executionModel)}

 ## Process
-
+
+This is the stage **state machine** — the canonical ordered flow. For every detailed step, gate, and wording, follow the Checklist below; this diagram is the map, not the territory.
+
+${processFlowMermaid.length > 0 ? processFlowMermaid : "```mermaid\nflowchart TD\n S1[\"Execute Checklist\"] --> S2[\"Satisfy required gates\"] --> S3[\"Verify before closeout\"]\n```"}

 ## Inputs
 ${executionModel.inputs.length > 0 ? executionModel.inputs.map((item) => `- ${item}`).join("\n") : "- (first stage — no required inputs)"}
@@ -394,7 +451,7 @@ ${executionModel.inputs.length > 0 ? executionModel.inputs.map((item) => `- ${it
 ## Required Context
 ${executionModel.requiredContext.length > 0 ? executionModel.requiredContext.map((item) => `- ${item}`).join("\n") : "- None beyond this skill"}

-${contextLoadingBlock(artifactRules.crossStageTrace)}
+${platformNotesBlock}${contextLoadingBlock(artifactRules.crossStageTrace)}
 ${autoSubagentDispatchBlock(stage, track)}
 ${researchPlaybooksBlock(executionModel.researchPlaybooks ?? [])}

@@ -407,6 +464,9 @@ ${checklistItems}
 ${stageExamples(stage)}

 ## Interaction Protocol
+
+These are **rules for HOW you interact with the user** during this stage — tone, question shape, decision gating. Ordered steps of *what to do* live in the Checklist; do not treat these as an alternative sequence.
+
 ${interactionFocus.length > 0 ? interactionFocus.map((item, i) => `${i + 1}. ${item}`).join("\n") : "- Keep communication concise and decision-focused; rely on the Checklist for execution order."}

 Decision protocol reference: \`${DECISION_PROTOCOL_PATH}\`
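The auto-generated fallback chart in `renderProcessFlowMermaid` is just a linear `S1 --> S2 --> …` chain over abbreviated step labels. A standalone sketch of that derivation (simplified: no markdown stripping, no overflow node, names illustrative):

```javascript
// Derive a linear mermaid flowchart from a process list, echoing the
// fallback path of renderProcessFlowMermaid: short labels, capped nodes.
function linearFlowchart(process, maxNodes = 10) {
  const limited = process.slice(0, maxNodes);
  const lines = ["flowchart TD"];
  limited.forEach((step, i) => {
    // Keep the text before the first separator, at most four words.
    const label = step.split(/[—:.;]/u)[0]
      .trim()
      .split(/\s+/u)
      .slice(0, 4)
      .join(" ");
    lines.push(`  S${i + 1}["${label || `Step ${i + 1}`}"]`);
  });
  for (let i = 0; i < limited.length - 1; i += 1) {
    lines.push(`  S${i + 1} --> S${i + 2}`);
  }
  return lines.join("\n");
}
```

Feeding it three steps yields three labeled nodes and two edges, which is the same "map, not the territory" diagram the rendered `## Process` section falls back to.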
@@ -224,6 +224,8 @@ function normalizeStageSchemaInput(value) {
     whenNotToUse: value.philosophy.whenNotToUse,
     interactionProtocol: value.executionModel.interactionProtocol,
     process: value.executionModel.process,
+    processFlow: value.executionModel.processFlow,
+    platformNotes: value.executionModel.platformNotes,
     requiredGates: value.executionModel.requiredGates,
     requiredEvidence: value.executionModel.requiredEvidence,
     inputs: value.executionModel.inputs,
@@ -445,6 +447,8 @@ export function stageSchema(stage, track = "standard") {
   const executionModel = {
     interactionProtocol: base.interactionProtocol,
     process: base.process,
+    processFlow: base.processFlow,
+    platformNotes: base.platformNotes,
     checklist: base.checklist,
     requiredGates: tieredGates,
     requiredEvidence: base.requiredEvidence,
@@ -119,6 +119,11 @@ export const BRAINSTORM = {
       "required gates marked satisfied",
       "no implementation action taken",
       "artifact reviewed by user"
+    ],
+    platformNotes: [
+      "Write artifact paths in POSIX form (`.cclaw/artifacts/01-brainstorm-<slug>.md`) even on Windows — the runtime normalizes separators. Do NOT commit Windows-style backslashes into the artifact or flow-state.",
+      "Slugify titles with lowercase ASCII letters, digits, and single dashes only — avoid spaces and case-sensitive names so the file resolves identically on case-insensitive filesystems (macOS/Windows default).",
+      "When linking to files inside the artifact, use repo-relative forward-slash paths (`src/foo/bar.ts`) so reviewers on any OS can click through."
     ]
   },
   artifactRules: {
@@ -142,6 +142,11 @@ export const DESIGN = {
       "required gates marked satisfied",
       "completion dashboard present with all review-section statuses",
       "artifact complete for spec handoff"
+    ],
+    platformNotes: [
+      "Architecture diagrams (ASCII, Mermaid) must use plain ASCII punctuation — avoid smart quotes and em-dashes that render differently across Windows CMD (cp1252), macOS Terminal (UTF-8), and Linux consoles.",
+      "When referencing build or runtime tools in the design, name them by binary (`node`, `python`, `go`) rather than by IDE-specific run configurations (`npm: start (WebStorm)`, `launch.json:Debug`) so the design stays OS-agnostic.",
+      "File system layouts drawn in the artifact use forward slashes; explicitly note when a platform-specific path style is required (e.g. Windows long-path `\\\\?\\` prefix, macOS bundle `.app/Contents/MacOS/`)."
     ]
   },
   artifactRules: {
@@ -100,6 +100,11 @@ export const PLAN = {
       "WAIT_FOR_CONFIRM present and unresolved until user approves",
       "artifact ready for TDD execution",
       "acceptance mapping complete"
+    ],
+    platformNotes: [
+      "Per-task verification commands must be runnable on Windows PowerShell, macOS bash/zsh, and Linux bash. Prefer `npm run <script>` / `pnpm <script>` / `pytest -k <name>` over raw shell one-liners so the command portability is handled by the script runner.",
+      "If a task command needs globbing, wrap the glob in single quotes on POSIX and escape as needed on PowerShell (`'src/**/*.ts'` vs `\"src/**/*.ts\"`). Note the quoting variant when the task is expected to run in mixed-OS CI.",
+      "Environment variables referenced from tasks must be named in uppercase with underscores (`CCLAW_PROJECT_ROOT`) and set via a cross-shell wrapper (e.g. `cross-env` for Node tasks) — do not inline `KEY=value cmd` style that fails in PowerShell/cmd.exe."
     ]
   },
   artifactRules: {
@@ -100,6 +100,11 @@ export const REVIEW = {
       "all review sections evaluated",
       "critical blockers resolved",
       "ship readiness explicitly stated"
+    ],
+    platformNotes: [
+      "When citing file locations in findings, use repo-relative forward-slash paths with a line number (`src/foo/bar.ts:42`). Avoid IDE-generated hyperlinks that embed absolute machine-specific paths.",
+      "Line-range or diff-range references must match `git diff --unified=0` output format so reviewers on any OS can reproduce the range locally without GUI tooling.",
+      "Commands in remediation suggestions must be portable (`npm run lint`, `pytest -x path/to/test`) — if a platform-specific command is required, tag the note explicitly (`# PowerShell only`, `# macOS only`)."
     ]
   },
   artifactRules: {
@@ -50,6 +50,14 @@ export interface StagePhilosophy {
 export interface StageExecutionModel {
   interactionProtocol: string[];
   process: string[];
+  /**
+   * Optional custom mermaid `flowchart` body (without the fenced `mermaid`
+   * code block) that overrides the auto-generated linear flowchart in the
+   * rendered `## Process` section. Use for stages whose state machine is
+   * non-linear (loops, conditional branches) — otherwise leave unset and
+   * let the renderer derive a simple `A --> B --> C` chart from `process`.
+   */
+  processFlow?: string;
   checklist: string[];
   requiredGates: StageGate[];
   requiredEvidence: string[];
@@ -58,6 +66,13 @@ export interface StageExecutionModel {
   researchPlaybooks?: string[];
   blockers: string[];
   exitCriteria: string[];
+  /**
+   * Optional platform-specific notes (Windows/macOS/Linux path separators,
+   * PowerShell vs cmd, harness-specific tool names). Rendered under
+   * "## Platform Notes" when present. Omit when the stage is
+   * platform-agnostic.
+   */
+  platformNotes?: string[];
 }
 export interface StageArtifactRules {
   artifactFile: string;
@@ -108,6 +123,10 @@ export interface StageSchema {
   whenNotToUse: string[];
   interactionProtocol: string[];
   process: string[];
+  /** See {@link StageExecutionModel.processFlow}. */
+  processFlow?: string;
+  /** See {@link StageExecutionModel.platformNotes}. */
+  platformNotes?: string[];
   requiredGates: StageGate[];
   requiredEvidence: string[];
   inputs: string[];
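Taken together, the two new optional fields divide cleanly: `processFlow` replaces the derived chart, `platformNotes` adds a rendered section. A hypothetical stage fragment showing both in use (all values illustrative, not from any shipped stage):

```javascript
// Hypothetical executionModel fragment exercising the new optional fields.
// processFlow carries a non-linear mermaid body (approval loop);
// platformNotes become a "## Platform Notes" bullet list when rendered.
const EXAMPLE_STAGE_EXECUTION_MODEL = {
  interactionProtocol: ["Ask one question at a time"],
  process: ["Gather context", "Draft artifact", "Request approval"],
  processFlow: 'flowchart TD\n  A["Draft"] --> B{"Approved?"}\n  B -- no --> A\n  B -- yes --> C["Done"]',
  checklist: ["Draft artifact", "Request approval"],
  requiredGates: [],
  requiredEvidence: [],
  inputs: [],
  requiredContext: [],
  blockers: [],
  exitCriteria: ["artifact approved"],
  platformNotes: ["Use forward-slash repo-relative paths"]
};
```

A renderer following the rules above would emit the fenced `processFlow` verbatim instead of deriving `A --> B --> C` from `process`, since the override is non-empty.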
@@ -133,6 +133,11 @@ export const SCOPE = {
       "locked decisions captured with stable D-XX IDs",
       "completion dashboard produced",
       "scope summary produced"
+    ],
+    platformNotes: [
+      "Scope contract paths must be repo-relative with forward slashes so they resolve identically on Windows, macOS, and Linux (`src/pkg/mod.ts`, NOT `src\\pkg\\mod.ts`).",
+      "When invoking `git log`/`git diff` for the Pre-Scope audit, wrap glob patterns in single quotes on POSIX shells and double quotes on PowerShell (`git log -- 'src/**/*.ts'` vs `git log -- \"src/**/*.ts\"`). Document the command with the quoting style you actually ran.",
+      "Do not hard-code machine-specific absolute paths (home dirs, drive letters) into the scope contract — keep boundaries repo-relative."
     ]
   },
   artifactRules: {
@@ -92,6 +92,12 @@ export const SHIP = {
       "preflight completed",
       "rollback and release notes complete",
       "finalization action explicitly chosen and executed"
+    ],
+    platformNotes: [
+      "Release commands (`npm publish`, `git tag -s`, `gh release create`, `cargo publish`, `goreleaser`) behave the same across OSes, but signing keys differ: macOS Keychain, Windows credential store, Linux GPG agent. Verify the signing flow on the actual release machine before running the real publish.",
+      "Version tags must be pure ASCII and lowercase after an optional `v` prefix (`v1.2.3`, `v1.2.3-rc.1`). Avoid Unicode dashes and non-breaking spaces that sneak in via copy-paste from docs.",
+      "When the rollback plan references timestamps (CI run windows, DB snapshot IDs), pin them to UTC ISO-8601 so the plan reads identically across CI runners in different regions.",
+      "`gh release create` requires a repo-level `GH_TOKEN`/`GITHUB_TOKEN`; document whether it is sourced from the shell env, `.env`, or the OS keychain so another operator on a different OS can reproduce the release."
     ]
   },
   artifactRules: {
@@ -40,6 +40,7 @@ export const SPEC = {
     "Capture edge cases — for each criterion, define at least one boundary condition and one error condition.",
     "Document constraints and assumptions — regulatory, system, integration, and performance boundaries. Surface implicit assumptions explicitly.",
     "Confirm testability — for each acceptance criterion, describe the test that would prove it. If untestable, rewrite the criterion.",
+    "Present acceptance criteria to the user in 3-5-item batches, pausing for explicit ACK between batches (see Interaction Protocol).",
     "Write spec artifact and request user approval — wait for explicit confirmation before proceeding."
   ],
 interactionProtocol: [
@@ -86,6 +87,11 @@ export const SPEC = {
     "required gates marked satisfied",
     "plan-ready acceptance mapping exists",
     "testability confirmed for all criteria"
+  ],
+  platformNotes: [
+    "Acceptance criteria that reference CLI commands must name the executable portably (`node`, `npm`, `pytest`) and avoid OS-specific shell features (`&&` is safe, `||` differs subtly between cmd.exe and POSIX — prefer explicit multi-step descriptions).",
+    "When a criterion specifies file-content expectations, use `LF` as the canonical newline and state any CRLF-on-Windows tolerance explicitly (most git-managed repos normalize via `.gitattributes`; the criterion should not implicitly depend on autocrlf).",
+    "Timezone-sensitive criteria (timestamps, retention windows) must pin UTC or note the source of truth — clocks differ across CI runners (GitHub macOS vs Linux image vs Windows image)."
   ]
 },
 artifactRules: {
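The SPEC note on LF-canonical file content implies a normalization step wherever a criterion compares file text. A minimal sketch under that assumption; `matchesCriterion` and `normalizeEol` are hypothetical helpers, not part of cclaw-cli:

```javascript
// Normalize CRLF to LF before comparing, so a criterion holds on a
// Windows checkout regardless of core.autocrlf. Any intentional CRLF
// tolerance should be stated in the criterion, not hidden here.
function normalizeEol(text) {
  return text.replace(/\r\n/g, "\n");
}

function matchesCriterion(actualFileText, expectedFileText) {
  return normalizeEol(actualFileText) === normalizeEol(expectedFileText);
}
```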
@@ -103,6 +103,12 @@ export const TDD = {
     "REFACTOR evidence captured",
     "required gates marked satisfied",
     "traceability annotated"
+  ],
+  platformNotes: [
+    "Record the **exact** test command run (`npm test -- path/to/file`, `pytest path/to/file`, `go test ./...`) so RED/GREEN evidence is reproducible on any OS. Do not paraphrase to a shorter alias.",
+    "Line-ending drift (CRLF vs LF) can turn a passing test red on Windows if the repo mixes styles. When a GREEN flip happens only after whitespace normalization, record it as a refactor note, not a hidden fix.",
+    "When invoking a test file path from Windows PowerShell, use forward slashes (`npm test -- tests/foo.test.ts`) — backslashes trip globbing on `cross-env` and similar wrappers.",
+    "Flaky tests that only fail on one OS must be marked as such in the TDD artifact (OS + runner + one failing output snippet) — do not retry until green without evidence."
   ]
 },
 artifactRules: {
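The TDD note about forward slashes in PowerShell can be enforced when the recorded test command is assembled. A hedged sketch; `toPosixPath` is a hypothetical helper, not part of cclaw-cli:

```javascript
// Rewrite Windows-style separators so the recorded command
// (`npm test -- tests/foo.test.ts`) is identical on every OS and
// does not trip globbing in cross-env-style wrappers.
function toPosixPath(p) {
  return p.split("\\").join("/");
}

// e.g. toPosixPath("tests\\foo.test.ts") yields "tests/foo.test.ts"
```

Normalizing at record time keeps the RED/GREEN evidence byte-identical across runners, which is what makes the "exact command" requirement checkable by diff.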