@goondocks/myco 0.18.0 → 0.19.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -130
- package/dist/{agent-run-2NFYMQXW.js → agent-run-EADUYYAS.js} +6 -6
- package/dist/{agent-tasks-MEIYLXGN.js → agent-tasks-GC77JXQB.js} +6 -6
- package/dist/{chunk-EO2RQW4S.js → chunk-2CKDAFSX.js} +2 -2
- package/dist/{chunk-NZI7WBZI.js → chunk-2DF4OZ2D.js} +22 -2
- package/dist/chunk-2DF4OZ2D.js.map +1 -0
- package/dist/{chunk-OW433Q4C.js → chunk-2LN2BBKA.js} +45 -4
- package/dist/chunk-2LN2BBKA.js.map +1 -0
- package/dist/{chunk-U7GJTVSX.js → chunk-2OO3BRFK.js} +21 -7
- package/dist/chunk-2OO3BRFK.js.map +1 -0
- package/dist/{chunk-RAV5YMRU.js → chunk-3TPD6HEF.js} +4 -4
- package/dist/{chunk-JMOUFG6Y.js → chunk-44PZCAYS.js} +47 -5
- package/dist/chunk-44PZCAYS.js.map +1 -0
- package/dist/{chunk-D7TYRPRM.js → chunk-6LQIMRTC.js} +145 -145
- package/dist/chunk-6LQIMRTC.js.map +1 -0
- package/dist/{chunk-NI23QCHB.js → chunk-AELJ4PS5.js} +5 -5
- package/dist/{chunk-BUIR3JWM.js → chunk-CYBC2HZ3.js} +3 -3
- package/dist/chunk-EM63ZFKA.js +166 -0
- package/dist/chunk-EM63ZFKA.js.map +1 -0
- package/dist/{chunk-O3TRN3RC.js → chunk-INWD6AIQ.js} +2 -2
- package/dist/{chunk-CML4MCYF.js → chunk-KSXTNYXO.js} +2 -2
- package/dist/{chunk-KWTOCJLB.js → chunk-LLJMDXO2.js} +1176 -241
- package/dist/chunk-LLJMDXO2.js.map +1 -0
- package/dist/{chunk-2V7HR7HB.js → chunk-MDEUXYJG.js} +4 -4
- package/dist/{chunk-PFWIPRF6.js → chunk-MS6FDV45.js} +3 -3
- package/dist/{chunk-55QEICRO.js → chunk-N77K772N.js} +3 -3
- package/dist/{chunk-E4VLWIJC.js → chunk-ODXLRR4U.js} +1 -1
- package/dist/{chunk-DLFDBKEV.js → chunk-OZF5EURR.js} +19 -16
- package/dist/chunk-OZF5EURR.js.map +1 -0
- package/dist/{chunk-IB76KGBY.js → chunk-POEPHBQK.js} +1 -1
- package/dist/{chunk-7OYXB2NM.js → chunk-REN37KYI.js} +6 -2
- package/dist/chunk-REN37KYI.js.map +1 -0
- package/dist/{chunk-JDI4DPWD.js → chunk-RXROZBSK.js} +637 -150
- package/dist/chunk-RXROZBSK.js.map +1 -0
- package/dist/{chunk-U3J2DDSR.js → chunk-SCI55NKY.js} +2 -2
- package/dist/{chunk-GDY63YAW.js → chunk-U6PF3YII.js} +79 -79
- package/dist/chunk-U6PF3YII.js.map +1 -0
- package/dist/{chunk-FABWUX5G.js → chunk-UVKQ62II.js} +18 -4
- package/dist/chunk-UVKQ62II.js.map +1 -0
- package/dist/{chunk-VOCGURV7.js → chunk-UW6DGPSV.js} +3 -3
- package/dist/{chunk-CKJAWZQE.js → chunk-W4VHC2ES.js} +11 -3
- package/dist/chunk-W4VHC2ES.js.map +1 -0
- package/dist/{chunk-75AZFBFW.js → chunk-YPWF322W.js} +3 -3
- package/dist/{cli-IIMBALPV.js → cli-X7CFP4YD.js} +39 -39
- package/dist/{client-VZCUISHZ.js → client-YA33HUFY.js} +4 -4
- package/dist/{config-DA4IUVFL.js → config-RFB2DJC6.js} +6 -6
- package/dist/{detect-GEM3NVK6.js → detect-BEOIHGBC.js} +5 -5
- package/dist/{detect-providers-PSVKXTWE.js → detect-providers-2OQBU4VX.js} +4 -4
- package/dist/{doctor-QYD34X7Q.js → doctor-FAH7N66M.js} +11 -11
- package/dist/{executor-NSPRTH4M.js → executor-ICTRRUBY.js} +93 -285
- package/dist/executor-ICTRRUBY.js.map +1 -0
- package/dist/{init-WYYL44KZ.js → init-PTJEOTJV.js} +15 -15
- package/dist/{llm-KEDHK3TQ.js → llm-7D2OGDEK.js} +4 -4
- package/dist/{loader-Q3P3R4UP.js → loader-O2JFO2UC.js} +6 -6
- package/dist/{loader-SKKUMT5C.js → loader-VPE4RCIF.js} +6 -6
- package/dist/{main-6PY3ITQ5.js → main-EIKBLOUL.js} +752 -264
- package/dist/main-EIKBLOUL.js.map +1 -0
- package/dist/{open-HRFMJDQX.js → open-2JCSOLZS.js} +6 -6
- package/dist/{post-compact-HT24YMAN.js → post-compact-2HPPWPBI.js} +10 -10
- package/dist/{post-tool-use-DENRI5WB.js → post-tool-use-TWBBBABS.js} +9 -9
- package/dist/{post-tool-use-failure-A6SNJX42.js → post-tool-use-failure-LIJYR4KL.js} +10 -10
- package/dist/{pre-compact-3Q4BALCL.js → pre-compact-II2CMNTG.js} +10 -10
- package/dist/{provider-check-AE3L5Z6R.js → provider-check-KEQNQ6LO.js} +4 -4
- package/dist/{registry-O2NZLO3V.js → registry-X5FDGYXT.js} +7 -7
- package/dist/{remove-YB5A6HY2.js → remove-L5MVYBOY.js} +11 -11
- package/dist/{resolution-events-XWYLLDRK.js → resolution-events-MVIZMONR.js} +4 -4
- package/dist/{restart-RGDVHELZ.js → restart-VIT3JBD6.js} +7 -7
- package/dist/{search-WOHT3G55.js → search-O6BB5MTO.js} +7 -7
- package/dist/{server-6SUNYDV7.js → server-O3UPJVBR.js} +258 -173
- package/dist/server-O3UPJVBR.js.map +1 -0
- package/dist/{session-W3SKRFRV.js → session-5JV3DQIK.js} +8 -8
- package/dist/{session-end-OUTY7AFF.js → session-end-PZ2OXBGG.js} +9 -9
- package/dist/{session-start-5MB3LFOA.js → session-start-FDGM56BX.js} +22 -17
- package/dist/{session-start-5MB3LFOA.js.map → session-start-FDGM56BX.js.map} +1 -1
- package/dist/{setup-llm-ZMYGIQX5.js → setup-llm-MQK557BB.js} +10 -10
- package/dist/src/agent/definitions/tasks/extract-only.yaml +1 -1
- package/dist/src/agent/definitions/tasks/full-intelligence.yaml +10 -0
- package/dist/src/agent/definitions/tasks/skill-evolve.yaml +163 -49
- package/dist/src/agent/definitions/tasks/skill-generate.yaml +44 -27
- package/dist/src/agent/definitions/tasks/skill-survey.yaml +132 -138
- package/dist/src/agent/definitions/tasks/supersession-sweep.yaml +1 -1
- package/dist/src/cli.js +1 -1
- package/dist/src/daemon/main.js +1 -1
- package/dist/src/hooks/post-tool-use.js +1 -1
- package/dist/src/hooks/session-end.js +1 -1
- package/dist/src/hooks/session-start.js +1 -1
- package/dist/src/hooks/stop.js +1 -1
- package/dist/src/hooks/user-prompt-submit.js +1 -1
- package/dist/src/mcp/server.js +1 -1
- package/dist/src/symbionts/manifests/codex.yaml +45 -7
- package/dist/{stats-DGI6B3HX.js → stats-2STTARTC.js} +11 -11
- package/dist/{stop-YGHODSP7.js → stop-WNKCMCGO.js} +9 -9
- package/dist/{stop-failure-7IJTPJ6W.js → stop-failure-6GTOBVTN.js} +10 -10
- package/dist/{subagent-start-ZBQ5PJB5.js → subagent-start-VJF5YKVX.js} +10 -10
- package/dist/{subagent-stop-N2TDQU2D.js → subagent-stop-UW6HMICY.js} +10 -10
- package/dist/{task-completed-BDLMRSBB.js → task-completed-U4Q3XXLX.js} +10 -10
- package/dist/{team-2ZFGTSIN.js → team-N6TXS2PF.js} +148 -103
- package/dist/team-N6TXS2PF.js.map +1 -0
- package/dist/ui/assets/{index-DtT9_nlT.js → index-CHIm98OP.js} +48 -48
- package/dist/ui/index.html +1 -1
- package/dist/{update-STLAN7LR.js → update-ZYCOWKMD.js} +11 -11
- package/dist/{user-prompt-submit-4IBFUYQ3.js → user-prompt-submit-SOYL4OWF.js} +15 -12
- package/dist/user-prompt-submit-SOYL4OWF.js.map +1 -0
- package/dist/{verify-EJYPO7QA.js → verify-P37PQ4YM.js} +8 -8
- package/dist/{version-YPBIKH77.js → version-XAWC277D.js} +2 -2
- package/package.json +25 -8
- package/CONTRIBUTING.md +0 -132
- package/dist/chunk-7OYXB2NM.js.map +0 -1
- package/dist/chunk-CKJAWZQE.js.map +0 -1
- package/dist/chunk-D7TYRPRM.js.map +0 -1
- package/dist/chunk-DLFDBKEV.js.map +0 -1
- package/dist/chunk-FABWUX5G.js.map +0 -1
- package/dist/chunk-GDY63YAW.js.map +0 -1
- package/dist/chunk-JDI4DPWD.js.map +0 -1
- package/dist/chunk-JMOUFG6Y.js.map +0 -1
- package/dist/chunk-KWTOCJLB.js.map +0 -1
- package/dist/chunk-NZI7WBZI.js.map +0 -1
- package/dist/chunk-OW433Q4C.js.map +0 -1
- package/dist/chunk-RJMXDUMA.js +0 -40
- package/dist/chunk-RJMXDUMA.js.map +0 -1
- package/dist/chunk-U7GJTVSX.js.map +0 -1
- package/dist/executor-NSPRTH4M.js.map +0 -1
- package/dist/main-6PY3ITQ5.js.map +0 -1
- package/dist/server-6SUNYDV7.js.map +0 -1
- package/dist/src/worker/package-lock.json +0 -4338
- package/dist/src/worker/package.json +0 -22
- package/dist/src/worker/src/auth.ts +0 -31
- package/dist/src/worker/src/index.ts +0 -470
- package/dist/src/worker/src/mcp/auth.ts +0 -65
- package/dist/src/worker/src/mcp/server.ts +0 -53
- package/dist/src/worker/src/mcp/tools/context.ts +0 -13
- package/dist/src/worker/src/mcp/tools/get.ts +0 -15
- package/dist/src/worker/src/mcp/tools/graph.ts +0 -35
- package/dist/src/worker/src/mcp/tools/search.ts +0 -32
- package/dist/src/worker/src/mcp/tools/sessions.ts +0 -24
- package/dist/src/worker/src/mcp/tools/skills.ts +0 -16
- package/dist/src/worker/src/mcp/tools/team.ts +0 -9
- package/dist/src/worker/src/schema.ts +0 -324
- package/dist/src/worker/src/search-helpers.ts +0 -70
- package/dist/src/worker/tsconfig.json +0 -16
- package/dist/src/worker/wrangler.toml +0 -30
- package/dist/team-2ZFGTSIN.js.map +0 -1
- package/dist/user-prompt-submit-4IBFUYQ3.js.map +0 -1
- /package/dist/{agent-run-2NFYMQXW.js.map → agent-run-EADUYYAS.js.map} +0 -0
- /package/dist/{agent-tasks-MEIYLXGN.js.map → agent-tasks-GC77JXQB.js.map} +0 -0
- /package/dist/{chunk-EO2RQW4S.js.map → chunk-2CKDAFSX.js.map} +0 -0
- /package/dist/{chunk-RAV5YMRU.js.map → chunk-3TPD6HEF.js.map} +0 -0
- /package/dist/{chunk-NI23QCHB.js.map → chunk-AELJ4PS5.js.map} +0 -0
- /package/dist/{chunk-BUIR3JWM.js.map → chunk-CYBC2HZ3.js.map} +0 -0
- /package/dist/{chunk-O3TRN3RC.js.map → chunk-INWD6AIQ.js.map} +0 -0
- /package/dist/{chunk-CML4MCYF.js.map → chunk-KSXTNYXO.js.map} +0 -0
- /package/dist/{chunk-2V7HR7HB.js.map → chunk-MDEUXYJG.js.map} +0 -0
- /package/dist/{chunk-PFWIPRF6.js.map → chunk-MS6FDV45.js.map} +0 -0
- /package/dist/{chunk-55QEICRO.js.map → chunk-N77K772N.js.map} +0 -0
- /package/dist/{chunk-E4VLWIJC.js.map → chunk-ODXLRR4U.js.map} +0 -0
- /package/dist/{chunk-IB76KGBY.js.map → chunk-POEPHBQK.js.map} +0 -0
- /package/dist/{chunk-U3J2DDSR.js.map → chunk-SCI55NKY.js.map} +0 -0
- /package/dist/{chunk-VOCGURV7.js.map → chunk-UW6DGPSV.js.map} +0 -0
- /package/dist/{chunk-75AZFBFW.js.map → chunk-YPWF322W.js.map} +0 -0
- /package/dist/{cli-IIMBALPV.js.map → cli-X7CFP4YD.js.map} +0 -0
- /package/dist/{client-VZCUISHZ.js.map → client-YA33HUFY.js.map} +0 -0
- /package/dist/{config-DA4IUVFL.js.map → config-RFB2DJC6.js.map} +0 -0
- /package/dist/{detect-GEM3NVK6.js.map → detect-BEOIHGBC.js.map} +0 -0
- /package/dist/{detect-providers-PSVKXTWE.js.map → detect-providers-2OQBU4VX.js.map} +0 -0
- /package/dist/{doctor-QYD34X7Q.js.map → doctor-FAH7N66M.js.map} +0 -0
- /package/dist/{init-WYYL44KZ.js.map → init-PTJEOTJV.js.map} +0 -0
- /package/dist/{llm-KEDHK3TQ.js.map → llm-7D2OGDEK.js.map} +0 -0
- /package/dist/{loader-Q3P3R4UP.js.map → loader-O2JFO2UC.js.map} +0 -0
- /package/dist/{loader-SKKUMT5C.js.map → loader-VPE4RCIF.js.map} +0 -0
- /package/dist/{open-HRFMJDQX.js.map → open-2JCSOLZS.js.map} +0 -0
- /package/dist/{post-compact-HT24YMAN.js.map → post-compact-2HPPWPBI.js.map} +0 -0
- /package/dist/{post-tool-use-DENRI5WB.js.map → post-tool-use-TWBBBABS.js.map} +0 -0
- /package/dist/{post-tool-use-failure-A6SNJX42.js.map → post-tool-use-failure-LIJYR4KL.js.map} +0 -0
- /package/dist/{pre-compact-3Q4BALCL.js.map → pre-compact-II2CMNTG.js.map} +0 -0
- /package/dist/{provider-check-AE3L5Z6R.js.map → provider-check-KEQNQ6LO.js.map} +0 -0
- /package/dist/{registry-O2NZLO3V.js.map → registry-X5FDGYXT.js.map} +0 -0
- /package/dist/{remove-YB5A6HY2.js.map → remove-L5MVYBOY.js.map} +0 -0
- /package/dist/{resolution-events-XWYLLDRK.js.map → resolution-events-MVIZMONR.js.map} +0 -0
- /package/dist/{restart-RGDVHELZ.js.map → restart-VIT3JBD6.js.map} +0 -0
- /package/dist/{search-WOHT3G55.js.map → search-O6BB5MTO.js.map} +0 -0
- /package/dist/{session-W3SKRFRV.js.map → session-5JV3DQIK.js.map} +0 -0
- /package/dist/{session-end-OUTY7AFF.js.map → session-end-PZ2OXBGG.js.map} +0 -0
- /package/dist/{setup-llm-ZMYGIQX5.js.map → setup-llm-MQK557BB.js.map} +0 -0
- /package/dist/{stats-DGI6B3HX.js.map → stats-2STTARTC.js.map} +0 -0
- /package/dist/{stop-YGHODSP7.js.map → stop-WNKCMCGO.js.map} +0 -0
- /package/dist/{stop-failure-7IJTPJ6W.js.map → stop-failure-6GTOBVTN.js.map} +0 -0
- /package/dist/{subagent-start-ZBQ5PJB5.js.map → subagent-start-VJF5YKVX.js.map} +0 -0
- /package/dist/{subagent-stop-N2TDQU2D.js.map → subagent-stop-UW6HMICY.js.map} +0 -0
- /package/dist/{task-completed-BDLMRSBB.js.map → task-completed-U4Q3XXLX.js.map} +0 -0
- /package/dist/{update-STLAN7LR.js.map → update-ZYCOWKMD.js.map} +0 -0
- /package/dist/{verify-EJYPO7QA.js.map → verify-P37PQ4YM.js.map} +0 -0
- /package/dist/{version-YPBIKH77.js.map → version-XAWC277D.js.map} +0 -0
package/dist/{session-W3SKRFRV.js → session-5JV3DQIK.js}
@@ -1,20 +1,20 @@
 import { createRequire as __cr } from 'node:module'; const require = __cr(import.meta.url);
 import {
   initVaultDb
-} from "./chunk-
+} from "./chunk-N77K772N.js";
 import "./chunk-SAKJMNSR.js";
 import "./chunk-WYOE4IAX.js";
-import "./chunk-
+import "./chunk-KSXTNYXO.js";
 import {
   getSession,
   listSessions
-} from "./chunk-
-import "./chunk-
+} from "./chunk-2OO3BRFK.js";
+import "./chunk-INWD6AIQ.js";
 import "./chunk-MYX5NCRH.js";
-import "./chunk-
-import "./chunk-
+import "./chunk-CYBC2HZ3.js";
+import "./chunk-2CKDAFSX.js";
 import "./chunk-LPUQPDC2.js";
-import "./chunk-
+import "./chunk-W4VHC2ES.js";
 import "./chunk-E7NUADTQ.js";
 import "./chunk-PZUWP5VK.js";
 
@@ -67,4 +67,4 @@ ${target.summary}`);
 export {
   run
 };
-//# sourceMappingURL=session-
+//# sourceMappingURL=session-5JV3DQIK.js.map
package/dist/{session-end-OUTY7AFF.js → session-end-PZ2OXBGG.js}
@@ -2,21 +2,21 @@ import { createRequire as __cr } from 'node:module'; const require = __cr(import
 import {
   normalizeHookInput,
   readStdin
-} from "./chunk-
+} from "./chunk-MS6FDV45.js";
 import {
   resolveVaultDir
 } from "./chunk-5ZT2Q6P5.js";
 import {
   DaemonClient
-} from "./chunk-
-import "./chunk-
-import "./chunk-
+} from "./chunk-CYBC2HZ3.js";
+import "./chunk-2CKDAFSX.js";
+import "./chunk-UVKQ62II.js";
 import "./chunk-LPUQPDC2.js";
-import "./chunk-
+import "./chunk-W4VHC2ES.js";
 import "./chunk-E7NUADTQ.js";
-import "./chunk-
-import "./chunk-
-import "./chunk-
+import "./chunk-6LQIMRTC.js";
+import "./chunk-ODXLRR4U.js";
+import "./chunk-U6PF3YII.js";
 import "./chunk-PZUWP5VK.js";
 
 // src/hooks/session-end.ts
@@ -40,4 +40,4 @@ async function main() {
 export {
   main
 };
-//# sourceMappingURL=session-end-
+//# sourceMappingURL=session-end-PZ2OXBGG.js.map
package/dist/{session-start-5MB3LFOA.js → session-start-FDGM56BX.js}
@@ -1,20 +1,21 @@
 import { createRequire as __cr } from 'node:module'; const require = __cr(import.meta.url);
 import {
   listSpores
-} from "./chunk-
+} from "./chunk-REN37KYI.js";
 import {
   listSessions
-} from "./chunk-
-import "./chunk-
+} from "./chunk-2OO3BRFK.js";
+import "./chunk-INWD6AIQ.js";
 import {
-  evaluateSessionStartRules
-
+  evaluateSessionStartRules,
+  readTranscriptMeta
+} from "./chunk-44PZCAYS.js";
 import {
   createSchema
-} from "./chunk-
+} from "./chunk-2LN2BBKA.js";
 import {
   loadConfig
-} from "./chunk-
+} from "./chunk-MDEUXYJG.js";
 import {
   getDatabase,
   initDatabase,
@@ -23,28 +24,28 @@ import {
 import {
   normalizeHookInput,
   readStdin
-} from "./chunk-
+} from "./chunk-MS6FDV45.js";
 import {
   resolveVaultDir
 } from "./chunk-5ZT2Q6P5.js";
 import {
   DaemonClient
-} from "./chunk-
-import "./chunk-
+} from "./chunk-CYBC2HZ3.js";
+import "./chunk-2CKDAFSX.js";
 import {
   loadManifests
-} from "./chunk-
+} from "./chunk-UVKQ62II.js";
 import "./chunk-LPUQPDC2.js";
 import {
   CONTEXT_SESSION_PREVIEW_CHARS,
   CONTEXT_SPORE_PREVIEW_CHARS,
   EXCLUDED_SPORE_STATUSES,
   estimateTokens
-} from "./chunk-
+} from "./chunk-W4VHC2ES.js";
 import "./chunk-E7NUADTQ.js";
-import "./chunk-
-import "./chunk-
-import "./chunk-
+import "./chunk-6LQIMRTC.js";
+import "./chunk-ODXLRR4U.js";
+import "./chunk-U6PF3YII.js";
 import "./chunk-PZUWP5VK.js";
 
 // src/context/injector.ts
@@ -138,7 +139,11 @@ async function main() {
   try {
     const rawInput = JSON.parse(await readStdin());
     const { sessionId, agent, transcriptPath } = normalizeHookInput(rawInput);
-    const
+    const transcriptMeta = transcriptPath ? readTranscriptMeta(transcriptPath) : void 0;
+    const decision = evaluateSessionStartRules(loadManifests(), agent, {
+      transcriptPath,
+      transcriptMeta: transcriptMeta ?? void 0
+    });
     if (decision.action === "drop") {
       process.stderr.write(`[myco] session-start: dropped (${decision.reason ?? "rule"})
 `);
@@ -181,4 +186,4 @@ async function main() {
 export {
   main
 };
-//# sourceMappingURL=session-start-
+//# sourceMappingURL=session-start-FDGM56BX.js.map
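The session-start diff shows the main functional change in this release: the hook now derives transcript metadata (via the new `readTranscriptMeta` export) and passes it to `evaluateSessionStartRules` before registering a session, so ephemeral sub-invocations can be structurally dropped. A minimal sketch of that gating flow, with the `TranscriptMeta` shape, the example rule, and the simplified signature all being assumptions (the real function also takes loaded symbiont manifests and parses the transcript file itself):

```typescript
// Hedged sketch of the new session-start gating, NOT the package's actual API.
// TranscriptMeta fields and the zero-turn rule are illustrative assumptions.

interface TranscriptMeta {
  turnCount: number;
  firstUserPrompt?: string;
}

type Decision =
  | { action: "allow" }
  | { action: "drop"; reason: string };

function evaluateSessionStartRules(
  agent: string,
  input: { transcriptPath?: string; transcriptMeta?: TranscriptMeta },
): Decision {
  // Example rule: drop invocations whose transcript has no turns yet
  // (e.g. Codex title-generation calls), so no phantom session row is
  // ever created instead of creating one and cascade-deleting later.
  if (input.transcriptMeta && input.transcriptMeta.turnCount === 0) {
    return { action: "drop", reason: "ephemeral-invocation" };
  }
  return { action: "allow" };
}

const dropped = evaluateSessionStartRules("codex", {
  transcriptPath: "/tmp/t.jsonl",
  transcriptMeta: { turnCount: 0 },
});
const allowed = evaluateSessionStartRules("codex", {
  transcriptPath: "/tmp/t.jsonl",
  transcriptMeta: { turnCount: 3 },
});
console.log(dropped.action, allowed.action); // → drop allow
```

Dropping before registration (rather than cascade-deleting afterwards) keeps the sessions table free of phantom rows, which matters because downstream context injection reads recent sessions directly from SQLite.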
package/dist/{session-start-5MB3LFOA.js.map → session-start-FDGM56BX.js.map}
@@ -1 +1 @@
-
{"version":3,"sources":["../src/context/injector.ts","../src/hooks/session-start.ts"],"sourcesContent":["/**\n * Context injector — assembles context from SQLite for hook injection.\n *\n * Queries sessions and spores from SQLite. For prompt-submit context,\n * semantic search is deferred to Phase 2 (requires daemon vector store).\n * If no data exists (zero-config), returns empty context gracefully.\n */\n\nimport { getDatabase } from '@myco/db/client.js';\nimport { listSessions } from '@myco/db/queries/sessions.js';\nimport { listSpores } from '@myco/db/queries/spores.js';\nimport type { MycoConfig } from '@myco/config/schema.js';\nimport {\n estimateTokens,\n CONTEXT_SESSION_PREVIEW_CHARS,\n CONTEXT_SPORE_PREVIEW_CHARS,\n PROMPT_CONTEXT_MIN_LENGTH,\n EXCLUDED_SPORE_STATUSES,\n} from '@myco/constants.js';\n\n// ---------------------------------------------------------------------------\n// Constants\n// ---------------------------------------------------------------------------\n\n/** Max recent sessions to include in context. */\nconst CONTEXT_SESSION_LIMIT = 10;\n\n/** Max sessions displayed after scoring. */\nconst CONTEXT_SESSION_DISPLAY_LIMIT = 5;\n\n/** Max spores to fetch for scoring. */\nconst CONTEXT_SPORE_FETCH_LIMIT = 20;\n\n/** Max spores displayed after scoring. */\nconst CONTEXT_SPORE_DISPLAY_LIMIT = 5;\n\n/** Default token budget for sessions layer. */\nconst DEFAULT_SESSIONS_BUDGET = 500;\n\n/** Default token budget for spores layer. */\nconst DEFAULT_SPORES_BUDGET = 300;\n\n/** Default token budget for team layer. */\nconst DEFAULT_TEAM_BUDGET = 200;\n\n/** Default total context max tokens. 
*/\nconst DEFAULT_CONTEXT_MAX_TOKENS = 1200;\n\n// ---------------------------------------------------------------------------\n// Types\n// ---------------------------------------------------------------------------\n\ninterface InjectionContext {\n branch?: string;\n}\n\ninterface InjectedContext {\n text: string;\n tokenEstimate: number;\n layers: {\n sessions: string;\n spores: string;\n team: string;\n };\n}\n\n// ---------------------------------------------------------------------------\n// Public API\n// ---------------------------------------------------------------------------\n\n/**\n * Build injected context from SQLite data.\n *\n * Returns empty context gracefully when no data exists (zero-config behavior).\n */\nexport async function buildInjectedContext(\n _config: MycoConfig,\n context: InjectionContext,\n): Promise<InjectedContext> {\n // Verify database is available — return empty if not\n try {\n getDatabase();\n } catch {\n return emptyContext();\n }\n\n // Fetch sessions and spores in parallel\n const [sessions, spores] = await Promise.all([\n listSessions({ limit: CONTEXT_SESSION_LIMIT }),\n listSpores({ limit: CONTEXT_SPORE_FETCH_LIMIT, status: 'active' }),\n ]);\n\n // Layer 1: Recent sessions\n const sessionsText = formatLayer(\n 'Recent Sessions',\n sessions.slice(0, CONTEXT_SESSION_DISPLAY_LIMIT).map((s) => {\n const title = s.title ?? s.id;\n const summary = (s.summary ?? '').slice(0, CONTEXT_SESSION_PREVIEW_CHARS);\n const branchLabel = s.branch === context.branch ? 
' (same branch)' : '';\n return `- **${title}**: ${summary}${branchLabel}`;\n }),\n DEFAULT_SESSIONS_BUDGET,\n );\n\n // Layer 2: Relevant spores (exclude superseded/archived)\n const filteredSpores = spores.filter((s) =>\n !EXCLUDED_SPORE_STATUSES.has(s.status),\n );\n const sporesText = formatLayer(\n 'Relevant Spores',\n filteredSpores.slice(0, CONTEXT_SPORE_DISPLAY_LIMIT).map((s) =>\n `- **${s.id}** (${s.observation_type}): ${s.content.slice(0, CONTEXT_SPORE_PREVIEW_CHARS)}`,\n ),\n DEFAULT_SPORES_BUDGET,\n );\n\n // Layer 3: Team activity (placeholder — populated in Phase 2)\n const teamText = formatLayer('Team Activity', [], DEFAULT_TEAM_BUDGET);\n\n // Enforce total max_tokens budget\n const allLayers = [sessionsText, sporesText, teamText].filter(Boolean);\n const parts: string[] = [];\n let totalTokens = 0;\n\n for (const layer of allLayers) {\n const layerTokens = estimateTokens(layer);\n if (totalTokens + layerTokens > DEFAULT_CONTEXT_MAX_TOKENS) break;\n parts.push(layer);\n totalTokens += layerTokens;\n }\n\n const fullText = parts.join('\\n\\n');\n\n return {\n text: fullText,\n tokenEstimate: totalTokens,\n layers: {\n sessions: sessionsText,\n spores: sporesText,\n team: teamText,\n },\n };\n}\n\n/**\n * Build per-prompt context using semantic search on spores.\n *\n * Semantic search via the daemon's in-process vector store is deferred to\n * Phase 2. For now, returns empty context. 
The hook (`user-prompt-submit`)\n * routes through the daemon API at `/context/prompt`, which will implement\n * vector search when ready.\n */\nexport async function buildPromptContext(\n prompt: string,\n _config: MycoConfig,\n): Promise<InjectedContext> {\n if (prompt.length < PROMPT_CONTEXT_MIN_LENGTH) {\n return emptyContext();\n }\n\n // Per-prompt semantic search deferred to Phase 2 (requires daemon vector store)\n return emptyContext();\n}\n\n// ---------------------------------------------------------------------------\n// Helpers\n// ---------------------------------------------------------------------------\n\nfunction emptyContext(): InjectedContext {\n return {\n text: '',\n tokenEstimate: 0,\n layers: { sessions: '', spores: '', team: '' },\n };\n}\n\nfunction formatLayer(heading: string, items: string[], budget: number): string {\n if (items.length === 0) return '';\n\n let text = `### ${heading}\\n`;\n let currentTokens = estimateTokens(text);\n\n for (const item of items) {\n const itemTokens = estimateTokens(item);\n if (currentTokens + itemTokens > budget) break;\n text += item + '\\n';\n currentTokens += itemTokens;\n }\n\n return text.trim();\n}\n","import { DaemonClient } from './client.js';\nimport { readStdin } from './read-stdin.js';\nimport { normalizeHookInput } from './normalize.js';\nimport { evaluateSessionStartRules } from './capture-rules.js';\nimport { loadManifests } from '../symbionts/detect.js';\nimport { loadConfig } from '../config/loader.js';\nimport { buildInjectedContext } from '../context/injector.js';\nimport { initDatabase, vaultDbPath } from '../db/client.js';\nimport { createSchema } from '../db/schema.js';\nimport { resolveVaultDir } from '../vault/resolve.js';\nimport { execFileSync } from 'node:child_process';\nimport fs from 'node:fs';\nimport path from 'node:path';\n\nexport async function main() {\n const VAULT_DIR = resolveVaultDir();\n if (!fs.existsSync(path.join(VAULT_DIR, 'myco.yaml'))) return;\n\n try {\n 
const rawInput = JSON.parse(await readStdin());\n const { sessionId, agent, transcriptPath } = normalizeHookInput(rawInput);\n\n // Apply session_start capture rules BEFORE registering the session.\n // For Codex ephemeral sub-invocations (title generation, etc.) this\n // structural drop prevents the phantom row from ever being created,\n // rather than creating it and cascade-deleting at user_prompt time.\n const decision = evaluateSessionStartRules(loadManifests(), agent, { transcriptPath });\n if (decision.action === 'drop') {\n process.stderr.write(`[myco] session-start: dropped (${decision.reason ?? 'rule'})\\n`);\n return;\n }\n\n const config = loadConfig(VAULT_DIR);\n const client = new DaemonClient(VAULT_DIR);\n const healthy = await client.ensureRunning();\n\n let branch: string | undefined;\n try {\n branch = execFileSync('git', ['rev-parse', '--abbrev-ref', 'HEAD'], { encoding: 'utf-8' }).trim();\n } catch { /* not a git repo */ }\n\n if (healthy) {\n await client.post('/sessions/register', {\n session_id: sessionId,\n agent,\n branch,\n started_at: new Date().toISOString(),\n });\n\n const contextResult = await client.post('/context', { session_id: sessionId, branch });\n\n if (contextResult.ok && contextResult.data?.text) {\n if (contextResult.data.source === 'digest') {\n process.stderr.write(`[myco] Injecting digest extract (tier ${contextResult.data.tier})\\n`);\n }\n process.stdout.write(contextResult.data.text);\n return;\n }\n }\n\n // Degraded: local SQLite context only\n const db = initDatabase(vaultDbPath(VAULT_DIR));\n createSchema(db);\n const injected = await buildInjectedContext(config, { branch });\n if (injected.text) process.stdout.write(injected.text);\n } catch (error) {\n process.stderr.write(`[myco] session-start error: ${(error as Error).message}\\n`);\n 
}\n}\n"],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAyBA,IAAM,wBAAwB;AAG9B,IAAM,gCAAgC;AAGtC,IAAM,4BAA4B;AAGlC,IAAM,8BAA8B;AAGpC,IAAM,0BAA0B;AAGhC,IAAM,wBAAwB;AAG9B,IAAM,sBAAsB;AAG5B,IAAM,6BAA6B;AA6BnC,eAAsB,qBACpB,SACA,SAC0B;AAE1B,MAAI;AACF,gBAAY;AAAA,EACd,QAAQ;AACN,WAAO,aAAa;AAAA,EACtB;AAGA,QAAM,CAAC,UAAU,MAAM,IAAI,MAAM,QAAQ,IAAI;AAAA,IAC3C,aAAa,EAAE,OAAO,sBAAsB,CAAC;AAAA,IAC7C,WAAW,EAAE,OAAO,2BAA2B,QAAQ,SAAS,CAAC;AAAA,EACnE,CAAC;AAGD,QAAM,eAAe;AAAA,IACnB;AAAA,IACA,SAAS,MAAM,GAAG,6BAA6B,EAAE,IAAI,CAAC,MAAM;AAC1D,YAAM,QAAQ,EAAE,SAAS,EAAE;AAC3B,YAAM,WAAW,EAAE,WAAW,IAAI,MAAM,GAAG,6BAA6B;AACxE,YAAM,cAAc,EAAE,WAAW,QAAQ,SAAS,mBAAmB;AACrE,aAAO,OAAO,KAAK,OAAO,OAAO,GAAG,WAAW;AAAA,IACjD,CAAC;AAAA,IACD;AAAA,EACF;AAGA,QAAM,iBAAiB,OAAO;AAAA,IAAO,CAAC,MACpC,CAAC,wBAAwB,IAAI,EAAE,MAAM;AAAA,EACvC;AACA,QAAM,aAAa;AAAA,IACjB;AAAA,IACA,eAAe,MAAM,GAAG,2BAA2B,EAAE;AAAA,MAAI,CAAC,MACxD,OAAO,EAAE,EAAE,OAAO,EAAE,gBAAgB,MAAM,EAAE,QAAQ,MAAM,GAAG,2BAA2B,CAAC;AAAA,IAC3F;AAAA,IACA;AAAA,EACF;AAGA,QAAM,WAAW,YAAY,iBAAiB,CAAC,GAAG,mBAAmB;AAGrE,QAAM,YAAY,CAAC,cAAc,YAAY,QAAQ,EAAE,OAAO,OAAO;AACrE,QAAM,QAAkB,CAAC;AACzB,MAAI,cAAc;AAElB,aAAW,SAAS,WAAW;AAC7B,UAAM,cAAc,eAAe,KAAK;AACxC,QAAI,cAAc,cAAc,2BAA4B;AAC5D,UAAM,KAAK,KAAK;AAChB,mBAAe;AAAA,EACjB;AAEA,QAAM,WAAW,MAAM,KAAK,MAAM;AAElC,SAAO;AAAA,IACL,MAAM;AAAA,IACN,eAAe;AAAA,IACf,QAAQ;AAAA,MACN,UAAU;AAAA,MACV,QAAQ;AAAA,MACR,MAAM;AAAA,IACR;AAAA,EACF;AACF;AA0BA,SAAS,eAAgC;AACvC,SAAO;AAAA,IACL,MAAM;AAAA,IACN,eAAe;AAAA,IACf,QAAQ,EAAE,UAAU,IAAI,QAAQ,IAAI,MAAM,GAAG;AAAA,EAC/C;AACF;AAEA,SAAS,YAAY,SAAiB,OAAiB,QAAwB;AAC7E,MAAI,MAAM,WAAW,EAAG,QAAO;AAE/B,MAAI,OAAO,OAAO,OAAO;AAAA;AACzB,MAAI,gBAAgB,eAAe,IAAI;AAEvC,aAAW,QAAQ,OAAO;AACxB,UAAM,aAAa,eAAe,IAAI;AACtC,QAAI,gBAAgB,aAAa,OAAQ;AACzC,YAAQ,OAAO;AACf,qBAAiB;AAAA,EACnB;AAEA,SAAO,KAAK,KAAK;AACnB;;;ACpLA,SAAS,oBAAoB;AAC7B,OAAO,QAAQ;AACf,OAAO,UAAU;AAEjB,eAAsB,OAAO;AAC3B,QAAM,YAAY,gBAAgB;AAClC,MAAI,CAAC,GAAG,WAAW,KAAK,KAAK,WAAW,WAAW,CAAC,EAAG;AAEvD,MAAI;AACF,UAAM,WAAW,KAAK,MAAM,MAAM,UAAU,CAAC;AAC7C,UAA
M,EAAE,WAAW,OAAO,eAAe,IAAI,mBAAmB,QAAQ;AAMxE,UAAM,WAAW,0BAA0B,cAAc,GAAG,OAAO,EAAE,eAAe,CAAC;AACrF,QAAI,SAAS,WAAW,QAAQ;AAC9B,cAAQ,OAAO,MAAM,kCAAkC,SAAS,UAAU,MAAM;AAAA,CAAK;AACrF;AAAA,IACF;AAEA,UAAM,SAAS,WAAW,SAAS;AACnC,UAAM,SAAS,IAAI,aAAa,SAAS;AACzC,UAAM,UAAU,MAAM,OAAO,cAAc;AAE3C,QAAI;AACJ,QAAI;AACF,eAAS,aAAa,OAAO,CAAC,aAAa,gBAAgB,MAAM,GAAG,EAAE,UAAU,QAAQ,CAAC,EAAE,KAAK;AAAA,IAClG,QAAQ;AAAA,IAAuB;AAE/B,QAAI,SAAS;AACX,YAAM,OAAO,KAAK,sBAAsB;AAAA,QACtC,YAAY;AAAA,QACZ;AAAA,QACA;AAAA,QACA,aAAY,oBAAI,KAAK,GAAE,YAAY;AAAA,MACrC,CAAC;AAED,YAAM,gBAAgB,MAAM,OAAO,KAAK,YAAY,EAAE,YAAY,WAAW,OAAO,CAAC;AAErF,UAAI,cAAc,MAAM,cAAc,MAAM,MAAM;AAChD,YAAI,cAAc,KAAK,WAAW,UAAU;AAC1C,kBAAQ,OAAO,MAAM,yCAAyC,cAAc,KAAK,IAAI;AAAA,CAAK;AAAA,QAC5F;AACA,gBAAQ,OAAO,MAAM,cAAc,KAAK,IAAI;AAC5C;AAAA,MACF;AAAA,IACF;AAGA,UAAM,KAAK,aAAa,YAAY,SAAS,CAAC;AAC9C,iBAAa,EAAE;AACf,UAAM,WAAW,MAAM,qBAAqB,QAAQ,EAAE,OAAO,CAAC;AAC9D,QAAI,SAAS,KAAM,SAAQ,OAAO,MAAM,SAAS,IAAI;AAAA,EACvD,SAAS,OAAO;AACd,YAAQ,OAAO,MAAM,+BAAgC,MAAgB,OAAO;AAAA,CAAI;AAAA,EAClF;AACF;","names":[]}
+
{"version":3,"sources":["../src/context/injector.ts","../src/hooks/session-start.ts"],"sourcesContent":["/**\n * Context injector — assembles context from SQLite for hook injection.\n *\n * Queries sessions and spores from SQLite. For prompt-submit context,\n * semantic search is deferred to Phase 2 (requires daemon vector store).\n * If no data exists (zero-config), returns empty context gracefully.\n */\n\nimport { getDatabase } from '@myco/db/client.js';\nimport { listSessions } from '@myco/db/queries/sessions.js';\nimport { listSpores } from '@myco/db/queries/spores.js';\nimport type { MycoConfig } from '@myco/config/schema.js';\nimport {\n estimateTokens,\n CONTEXT_SESSION_PREVIEW_CHARS,\n CONTEXT_SPORE_PREVIEW_CHARS,\n PROMPT_CONTEXT_MIN_LENGTH,\n EXCLUDED_SPORE_STATUSES,\n} from '@myco/constants.js';\n\n// ---------------------------------------------------------------------------\n// Constants\n// ---------------------------------------------------------------------------\n\n/** Max recent sessions to include in context. */\nconst CONTEXT_SESSION_LIMIT = 10;\n\n/** Max sessions displayed after scoring. */\nconst CONTEXT_SESSION_DISPLAY_LIMIT = 5;\n\n/** Max spores to fetch for scoring. */\nconst CONTEXT_SPORE_FETCH_LIMIT = 20;\n\n/** Max spores displayed after scoring. */\nconst CONTEXT_SPORE_DISPLAY_LIMIT = 5;\n\n/** Default token budget for sessions layer. */\nconst DEFAULT_SESSIONS_BUDGET = 500;\n\n/** Default token budget for spores layer. */\nconst DEFAULT_SPORES_BUDGET = 300;\n\n/** Default token budget for team layer. */\nconst DEFAULT_TEAM_BUDGET = 200;\n\n/** Default total context max tokens. 
*/\nconst DEFAULT_CONTEXT_MAX_TOKENS = 1200;\n\n// ---------------------------------------------------------------------------\n// Types\n// ---------------------------------------------------------------------------\n\ninterface InjectionContext {\n branch?: string;\n}\n\ninterface InjectedContext {\n text: string;\n tokenEstimate: number;\n layers: {\n sessions: string;\n spores: string;\n team: string;\n };\n}\n\n// ---------------------------------------------------------------------------\n// Public API\n// ---------------------------------------------------------------------------\n\n/**\n * Build injected context from SQLite data.\n *\n * Returns empty context gracefully when no data exists (zero-config behavior).\n */\nexport async function buildInjectedContext(\n _config: MycoConfig,\n context: InjectionContext,\n): Promise<InjectedContext> {\n // Verify database is available — return empty if not\n try {\n getDatabase();\n } catch {\n return emptyContext();\n }\n\n // Fetch sessions and spores in parallel\n const [sessions, spores] = await Promise.all([\n listSessions({ limit: CONTEXT_SESSION_LIMIT }),\n listSpores({ limit: CONTEXT_SPORE_FETCH_LIMIT, status: 'active' }),\n ]);\n\n // Layer 1: Recent sessions\n const sessionsText = formatLayer(\n 'Recent Sessions',\n sessions.slice(0, CONTEXT_SESSION_DISPLAY_LIMIT).map((s) => {\n const title = s.title ?? s.id;\n const summary = (s.summary ?? '').slice(0, CONTEXT_SESSION_PREVIEW_CHARS);\n const branchLabel = s.branch === context.branch ? 
' (same branch)' : '';\n return `- **${title}**: ${summary}${branchLabel}`;\n }),\n DEFAULT_SESSIONS_BUDGET,\n );\n\n // Layer 2: Relevant spores (exclude superseded/archived)\n const filteredSpores = spores.filter((s) =>\n !EXCLUDED_SPORE_STATUSES.has(s.status),\n );\n const sporesText = formatLayer(\n 'Relevant Spores',\n filteredSpores.slice(0, CONTEXT_SPORE_DISPLAY_LIMIT).map((s) =>\n `- **${s.id}** (${s.observation_type}): ${s.content.slice(0, CONTEXT_SPORE_PREVIEW_CHARS)}`,\n ),\n DEFAULT_SPORES_BUDGET,\n );\n\n // Layer 3: Team activity (placeholder — populated in Phase 2)\n const teamText = formatLayer('Team Activity', [], DEFAULT_TEAM_BUDGET);\n\n // Enforce total max_tokens budget\n const allLayers = [sessionsText, sporesText, teamText].filter(Boolean);\n const parts: string[] = [];\n let totalTokens = 0;\n\n for (const layer of allLayers) {\n const layerTokens = estimateTokens(layer);\n if (totalTokens + layerTokens > DEFAULT_CONTEXT_MAX_TOKENS) break;\n parts.push(layer);\n totalTokens += layerTokens;\n }\n\n const fullText = parts.join('\\n\\n');\n\n return {\n text: fullText,\n tokenEstimate: totalTokens,\n layers: {\n sessions: sessionsText,\n spores: sporesText,\n team: teamText,\n },\n };\n}\n\n/**\n * Build per-prompt context using semantic search on spores.\n *\n * Semantic search via the daemon's in-process vector store is deferred to\n * Phase 2. For now, returns empty context. 
The hook (`user-prompt-submit`)\n * routes through the daemon API at `/context/prompt`, which will implement\n * vector search when ready.\n */\nexport async function buildPromptContext(\n prompt: string,\n _config: MycoConfig,\n): Promise<InjectedContext> {\n if (prompt.length < PROMPT_CONTEXT_MIN_LENGTH) {\n return emptyContext();\n }\n\n // Per-prompt semantic search deferred to Phase 2 (requires daemon vector store)\n return emptyContext();\n}\n\n// ---------------------------------------------------------------------------\n// Helpers\n// ---------------------------------------------------------------------------\n\nfunction emptyContext(): InjectedContext {\n return {\n text: '',\n tokenEstimate: 0,\n layers: { sessions: '', spores: '', team: '' },\n };\n}\n\nfunction formatLayer(heading: string, items: string[], budget: number): string {\n if (items.length === 0) return '';\n\n let text = `### ${heading}\\n`;\n let currentTokens = estimateTokens(text);\n\n for (const item of items) {\n const itemTokens = estimateTokens(item);\n if (currentTokens + itemTokens > budget) break;\n text += item + '\\n';\n currentTokens += itemTokens;\n }\n\n return text.trim();\n}\n","import { DaemonClient } from './client.js';\nimport { readStdin } from './read-stdin.js';\nimport { normalizeHookInput } from './normalize.js';\nimport { evaluateSessionStartRules } from './capture-rules.js';\nimport { readTranscriptMeta } from './transcript-meta.js';\nimport { loadManifests } from '../symbionts/detect.js';\nimport { loadConfig } from '../config/loader.js';\nimport { buildInjectedContext } from '../context/injector.js';\nimport { initDatabase, vaultDbPath } from '../db/client.js';\nimport { createSchema } from '../db/schema.js';\nimport { resolveVaultDir } from '../vault/resolve.js';\nimport { execFileSync } from 'node:child_process';\nimport fs from 'node:fs';\nimport path from 'node:path';\n\nexport async function main() {\n const VAULT_DIR = resolveVaultDir();\n if 
(!fs.existsSync(path.join(VAULT_DIR, 'myco.yaml'))) return;\n\n try {\n const rawInput = JSON.parse(await readStdin());\n const { sessionId, agent, transcriptPath } = normalizeHookInput(rawInput);\n\n // Apply session_start capture rules BEFORE registering the session.\n // For Codex ephemeral sub-invocations (title generation, etc.) this\n // structural drop prevents the phantom row from ever being created,\n // rather than creating it and cascade-deleting at user_prompt time.\n // Read the transcript's session_meta for rules that inspect it\n // (e.g., detecting sub-agent thread spawns via source.subagent).\n const transcriptMeta = transcriptPath ? readTranscriptMeta(transcriptPath) : undefined;\n const decision = evaluateSessionStartRules(loadManifests(), agent, {\n transcriptPath,\n transcriptMeta: transcriptMeta ?? undefined,\n });\n if (decision.action === 'drop') {\n process.stderr.write(`[myco] session-start: dropped (${decision.reason ?? 'rule'})\\n`);\n return;\n }\n\n const config = loadConfig(VAULT_DIR);\n const client = new DaemonClient(VAULT_DIR);\n const healthy = await client.ensureRunning();\n\n let branch: string | undefined;\n try {\n branch = execFileSync('git', ['rev-parse', '--abbrev-ref', 'HEAD'], { encoding: 'utf-8' }).trim();\n } catch { /* not a git repo */ }\n\n if (healthy) {\n await client.post('/sessions/register', {\n session_id: sessionId,\n agent,\n branch,\n started_at: new Date().toISOString(),\n });\n\n const contextResult = await client.post('/context', { session_id: sessionId, branch });\n\n if (contextResult.ok && contextResult.data?.text) {\n if (contextResult.data.source === 'digest') {\n process.stderr.write(`[myco] Injecting digest extract (tier ${contextResult.data.tier})\\n`);\n }\n process.stdout.write(contextResult.data.text);\n return;\n }\n }\n\n // Degraded: local SQLite context only\n const db = initDatabase(vaultDbPath(VAULT_DIR));\n createSchema(db);\n const injected = await buildInjectedContext(config, { branch 
});\n if (injected.text) process.stdout.write(injected.text);\n } catch (error) {\n process.stderr.write(`[myco] session-start error: ${(error as Error).message}\\n`);\n }\n}\n"],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAyBA,IAAM,wBAAwB;AAG9B,IAAM,gCAAgC;AAGtC,IAAM,4BAA4B;AAGlC,IAAM,8BAA8B;AAGpC,IAAM,0BAA0B;AAGhC,IAAM,wBAAwB;AAG9B,IAAM,sBAAsB;AAG5B,IAAM,6BAA6B;AA6BnC,eAAsB,qBACpB,SACA,SAC0B;AAE1B,MAAI;AACF,gBAAY;AAAA,EACd,QAAQ;AACN,WAAO,aAAa;AAAA,EACtB;AAGA,QAAM,CAAC,UAAU,MAAM,IAAI,MAAM,QAAQ,IAAI;AAAA,IAC3C,aAAa,EAAE,OAAO,sBAAsB,CAAC;AAAA,IAC7C,WAAW,EAAE,OAAO,2BAA2B,QAAQ,SAAS,CAAC;AAAA,EACnE,CAAC;AAGD,QAAM,eAAe;AAAA,IACnB;AAAA,IACA,SAAS,MAAM,GAAG,6BAA6B,EAAE,IAAI,CAAC,MAAM;AAC1D,YAAM,QAAQ,EAAE,SAAS,EAAE;AAC3B,YAAM,WAAW,EAAE,WAAW,IAAI,MAAM,GAAG,6BAA6B;AACxE,YAAM,cAAc,EAAE,WAAW,QAAQ,SAAS,mBAAmB;AACrE,aAAO,OAAO,KAAK,OAAO,OAAO,GAAG,WAAW;AAAA,IACjD,CAAC;AAAA,IACD;AAAA,EACF;AAGA,QAAM,iBAAiB,OAAO;AAAA,IAAO,CAAC,MACpC,CAAC,wBAAwB,IAAI,EAAE,MAAM;AAAA,EACvC;AACA,QAAM,aAAa;AAAA,IACjB;AAAA,IACA,eAAe,MAAM,GAAG,2BAA2B,EAAE;AAAA,MAAI,CAAC,MACxD,OAAO,EAAE,EAAE,OAAO,EAAE,gBAAgB,MAAM,EAAE,QAAQ,MAAM,GAAG,2BAA2B,CAAC;AAAA,IAC3F;AAAA,IACA;AAAA,EACF;AAGA,QAAM,WAAW,YAAY,iBAAiB,CAAC,GAAG,mBAAmB;AAGrE,QAAM,YAAY,CAAC,cAAc,YAAY,QAAQ,EAAE,OAAO,OAAO;AACrE,QAAM,QAAkB,CAAC;AACzB,MAAI,cAAc;AAElB,aAAW,SAAS,WAAW;AAC7B,UAAM,cAAc,eAAe,KAAK;AACxC,QAAI,cAAc,cAAc,2BAA4B;AAC5D,UAAM,KAAK,KAAK;AAChB,mBAAe;AAAA,EACjB;AAEA,QAAM,WAAW,MAAM,KAAK,MAAM;AAElC,SAAO;AAAA,IACL,MAAM;AAAA,IACN,eAAe;AAAA,IACf,QAAQ;AAAA,MACN,UAAU;AAAA,MACV,QAAQ;AAAA,MACR,MAAM;AAAA,IACR;AAAA,EACF;AACF;AA0BA,SAAS,eAAgC;AACvC,SAAO;AAAA,IACL,MAAM;AAAA,IACN,eAAe;AAAA,IACf,QAAQ,EAAE,UAAU,IAAI,QAAQ,IAAI,MAAM,GAAG;AAAA,EAC/C;AACF;AAEA,SAAS,YAAY,SAAiB,OAAiB,QAAwB;AAC7E,MAAI,MAAM,WAAW,EAAG,QAAO;AAE/B,MAAI,OAAO,OAAO,OAAO;AAAA;AACzB,MAAI,gBAAgB,eAAe,IAAI;AAEvC,aAAW,QAAQ,OAAO;AACxB,UAAM,aAAa,eAAe,IAAI;AACtC,QAAI,gBAAgB,aAAa,OAAQ;AACzC,YAAQ,OAAO;AACf,qBAAiB;AAAA,EACnB;AAEA,SAAO,KAAK,KAAK;AACnB;;;ACnLA,SAAS,oBAAoB;AAC7B,OAAO,QAAQ;
AACf,OAAO,UAAU;AAEjB,eAAsB,OAAO;AAC3B,QAAM,YAAY,gBAAgB;AAClC,MAAI,CAAC,GAAG,WAAW,KAAK,KAAK,WAAW,WAAW,CAAC,EAAG;AAEvD,MAAI;AACF,UAAM,WAAW,KAAK,MAAM,MAAM,UAAU,CAAC;AAC7C,UAAM,EAAE,WAAW,OAAO,eAAe,IAAI,mBAAmB,QAAQ;AAQxE,UAAM,iBAAiB,iBAAiB,mBAAmB,cAAc,IAAI;AAC7E,UAAM,WAAW,0BAA0B,cAAc,GAAG,OAAO;AAAA,MACjE;AAAA,MACA,gBAAgB,kBAAkB;AAAA,IACpC,CAAC;AACD,QAAI,SAAS,WAAW,QAAQ;AAC9B,cAAQ,OAAO,MAAM,kCAAkC,SAAS,UAAU,MAAM;AAAA,CAAK;AACrF;AAAA,IACF;AAEA,UAAM,SAAS,WAAW,SAAS;AACnC,UAAM,SAAS,IAAI,aAAa,SAAS;AACzC,UAAM,UAAU,MAAM,OAAO,cAAc;AAE3C,QAAI;AACJ,QAAI;AACF,eAAS,aAAa,OAAO,CAAC,aAAa,gBAAgB,MAAM,GAAG,EAAE,UAAU,QAAQ,CAAC,EAAE,KAAK;AAAA,IAClG,QAAQ;AAAA,IAAuB;AAE/B,QAAI,SAAS;AACX,YAAM,OAAO,KAAK,sBAAsB;AAAA,QACtC,YAAY;AAAA,QACZ;AAAA,QACA;AAAA,QACA,aAAY,oBAAI,KAAK,GAAE,YAAY;AAAA,MACrC,CAAC;AAED,YAAM,gBAAgB,MAAM,OAAO,KAAK,YAAY,EAAE,YAAY,WAAW,OAAO,CAAC;AAErF,UAAI,cAAc,MAAM,cAAc,MAAM,MAAM;AAChD,YAAI,cAAc,KAAK,WAAW,UAAU;AAC1C,kBAAQ,OAAO,MAAM,yCAAyC,cAAc,KAAK,IAAI;AAAA,CAAK;AAAA,QAC5F;AACA,gBAAQ,OAAO,MAAM,cAAc,KAAK,IAAI;AAC5C;AAAA,MACF;AAAA,IACF;AAGA,UAAM,KAAK,aAAa,YAAY,SAAS,CAAC;AAC9C,iBAAa,EAAE;AACf,UAAM,WAAW,MAAM,qBAAqB,QAAQ,EAAE,OAAO,CAAC;AAC9D,QAAI,SAAS,KAAM,SAAQ,OAAO,MAAM,SAAS,IAAI;AAAA,EACvD,SAAS,OAAO;AACd,YAAQ,OAAO,MAAM,+BAAgC,MAAgB,OAAO;AAAA,CAAI;AAAA,EAClF;AACF;","names":[]}
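The sourcemap above embeds the new `src/context/injector.ts`, whose `buildInjectedContext` packs whole context layers (sessions, spores, team) in priority order until a total token budget is exhausted. A minimal sketch of that loop, assuming a chars/4 `estimateTokens` heuristic (the real estimator lives in `@myco/constants.js`, and the `packLayers` name here is illustrative, not part of the package):

```javascript
// Sketch of the total-budget enforcement in buildInjectedContext.
// Assumption: estimateTokens is a rough chars/4 heuristic.
const DEFAULT_CONTEXT_MAX_TOKENS = 1200;

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Layers are included whole, in priority order; the loop stops as
// soon as the next layer would push the total past the budget.
function packLayers(layers, maxTokens = DEFAULT_CONTEXT_MAX_TOKENS) {
  const parts = [];
  let totalTokens = 0;
  for (const layer of layers.filter(Boolean)) {
    const layerTokens = estimateTokens(layer);
    if (totalTokens + layerTokens > maxTokens) break;
    parts.push(layer);
    totalTokens += layerTokens;
  }
  return { text: parts.join('\n\n'), tokenEstimate: totalTokens };
}
```

Note the design choice visible in the source: a layer is never truncated mid-way at this level; per-item trimming happens earlier, inside `formatLayer`, against each layer's own budget.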
@@ -2,25 +2,25 @@ import { createRequire as __cr } from 'node:module'; const require = __cr(import
 import {
   withEmbedding
 } from "./chunk-GFR542SM.js";
-import "./chunk-
+import "./chunk-N77K772N.js";
 import {
   parseStringFlag
 } from "./chunk-SAKJMNSR.js";
 import "./chunk-WYOE4IAX.js";
-import "./chunk-
+import "./chunk-KSXTNYXO.js";
 import {
   loadConfig,
   updateConfig
-} from "./chunk-
+} from "./chunk-MDEUXYJG.js";
 import "./chunk-MYX5NCRH.js";
-import "./chunk-
-import "./chunk-
+import "./chunk-CYBC2HZ3.js";
+import "./chunk-2CKDAFSX.js";
 import "./chunk-LPUQPDC2.js";
-import "./chunk-
+import "./chunk-W4VHC2ES.js";
 import "./chunk-E7NUADTQ.js";
-import "./chunk-
-import "./chunk-
-import "./chunk-
+import "./chunk-6LQIMRTC.js";
+import "./chunk-ODXLRR4U.js";
+import "./chunk-U6PF3YII.js";
 import "./chunk-PZUWP5VK.js";

 // src/cli/setup-llm.ts
@@ -79,4 +79,4 @@ async function run(args, vaultDir) {
 export {
   run
 };
-//# sourceMappingURL=setup-llm-
+//# sourceMappingURL=setup-llm-MQK557BB.js.map
@@ -61,6 +61,7 @@ prompt: >

 phases:
   - name: read-state
+    model: claude-haiku-4-5-20251001
     # Root phase — no dependsOn
     prompt: |
       Read the current vault state to determine what needs processing.
@@ -80,6 +81,7 @@ phases:
     readOnly: true

   - name: extract
+    model: claude-haiku-4-5-20251001
     dependsOn: [read-state]
     prompt: |
       Extract observations from unprocessed batches as spores.
@@ -132,6 +134,7 @@ phases:
     required: true

   - name: summarize
+    model: claude-haiku-4-5-20251001
     dependsOn: [read-state]
     prompt: |
       Update session titles and summaries for sessions touched during extraction.
@@ -186,6 +189,7 @@ phases:
     required: false

   - name: consolidate
+    model: claude-sonnet-4-6
     dependsOn: [extract]
     prompt: |
       Consolidate related spores into wisdom and clean up redundancy.
@@ -240,6 +244,7 @@ phases:
     required: false

   - name: graph
+    model: claude-sonnet-4-6
     dependsOn: [extract]
     prompt: |
       Build the knowledge graph: create entities, then link spores to them.
@@ -302,6 +307,7 @@ phases:
   # in the same wave via Promise.allSettled().

   - name: digest-assess
+    model: claude-sonnet-4-6
     dependsOn: [consolidate]
     prompt: |
       Assess current digest state and gather material for tier updates.
@@ -367,6 +373,7 @@ phases:
     required: true

   - name: digest-10000
+    model: claude-haiku-4-5-20251001
     dependsOn: [digest-assess]
     prompt: |
       Update digest tier 10000 — Full institutional knowledge.
@@ -399,6 +406,7 @@ phases:
     required: false

   - name: digest-5000
+    model: claude-haiku-4-5-20251001
     dependsOn: [digest-assess]
     prompt: |
       Update digest tier 5000 — Deep onboarding.
@@ -430,6 +438,7 @@ phases:
     required: false

   - name: digest-1500
+    model: claude-haiku-4-5-20251001
     dependsOn: [digest-assess]
     prompt: |
       Update digest tier 1500 — Executive briefing.
@@ -461,6 +470,7 @@ phases:
     required: false

   - name: report
+    model: claude-haiku-4-5-20251001
     dependsOn: [extract, summarize, consolidate, graph, digest-assess, digest-10000, digest-5000, digest-1500]
     prompt: |
       Summarize what was done across all phases.
@@ -1,18 +1,18 @@
 name: skill-evolve
 displayName: Skill Evolution
 description: >-
-  Evaluate and evolve existing Myco-managed skills
-
-
+  Evaluate and evolve existing Myco-managed skills. Assesses content
+  freshness, identifies merge and narrowness opportunities, and
+  autonomously consolidates the skill inventory.
 agent: myco-agent
 prompt: >-
-  Assess and
-
-
+  Assess and evolve skills that have new knowledge or structural
+  overlap. The instruction contains pre-filtered skills with their
+  content, new spore IDs, and pre-computed similarity analysis.
 isDefault: false
 model: claude-sonnet-4-6
-maxTurns:
-timeoutSeconds:
+maxTurns: 48
+timeoutSeconds: 1800
 schedule:
   enabled: false
   intervalSeconds: 900
@@ -21,75 +21,161 @@ schedule:
 preCondition: has-active-skills
 params:
   assess_interval_hours: 24
-  max_skills_per_run:
+  max_skills_per_run: 8
 phases:
-  - name:
+  - name: inventory
+    model: claude-haiku-4-5-20251001
     prompt: |
-      The instruction contains
-
-
-      1.
-
-
-
-
-
-
+      The instruction contains pre-computed structural analysis of
+      all active skills. Two mechanical signals are provided:
+
+      1. **Narrow skills** — skills with <2 H2 sections are flagged
+         as mechanically narrow. These are NOT broad enough for
+         standalone domain status.
+
+      2. **Overlap pairs** — skill pairs with similar descriptions
+         OR overlapping H2 headings are flagged with scores.
+
+      ## Your job: validate and assign targets
+
+      For each **mechanically narrow** skill:
+      - Confirm it is genuinely narrow (not a false positive, e.g.,
+        a debugging playbook with one long section is not narrow).
+      - Identify which broader skill it should be absorbed into.
+        Look at the heading lists to find the best domain match.
+
+      For each **overlap pair**:
+      - Confirm the overlap is real (not just shared vocabulary).
+      - If real, decide merge direction: which skill is the target
+        (broader, higher generation) and which is the source.
+
+      If no skills are flagged as narrow and no overlap pairs exist,
+      skip to storing an empty analysis.
+
+      ## Store results
+
+      Store via vault_set_state (key: skill-evolve-inventory) as:
+      {
+        "merge_candidates": [
+          { "source": "skill-name-a", "target": "skill-name-b",
+            "reason": "..." }
+        ],
+        "narrow_candidates": [
+          { "skill": "skill-name", "absorb_into": "broader-skill",
+            "reason": "..." }
+        ]
+      }
+
+      Only override the mechanical signals when you have clear
+      evidence that the flag is wrong. The default is to act on
+      the mechanical flags, not to find reasons to keep everything.

-
-
-
-
-
-
-
-
-
+      If the instruction says "No skills need assessment", report
+      skip via vault_report and finish.
+    tools:
+      - vault_skill_records
+      - vault_set_state
+      - vault_report
+    maxTurns: 8
+    required: true
+    readOnly: true

-
+  - name: assess
+    model: claude-sonnet-4-6
+    prompt: |
+      Read the inventory analysis from vault_state
+      (key: skill-evolve-inventory).
+
+      There are TWO sources of skills to assess:
+
+      **A. Skills with new knowledge** — listed in the instruction
+      with descriptions and new spore IDs (full content is NOT in
+      the instruction — read it on-demand). For each:
+      1. Read new spores via vault_spores. Understand what changed.
+      2. Only if the new spores suggest a procedural change, read
+         the skill's full content via vault_skill_records (action:
+         get, id: "<name>") and verify 2-3 code references via
+         vault_search_fts. Skip content reads for skills where
+         the new spores are clearly unrelated.
+      3. Check the inventory analysis: is this skill also flagged
+         for merge or narrowness?
+
+      **B. Inventory-flagged skills** — merge_candidates and
+      narrow_candidates from the inventory analysis. These may NOT
+      appear in the instruction (they may have no new knowledge).
+      For each flagged skill not already covered above:
+      1. Read its content via vault_skill_records (action: get).
+      2. Verify the inventory's merge/narrow recommendation by
+         reading both skills' content.
+
+      For ALL skills from both sources, classify with one of:
+      - CURRENT — still accurate, no changes needed.
+      - STALE — new knowledge changes specific steps, paths,
+        or gotchas. Note exactly WHAT is new.
+      - DEPRECATED — key code references are gone or the
+        procedure is no longer relevant. Note what's missing.
+      - MERGE — overlaps significantly with another skill
+        (from inventory analysis). Note the TARGET skill to
+        merge into.
+      - NARROW — too specific for standalone status (from
+        inventory analysis). Note the BROADER skill to absorb
+        into.
+
+      Bias toward CURRENT. A skill that is 90% accurate is
+      better left alone than rewritten with risk of losing detail.
+      Only classify MERGE/NARROW when the inventory analysis
+      supports it AND you agree after reading the content.
+
+      5. Update the skill's properties with the new watermark:
         vault_skill_records (action: update, id: <skill_id>,
-        properties: '{"last_assessed_at": <now>,
+          properties: '{"last_assessed_at": <now>,
+            "knowledge_watermark": <now>,
+            "last_classification": "<classification>"}')

-
-      (key: skill-evolve-classifications) as JSON
+      6. Store classifications via vault_set_state
+         (key: skill-evolve-classifications) as JSON:
+         [{ "skill_id": "...", "name": "...",
+            "classification": "...",
+            "target_skill": "..." (for MERGE/NARROW),
+            "details": "..." }]

       Report via vault_report.

-      If the instruction says "No skills need assessment"
-
+      If the instruction says "No skills need assessment" AND the
+      inventory has no merge/narrow candidates, report skip via
+      vault_report and finish.
     tools:
       - vault_spores
       - vault_search_fts
       - vault_skill_records
       - vault_set_state
       - vault_report
-    maxTurns:
+    maxTurns: 18
     required: true
+    dependsOn:
+      - inventory

-  - name:
+  - name: act
+    model: claude-haiku-4-5-20251001
     prompt: |
       Read classifications from vault_state
       (key: skill-evolve-classifications). Parse the JSON.

-      For each skill
+      For each classified skill, take the appropriate action:
+
+      ## CURRENT — No action. Skip.

       ## STALE — Targeted update

       1. Read the current skill content from vault_skill_records
          (action: get).
-
       2. START from the existing content as-is. Copy it verbatim.
-         Then make ONLY the specific changes needed
-         sections, update incorrect facts, add missing steps.
-
+         Then make ONLY the specific changes needed.
          The diff between old and new should be MINIMAL.
-
-         Protected frontmatter rules (vault_write_skill enforces
-         these and will REJECT writes that violate them):
+         Protected frontmatter rules (enforced by vault_write_skill):
          - user-invocable: copy EXACT value from existing skill
          - allowed-tools: copy EXACT value from existing skill
          - description: do not shorten (>10% reduction is rejected)
-
       3. Write via vault_write_skill with a rationale explaining
          what specifically changed.

@@ -98,17 +184,45 @@ phases:
       1. vault_skill_records (action: update, status: retired)
       2. vault_skill_records (action: delete)

-      ##
+      ## MERGE — Absorb into target skill
+
+      1. Read BOTH skills' full content from vault_skill_records
+         (action: get) — the source and the target.
+      2. Write MERGED content to the TARGET skill via
+         vault_write_skill. The merged skill should:
+         - Keep the target skill's name and frontmatter
+         - Incorporate the source skill's procedures as new sections
+         - Preserve all concrete file paths, code patterns, and
+           gotchas from both skills
+         - Have a coherent structure (not just concatenation)
+         Protected frontmatter rules apply to the target.
+      3. Rationale should explain: "Merged <source> into <target>:
+         <what was added>."
+      4. Delete the SOURCE skill: vault_skill_records (action: delete)
+         This cascades (lineage, usage, linked candidates).
+
+      ## NARROW — Absorb into broader skill
+
+      Same as MERGE: read both, write merged content to the broader
+      skill, delete the narrow one. The narrow skill becomes a section
+      within the broader skill.
+
+      ## Important
+
+      - Process MERGE/NARROW actions AFTER STALE updates, so you're
+        merging already-refreshed content.
+      - If vault_write_skill rejects a merge (e.g., dedup gate trips),
+        check the error and adjust content. If truly blocked, skip
+        and report the issue.

       Report all actions via vault_report.
     tools:
       - vault_write_skill
       - vault_skill_records
       - vault_skill_candidates
-      - vault_state
       - vault_search_fts
       - vault_report
-    maxTurns:
+    maxTurns: 18
     required: true
     dependsOn:
       - assess
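The inventory phase above hands its findings to the assess phase as a JSON blob stored under the `skill-evolve-inventory` state key. A minimal sketch of checking that payload's shape before acting on it — the `validateInventory` helper is hypothetical and not part of the package; only the field names (`merge_candidates`, `narrow_candidates`, `source`, `target`, `skill`, `absorb_into`) come from the prompt text:

```javascript
// Hypothetical shape check for the skill-evolve-inventory payload
// described in the inventory phase prompt. Returns true only when
// both candidate lists are arrays of well-formed entries.
function validateInventory(raw) {
  const data = JSON.parse(raw);
  const okMerge = Array.isArray(data.merge_candidates) &&
    data.merge_candidates.every((c) =>
      typeof c.source === 'string' && typeof c.target === 'string');
  const okNarrow = Array.isArray(data.narrow_candidates) &&
    data.narrow_candidates.every((c) =>
      typeof c.skill === 'string' && typeof c.absorb_into === 'string');
  return okMerge && okNarrow;
}
```

An empty analysis (`{"merge_candidates":[],"narrow_candidates":[]}`) passes, which matches the prompt's instruction to store an empty analysis when nothing is flagged.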