@goondocks/myco 0.6.4 → 0.9.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +2 -3
- package/.claude-plugin/plugin.json +3 -3
- package/CONTRIBUTING.md +37 -30
- package/README.md +64 -28
- package/bin/myco-run +2 -0
- package/dist/agent-run-EFICNTAU.js +34 -0
- package/dist/agent-run-EFICNTAU.js.map +1 -0
- package/dist/agent-tasks-RXJ7Z5NG.js +180 -0
- package/dist/agent-tasks-RXJ7Z5NG.js.map +1 -0
- package/dist/chunk-2T7RPVPP.js +116 -0
- package/dist/chunk-2T7RPVPP.js.map +1 -0
- package/dist/chunk-3K5WGSJ4.js +165 -0
- package/dist/chunk-3K5WGSJ4.js.map +1 -0
- package/dist/chunk-46PWOKSI.js +26 -0
- package/dist/chunk-46PWOKSI.js.map +1 -0
- package/dist/chunk-4LPQ26CK.js +277 -0
- package/dist/chunk-4LPQ26CK.js.map +1 -0
- package/dist/chunk-5PEUFJ6U.js +92 -0
- package/dist/chunk-5PEUFJ6U.js.map +1 -0
- package/dist/chunk-5VZ52A4T.js +136 -0
- package/dist/chunk-5VZ52A4T.js.map +1 -0
- package/dist/chunk-BUSP3OJB.js +103 -0
- package/dist/chunk-BUSP3OJB.js.map +1 -0
- package/dist/chunk-D7TYRPRM.js +7312 -0
- package/dist/chunk-D7TYRPRM.js.map +1 -0
- package/dist/chunk-DCXRSSBP.js +22 -0
- package/dist/chunk-DCXRSSBP.js.map +1 -0
- package/dist/chunk-E4VLWIJC.js +2 -0
- package/dist/chunk-FFAYUQ5N.js +39 -0
- package/dist/chunk-FFAYUQ5N.js.map +1 -0
- package/dist/chunk-IB76KGBY.js +2 -0
- package/dist/chunk-JMJJEQ3P.js +486 -0
- package/dist/chunk-JMJJEQ3P.js.map +1 -0
- package/dist/{chunk-N33KUCFP.js → chunk-JTYZRPX5.js} +1 -9
- package/dist/chunk-JTYZRPX5.js.map +1 -0
- package/dist/{chunk-NLUE6CYG.js → chunk-JYOOJCPQ.js} +33 -17
- package/dist/chunk-JYOOJCPQ.js.map +1 -0
- package/dist/{chunk-Z74SDEKE.js → chunk-KB4DGYIY.js} +91 -9
- package/dist/chunk-KB4DGYIY.js.map +1 -0
- package/dist/{chunk-ERG2IEWX.js → chunk-KH64DHOY.js} +3 -7413
- package/dist/chunk-KH64DHOY.js.map +1 -0
- package/dist/chunk-KV4OC4H3.js +498 -0
- package/dist/chunk-KV4OC4H3.js.map +1 -0
- package/dist/chunk-KYLDNM7H.js +66 -0
- package/dist/chunk-KYLDNM7H.js.map +1 -0
- package/dist/chunk-LPUQPDC2.js +19 -0
- package/dist/chunk-LPUQPDC2.js.map +1 -0
- package/dist/chunk-M5XWW7UI.js +97 -0
- package/dist/chunk-M5XWW7UI.js.map +1 -0
- package/dist/chunk-MHSCMET3.js +275 -0
- package/dist/chunk-MHSCMET3.js.map +1 -0
- package/dist/chunk-MYX5NCRH.js +45 -0
- package/dist/chunk-MYX5NCRH.js.map +1 -0
- package/dist/chunk-OXZSXYAT.js +877 -0
- package/dist/chunk-OXZSXYAT.js.map +1 -0
- package/dist/chunk-PB6TOLRQ.js +35 -0
- package/dist/chunk-PB6TOLRQ.js.map +1 -0
- package/dist/chunk-PT5IC642.js +162 -0
- package/dist/chunk-PT5IC642.js.map +1 -0
- package/dist/chunk-QIK2XSDQ.js +187 -0
- package/dist/chunk-QIK2XSDQ.js.map +1 -0
- package/dist/chunk-RJ6ZQKG5.js +26 -0
- package/dist/chunk-RJ6ZQKG5.js.map +1 -0
- package/dist/{chunk-YIQLYIHW.js → chunk-TRUJLI6K.js} +29 -43
- package/dist/chunk-TRUJLI6K.js.map +1 -0
- package/dist/chunk-U3IBO3O3.js +41 -0
- package/dist/chunk-U3IBO3O3.js.map +1 -0
- package/dist/{chunk-7WHF2OIZ.js → chunk-UBZPD4HN.js} +25 -7
- package/dist/chunk-UBZPD4HN.js.map +1 -0
- package/dist/{chunk-HIN3UVOG.js → chunk-V7XG6V6C.js} +20 -11
- package/dist/chunk-V7XG6V6C.js.map +1 -0
- package/dist/chunk-WGTCA2NU.js +84 -0
- package/dist/chunk-WGTCA2NU.js.map +1 -0
- package/dist/{chunk-O6PERU7U.js → chunk-XNOCTDHF.js} +2 -2
- package/dist/chunk-YDN4OM33.js +80 -0
- package/dist/chunk-YDN4OM33.js.map +1 -0
- package/dist/cli-ODLFRIYS.js +128 -0
- package/dist/cli-ODLFRIYS.js.map +1 -0
- package/dist/client-EYOTW3JU.js +19 -0
- package/dist/client-MXRNQ5FI.js +13 -0
- package/dist/{config-IBS6KOLQ.js → config-UR5BSGVX.js} +21 -34
- package/dist/config-UR5BSGVX.js.map +1 -0
- package/dist/detect-H5OPI7GD.js +17 -0
- package/dist/detect-H5OPI7GD.js.map +1 -0
- package/dist/detect-providers-Q42OD4OS.js +26 -0
- package/dist/detect-providers-Q42OD4OS.js.map +1 -0
- package/dist/doctor-JLKTXDEH.js +258 -0
- package/dist/doctor-JLKTXDEH.js.map +1 -0
- package/dist/executor-ONSDHPGX.js +1441 -0
- package/dist/executor-ONSDHPGX.js.map +1 -0
- package/dist/init-6GWY345B.js +198 -0
- package/dist/init-6GWY345B.js.map +1 -0
- package/dist/init-wizard-UONLDYLI.js +294 -0
- package/dist/init-wizard-UONLDYLI.js.map +1 -0
- package/dist/llm-BV3QNVRD.js +17 -0
- package/dist/llm-BV3QNVRD.js.map +1 -0
- package/dist/loader-SH67XD54.js +28 -0
- package/dist/loader-SH67XD54.js.map +1 -0
- package/dist/loader-XVXKZZDH.js +18 -0
- package/dist/loader-XVXKZZDH.js.map +1 -0
- package/dist/{chunk-H7PRCVGQ.js → logs-QZVYF6FP.js} +74 -5
- package/dist/logs-QZVYF6FP.js.map +1 -0
- package/dist/main-BMCL7CPO.js +4393 -0
- package/dist/main-BMCL7CPO.js.map +1 -0
- package/dist/openai-embeddings-C265WRNK.js +14 -0
- package/dist/openai-embeddings-C265WRNK.js.map +1 -0
- package/dist/openrouter-U6VFCRX2.js +14 -0
- package/dist/openrouter-U6VFCRX2.js.map +1 -0
- package/dist/post-compact-OWFSOITU.js +26 -0
- package/dist/post-compact-OWFSOITU.js.map +1 -0
- package/dist/post-tool-use-DOUM7CGQ.js +56 -0
- package/dist/post-tool-use-DOUM7CGQ.js.map +1 -0
- package/dist/post-tool-use-failure-SG3C7PE6.js +28 -0
- package/dist/post-tool-use-failure-SG3C7PE6.js.map +1 -0
- package/dist/pre-compact-3J33CHXQ.js +25 -0
- package/dist/pre-compact-3J33CHXQ.js.map +1 -0
- package/dist/provider-check-3WBPZADE.js +12 -0
- package/dist/provider-check-3WBPZADE.js.map +1 -0
- package/dist/registry-J4XTWARS.js +25 -0
- package/dist/registry-J4XTWARS.js.map +1 -0
- package/dist/resolution-events-TFEQPVKS.js +12 -0
- package/dist/resolution-events-TFEQPVKS.js.map +1 -0
- package/dist/resolve-3FEUV462.js +9 -0
- package/dist/resolve-3FEUV462.js.map +1 -0
- package/dist/{restart-XCMILOL5.js → restart-2VM33WOB.js} +10 -6
- package/dist/{restart-XCMILOL5.js.map → restart-2VM33WOB.js.map} +1 -1
- package/dist/search-ZGQR5MDE.js +91 -0
- package/dist/search-ZGQR5MDE.js.map +1 -0
- package/dist/{server-6UDN35QN.js → server-6KMBJCHZ.js} +308 -517
- package/dist/server-6KMBJCHZ.js.map +1 -0
- package/dist/session-Z2FXDDG6.js +68 -0
- package/dist/session-Z2FXDDG6.js.map +1 -0
- package/dist/session-end-FLVX32LE.js +38 -0
- package/dist/session-end-FLVX32LE.js.map +1 -0
- package/dist/session-start-UCLK7PXE.js +169 -0
- package/dist/session-start-UCLK7PXE.js.map +1 -0
- package/dist/setup-digest-4KDSXAIV.js +15 -0
- package/dist/setup-digest-4KDSXAIV.js.map +1 -0
- package/dist/setup-llm-GKMCHURK.js +81 -0
- package/dist/setup-llm-GKMCHURK.js.map +1 -0
- package/dist/src/agent/definitions/agent.yaml +35 -0
- package/dist/src/agent/definitions/tasks/digest-only.yaml +84 -0
- package/dist/src/agent/definitions/tasks/extract-only.yaml +87 -0
- package/dist/src/agent/definitions/tasks/full-intelligence.yaml +472 -0
- package/dist/src/agent/definitions/tasks/graph-maintenance.yaml +92 -0
- package/dist/src/agent/definitions/tasks/review-session.yaml +132 -0
- package/dist/src/agent/definitions/tasks/supersession-sweep.yaml +86 -0
- package/dist/src/agent/definitions/tasks/title-summary.yaml +88 -0
- package/dist/src/agent/prompts/agent.md +121 -0
- package/dist/src/agent/prompts/orchestrator.md +91 -0
- package/dist/src/cli.js +1 -8
- package/dist/src/cli.js.map +1 -1
- package/dist/src/daemon/main.js +1 -8
- package/dist/src/daemon/main.js.map +1 -1
- package/dist/src/hooks/post-tool-use.js +3 -50
- package/dist/src/hooks/post-tool-use.js.map +1 -1
- package/dist/src/hooks/session-end.js +3 -32
- package/dist/src/hooks/session-end.js.map +1 -1
- package/dist/src/hooks/session-start.js +2 -8
- package/dist/src/hooks/session-start.js.map +1 -1
- package/dist/src/hooks/stop.js +3 -42
- package/dist/src/hooks/stop.js.map +1 -1
- package/dist/src/hooks/user-prompt-submit.js +3 -53
- package/dist/src/hooks/user-prompt-submit.js.map +1 -1
- package/dist/src/mcp/server.js +1 -8
- package/dist/src/mcp/server.js.map +1 -1
- package/dist/src/prompts/digest-system.md +1 -1
- package/dist/src/symbionts/manifests/claude-code.yaml +16 -0
- package/dist/src/symbionts/manifests/cursor.yaml +14 -0
- package/dist/stats-IUJPZSVZ.js +94 -0
- package/dist/stats-IUJPZSVZ.js.map +1 -0
- package/dist/stop-XRQLLXST.js +42 -0
- package/dist/stop-XRQLLXST.js.map +1 -0
- package/dist/stop-failure-2CAJJKRG.js +26 -0
- package/dist/stop-failure-2CAJJKRG.js.map +1 -0
- package/dist/subagent-start-MWWQTZMQ.js +26 -0
- package/dist/subagent-start-MWWQTZMQ.js.map +1 -0
- package/dist/subagent-stop-PJXYGRXB.js +28 -0
- package/dist/subagent-stop-PJXYGRXB.js.map +1 -0
- package/dist/task-completed-4LFRJVGI.js +27 -0
- package/dist/task-completed-4LFRJVGI.js.map +1 -0
- package/dist/ui/assets/index-DZrElonz.js +744 -0
- package/dist/ui/assets/index-TkeiYbZB.css +1 -0
- package/dist/ui/favicon.svg +7 -7
- package/dist/ui/fonts/Inter-Variable.woff2 +0 -0
- package/dist/ui/fonts/JetBrainsMono-Variable.woff2 +0 -0
- package/dist/ui/fonts/Newsreader-Italic-Variable.woff2 +0 -0
- package/dist/ui/fonts/Newsreader-Variable.woff2 +0 -0
- package/dist/ui/index.html +2 -2
- package/dist/user-prompt-submit-KSM3AR6P.js +59 -0
- package/dist/user-prompt-submit-KSM3AR6P.js.map +1 -0
- package/dist/{verify-TOWQHPBX.js → verify-UDAYVX37.js} +17 -22
- package/dist/verify-UDAYVX37.js.map +1 -0
- package/dist/{version-36RVCQA6.js → version-KLBN4HZT.js} +3 -4
- package/dist/version-KLBN4HZT.js.map +1 -0
- package/hooks/hooks.json +82 -5
- package/package.json +6 -3
- package/skills/myco/SKILL.md +10 -10
- package/skills/myco/references/cli-usage.md +15 -13
- package/skills/myco/references/vault-status.md +3 -3
- package/skills/myco/references/wisdom.md +4 -4
- package/skills/myco-curate/SKILL.md +86 -0
- package/dist/chunk-2ZIBCEYO.js +0 -113
- package/dist/chunk-2ZIBCEYO.js.map +0 -1
- package/dist/chunk-4RMSHZE4.js +0 -107
- package/dist/chunk-4RMSHZE4.js.map +0 -1
- package/dist/chunk-4XVKZ3WA.js +0 -1078
- package/dist/chunk-4XVKZ3WA.js.map +0 -1
- package/dist/chunk-6FQISQNA.js +0 -61
- package/dist/chunk-6FQISQNA.js.map +0 -1
- package/dist/chunk-7WHF2OIZ.js.map +0 -1
- package/dist/chunk-ERG2IEWX.js.map +0 -1
- package/dist/chunk-FPRXMJLT.js +0 -56
- package/dist/chunk-FPRXMJLT.js.map +0 -1
- package/dist/chunk-GENQ5QGP.js +0 -37
- package/dist/chunk-GENQ5QGP.js.map +0 -1
- package/dist/chunk-H7PRCVGQ.js.map +0 -1
- package/dist/chunk-HIN3UVOG.js.map +0 -1
- package/dist/chunk-HYVT345Y.js +0 -159
- package/dist/chunk-HYVT345Y.js.map +0 -1
- package/dist/chunk-J4D4CROB.js +0 -143
- package/dist/chunk-J4D4CROB.js.map +0 -1
- package/dist/chunk-MDLSAFPP.js +0 -99
- package/dist/chunk-MDLSAFPP.js.map +0 -1
- package/dist/chunk-N33KUCFP.js.map +0 -1
- package/dist/chunk-NL6WQO56.js +0 -65
- package/dist/chunk-NL6WQO56.js.map +0 -1
- package/dist/chunk-NLUE6CYG.js.map +0 -1
- package/dist/chunk-P723N2LP.js +0 -147
- package/dist/chunk-P723N2LP.js.map +0 -1
- package/dist/chunk-QLUE3BUL.js +0 -161
- package/dist/chunk-QLUE3BUL.js.map +0 -1
- package/dist/chunk-QN4W3JUA.js +0 -43
- package/dist/chunk-QN4W3JUA.js.map +0 -1
- package/dist/chunk-RGVBGTD6.js +0 -21
- package/dist/chunk-RGVBGTD6.js.map +0 -1
- package/dist/chunk-TWSTAVLO.js +0 -132
- package/dist/chunk-TWSTAVLO.js.map +0 -1
- package/dist/chunk-UP4P4OAA.js +0 -4423
- package/dist/chunk-UP4P4OAA.js.map +0 -1
- package/dist/chunk-YIQLYIHW.js.map +0 -1
- package/dist/chunk-YTFXA4RX.js +0 -86
- package/dist/chunk-YTFXA4RX.js.map +0 -1
- package/dist/chunk-Z74SDEKE.js.map +0 -1
- package/dist/cli-IHILSS6N.js +0 -97
- package/dist/cli-IHILSS6N.js.map +0 -1
- package/dist/client-AGFNR2S4.js +0 -12
- package/dist/config-IBS6KOLQ.js.map +0 -1
- package/dist/curate-3D4GHKJH.js +0 -78
- package/dist/curate-3D4GHKJH.js.map +0 -1
- package/dist/detect-providers-XEP4QA3R.js +0 -35
- package/dist/detect-providers-XEP4QA3R.js.map +0 -1
- package/dist/digest-7HLJXL77.js +0 -85
- package/dist/digest-7HLJXL77.js.map +0 -1
- package/dist/init-ARQ53JOR.js +0 -109
- package/dist/init-ARQ53JOR.js.map +0 -1
- package/dist/logs-IENORIYR.js +0 -84
- package/dist/logs-IENORIYR.js.map +0 -1
- package/dist/main-6AGPIMH2.js +0 -5715
- package/dist/main-6AGPIMH2.js.map +0 -1
- package/dist/rebuild-Q2ACEB6F.js +0 -64
- package/dist/rebuild-Q2ACEB6F.js.map +0 -1
- package/dist/reprocess-CDEFGQOV.js +0 -79
- package/dist/reprocess-CDEFGQOV.js.map +0 -1
- package/dist/search-7W25SKCB.js +0 -120
- package/dist/search-7W25SKCB.js.map +0 -1
- package/dist/server-6UDN35QN.js.map +0 -1
- package/dist/session-F326AWCH.js +0 -44
- package/dist/session-F326AWCH.js.map +0 -1
- package/dist/session-start-K6IGAC7H.js +0 -192
- package/dist/session-start-K6IGAC7H.js.map +0 -1
- package/dist/setup-digest-X5PN27F4.js +0 -15
- package/dist/setup-llm-S5OHQJXK.js +0 -15
- package/dist/src/prompts/classification.md +0 -43
- package/dist/stats-TTSDXGJV.js +0 -58
- package/dist/stats-TTSDXGJV.js.map +0 -1
- package/dist/templates-XPRBOWCE.js +0 -38
- package/dist/templates-XPRBOWCE.js.map +0 -1
- package/dist/ui/assets/index-08wKT7wS.css +0 -1
- package/dist/ui/assets/index-CMSMi4Jb.js +0 -369
- package/dist/verify-TOWQHPBX.js.map +0 -1
- package/skills/setup/SKILL.md +0 -174
- package/skills/setup/references/model-recommendations.md +0 -83
- /package/dist/{client-AGFNR2S4.js.map → chunk-E4VLWIJC.js.map} +0 -0
- /package/dist/{setup-digest-X5PN27F4.js.map → chunk-IB76KGBY.js.map} +0 -0
- /package/dist/{chunk-O6PERU7U.js.map → chunk-XNOCTDHF.js.map} +0 -0
- /package/dist/{setup-llm-S5OHQJXK.js.map → client-EYOTW3JU.js.map} +0 -0
- /package/dist/{version-36RVCQA6.js.map → client-MXRNQ5FI.js.map} +0 -0
package/dist/verify-TOWQHPBX.js.map
DELETED
@@ -1 +0,0 @@
{"version":3,"sources":["../src/cli/verify.ts"],"sourcesContent":["import { loadConfig } from '../config/loader.js';\nimport { createLlmProvider, createEmbeddingProvider } from '../intelligence/llm.js';\n\nconst VERIFY_LLM_PROMPT = 'Respond with OK';\nconst VERIFY_EMBEDDING_INPUT = 'test';\n\nexport async function run(_args: string[], vaultDir: string): Promise<void> {\n const config = loadConfig(vaultDir);\n const { llm: llmConfig, embedding: embeddingConfig } = config.intelligence;\n\n let llmOk = false;\n let embeddingOk = false;\n let embeddingDimensions = 0;\n\n // Test LLM\n try {\n const llm = createLlmProvider(llmConfig);\n const response = await llm.summarize(VERIFY_LLM_PROMPT);\n llmOk = response.text.length > 0;\n } catch (err) {\n llmOk = false;\n }\n\n const llmLabel = `LLM (${llmConfig.provider} / ${llmConfig.model}):`;\n console.log(`${llmLabel.padEnd(40)} ${llmOk ? 'OK' : 'FAIL'}`);\n\n // Test embedding\n try {\n const emb = createEmbeddingProvider(embeddingConfig);\n const response = await emb.embed(VERIFY_EMBEDDING_INPUT);\n embeddingDimensions = response.dimensions;\n embeddingOk = embeddingDimensions > 0;\n } catch (err) {\n embeddingOk = false;\n }\n\n const embLabel = `Embedding (${embeddingConfig.provider} / ${embeddingConfig.model}):`;\n const embStatus = embeddingOk ? `OK (${embeddingDimensions} dimensions)` : 'FAIL';\n console.log(`${embLabel.padEnd(40)} ${embStatus}`);\n\n if (!llmOk || !embeddingOk) {\n process.exit(1);\n }\n}\n"],"mappings":";;;;;;;;;;;;;;AAGA,IAAM,oBAAoB;AAC1B,IAAM,yBAAyB;AAE/B,eAAsB,IAAI,OAAiB,UAAiC;AAC1E,QAAM,SAAS,WAAW,QAAQ;AAClC,QAAM,EAAE,KAAK,WAAW,WAAW,gBAAgB,IAAI,OAAO;AAE9D,MAAI,QAAQ;AACZ,MAAI,cAAc;AAClB,MAAI,sBAAsB;AAG1B,MAAI;AACF,UAAM,MAAM,kBAAkB,SAAS;AACvC,UAAM,WAAW,MAAM,IAAI,UAAU,iBAAiB;AACtD,YAAQ,SAAS,KAAK,SAAS;AAAA,EACjC,SAAS,KAAK;AACZ,YAAQ;AAAA,EACV;AAEA,QAAM,WAAW,QAAQ,UAAU,QAAQ,MAAM,UAAU,KAAK;AAChE,UAAQ,IAAI,GAAG,SAAS,OAAO,EAAE,CAAC,IAAI,QAAQ,OAAO,MAAM,EAAE;AAG7D,MAAI;AACF,UAAM,MAAM,wBAAwB,eAAe;AACnD,UAAM,WAAW,MAAM,IAAI,MAAM,sBAAsB;AACvD,0BAAsB,SAAS;AAC/B,kBAAc,sBAAsB;AAAA,EACtC,SAAS,KAAK;AACZ,kBAAc;AAAA,EAChB;AAEA,QAAM,WAAW,cAAc,gBAAgB,QAAQ,MAAM,gBAAgB,KAAK;AAClF,QAAM,YAAY,cAAc,OAAO,mBAAmB,iBAAiB;AAC3E,UAAQ,IAAI,GAAG,SAAS,OAAO,EAAE,CAAC,IAAI,SAAS,EAAE;AAEjD,MAAI,CAAC,SAAS,CAAC,aAAa;AAC1B,YAAQ,KAAK,CAAC;AAAA,EAChB;AACF;","names":[]}
package/skills/setup/SKILL.md
DELETED
@@ -1,174 +0,0 @@
---
name: setup
description: >-
  Initialize Myco in a new project — guided first-time setup for vault,
  LLM provider, and intelligence backend
user-invocable: true
allowed-tools: Bash, AskUserQuestion, Skill
---

# Setup — Guided Myco Onboarding

This skill guides a first-time Myco setup from zero to a working vault. It detects the system's hardware and available providers, asks one question at a time, and delegates all configuration to CLI commands — never touching config files directly. If a vault already exists, it hands off to the `myco` skill for reconfiguration or status.

## Step 1: Check Vault Existence

Run:

```bash
node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js stats
```

- If the command **succeeds** (exit code 0): the vault already exists. Tell the user "Myco is already configured at `<vault-path>`." Then invoke the `myco` skill using the Skill tool — the `myco` skill handles all reconfiguration, status checks, and ongoing management. **Stop here. Do not continue with the setup flow. Do not attempt reconfiguration yourself.**
- If the command **fails** (exit code non-zero or vault not found): proceed to Step 2.

## Step 2: Detect System

Run both commands in parallel:

```bash
node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js detect-providers
```

Parse the JSON output. The result will list providers and their `available` field (boolean) and available models. Keep this data — you will use it in Step 3.

For RAM detection, run the appropriate command for the OS:

- **macOS:** `sysctl -n hw.memsize` — result is bytes; divide by `1073741824` (1024³) to get GB
- **Linux:** parse `/proc/meminfo` for the `MemTotal` line — result is in kB; divide by `1048576` to get GB

Use the RAM value to determine the recommended tier from `references/model-recommendations.md`:

| RAM | Processor Model | Digest Model | Digest Context | Inject Tier |
|-----|----------------|--------------|----------------|-------------|
| 64GB+ | `qwen3.5:latest` | `qwen3.5:35b` | 65536 | 3000 |
| 48GB | `qwen3.5:latest` | `qwen3.5:27b` | 32768 | 3000 |
| 32GB | `qwen3.5:4b` | `qwen3.5:latest` | 16384 | 1500 |
| 16GB | `qwen3.5:4b` | `qwen3.5:4b` | 8192 | 1500 |

Record: detected RAM (GB), recommended processor model, recommended digest model, digest context window, and default inject tier.

## Step 3: Ask Questions

**Use the AskUserQuestion tool for every question.** Present choices as selectable options. Do not ask questions in plain text — always use AskUserQuestion so the user can select from options. Wait for each answer before asking the next.

### Question 1: Vault Location

Use AskUserQuestion to ask the user where to store the vault. Present three choices:

- **Project-local** — `.myco/` in the current directory
- **Centralized** — `~/.myco/vaults/<project-name>/` (where `<project-name>` is the current directory's basename)
- **Custom path** — the user types a path

Record the resolved vault path.

### Question 2: Processor Model (extraction, summaries, titles)

Present the recommended processor model from the RAM table as the default. Show available models from the detected providers, grouped by provider.

Explain: "The processor model handles session extraction, summaries, and titles. Smaller, faster models work well here — speed matters more than depth."

If the recommended model is not installed:

- **Ollama:** offer to run `ollama pull <recommended-model>` before continuing.
- **LM Studio:** tell the user to download it from the model browser.

Record the chosen provider and processor model.

### Question 3: Digest Model (vault synthesis)

Present the recommended digest model from the RAM table as the default. Show available models from the detected providers.

Explain: "The digest model synthesizes your vault into context extracts. Larger models produce better results here — quality matters more than speed. This can be the same as the processor model on smaller machines."

If the recommended model is not installed, offer to pull/download as above.

Record the chosen provider and digest model.

### Question 4: Embedding Model

List embedding models from available providers. Exclude Anthropic — it does not support embeddings.

If no embedding models are installed:

- **Ollama:** offer to run `ollama pull bge-m3`. If the user accepts, run it before continuing.
- **LM Studio:** tell the user to search for and download an embedding model.

Recommend `bge-m3` as the default. Record the chosen embedding provider and model.

### Question 5: Inject Tier

Ask the user which coding agent they primarily use, then recommend an inject tier based on the agent's context window (see `references/model-recommendations.md`). Show all tiers:

| Tier | Description |
|------|-------------|
| 1500 | Executive briefing — fastest, lightest |
| 3000 | Team standup — balanced context |
| 5000 | Deep onboarding — good for 200K agents |
| 10000 | Institutional knowledge — richest, best for 1M+ agents |

Explain: "The inject tier controls how much vault context is injected at session start. Larger tiers give the agent more project history but use more of its context window. All tiers work regardless of local hardware."

Pre-select the default based on the agent's context window. Tell the user: "Agents can always request a different tier on-demand via the `myco_context` MCP tool."

Record the chosen inject tier.

## Step 4: Execute

Run the following commands in sequence, substituting the recorded values. Show each command before running it.

```bash
node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \
  --vault <chosen-vault-path> \
  --llm-provider <processor-provider> \
  --llm-model <processor-model> \
  --embedding-provider <embedding-provider> \
  --embedding-model <embedding-model>
```

```bash
node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js setup-digest \
  --provider <digest-provider> \
  --model <digest-model> \
  --context-window <digest-context-window-from-ram-table> \
  --inject-tier <chosen-inject-tier>
```

```bash
node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js verify
```

If any command fails, report the error and stop. Do not continue to the next command on failure. Show the full error output to the user and ask how to proceed.

## Step 5: Ollama Performance Tips

If the user is using Ollama, recommend adding these to their Ollama service configuration:

```
OLLAMA_FLASH_ATTENTION=1   # Required for KV cache quantization
OLLAMA_KV_CACHE_TYPE=q8_0  # Halves KV cache memory
```

Explain: "These settings halve the memory used for large context windows, making digest much more efficient. They're Ollama-wide settings — on macOS, add them to your Ollama launchd plist."

## Step 6: Report

Display a summary table:

| Setting | Value |
|---------|-------|
| Vault path | `<resolved path>` |
| Processor | `<provider>` / `<processor-model>` |
| Digest | `<provider>` / `<digest-model>` (context: `<context-window>`) |
| Embedding | `<embedding-provider>` / `<embedding-model>` |
| Inject tier | `<inject-tier>` |
| RAM detected | `<X>` GB |

Tell the user: "Myco is ready. Start a new session to begin capturing knowledge."

## Constraints

- All writes via CLI commands — never read or modify `myco.yaml` directly.
- All provider detection via `detect-providers` — no raw HTTP calls to provider APIs.
- One question at a time — do not batch questions or present them together.
- Three model choices in guided setup: processor, digest, and embedding.
package/skills/setup/references/model-recommendations.md
DELETED
@@ -1,83 +0,0 @@
# Model Recommendations

Hardware-based guidance for choosing models during Myco setup. Myco uses three model tiers that load simultaneously in Ollama.

## Three-Tier Architecture

| Tier | Purpose | Speed vs Quality |
|------|---------|-----------------|
| **Embedding** | Vector search, semantic similarity | Dedicated small model, always loaded |
| **Processor** | Extraction, summarization, titles, classification | Speed matters — fast model, 8K context |
| **Digest** | Synthesize vault knowledge into tiered extracts | Quality matters — large model, up to 65K context |

The processor and digest can be the same model on smaller machines. On larger machines, splitting them gives the best speed/quality balance — processor tasks complete in seconds instead of minutes.

## Recommended Configurations

| RAM | Processor Model | Digest Model | Digest Context | Inject Tier | Est. VRAM |
|-----|----------------|--------------|----------------|-------------|-----------|
| **64GB+** | `qwen3.5:latest` (~8B) | `qwen3.5:35b` (MoE) | 65536 | 3000 | ~35GB |
| **48GB** | `qwen3.5:latest` (~8B) | `qwen3.5:27b` | 32768 | 3000 | ~26GB |
| **32GB** | `qwen3.5:4b` | `qwen3.5:latest` (~8B) | 16384 | 1500 | ~11GB |
| **16GB** | `qwen3.5:4b` | `qwen3.5:4b` | 8192 | 1500 | ~6GB |

Embedding model (`bge-m3`, ~1.3GB) is included in all VRAM estimates.

When processor and digest use the same model (16GB tier), Ollama loads it once — no extra VRAM.

### Why Qwen 3.5?

Qwen 3.5 models offer strong instruction-following and synthesis quality on local hardware. The MoE variant (`35b`) runs efficiently on 64GB+ systems because only a subset of parameters activate per token. Any instruction-tuned model that handles JSON output works — prefer what the user already has loaded, but recommend Qwen 3.5 for new setups.

### Important: Reasoning Token Suppression

Qwen 3.5 models are reasoning models that generate `<think>` tokens before output. Myco automatically suppresses this via `reasoning: 'off'` on all LLM calls. No user configuration needed — this is handled in code via the `LLM_REASONING_MODE` constant.

### Ollama Performance Settings

Recommend users add these to their Ollama service configuration for best performance:

```
OLLAMA_FLASH_ATTENTION=1   # Required for KV cache quantization
OLLAMA_KV_CACHE_TYPE=q8_0  # Halves KV cache memory — makes large digest context affordable
```

These are system-wide Ollama settings (launchd plist on macOS, systemd on Linux), not Myco-controlled.

## Pulling Models

**Ollama:**
```bash
ollama pull qwen3.5:4b      # 4B variant
ollama pull qwen3.5:latest  # latest variant (~8B)
ollama pull qwen3.5:35b     # 35B MoE variant
ollama pull bge-m3          # embedding model
```

**LM Studio:** Search for `qwen3.5` in the model browser. Download the variants matching the RAM tier above.

## Embedding Model

Separate from the intelligence models. Anthropic does not support embeddings — only Ollama and LM Studio provide embedding models.

Recommended:
- `bge-m3` — strong multilingual embeddings, good default
- `nomic-embed-text` — lightweight alternative

## Inject Tier

Controls how much pre-computed context the agent receives at session start. All tiers are available regardless of local hardware — the local LLM can generate any tier. The default should be based on the **coding agent's context window**, not the local model.

| Agent Context Window | Default Tier | Rationale |
|---------------------|-------------|-----------|
| **1M+** (Opus 4.6) | 10000 | Rich context is cheap relative to the window |
| **200K** (Sonnet 4.6, Gemini) | 5000 | Good depth without crowding the agent's context |
| **128K** (GPT-4o, smaller models) | 3000 | Balanced — enough for key decisions and recent activity |
| **32K or less** | 1500 | Executive briefing only — preserve context for the task |

### Tier Descriptions

- **1500** — executive briefing (fastest, lightest)
- **3000** — team standup (recommended for most setups)
- **5000** — deep onboarding
- **10000** — institutional knowledge (richest, most context)
File without changes
File without changes
File without changes
File without changes
File without changes