@goondocks/myco 0.4.2 → 0.4.3

@@ -12,7 +12,7 @@
  "source": {
  "source": "npm",
  "package": "@goondocks/myco",
- "version": "0.4.1"
+ "version": "0.4.2"
  },
  "description": "Collective agent intelligence — captures session knowledge and serves it back via MCP",
  "license": "MIT",
@@ -1,6 +1,6 @@
  {
  "name": "myco",
- "version": "0.4.2",
+ "version": "0.4.3",
  "description": "Collective agent intelligence — captures session knowledge and serves it back to your team via MCP",
  "author": {
  "name": "goondocks-co",
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@goondocks/myco",
- "version": "0.4.2",
+ "version": "0.4.3",
  "description": "Collective agent intelligence — Claude Code plugin",
  "type": "module",
  "main": "dist/index.js",
@@ -207,6 +207,10 @@ The vault should get sharper over time, not just bigger. Every session should le
  2. If you find relevant context, factor it into your recommendation
  3. After the decision is made, `myco_remember` the rationale

+ ## Reconfiguration
+
+ To change LLM providers, models, or digest settings on an existing vault, see `references/reconfiguration.md`. It covers the exact CLI commands, flag names, and order of operations (setup-llm → restart → rebuild if needed → verify).
+
  ## Maintenance

  For the full CLI reference with all flags, see `references/cli-usage.md`.
@@ -0,0 +1,92 @@
+ # Reconfiguration
+
+ Workflows for changing LLM providers, models, and digest settings on an existing vault. **Use the AskUserQuestion tool** to ask which settings to change — do not guess.
+
+ ## Changing LLM or Embedding Provider/Model
+
+ Follow this exact order:
+
+ ```bash
+ # 1. Detect what's available
+ node <plugin-root>/dist/src/cli.js detect-providers
+
+ # 2. Apply the change (use the correct --llm- or --embedding- prefixed flags)
+ node <plugin-root>/dist/src/cli.js setup-llm \
+   --llm-provider <provider> --llm-model <model> \
+   --embedding-provider <provider> --embedding-model <model>
+
+ # 3. ALWAYS restart daemon after any config change
+ node <plugin-root>/dist/src/cli.js restart
+
+ # 4. Only rebuild if the EMBEDDING model changed (not needed for LLM-only changes)
+ node <plugin-root>/dist/src/cli.js rebuild
+
+ # 5. Verify connectivity
+ node <plugin-root>/dist/src/cli.js verify
+ ```
+
+ ### Critical Flags
+
+ The `setup-llm` command uses `--llm-provider`, `--llm-model`, `--embedding-provider`, `--embedding-model` — NOT `--provider` or `--model`. Only pass flags for settings the user explicitly wants to change.
+
+ ### Order Matters
+
+ 1. `setup-llm` writes config
+ 2. `restart` loads the new config into the daemon
+ 3. `rebuild` re-embeds with the new embedding model (skip if embedding didn't change)
+ 4. `verify` confirms everything works
+
+ ### Embedding Model Warning
+
+ If the embedding model changed, tell the user: "Changing the embedding model requires a full vector index rebuild. This may take a few minutes."
+
+ ## Changing Digest Settings
+
+ ```bash
+ node <plugin-root>/dist/src/cli.js setup-digest \
+   --context-window <number> --inject-tier <tier>
+ node <plugin-root>/dist/src/cli.js restart
+ ```
+
+ For all available `setup-digest` flags (tiers, provider override, metabolism tuning, token budgets), see `cli-usage.md`.
+
+ ## Viewing Current Settings
+
+ ```bash
+ node <plugin-root>/dist/src/cli.js setup-llm --show
+ node <plugin-root>/dist/src/cli.js setup-digest --show
+ ```
+
+ ## Common Scenarios
+
+ ### "Change my LLM model" (same provider)
+
+ ```bash
+ node <plugin-root>/dist/src/cli.js setup-llm --llm-model qwen3.5:35b
+ node <plugin-root>/dist/src/cli.js restart
+ node <plugin-root>/dist/src/cli.js verify
+ ```
+
+ No rebuild needed — embedding didn't change.
+
+ ### "Switch from Ollama to LM Studio"
+
+ ```bash
+ node <plugin-root>/dist/src/cli.js detect-providers
+ node <plugin-root>/dist/src/cli.js setup-llm \
+   --llm-provider lm-studio --llm-model "qwen/qwen3.5-35b-a3b"
+ node <plugin-root>/dist/src/cli.js restart
+ node <plugin-root>/dist/src/cli.js verify
+ ```
+
+ ### "Change everything" (provider, model, and embedding)
+
+ ```bash
+ node <plugin-root>/dist/src/cli.js detect-providers
+ node <plugin-root>/dist/src/cli.js setup-llm \
+   --llm-provider ollama --llm-model qwen3.5:35b \
+   --embedding-provider ollama --embedding-model bge-m3
+ node <plugin-root>/dist/src/cli.js restart
+ node <plugin-root>/dist/src/cli.js rebuild
+ node <plugin-root>/dist/src/cli.js verify
+ ```
@@ -19,7 +19,7 @@ Run:
  node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js stats
  ```

- - If the command **succeeds** (exit code 0): tell the user "Myco is already configured" and invoke the `myco` skill to handle reconfiguration or display status. Stop here do not continue with the setup flow.
+ - If the command **succeeds** (exit code 0): the vault already exists. Tell the user "Myco is already configured at `<vault-path>`." Then invoke the `myco` skill using the Skill tool — the `myco` skill handles all reconfiguration, status checks, and ongoing management. **Stop here. Do not continue with the setup flow. Do not attempt reconfiguration yourself.**
  - If the command **fails** (exit code non-zero or vault not found): proceed to Step 2.

  ## Step 2: Detect System
@@ -50,11 +50,11 @@ Record: detected RAM (GB), recommended model, digest context window, and default

  ## Step 3: Ask Questions

- Ask one question at a time. Do not batch questions. Wait for each answer before asking the next.
+ **Use the AskUserQuestion tool for every question.** Present choices as selectable options. Do not ask questions in plain text — always use AskUserQuestion so the user can select from options. Wait for each answer before asking the next.

  ### Question 1: Vault Location

- Ask the user where to store the vault. Present three choices:
+ Use AskUserQuestion to ask the user where to store the vault. Present three choices:

  - **Project-local** — `.myco/` in the current directory
  - **Centralized** — `~/.myco/vaults/<project-name>/` (where `<project-name>` is the current directory's basename)