@goondocks/myco 0.2.11 → 0.2.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -12,7 +12,7 @@
   "source": {
     "source": "npm",
     "package": "@goondocks/myco",
-    "version": "0.2.10"
+    "version": "0.2.11"
   },
   "description": "Collective agent intelligence — captures session knowledge and serves it back via MCP",
   "license": "MIT",
@@ -1,6 +1,6 @@
 {
   "name": "myco",
-  "version": "0.2.11",
+  "version": "0.2.12",
   "description": "Collective agent intelligence — captures session knowledge and serves it back to your team via MCP",
   "author": {
     "name": "goondocks-co",
package/commands/init.md CHANGED
@@ -7,35 +7,56 @@ description: Initialize Myco in the current project — sets up vault, config, a
 
 Guide the user through setup, then run the CLI to create the vault. **Do NOT create files manually — the CLI handles all vault creation, config writing, and env configuration.**
 
+**Ask each question one at a time using AskUserQuestion with selectable options.** Wait for the user's answer before proceeding to the next question. Do NOT combine multiple questions into one message.
+
 ## Step 1: Choose vault location
 
-Ask the user where they want the vault:
+Ask the user:
+
+**Question:** "Where would you like to store the Myco vault?"
+
+**Options:**
+- "In the project (.myco/)" — vault lives with the code, can be committed to git for team sharing
+- "Centralized (~/.myco/vaults/<project-name>/)" — vault stays outside the repo, good for public repos or personal use
+- "Custom path" — specify your own location
 
-> Where would you like to store the Myco vault?
->
-> 1. **In the project** (`.myco/`) — vault lives with the code, can be committed to git for team sharing
-> 2. **Centralized** (`~/.myco/vaults/<project-name>/`) — vault stays outside the repo, good for public repos or personal use
-> 3. **Custom path** — specify your own location
+If the user picks "Custom path", ask them to type the path.
 
-## Step 2: Choose intelligence backend
+## Step 2: Choose LLM provider
 
-Detect available providers by checking local endpoints:
+First, detect available providers by checking local endpoints:
 
 - **Ollama** — `curl -s http://localhost:11434/api/tags` — list model names
 - **LM Studio** — `curl -s http://localhost:1234/v1/models` — list model IDs
 - **Anthropic** — check if `ANTHROPIC_API_KEY` is set
 
-Show the user what's available and recommend:
-- **LLM**: `gpt-oss` on Ollama or LM Studio (best for structured JSON output)
-- **Embeddings**: `bge-m3` on Ollama (Anthropic does not support embeddings)
+Then ask the user:
+
+**Question:** "Which LLM provider for summarization?"
+
+**Options:** List only providers that are actually running, with recommended models noted. Example:
+- "Ollama — gpt-oss (recommended)"
+- "LM Studio — openai/gpt-oss-20b"
+- "Anthropic"
+
+After the user picks a provider, ask them to choose a specific model from the available models on that provider.
+
+## Step 3: Choose embedding provider
+
+Ask the user:
+
+**Question:** "Which embedding provider?"
+
+**Options:** List only providers that are running and support embeddings (Anthropic does not). Example:
+- "Ollama — bge-m3 (recommended)"
+- "LM Studio — text-embedding-bge-m3"
 
-Let the user choose their LLM provider/model and embedding provider/model.
+After the user picks a provider, ask them to choose a specific embedding model.
 
-If the recommended model isn't available, offer to pull it:
-- **Ollama**: `ollama pull <model>`
-- **LM Studio**: `lms get <owner/model>`
+If the recommended embedding model isn't available, offer to pull it:
+- **Ollama**: `ollama pull bge-m3`
 
-## Step 3: Run the CLI
+## Step 4: Run the CLI
 
 Run the init command with all gathered inputs. The CLI creates the vault, writes config, sets up the FTS index, and configures `MYCO_VAULT_DIR` if the vault is external:
 
@@ -50,7 +71,7 @@ node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \
   --embedding-url <base-url>
 ```
 
-## Step 4: Verify
+## Step 5: Verify
 
 After the CLI completes, confirm providers are reachable:
 
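The provider-detection step in the init.md diff can be exercised directly from a shell. A minimal sketch: the `probe_provider` helper name is hypothetical, but the endpoint URLs and the `ANTHROPIC_API_KEY` check are the ones listed in the diff above:

```shell
# probe_provider NAME URL — hypothetical helper that reports whether a
# local provider endpoint answers within a short timeout.
probe_provider() {
  name="$1"; url="$2"
  if curl -s --max-time 2 "$url" >/dev/null 2>&1; then
    echo "$name: available"
  else
    echo "$name: not running"
  fi
}

# Endpoints as listed in the init.md diff.
probe_provider ollama   http://localhost:11434/api/tags
probe_provider lmstudio http://localhost:1234/v1/models

# Anthropic has no local endpoint; it counts as available when the key is set.
if [ -n "$ANTHROPIC_API_KEY" ]; then
  echo "anthropic: available"
else
  echo "anthropic: not configured"
fi
```

Reporting only providers that actually answer mirrors the new instruction to list only backends that are running.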
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@goondocks/myco",
-  "version": "0.2.11",
+  "version": "0.2.12",
   "description": "Collective agent intelligence — Claude Code plugin",
   "type": "module",
   "main": "dist/index.js",
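Both package.json diffs above bump `"version"` to `0.2.12`. A release script can assert the bumped files stay in sync; a minimal sketch, where the `check_version` helper is hypothetical and the file path follows the `CHANGED` header above:

```shell
# check_version FILE EXPECTED — hypothetical helper: extract the first
# "version" field from a package.json and compare it to the expected release.
check_version() {
  actual=$(sed -n 's/.*"version": *"\([^"]*\)".*/\1/p' "$1" | head -n 1)
  if [ "$actual" = "$2" ]; then
    echo "$1: ok ($actual)"
  else
    echo "$1: expected $2, got $actual"
  fi
}

# Path taken from the diff header above; guarded in case it is absent.
[ -f package/package.json ] && check_version package/package.json 0.2.12
```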