@goondocks/myco 0.2.10 → 0.2.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -12,11 +12,18 @@
  "source": {
  "source": "npm",
  "package": "@goondocks/myco",
- "version": "0.2.9"
+ "version": "0.2.11"
  },
  "description": "Collective agent intelligence — captures session knowledge and serves it back via MCP",
  "license": "MIT",
- "keywords": ["intelligence", "memory", "mcp", "sessions", "team", "knowledge"],
+ "keywords": [
+   "intelligence",
+   "memory",
+   "mcp",
+   "sessions",
+   "team",
+   "knowledge"
+ ],
  "category": "productivity"
  }
  ]
@@ -1,6 +1,6 @@
  {
  "name": "myco",
- "version": "0.2.10",
+ "version": "0.2.12",
  "description": "Collective agent intelligence — captures session knowledge and serves it back to your team via MCP",
  "author": {
  "name": "goondocks-co",
package/commands/init.md CHANGED
@@ -5,211 +5,83 @@ description: Initialize Myco in the current project — sets up vault, config, a

  # Initialize Myco

- Set up Myco for this project. Guide the user through:
+ Guide the user through setup, then run the CLI to create the vault. **Do NOT create files manually — the CLI handles all vault creation, config writing, and env configuration.**

- ## Step 0: Choose vault location
+ **Ask each question one at a time using AskUserQuestion with selectable options.** Wait for the user's answer before proceeding to the next question. Do NOT combine multiple questions into one message.

- Ask the user where they want the vault:
+ ## Step 1: Choose vault location

- > Where would you like to store the Myco vault?
- >
- > 1. **In the project** (`.myco/`) — vault lives with the code, can be committed to git for team sharing
- > 2. **Centralized** (`~/.myco/vaults/<project-name>/`) — vault stays outside the repo, good for public repos or personal use
- > 3. **Custom path** — specify your own location
+ Ask the user:

- Pass the chosen path to the CLI via `--vault <path>`. The CLI handles all vault setup, env configuration, and agent detection.
+ **Question:** "Where would you like to store the Myco vault?"

- ## Step 1: Create vault directory
+ **Options:**
+ - "In the project (.myco/)" — vault lives with the code, can be committed to git for team sharing
+ - "Centralized (~/.myco/vaults/<project-name>/)" — vault stays outside the repo, good for public repos or personal use
+ - "Custom path" — specify your own location

- Create the vault directory (at the resolved path from Step 0) with subdirectories:
- `sessions`, `plans`, `memories`, `artifacts`, `team`, `buffer`, `logs`
+ If the user picks "Custom path", ask them to type the path.

- Also create a `_dashboard.md` file in the vault root with the following Dataview-powered content:
+ ## Step 2: Choose LLM provider

- ```markdown
- # Myco Vault
+ First, detect available providers by checking local endpoints:

- ## Active Plans
- \`\`\`dataview
- TABLE status, tags FROM #type/plan
- WHERE status = "active" OR status = "in_progress"
- SORT created DESC
- \`\`\`
+ - **Ollama** — `curl -s http://localhost:11434/api/tags` — list model names
+ - **LM Studio** — `curl -s http://localhost:1234/v1/models` — list model IDs
+ - **Anthropic** — check if `ANTHROPIC_API_KEY` is set

- ## Recent Sessions
- \`\`\`dataview
- TABLE user, started, tools_used FROM #type/session
- SORT started DESC LIMIT 10
- \`\`\`
+ Then ask the user:

- ## Recent Memories
- \`\`\`dataview
- TABLE observation_type AS "Type", created FROM #type/memory
- SORT created DESC LIMIT 15
- \`\`\`
+ **Question:** "Which LLM provider for summarization?"

- ## Memories by Type
- \`\`\`dataview
- TABLE WITHOUT ID observation_type AS "Type", length(rows) AS "Count"
- FROM #type/memory GROUP BY observation_type
- SORT length(rows) DESC
- \`\`\`
+ **Options:** List only providers that are actually running, with recommended models noted. Example:
+ - "Ollama — gpt-oss (recommended)"
+ - "LM Studio — openai/gpt-oss-20b"
+ - "Anthropic"

- ## Gotchas
- \`\`\`dataview
- LIST FROM #memory/gotcha SORT created DESC LIMIT 10
- \`\`\`
- ```
-
- This dashboard requires the Dataview community plugin in Obsidian. Without it, the code blocks are visible but still readable as plain markdown.
-
- ## Step 2: Choose intelligence backend
-
- Configure LLM and embedding providers independently:
-
- ### LLM provider
-
- Ask the user to choose an LLM provider:
-
- - **Ollama** — detect at `http://localhost:11434/api/tags`, list available models
- - **LM Studio** — detect at `http://localhost:1234/v1/models`, list available models
- - **Anthropic** — uses existing `ANTHROPIC_API_KEY`, verify it's set
-
- Recommended summarization models by hardware tier:
-
- | Tier | Models | RAM | Notes |
- |------|--------|-----|-------|
- | **High** (best quality) | `gpt-oss` (~20B), `gemma3:27b`, `qwen3.5:14b` | 16GB+ | Best observation extraction and structured JSON output |
- | **Mid** (good balance) | `qwen3.5:8b`, `gemma3:12b` | 8GB+ | Good quality, reasonable speed |
- | **Light** (resource constrained) | `gemma3:4b`, `qwen3.5:4b` | 4GB+ | Faster, may miss nuanced observations |
-
- If the user already has a model loaded, prefer using what they have — any instruction-tuned model that handles JSON output well will work. The model only needs to produce structured JSON (observation extraction) and short text (summaries, titles).
-
- For the selected provider, list available models and let the user choose. Also set:
- - `context_window` (default 8192) — only for local providers, not Anthropic
- - `max_tokens` (default 1024)
+ After the user picks a provider, ask them to choose a specific model from the available models on that provider.

- If the recommended model isn't available, offer to pull it:
- - **Ollama**: `ollama pull gpt-oss` (pulls latest tag automatically)
- - **LM Studio**: `lms get openai/gpt-oss-20b` (uses `owner/model` format)
+ ## Step 3: Choose embedding provider

- Ask the user before pulling — models can be large (hundreds of MB to several GB).
+ Ask the user:

- ### Embedding provider
+ **Question:** "Which embedding provider?"

- Ask the user to choose an embedding provider. **Anthropic is not an option here** — it doesn't support embeddings.
+ **Options:** List only providers that are running and support embeddings (Anthropic does not). Example:
+ - "Ollama — bge-m3 (recommended)"
+ - "LM Studio — text-embedding-bge-m3"

- - **Ollama** — detect at `http://localhost:11434/api/tags`, list available models, recommend `bge-m3` or `nomic-embed-text`. Ollama is the recommended provider for embeddings.
- - **LM Studio** — possible but not recommended for embeddings. LM Studio is better suited for LLM/summarization work.
+ After the user picks a provider, ask them to choose a specific embedding model.

- For the selected provider, list available models and let the user choose.
-
- If the recommended embedding model isn't installed, offer to pull it — embedding models are typically small (~300-700MB):
+ If the recommended embedding model isn't available, offer to pull it:
  - **Ollama**: `ollama pull bge-m3`

- ## Step 3: Team / solo setup
-
- Ask whether this is a team or solo project:
-
- - **Solo** — vault stays local, not tracked by git
- - **Team** — set up git tracking for the vault directory, ask for username
-
- If `MYCO_VAULT_DIR` is set in the environment, also offer:
- - **Use MYCO_VAULT_DIR from env** — treat the env-specified vault as a shared/external vault managed outside this repo; skip git tracking
-
- ## Step 4: Write `myco.yaml`
-
- Write a `version: 2` config file with chosen settings. **All configurable values must be explicit** — no hidden schema defaults. Example output:
-
- ```yaml
- version: 2
-
- intelligence:
-   llm:
-     provider: ollama
-     model: gpt-oss
-     base_url: http://localhost:11434
-     context_window: 8192
-     max_tokens: 1024
-   embedding:
-     provider: ollama
-     model: bge-m3
-     base_url: http://localhost:11434
-
- daemon:
-   log_level: info
-   grace_period: 30
-   max_log_size: 5242880
-
- capture:
-   transcript_paths: []
-   artifact_watch:
-     - .claude/plans/
-     - .cursor/plans/
-   artifact_extensions:
-     - .md
-   buffer_max_events: 500
-
- context:
-   max_tokens: 1200
-   layers:
-     plans: 200
-     sessions: 500
-     memories: 300
-     team: 200
-
- team:
-   enabled: false
-   user: ""
-   sync: git
- ```
-
- Substitute the user's chosen providers, models, and base URLs. Set `team.enabled`, `team.user`, and `team.sync` based on Step 3.
+ ## Step 4: Run the CLI

- ## Step 5: Write vault `.gitignore`
+ Run the init command with all gathered inputs. The CLI creates the vault, writes config, sets up the FTS index, and configures `MYCO_VAULT_DIR` if the vault is external:

- Create a `.gitignore` inside the `.myco/` vault directory to exclude runtime artifacts while committing the knowledge:
-
- ```
- # Runtime — rebuilt on daemon startup
- index.db
- index.db-wal
- index.db-shm
- vectors.db
-
- # Daemon state — per-machine, ephemeral
- daemon.json
- buffer/
- logs/
-
- # Obsidian — per-user workspace config
- .obsidian/
+ ```bash
+ node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \
+   --vault <chosen-path> \
+   --llm-provider <provider> \
+   --llm-model <model> \
+   --llm-url <base-url> \
+   --embedding-provider <provider> \
+   --embedding-model <model> \
+   --embedding-url <base-url>
  ```

- Everything else is committed: `myco.yaml`, `sessions/`, `memories/`, `plans/`, `artifacts/`, `team/`, `lineage.json`, `_dashboard.md`. This is the project's institutional memory — it travels with the code.
+ ## Step 5: Verify

- ## Step 6: Vault discovery and MCP
+ After the CLI completes, confirm providers are reachable:

- The `MYCO_VAULT_DIR` env var (if needed) was already set in Step 0. No additional configuration is required.
-
- **Cursor / VS Code** — if the user chose an external vault path, instruct them to also set `MYCO_VAULT_DIR` in their shell profile (`~/.zshrc`, `~/.bashrc`) so other agents can find it.
-
- All three agents (Claude Code, Cursor, VS Code Copilot) auto-discover the MCP server from the plugin manifest when installed via the marketplace. No manual `.mcp.json` editing is needed.
-
- ## Step 7: Setup summary
-
- After setup, display a summary:
+ 1. Test the LLM — send a short prompt and verify a response
+ 2. Test embeddings — generate a test embedding and report dimensions
+ 3. Display a setup summary table

  | Setting | Value |
  |---------|-------|
- | Vault path | `<resolved path>` (`<vault path source>`) |
+ | Vault path | `<resolved path>` |
  | LLM provider | `<provider>` / `<model>` |
  | Embedding provider | `<provider>` / `<model>` |
  | Context window | `<context_window>` |
- | Team mode | `<enabled/disabled>` |
-
- Then confirm everything is working:
- 1. Verify the LLM provider is reachable (call `isAvailable()`)
- 2. Verify the embedding provider is reachable (call `isAvailable()`)
- 3. Run a test embedding to confirm dimensions
- 4. Report success or issues found
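The endpoint probing that the new Step 2 describes (curl the default Ollama and LM Studio ports, check `ANTHROPIC_API_KEY`) can be sketched as a small shell check. This is an illustrative sketch, not part of the package: the `probe` helper is our own name, and it only tests whether each default endpoint answers at all, without parsing the model lists.

```bash
#!/bin/sh
# Hypothetical sketch of the provider detection described in Step 2 of init.md.
# Ports and paths are the defaults named in the doc; `probe` is not a Myco command.
probe() {
  # $1 = label, $2 = URL; report whether anything answers within 2 seconds
  if curl -s -m 2 "$2" >/dev/null 2>&1; then
    printf '%s: running (%s)\n' "$1" "$2"
  else
    printf '%s: not detected\n' "$1"
  fi
}

probe "Ollama"    "http://localhost:11434/api/tags"
probe "LM Studio" "http://localhost:1234/v1/models"

# Anthropic has no local endpoint; just check for the API key.
if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  echo "Anthropic: ANTHROPIC_API_KEY is set"
else
  echo "Anthropic: ANTHROPIC_API_KEY not set"
fi
```

A real implementation would go on to parse the JSON bodies for model names, as the doc's bullet list describes.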
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@goondocks/myco",
3
- "version": "0.2.10",
3
+ "version": "0.2.12",
4
4
  "description": "Collective agent intelligence — Claude Code plugin",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
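Taken together, the answers gathered in the new init.md flow fill in the flags of the Step 4 invocation. The dry-run sketch below is illustrative only: the values are stand-ins borrowed from the doc's recommended defaults (ollama/gpt-oss for the LLM, bge-m3 for embeddings), and the script merely prints the command rather than running the real CLI.

```bash
#!/bin/sh
# Illustrative answers — substitute what the user actually chose.
VAULT=".myco"
LLM_PROVIDER="ollama"; LLM_MODEL="gpt-oss"; LLM_URL="http://localhost:11434"
EMB_PROVIDER="ollama"; EMB_MODEL="bge-m3";  EMB_URL="http://localhost:11434"

# Dry run: print the init invocation described in package/commands/init.md.
cat <<EOF
node \${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \\
  --vault $VAULT \\
  --llm-provider $LLM_PROVIDER --llm-model $LLM_MODEL --llm-url $LLM_URL \\
  --embedding-provider $EMB_PROVIDER --embedding-model $EMB_MODEL --embedding-url $EMB_URL
EOF
```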