@goondocks/myco 0.2.9 → 0.2.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -11,11 +11,19 @@
  "name": "myco",
  "source": {
  "source": "npm",
- "package": "@goondocks/myco"
+ "package": "@goondocks/myco",
+ "version": "0.2.10"
  },
  "description": "Collective agent intelligence — captures session knowledge and serves it back via MCP",
  "license": "MIT",
- "keywords": ["intelligence", "memory", "mcp", "sessions", "team", "knowledge"],
+ "keywords": [
+ "intelligence",
+ "memory",
+ "mcp",
+ "sessions",
+ "team",
+ "knowledge"
+ ],
  "category": "productivity"
  }
  ]
@@ -1,6 +1,6 @@
  {
  "name": "myco",
- "version": "0.2.9",
+ "version": "0.2.11",
  "description": "Collective agent intelligence — captures session knowledge and serves it back to your team via MCP",
  "author": {
  "name": "goondocks-co",
package/commands/init.md CHANGED
@@ -5,9 +5,9 @@ description: Initialize Myco in the current project — sets up vault, config, a

  # Initialize Myco

- Set up Myco for this project. Guide the user through:
+ Guide the user through setup, then run the CLI to create the vault. **Do NOT create files manually — the CLI handles all vault creation, config writing, and env configuration.**

- ## Step 0: Choose vault location
+ ## Step 1: Choose vault location

  Ask the user where they want the vault:

@@ -17,199 +17,50 @@ Ask the user where they want the vault:
  > 2. **Centralized** (`~/.myco/vaults/<project-name>/`) — vault stays outside the repo, good for public repos or personal use
  > 3. **Custom path** — specify your own location

- Pass the chosen path to the CLI via `--vault <path>`. The CLI handles all vault setup, env configuration, and agent detection.
-
- ## Step 1: Create vault directory
-
- Create the vault directory (at the resolved path from Step 0) with subdirectories:
- `sessions`, `plans`, `memories`, `artifacts`, `team`, `buffer`, `logs`
-
- Also create a `_dashboard.md` file in the vault root with the following Dataview-powered content:
-
- ```markdown
- # Myco Vault
-
- ## Active Plans
- \`\`\`dataview
- TABLE status, tags FROM #type/plan
- WHERE status = "active" OR status = "in_progress"
- SORT created DESC
- \`\`\`
-
- ## Recent Sessions
- \`\`\`dataview
- TABLE user, started, tools_used FROM #type/session
- SORT started DESC LIMIT 10
- \`\`\`
-
- ## Recent Memories
- \`\`\`dataview
- TABLE observation_type AS "Type", created FROM #type/memory
- SORT created DESC LIMIT 15
- \`\`\`
-
- ## Memories by Type
- \`\`\`dataview
- TABLE WITHOUT ID observation_type AS "Type", length(rows) AS "Count"
- FROM #type/memory GROUP BY observation_type
- SORT length(rows) DESC
- \`\`\`
-
- ## Gotchas
- \`\`\`dataview
- LIST FROM #memory/gotcha SORT created DESC LIMIT 10
- \`\`\`
- ```
-
- This dashboard requires the Dataview community plugin in Obsidian. Without it, the code blocks are visible but still readable as plain markdown.
-
  ## Step 2: Choose intelligence backend

- Configure LLM and embedding providers independently:
+ Detect available providers by checking local endpoints:

- ### LLM provider
+ - **Ollama** — `curl -s http://localhost:11434/api/tags` — list model names
+ - **LM Studio** — `curl -s http://localhost:1234/v1/models` — list model IDs
+ - **Anthropic** — check if `ANTHROPIC_API_KEY` is set

- Ask the user to choose an LLM provider:
+ Show the user what's available and recommend:
+ - **LLM**: `gpt-oss` on Ollama or LM Studio (best for structured JSON output)
+ - **Embeddings**: `bge-m3` on Ollama (Anthropic does not support embeddings)

- - **Ollama** — detect at `http://localhost:11434/api/tags`, list available models
- - **LM Studio** — detect at `http://localhost:1234/v1/models`, list available models
- - **Anthropic** — uses existing `ANTHROPIC_API_KEY`, verify it's set
-
- Recommended summarization models by hardware tier:
-
- | Tier | Models | RAM | Notes |
- |------|--------|-----|-------|
- | **High** (best quality) | `gpt-oss` (~20B), `gemma3:27b`, `qwen3.5:14b` | 16GB+ | Best observation extraction and structured JSON output |
- | **Mid** (good balance) | `qwen3.5:8b`, `gemma3:12b` | 8GB+ | Good quality, reasonable speed |
- | **Light** (resource constrained) | `gemma3:4b`, `qwen3.5:4b` | 4GB+ | Faster, may miss nuanced observations |
-
- If the user already has a model loaded, prefer using what they have — any instruction-tuned model that handles JSON output well will work. The model only needs to produce structured JSON (observation extraction) and short text (summaries, titles).
-
- For the selected provider, list available models and let the user choose. Also set:
- - `context_window` (default 8192) — only for local providers, not Anthropic
- - `max_tokens` (default 1024)
+ Let the user choose their LLM provider/model and embedding provider/model.

  If the recommended model isn't available, offer to pull it:
- - **Ollama**: `ollama pull gpt-oss` (pulls latest tag automatically)
- - **LM Studio**: `lms get openai/gpt-oss-20b` (uses `owner/model` format)
-
- Ask the user before pulling — models can be large (hundreds of MB to several GB).
-
- ### Embedding provider
-
- Ask the user to choose an embedding provider. **Anthropic is not an option here** — it doesn't support embeddings.
-
- - **Ollama** — detect at `http://localhost:11434/api/tags`, list available models, recommend `bge-m3` or `nomic-embed-text`. Ollama is the recommended provider for embeddings.
- - **LM Studio** — possible but not recommended for embeddings. LM Studio is better suited for LLM/summarization work.
-
- For the selected provider, list available models and let the user choose.
-
- If the recommended embedding model isn't installed, offer to pull it — embedding models are typically small (~300-700MB):
- - **Ollama**: `ollama pull bge-m3`
-
- ## Step 3: Team / solo setup
-
- Ask whether this is a team or solo project:
-
- - **Solo** — vault stays local, not tracked by git
- - **Team** — set up git tracking for the vault directory, ask for username
-
- If `MYCO_VAULT_DIR` is set in the environment, also offer:
- - **Use MYCO_VAULT_DIR from env** — treat the env-specified vault as a shared/external vault managed outside this repo; skip git tracking
-
- ## Step 4: Write `myco.yaml`
-
- Write a `version: 2` config file with chosen settings. **All configurable values must be explicit** — no hidden schema defaults. Example output:
-
- ```yaml
- version: 2
-
- intelligence:
- llm:
- provider: ollama
- model: gpt-oss
- base_url: http://localhost:11434
- context_window: 8192
- max_tokens: 1024
- embedding:
- provider: ollama
- model: bge-m3
- base_url: http://localhost:11434
-
- daemon:
- log_level: info
- grace_period: 30
- max_log_size: 5242880
-
- capture:
- transcript_paths: []
- artifact_watch:
- - .claude/plans/
- - .cursor/plans/
- artifact_extensions:
- - .md
- buffer_max_events: 500
-
- context:
- max_tokens: 1200
- layers:
- plans: 200
- sessions: 500
- memories: 300
- team: 200
-
- team:
- enabled: false
- user: ""
- sync: git
+ - **Ollama**: `ollama pull <model>`
+ - **LM Studio**: `lms get <owner/model>`
+
+ ## Step 3: Run the CLI
+
+ Run the init command with all gathered inputs. The CLI creates the vault, writes config, sets up the FTS index, and configures `MYCO_VAULT_DIR` if the vault is external:
+
+ ```bash
+ node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \
+ --vault <chosen-path> \
+ --llm-provider <provider> \
+ --llm-model <model> \
+ --llm-url <base-url> \
+ --embedding-provider <provider> \
+ --embedding-model <model> \
+ --embedding-url <base-url>
  ```

- Substitute the user's chosen providers, models, and base URLs. Set `team.enabled`, `team.user`, and `team.sync` based on Step 3.
+ ## Step 4: Verify

- ## Step 5: Write vault `.gitignore`
+ After the CLI completes, confirm providers are reachable:

- Create a `.gitignore` inside the `.myco/` vault directory to exclude runtime artifacts while committing the knowledge:
-
- ```
- # Runtime — rebuilt on daemon startup
- index.db
- index.db-wal
- index.db-shm
- vectors.db
-
- # Daemon state — per-machine, ephemeral
- daemon.json
- buffer/
- logs/
-
- # Obsidian — per-user workspace config
- .obsidian/
- ```
-
- Everything else is committed: `myco.yaml`, `sessions/`, `memories/`, `plans/`, `artifacts/`, `team/`, `lineage.json`, `_dashboard.md`. This is the project's institutional memory — it travels with the code.
-
- ## Step 6: Vault discovery and MCP
-
- The `MYCO_VAULT_DIR` env var (if needed) was already set in Step 0. No additional configuration is required.
-
- **Cursor / VS Code** — if the user chose an external vault path, instruct them to also set `MYCO_VAULT_DIR` in their shell profile (`~/.zshrc`, `~/.bashrc`) so other agents can find it.
-
- All three agents (Claude Code, Cursor, VS Code Copilot) auto-discover the MCP server from the plugin manifest when installed via the marketplace. No manual `.mcp.json` editing is needed.
-
- ## Step 7: Setup summary
-
- After setup, display a summary:
+ 1. Test the LLM — send a short prompt and verify a response
+ 2. Test embeddings — generate a test embedding and report dimensions
+ 3. Display a setup summary table

  | Setting | Value |
  |---------|-------|
- | Vault path | `<resolved path>` (`<vault path source>`) |
+ | Vault path | `<resolved path>` |
  | LLM provider | `<provider>` / `<model>` |
  | Embedding provider | `<provider>` / `<model>` |
  | Context window | `<context_window>` |
- | Team mode | `<enabled/disabled>` |
-
- Then confirm everything is working:
- 1. Verify the LLM provider is reachable (call `isAvailable()`)
- 2. Verify the embedding provider is reachable (call `isAvailable()`)
- 3. Run a test embedding to confirm dimensions
- 4. Report success or issues found
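
The detection probes in the new Step 2 and the verification in the new Step 4 can be tried by hand. A minimal shell sketch, assuming `jq` is installed and using only the endpoints and model names the diff itself recommends (the `/api/embeddings` request shape is Ollama's documented API, not something this package defines):

```bash
#!/usr/bin/env bash
set -o pipefail  # so a failed curl is not masked by jq's exit status

# Ollama: installed models are listed under .models[].name
curl -sf http://localhost:11434/api/tags | jq -r '.models[].name' \
  || echo "Ollama not reachable on :11434"

# LM Studio: OpenAI-compatible endpoint, model IDs under .data[].id
curl -sf http://localhost:1234/v1/models | jq -r '.data[].id' \
  || echo "LM Studio not reachable on :1234"

# Anthropic: usable for the LLM role only (no embeddings support)
[ -n "$ANTHROPIC_API_KEY" ] && echo "ANTHROPIC_API_KEY set" || echo "ANTHROPIC_API_KEY not set"

# Step 4-style check: report the embedding dimension for bge-m3
curl -sf http://localhost:11434/api/embeddings \
  -d '{"model": "bge-m3", "prompt": "dimension check"}' \
  | jq '.embedding | length' || echo "embedding check failed"
```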
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@goondocks/myco",
- "version": "0.2.9",
+ "version": "0.2.11",
  "description": "Collective agent intelligence — Claude Code plugin",
  "type": "module",
  "main": "dist/index.js",