@goondocks/myco 0.2.10 → 0.2.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +9 -2
- package/.claude-plugin/plugin.json +1 -1
- package/commands/init.md +32 -181
- package/package.json +1 -1
package/.claude-plugin/marketplace.json
CHANGED

@@ -12,11 +12,18 @@
       "source": {
         "source": "npm",
         "package": "@goondocks/myco",
-        "version": "0.2.
+        "version": "0.2.10"
       },
       "description": "Collective agent intelligence — captures session knowledge and serves it back via MCP",
       "license": "MIT",
-      "keywords": [
+      "keywords": [
+        "intelligence",
+        "memory",
+        "mcp",
+        "sessions",
+        "team",
+        "knowledge"
+      ],
       "category": "productivity"
     }
   ]
package/commands/init.md
CHANGED

@@ -5,9 +5,9 @@ description: Initialize Myco in the current project — sets up vault, config, a
 
 # Initialize Myco
 
-
+Guide the user through setup, then run the CLI to create the vault. **Do NOT create files manually — the CLI handles all vault creation, config writing, and env configuration.**
 
-## Step
+## Step 1: Choose vault location
 
 Ask the user where they want the vault:
 
@@ -17,199 +17,50 @@ Ask the user where they want the vault:
 > 2. **Centralized** (`~/.myco/vaults/<project-name>/`) — vault stays outside the repo, good for public repos or personal use
 > 3. **Custom path** — specify your own location
 
-Pass the chosen path to the CLI via `--vault <path>`. The CLI handles all vault setup, env configuration, and agent detection.
-
-## Step 1: Create vault directory
-
-Create the vault directory (at the resolved path from Step 0) with subdirectories:
-`sessions`, `plans`, `memories`, `artifacts`, `team`, `buffer`, `logs`
-
-Also create a `_dashboard.md` file in the vault root with the following Dataview-powered content:
-
-```markdown
-# Myco Vault
-
-## Active Plans
-\`\`\`dataview
-TABLE status, tags FROM #type/plan
-WHERE status = "active" OR status = "in_progress"
-SORT created DESC
-\`\`\`
-
-## Recent Sessions
-\`\`\`dataview
-TABLE user, started, tools_used FROM #type/session
-SORT started DESC LIMIT 10
-\`\`\`
-
-## Recent Memories
-\`\`\`dataview
-TABLE observation_type AS "Type", created FROM #type/memory
-SORT created DESC LIMIT 15
-\`\`\`
-
-## Memories by Type
-\`\`\`dataview
-TABLE WITHOUT ID observation_type AS "Type", length(rows) AS "Count"
-FROM #type/memory GROUP BY observation_type
-SORT length(rows) DESC
-\`\`\`
-
-## Gotchas
-\`\`\`dataview
-LIST FROM #memory/gotcha SORT created DESC LIMIT 10
-\`\`\`
-```
-
-This dashboard requires the Dataview community plugin in Obsidian. Without it, the code blocks are visible but still readable as plain markdown.
-
 ## Step 2: Choose intelligence backend
 
-
+Detect available providers by checking local endpoints:
 
-
+- **Ollama** — `curl -s http://localhost:11434/api/tags` — list model names
+- **LM Studio** — `curl -s http://localhost:1234/v1/models` — list model IDs
+- **Anthropic** — check if `ANTHROPIC_API_KEY` is set
 
-
+Show the user what's available and recommend:
+- **LLM**: `gpt-oss` on Ollama or LM Studio (best for structured JSON output)
+- **Embeddings**: `bge-m3` on Ollama (Anthropic does not support embeddings)
 
-
-- **LM Studio** — detect at `http://localhost:1234/v1/models`, list available models
-- **Anthropic** — uses existing `ANTHROPIC_API_KEY`, verify it's set
-
-Recommended summarization models by hardware tier:
-
-| Tier | Models | RAM | Notes |
-|------|--------|-----|-------|
-| **High** (best quality) | `gpt-oss` (~20B), `gemma3:27b`, `qwen3.5:14b` | 16GB+ | Best observation extraction and structured JSON output |
-| **Mid** (good balance) | `qwen3.5:8b`, `gemma3:12b` | 8GB+ | Good quality, reasonable speed |
-| **Light** (resource constrained) | `gemma3:4b`, `qwen3.5:4b` | 4GB+ | Faster, may miss nuanced observations |
-
-If the user already has a model loaded, prefer using what they have — any instruction-tuned model that handles JSON output well will work. The model only needs to produce structured JSON (observation extraction) and short text (summaries, titles).
-
-For the selected provider, list available models and let the user choose. Also set:
-- `context_window` (default 8192) — only for local providers, not Anthropic
-- `max_tokens` (default 1024)
+Let the user choose their LLM provider/model and embedding provider/model.
 
 If the recommended model isn't available, offer to pull it:
-- **Ollama**: `ollama pull
-- **LM Studio**: `lms get
-
-
-
-
-
-
-
--
-
-
-
-
--
-
-## Step 3: Team / solo setup
-
-Ask whether this is a team or solo project:
-
-- **Solo** — vault stays local, not tracked by git
-- **Team** — set up git tracking for the vault directory, ask for username
-
-If `MYCO_VAULT_DIR` is set in the environment, also offer:
-- **Use MYCO_VAULT_DIR from env** — treat the env-specified vault as a shared/external vault managed outside this repo; skip git tracking
-
-## Step 4: Write `myco.yaml`
-
-Write a `version: 2` config file with chosen settings. **All configurable values must be explicit** — no hidden schema defaults. Example output:
-
-```yaml
-version: 2
-
-intelligence:
-  llm:
-    provider: ollama
-    model: gpt-oss
-    base_url: http://localhost:11434
-    context_window: 8192
-    max_tokens: 1024
-  embedding:
-    provider: ollama
-    model: bge-m3
-    base_url: http://localhost:11434
-
-daemon:
-  log_level: info
-  grace_period: 30
-  max_log_size: 5242880
-
-capture:
-  transcript_paths: []
-  artifact_watch:
-    - .claude/plans/
-    - .cursor/plans/
-  artifact_extensions:
-    - .md
-  buffer_max_events: 500
-
-context:
-  max_tokens: 1200
-  layers:
-    plans: 200
-    sessions: 500
-    memories: 300
-    team: 200
-
-team:
-  enabled: false
-  user: ""
-  sync: git
+- **Ollama**: `ollama pull <model>`
+- **LM Studio**: `lms get <owner/model>`
+
+## Step 3: Run the CLI
+
+Run the init command with all gathered inputs. The CLI creates the vault, writes config, sets up the FTS index, and configures `MYCO_VAULT_DIR` if the vault is external:
+
+```bash
+node ${CLAUDE_PLUGIN_ROOT}/dist/src/cli.js init \
+  --vault <chosen-path> \
+  --llm-provider <provider> \
+  --llm-model <model> \
+  --llm-url <base-url> \
+  --embedding-provider <provider> \
+  --embedding-model <model> \
+  --embedding-url <base-url>
 ```
 
-
+## Step 4: Verify
 
-
+After the CLI completes, confirm providers are reachable:
 
-
-
-
-# Runtime — rebuilt on daemon startup
-index.db
-index.db-wal
-index.db-shm
-vectors.db
-
-# Daemon state — per-machine, ephemeral
-daemon.json
-buffer/
-logs/
-
-# Obsidian — per-user workspace config
-.obsidian/
-```
-
-Everything else is committed: `myco.yaml`, `sessions/`, `memories/`, `plans/`, `artifacts/`, `team/`, `lineage.json`, `_dashboard.md`. This is the project's institutional memory — it travels with the code.
-
-## Step 6: Vault discovery and MCP
-
-The `MYCO_VAULT_DIR` env var (if needed) was already set in Step 0. No additional configuration is required.
-
-**Cursor / VS Code** — if the user chose an external vault path, instruct them to also set `MYCO_VAULT_DIR` in their shell profile (`~/.zshrc`, `~/.bashrc`) so other agents can find it.
-
-All three agents (Claude Code, Cursor, VS Code Copilot) auto-discover the MCP server from the plugin manifest when installed via the marketplace. No manual `.mcp.json` editing is needed.
-
-## Step 7: Setup summary
-
-After setup, display a summary:
+1. Test the LLM — send a short prompt and verify a response
+2. Test embeddings — generate a test embedding and report dimensions
+3. Display a setup summary table
 
 | Setting | Value |
 |---------|-------|
-| Vault path | `<resolved path>`
+| Vault path | `<resolved path>` |
 | LLM provider | `<provider>` / `<model>` |
 | Embedding provider | `<provider>` / `<model>` |
 | Context window | `<context_window>` |
-| Team mode | `<enabled/disabled>` |
-
-Then confirm everything is working:
-1. Verify the LLM provider is reachable (call `isAvailable()`)
-2. Verify the embedding provider is reachable (call `isAvailable()`)
-3. Run a test embedding to confirm dimensions
-4. Report success or issues found