@deeplake/hivemind 0.7.14 → 0.7.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -6,13 +6,13 @@
  },
  "metadata": {
  "description": "Cloud-backed persistent shared memory for AI agents powered by Deeplake",
- "version": "0.7.14"
+ "version": "0.7.16"
  },
  "plugins": [
  {
  "name": "hivemind",
  "description": "Persistent shared memory powered by Deeplake — captures all session activity and provides cross-session, cross-agent memory search",
- "version": "0.7.14",
+ "version": "0.7.16",
  "source": "./claude-code",
  "homepage": "https://github.com/activeloopai/hivemind"
  }
@@ -1,7 +1,7 @@
  {
  "name": "hivemind",
  "description": "Cloud-backed persistent memory powered by Deeplake — read, write, and share memory across Claude Code sessions and agents",
- "version": "0.7.14",
+ "version": "0.7.16",
  "author": {
  "name": "Activeloop",
  "url": "https://deeplake.ai"
package/README.md CHANGED
@@ -11,24 +11,27 @@
  <h4 align="center">One brain for all your agents</h4>

  <p align="center">
+ <a href="https://www.npmjs.com/package/@deeplake/hivemind"><img src="https://img.shields.io/npm/v/@deeplake/hivemind?color=blue&label=npm" alt="npm"></a>
+ <a href="https://github.com/activeloopai/hivemind/stargazers"><img src="https://img.shields.io/github/stars/activeloopai/hivemind?style=social" alt="GitHub stars"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"></a>
  <a href="package.json"><img src="https://img.shields.io/badge/node-%3E%3D22.0.0-brightgreen.svg" alt="Node"></a>
  <a href="https://deeplake.ai"><img src="https://img.shields.io/badge/Powered%20by-Deeplake-orange.svg" alt="Deeplake"></a>
  </p>

  <p align="center">
- Persistent, cloud-backed shared memory for <b>Claude Code • OpenClaw • Codex • Cursor • Hermes • pi</b> agents.<br>
+ Auto-learning, cloud-backed shared brain for <b>Claude Code • OpenClaw • Codex • Cursor • Hermes • pi</b> agents.<br>
  </p>

- > One session ends, everything important disappears.
+ > One engineer's agent figures out a tricky migration on Monday.
  >
- > Hivemind finally fixes the "agent amnesia" problem.
+ > Tuesday, every agent on the team can execute the pattern.

- Hivemind automatically captures every prompt, tool call, decision, and file operation. Then turns them into searchable memory that is instantly available to every agent and teammate across sessions, machines, and time.
+ **Beyond memory.** Hivemind captures every coding-agent interaction in your org as a structured trace, codifies repeated patterns into reusable skills, and propagates those skills to every agent on your team.

- - 🧠 **Captures** every session's prompts, tool calls, and responses into a shared SQL table on Deeplake Cloud
- - 🔍 **Searches** across all memory with lexical search (falls back to grep when index unavailable)
- - 🔗 **Shares** memory across sessions, agents, teammates, and machines in real-time
+ - 📥 **Captures** every session's prompts, tool calls, and responses as structured traces in Deeplake
+ - 🧠 **Codifies** patterns in those traces into reusable skills, available to every agent on your team
+ - 🔍 **Searches** across all traces and skills with lexical retrieval (grep fallback when index unavailable)
+ - 🔗 **Propagates** capability across sessions, agents, teammates, and machines in real time
  - 📁 **Intercepts** file operations on `~/.deeplake/memory/` through a virtual filesystem backed by SQL
  - 📝 **Summarizes** sessions into AI-generated wiki pages via a background worker at session end

@@ -40,7 +43,7 @@ One command, all your agents:
  npm install -g @deeplake/hivemind && hivemind install
  ```

- That's it. The installer detects every supported assistant on your machine (Claude Code, Codex, OpenClaw, Cursor, Hermes Agent, pi), wires up the hooks, and opens a browser once for login. Restart your assistants and they all share the same brain.
+ The installer detects every supported assistant on your machine (table below), wires up the hooks, and opens a browser once for login. Restart them after install.

  **Install for a specific assistant only:**

@@ -188,51 +191,21 @@ hivemind codex uninstall # remove from one

  ## How it works

- ```
- ┌─────────────────────────────────────────────────────┐
- │ Your Coding Agent │
- └──────────────────────────┬──────────────────────────┘
-
- ┌──────────────────▼──────────────────┐
- │ 📥 Capture (every turn) │
- │ prompts · tool calls · responses │
- └──────────────────┬──────────────────┘
-
- ┌──────────────────▼──────────────────┐
- │ 🧠 Hivemind │
- │ SQL tables · Virtual File System │
- │ Search Memory · inject context │
- └──────────────────┬──────────────────┘
-
- ┌──────────────────▼──────────────────┐
- │ 🌊 Deeplake │
- │ Shared across all agents │
- │ Postgres · S3 │
- └─────────────────────────────────────┘
- ```
-
- Every session is captured. Every agent can recall. Teammates in the same org see the same memory.
+ **Capture → Codify → Propagate → Compound.** Every coding-agent interaction (prompt, tool call, response) is captured as a structured trace in Deeplake. A background worker mines traces for repeated patterns and codifies them into `SKILL.md` files, scoped to your workspace. Codified skills propagate into every Hivemind-connected agent's context at inference time. The agent your junior engineer used this morning is sharper because of what your senior engineer's agent figured out last week.

  ## Features

  ### 🔍 Natural search

- Just ask Claude naturally:
+ Just ask your agent naturally:

  ```
  "What was Emanuele working on?"
- "Search memory for authentication bugs"
+ "Search traces for authentication bugs we've solved"
  "What did we decide about the API design?"
+ "Show me skills my team has codified for handling migrations"
  ```

- ### 📝 AI-generated session summaries
-
- After each session, a background worker generates a wiki summary: key decisions, code changes, next steps. Browse them at `~/.deeplake/memory/summaries/`.
-
- ### 👥 Team sharing
-
- Invite teammates to your Deeplake org. Their agents see your memory, your agents see theirs. No setup, no sync, no merge conflicts.
-
  ### 🔒 Privacy controls

  Disable capture entirely:
@@ -256,10 +229,11 @@ This plugin captures session activity and stores it in your Deeplake workspace:
  | User prompts | Every message you send |
  | Tool calls | Tool name + full input |
  | Tool responses | Full tool output |
- | Assistant responses | Claude's final response |
+ | Assistant responses | The agent's final response |
  | Subagent activity | Subagent tool calls and responses |
+ | Codified skills | Patterns extracted from traces |

- **All users in your Deeplake workspace can read this data.** A DATA NOTICE is displayed at the start of every session.
+ **All users in your Deeplake workspace can read this data.** That's the design — shared capability requires shared substrate. A DATA NOTICE is displayed at the start of every session. Workspace-level isolation prevents data leakage between orgs.

  ## Configuration

@@ -276,256 +250,41 @@ This plugin captures session activity and stores it in your Deeplake workspace:
  | `HIVEMIND_EMBEDDINGS` | `true` | Set to `false` to force lexical-only mode |
  | `HIVEMIND_DEBUG` | — | Set to `1` for verbose hook debug logs |

- ## Optional: enable semantic search (embeddings)
-
- Hivemind can run a local embedding daemon (nomic-embed-text-v1.5, ~130 MB)
- so that `Grep` over `~/.deeplake/memory/` uses hybrid semantic + lexical
- ranking instead of pure BM25. This is **off by default** — the daemon
- depends on `@huggingface/transformers`, which pulls onnxruntime-node and
- sharp (~600 MB total with native binaries). Shipping that with every agent
- install would 60× the install size for a feature most users don't need.
-
- To enable, run the bundled command:
-
- ```bash
- hivemind embeddings install
- ```
-
- This installs `@huggingface/transformers` **once** into a shared directory
- (`~/.hivemind/embed-deps/`) and symlinks every detected agent's plugin to
- it, so the 600 MB cost is paid one time regardless of how many agents you
- have wired up. Re-run the same command after installing a new agent and
- the new symlink is added (the npm install is skipped because it's cached).
-
- Or do it in one shot at install time:
-
- ```bash
- hivemind install --with-embeddings # all detected agents
- hivemind <agent> install --with-embeddings # a single agent
- ```
-
- Other commands:
+ ## Semantic search (optional)

- ```bash
- hivemind embeddings status # show shared deps + per-agent state
- hivemind embeddings uninstall # remove the per-agent symlinks
- hivemind embeddings uninstall --prune # also delete the shared dir (~600 MB)
- ```
+ Hivemind ships with a local embedding daemon (nomic-embed-text-v1.5) for hybrid semantic + lexical search over `~/.deeplake/memory/`. **Off by default** because the dependency footprint is ~600 MB. Enable with `hivemind embeddings install` (or `hivemind install --with-embeddings`). Without it, search degrades silently to BM25/lexical-only.

- Restart your agents after enabling. From the next session, captured
- messages and AI-generated summaries will include a 768-dim embedding,
- and semantic recall queries will route through the local daemon (the
- nomic model is downloaded on first use and cached in `~/.cache/huggingface/`).
+ Full guide: **[docs/EMBEDDINGS.md](docs/EMBEDDINGS.md)**.

- If `@huggingface/transformers` is **not** present, Hivemind silently
- degrades to lexical-only mode:
+ ## Summaries

- - Capture continues; rows still land in Deeplake.
- - ✅ `Grep` still works via BM25 / `ILIKE` matching on text columns.
- - ⚪ The `message_embedding` / `summary_embedding` columns stay `NULL`.
- - ⚪ The hook log notes `embeddings: no-transformers` once at session start.
+ After each session, a background worker generates an AI-written wiki summary and stores it in the `memory` table alongside its 768-dim embedding. Long sessions checkpoint mid-session every 50 messages or 2 hours (configurable). The wiki worker shells out to the host agent's own CLI (`claude -p`, `codex exec`, `pi --print`, …) — no separate API key. Browse summaries at `~/.deeplake/memory/summaries/`.

- You can also force lexical-only mode explicitly with
- `HIVEMIND_EMBEDDINGS=false` (useful for CI or air-gapped environments).
+ Triggers, generation flow, and env-var reference: **[docs/SUMMARIES.md](docs/SUMMARIES.md)**.

- ## Summaries
+ ## Skills (skillify)

- Hivemind doesn't just capture raw events — it also generates an
- **AI-written wiki summary** for each session and stores it in the
- `memory` table (alongside its 768-dim `summary_embedding`). The summary
- is what shows up when you `Grep` for past sessions or follow links from
- `~/.deeplake/memory/index.md`.
-
- ### When summaries are written
-
- Each agent (Claude Code / Codex / Cursor / Hermes / pi) fires a wiki
- worker on two triggers:
-
- | Trigger | When it fires |
- |-------------------|-------------------------------------------------------------------------------|
- | **Final** | At session end (Stop / SessionEnd / session_shutdown), once. |
- | **Periodic** | Mid-session, when **either** of two thresholds is hit since the last summary: |
- | | • messages-since-last-summary ≥ `HIVEMIND_SUMMARY_EVERY_N_MSGS` (default 50) |
- | | • elapsed time ≥ `HIVEMIND_SUMMARY_EVERY_HOURS` (default 2) |
-
- The first message after a long pause therefore triggers a fresh
- summary; long sessions naturally checkpoint every ~50 messages.
-
- A per-session JSON sidecar at
- `~/.claude/hooks/summary-state/<sessionId>.json` tracks
- `{lastSummaryAt, lastSummaryCount, totalCount}`. The dir is shared
- across all agents (session ids are UUIDs so no collisions). It is
- **never deleted**, so resuming a session via `--resume` / `--continue`
- picks up where it left off.
-
- ### How a summary is generated
-
- 1. The wiki worker queries the `sessions` table for every event tied to
- that session.
- 2. It builds a structured prompt asking the host agent's CLI to extract
- entities, decisions, files modified, open questions, etc.
- 3. It shells out to that agent's CLI (`claude -p`, `codex exec`,
- `pi --print`, …) with the prompt — never a separate API key, the
- agent's existing credentials are used.
- 4. The generated markdown is uploaded to the `memory` table at
- `/summaries/<user>/<sessionId>.md`. The shared embedding daemon
- produces the 768-dim `summary_embedding` so the summary is recallable
- via semantic search.
-
- A lock file at `~/.claude/hooks/summary-state/<sessionId>.lock`
- prevents two workers from running concurrently for the same session.
-
- ### Configuration
-
- | Env var | Default | Effect |
- |------------------------------------|----------------|-----------------------------------------------------|
- | `HIVEMIND_SUMMARY_EVERY_N_MSGS` | `50` | Trigger periodic when messages-since-last ≥ this |
- | `HIVEMIND_SUMMARY_EVERY_HOURS` | `2` | Trigger periodic after this many hours, with ≥1 msg |
- | `HIVEMIND_CURSOR_MODEL` | `auto` | (cursor only) model passed to `cursor-agent --print --model` |
- | `HIVEMIND_HERMES_PROVIDER` | `openrouter` | (hermes only) provider passed to `hermes -z --provider` |
- | `HIVEMIND_HERMES_MODEL` | `anthropic/claude-haiku-4-5` | (hermes only) model passed to `hermes -z -m` |
- | `HIVEMIND_PI_PROVIDER` | `google` | (pi only) provider passed to `pi --print --provider`|
- | `HIVEMIND_PI_MODEL` | `gemini-2.5-flash` | (pi only) model passed to `pi --print --model` |
- | `HIVEMIND_CAPTURE=false` | unset | Disable both capture and summary generation |
-
- For pi specifically, the wiki worker is bundled separately at
- `~/.pi/agent/hivemind/wiki-worker.js` (deposited by `hivemind pi install`).
- The other agents ship the wiki worker inside their per-agent plugin
- bundle.
-
- ## Skills (skilify)
-
- Hivemind also crystallises **recurring patterns from your recent sessions
- into reusable Claude Code skills**, automatically. Same architecture as
- the wiki worker: an async background process that fires on Stop /
- SessionEnd, mines recent sessions in scope, asks Haiku whether the
- activity contains something worth keeping, and writes a `SKILL.md` if so.
-
- ### When the skilify worker fires
-
- | Trigger | When it fires |
- |------------------|--------------------------------------------------------------------------------|
- | **Stop counter** | Mid-session, after every `HIVEMIND_SKILIFY_EVERY_N_TURNS` (default 20) turns. |
- | **SessionEnd** | Always at end-of-session, regardless of counter — catches tail-of-session knowledge. |
-
- Per-project counter state lives at
- `~/.deeplake/state/skilify/<project-key>.json`. Project key is the sha1
- of `git config remote.origin.url` (with the absolute path as fallback for
- non-git dirs).
-
- ### How a skill is generated
-
- 1. The worker pulls the **last 10 sessions in scope** from the `sessions`
- Deeplake table — strictly newer than the watermark in the state file.
- 2. It strips each session to **prompt + assistant text only** (tool calls
- and thinking blocks are dropped — they're noise for skill mining).
- 3. It builds a gate prompt: existing project skill bodies + the 10
- stripped exchanges + decision rules.
- 4. It runs `claude -p haiku --permission-mode bypassPermissions` with the
- prompt. The model returns a JSON verdict:
- - `KEEP <name> <body>` — write a new skill.
- - `MERGE <existing-name> <merged-body>` — update an existing skill, bump version.
- - `SKIP <reason>` — pattern is one-off / generic / already covered.
- 5. On KEEP/MERGE the skill is written to `<project>/.claude/skills/<name>/SKILL.md`
- (or `~/.claude/skills/...` if you've set `install` to `global`), with
- provenance frontmatter (`source_sessions`, `version`, `created_by_agent`,
- timestamps).
- 6. A row is also inserted into the `skills` Deeplake table for org-wide
- provenance (append-only — never UPDATE, sidesteps the
- UPDATE-coalescing quirk).
-
- ### `/skilify` — managing scope, team, install location
-
- The `/skilify` slash command (Claude Code, Codex) and the `hivemind
- skilify` CLI control mining behaviour.
+ Hivemind **codifies recurring patterns from your team's recent sessions into reusable skills** that propagate to every agent on your team automatically. An async background worker fires on Stop / SessionEnd, mines recent sessions in scope, asks Haiku whether the activity contains something worth keeping, and writes a `SKILL.md` to `<project>/.claude/skills/<name>/`.

  ```bash
- hivemind skilify # show current scope, team, install, per-project state
- hivemind skilify scope <me|team|org> # who counts as "in scope" for mining
- hivemind skilify install <project|global> # where new skills are written
- hivemind skilify promote <skill-name> # move a project skill to ~/.claude/skills/
- hivemind skilify team add <username> # add to the team list (used when scope=team)
- hivemind skilify team remove <username> # remove from team
- hivemind skilify team list # list current team members
+ hivemind skillify # show current scope, team, install, per-project state
+ hivemind skillify scope <me|team|org> # who counts as "in scope" for mining
+ hivemind skillify pull # install teammates' skills locally
+ hivemind skillify unpull # remove pulled skills
  ```

- The team list flows into the worker's session-fetch SQL: `scope=me`
- filters by your own username, `scope=team` filters by `author IN
- (<team>)`, `scope=org` applies no author filter.
-
- Config persists at `~/.deeplake/state/skilify/config.json` (one global
- file shared across projects).
-
- ### Configuration
-
- | Env var | Default | Effect |
- |--------------------------------------|---------|---------------------------------------------------------|
- | `HIVEMIND_SKILIFY_EVERY_N_TURNS` | `20` | Stop-counter threshold for mid-session worker fires |
- | `HIVEMIND_SKILLS_TABLE` | `skills`| Deeplake table name for org-wide provenance |
- | `HIVEMIND_SKILIFY_WORKER=1` | unset | Recursion guard (set automatically inside the worker) |
- | `HIVEMIND_CURSOR_MODEL` | `auto` | (cursor only) model passed to the cursor-agent gate call |
- | `HIVEMIND_HERMES_PROVIDER` | `openrouter` | (hermes only) provider passed to the hermes gate call |
- | `HIVEMIND_HERMES_MODEL` | `anthropic/claude-haiku-4-5` | (hermes only) model passed to hermes |
-
- ### Per-agent gate CLI
-
- The skilify worker calls each agent's own headless CLI for the gate
- prompt — so a user who only has codex / cursor / hermes installed
- never needs `claude` in their PATH:
-
- | Agent | Gate command |
- |-------------|----------------------------------------------------------------------------------------|
- | claude_code | `claude -p <prompt> --no-session-persistence --model haiku --permission-mode bypassPermissions` |
- | codex | `codex exec --dangerously-bypass-approvals-and-sandbox <prompt>` |
- | cursor | `cursor-agent --print --model <HIVEMIND_CURSOR_MODEL> --force --output-format text <prompt>` |
- | hermes | `hermes -z <prompt> --provider <HIVEMIND_HERMES_PROVIDER> -m <HIVEMIND_HERMES_MODEL> --yolo --ignore-user-config` |
-
- For hermes via OpenRouter (the default), set `OPENROUTER_API_KEY` in
- the environment; the worker inherits the parent process env. Other
- providers (anthropic, openai, etc.) need their respective API keys.
-
- ### Logs
-
- Worker activity logs to `~/.claude/hooks/skilify.log`. Each line shows
- which session pool was mined, what the gate decided, and whether a file
- was written.
+ Triggers, generation flow, full `pull` / `unpull` semantics, gate-CLI table per agent, env vars, logs: **[docs/SKILLIFY.md](docs/SKILLIFY.md)**.

  ## Architecture

- ### Integration model per agent
-
- | Agent | Mechanism | Hooks/tools wired |
- |-------------------|------------------------------------|-----------------------------------------------------------------------------------------|
- | **Claude Code** | Marketplace plugin | `SessionStart` · `UserPromptSubmit` · `PreToolUse` · `PostToolUse` · `Stop` · `SubagentStop` · `SessionEnd` |
- | **Codex** | `~/.codex/hooks.json` | `SessionStart` · `UserPromptSubmit` · `PreToolUse(Bash)` · `PostToolUse` · `Stop` |
- | **OpenClaw** | Native extension at `~/.openclaw/extensions/hivemind/` | `agent_end` capture · `before_agent_start` recall · contracted tools (`hivemind_search`/`read`/`index`) |
- | **Cursor (1.7+)** | `~/.cursor/hooks.json` | `sessionStart` · `beforeSubmitPrompt` · `postToolUse` · `afterAgentResponse` · `stop` · `sessionEnd` |
- | **Hermes** | Skill at `~/.hermes/skills/hivemind-memory/` | recall via grep on `~/.deeplake/memory/` |
- | **pi** | `~/.pi/agent/AGENTS.md` + skill | recall via grep on `~/.deeplake/memory/` |
+ Per-agent integration mechanisms (marketplace plugin, hooks, skills, native extension) and monorepo structure: **[docs/ARCHITECTURE.md](docs/ARCHITECTURE.md)**.

- ### Monorepo structure
+ ## Roadmap

- ```
- hivemind/
- ├── src/ ← shared core (API client, auth, config, SQL utils)
- │ ├── hooks/ ← Claude Code hooks
- │ ├── hooks/codex/ ← Codex hooks
- │ ├── hooks/cursor/ ← Cursor hooks
- │ ├── hooks/hermes/ ← Hermes shell hooks
- │ ├── hooks/pi/ ← pi wiki-worker (extension lives in pi/extension-source/)
- │ ├── embeddings/ ← nomic embed-daemon + protocol + SQL helpers
- │ ├── mcp/ ← MCP server (used by Hermes; available to any future MCP-aware client)
- │ ├── commands/ ← auth, auth-creds, auth-login, session-prune
- │ └── cli/ ← unified `hivemind install` CLI + per-agent installers
- ├── claude-code/ ← Claude Code plugin source (marketplace-distributed)
- ├── codex/ ← Codex plugin build output (npm-distributed)
- ├── cursor/ ← Cursor plugin build output (npm-distributed)
- ├── hermes/ ← Hermes plugin build output (npm-distributed)
- ├── mcp/ ← MCP server build output (shared by Hermes + future MCP clients)
- ├── openclaw/ ← OpenClaw plugin source + build output (ClawHub-distributed)
- ├── pi/ ← pi extension source (ships raw .ts; pi compiles at load)
- └── bundle/ ← unified `hivemind` CLI build output
- ```
+ - **Trajectory export for fine-tuning.** Because traces are stored in Deeplake's tensor format, they're export-ready as PyTorch datasets. Teams running their own open-source models can fine-tune on their org's accumulated trajectories. A handful of advanced customers are already doing this against the trajectories their Claude Code and Codex agents generated.
+ - **GPU-accelerated dense retrieval at scale.** Local CPU embeddings already ship via the optional nomic-embed daemon (see [Semantic search](#semantic-search-optional)). Next: GPU-accelerated vector search over the full trace store, on by default.
+ - **Skill versioning and review.** Pre-release human review for codified skills before they propagate org-wide, for teams that want a curation step.
+ - **More agents.** If your team uses an agent that isn't on the supported-assistants list above, open an issue.

  ## Security

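The periodic-summary thresholds the README changes above move into docs/SUMMARIES.md (fire when messages-since-last-summary ≥ `HIVEMIND_SUMMARY_EVERY_N_MSGS`, default 50, or when ≥ `HIVEMIND_SUMMARY_EVERY_HOURS`, default 2, have elapsed with at least one new message) can be sketched as a small predicate. `shouldSummarize` and the `state` shape are illustrative only, not the package's actual API:

```javascript
// Sketch of the periodic-summary trigger (hypothetical helper, not the
// package's implementation). `state` mirrors the sidecar fields
// {lastSummaryAt, lastSummaryCount, totalCount}; times are epoch ms.
function shouldSummarize(state, now, opts = {}) {
  const everyNMsgs = opts.everyNMsgs ?? 50; // HIVEMIND_SUMMARY_EVERY_N_MSGS
  const everyHours = opts.everyHours ?? 2;  // HIVEMIND_SUMMARY_EVERY_HOURS
  const msgsSinceLast = state.totalCount - state.lastSummaryCount;
  if (msgsSinceLast < 1) return false;      // need at least one new message
  const hoursElapsed = (now - state.lastSummaryAt) / 3600000;
  // Fire when EITHER threshold is hit.
  return msgsSinceLast >= everyNMsgs || hoursElapsed >= everyHours;
}
```

Either condition alone triggers a checkpoint, which is why the first message after a long pause produces a fresh summary.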
package/bundle/cli.js CHANGED
@@ -4710,9 +4710,9 @@ if (process.argv[1] && process.argv[1].endsWith("auth-login.js")) {
  }

  // dist/src/commands/skilify.js
- import { readdirSync as readdirSync3, existsSync as existsSync15, readFileSync as readFileSync12, mkdirSync as mkdirSync7, renameSync as renameSync2 } from "node:fs";
- import { homedir as homedir8 } from "node:os";
- import { dirname as dirname2, join as join18 } from "node:path";
+ import { readdirSync as readdirSync4, existsSync as existsSync17, readFileSync as readFileSync13, mkdirSync as mkdirSync8, renameSync as renameSync3 } from "node:fs";
+ import { homedir as homedir10 } from "node:os";
+ import { dirname as dirname3, join as join20 } from "node:path";

  // dist/src/skilify/scope-config.js
  import { existsSync as existsSync12, mkdirSync as mkdirSync4, readFileSync as readFileSync9, writeFileSync as writeFileSync6 } from "node:fs";
@@ -4740,9 +4740,9 @@ function saveScopeConfig(cfg) {
  }

  // dist/src/skilify/pull.js
- import { existsSync as existsSync14, readFileSync as readFileSync11, writeFileSync as writeFileSync8, mkdirSync as mkdirSync6, renameSync } from "node:fs";
- import { homedir as homedir7 } from "node:os";
- import { join as join17 } from "node:path";
+ import { existsSync as existsSync15, readFileSync as readFileSync12, writeFileSync as writeFileSync9, mkdirSync as mkdirSync7, renameSync as renameSync2 } from "node:fs";
+ import { homedir as homedir8 } from "node:os";
+ import { join as join18 } from "node:path";

  // dist/src/skilify/skill-writer.js
  import { existsSync as existsSync13, mkdirSync as mkdirSync5, readFileSync as readFileSync10, readdirSync as readdirSync2, statSync as statSync2, writeFileSync as writeFileSync7 } from "node:fs";
@@ -4805,7 +4805,99 @@ function parseFrontmatter(text) {
  return { fm, body };
  }

+ // dist/src/skilify/manifest.js
+ import { existsSync as existsSync14, mkdirSync as mkdirSync6, readFileSync as readFileSync11, renameSync, writeFileSync as writeFileSync8 } from "node:fs";
+ import { homedir as homedir7 } from "node:os";
+ import { dirname as dirname2, join as join17 } from "node:path";
+ function emptyManifest() {
+ return { version: 1, entries: [] };
+ }
+ function manifestPath() {
+ return join17(homedir7(), ".deeplake", "state", "skilify", "pulled.json");
+ }
+ function loadManifest(path = manifestPath()) {
+ if (!existsSync14(path))
+ return emptyManifest();
+ let raw;
+ try {
+ raw = readFileSync11(path, "utf-8");
+ } catch {
+ return emptyManifest();
+ }
+ try {
+ const parsed = JSON.parse(raw);
+ if (!parsed || typeof parsed !== "object")
+ return emptyManifest();
+ if (parsed.version !== 1 || !Array.isArray(parsed.entries))
+ return emptyManifest();
+ const entries = [];
+ for (const e of parsed.entries) {
+ if (!e || typeof e !== "object")
+ continue;
+ if (typeof e.dirName !== "string" || !e.dirName)
+ continue;
+ if (e.dirName.includes("/") || e.dirName.includes("\\") || e.dirName.includes(".."))
+ continue;
+ if (typeof e.name !== "string" || !e.name)
+ continue;
+ if (typeof e.author !== "string")
+ continue;
+ if (typeof e.installRoot !== "string" || !e.installRoot)
+ continue;
+ if (e.install !== "global" && e.install !== "project")
+ continue;
+ entries.push({
+ dirName: e.dirName,
+ name: e.name,
+ author: e.author,
+ projectKey: typeof e.projectKey === "string" ? e.projectKey : "",
+ remoteVersion: typeof e.remoteVersion === "number" ? e.remoteVersion : 1,
+ install: e.install,
+ installRoot: e.installRoot,
+ pulledAt: typeof e.pulledAt === "string" ? e.pulledAt : (/* @__PURE__ */ new Date()).toISOString()
+ });
+ }
+ return { version: 1, entries };
+ } catch {
+ return emptyManifest();
+ }
+ }
+ function saveManifest(m, path = manifestPath()) {
+ mkdirSync6(dirname2(path), { recursive: true });
+ const tmp = `${path}.tmp`;
+ writeFileSync8(tmp, JSON.stringify(m, null, 2) + "\n", { mode: 384 });
+ renameSync(tmp, path);
+ }
+ function recordPull(entry, path = manifestPath()) {
+ const m = loadManifest(path);
+ const idx = m.entries.findIndex((e) => e.install === entry.install && e.installRoot === entry.installRoot && e.dirName === entry.dirName);
+ if (idx >= 0)
+ m.entries[idx] = entry;
+ else
+ m.entries.push(entry);
+ saveManifest(m, path);
+ }
+ function removePullEntry(install, installRoot, dirName, path = manifestPath()) {
+ const m = loadManifest(path);
+ const before = m.entries.length;
+ m.entries = m.entries.filter((e) => !(e.install === install && e.installRoot === installRoot && e.dirName === dirName));
+ if (m.entries.length !== before)
+ saveManifest(m, path);
+ }
+ function entriesForRoot(m, install, installRoot) {
+ return m.entries.filter((e) => e.install === install && e.installRoot === installRoot);
+ }
+
  // dist/src/skilify/pull.js
+ function assertValidAuthor(author) {
+ if (!author)
+ throw new Error("author is empty");
+ if (author.length > 64)
+ throw new Error(`author too long (${author.length}): ${author.slice(0, 32)}\u2026`);
+ if (!/^[A-Za-z0-9_.\-@]+$/.test(author)) {
+ throw new Error(`author contains invalid characters: ${author}`);
+ }
+ }
  function esc(s) {
  return s.replace(/\\/g, "\\\\").replace(/'/g, "''").replace(/[\x01-\x08\x0b\x0c\x0e-\x1f\x7f]/g, "");
  }
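The `assertValidAuthor` guard added in this hunk rejects empty, over-long, or specially-charactered author strings before they are used to build directory names. A standalone copy, for illustration of what passes and fails (the `isValidAuthor` wrapper is mine, not the package's):

```javascript
// Standalone copy of the assertValidAuthor guard from the hunk above.
function assertValidAuthor(author) {
  if (!author) throw new Error("author is empty");
  if (author.length > 64)
    throw new Error(`author too long (${author.length}): ${author.slice(0, 32)}…`);
  // Conservative allow-list: letters, digits, underscore, dot, dash, at-sign.
  if (!/^[A-Za-z0-9_.\-@]+$/.test(author))
    throw new Error(`author contains invalid characters: ${author}`);
}

// Hypothetical convenience wrapper for checking without try/catch at call sites.
function isValidAuthor(author) {
  try { assertValidAuthor(author); return true; } catch { return false; }
}
```

Since the author string later becomes part of an on-disk path, the allow-list notably excludes `/` and `\`, so path-traversal payloads are rejected before any filesystem write.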
@@ -4828,10 +4920,10 @@ function isMissingTableError(message) {
  }
  function resolvePullDestination(install, cwd) {
  if (install === "global")
- return join17(homedir7(), ".claude", "skills");
+ return join18(homedir8(), ".claude", "skills");
  if (!cwd)
  throw new Error("install=project requires a cwd");
- return join17(cwd, ".claude", "skills");
+ return join18(cwd, ".claude", "skills");
  }
  function selectLatestPerName(rows) {
  const seen = /* @__PURE__ */ new Set();
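The hunk above cuts off after the first line of `selectLatestPerName`. From its name and the `seen` set, a plausible reading is "keep one row per skill name, preferring the highest version"; the sketch below is a guess at that behavior under the assumption that each row carries `{ name, version }`, and the real implementation may differ:

```javascript
// Hypothetical sketch of selectLatestPerName (only its first line appears in
// the diff above): dedupe rows by skill name, keeping the highest version.
function selectLatestPerName(rows) {
  const sorted = [...rows].sort(
    (a, b) => Number(b.version ?? 1) - Number(a.version ?? 1)
  );
  const seen = new Set();
  const out = [];
  for (const row of sorted) {
    if (seen.has(row.name)) continue; // already kept a newer version
    seen.add(row.name);
    out.push(row);
  }
  return out;
}
```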
@@ -4897,10 +4989,10 @@ function renderFrontmatter(fm) {
  return lines.join("\n");
  }
  function readLocalVersion(path) {
- if (!existsSync14(path))
+ if (!existsSync15(path))
  return null;
  try {
- const text = readFileSync11(path, "utf-8");
+ const text = readFileSync12(path, "utf-8");
  const parsed = parseFrontmatter(text);
  if (!parsed)
  return null;
@@ -4953,9 +5045,39 @@ async function runPull(opts) {
  summary.skipped++;
  continue;
  }
- const projectKey = String(row.project_key ?? "");
- const skillDir = projectKey ? join17(root, projectKey, name) : join17(root, name);
- const skillFile = join17(skillDir, "SKILL.md");
+ const author = String(row.author ?? "");
+ if (!author) {
+ summary.entries.push({
+ name,
+ remoteVersion: Number(row.version ?? 1),
+ localVersion: null,
+ action: "skipped",
+ destination: "(empty author \u2014 skipped)",
+ author: "",
+ sourceAgent: String(row.source_agent ?? "")
+ });
+ summary.skipped++;
+ continue;
+ }
+ let dirName;
+ try {
+ assertValidAuthor(author);
+ dirName = `${name}--${author}`;
+ } catch (e) {
+ summary.entries.push({
+ name,
+ remoteVersion: Number(row.version ?? 1),
+ localVersion: null,
+ action: "skipped",
+ destination: `(invalid author '${author}' \u2014 skipped)`,
+ author,
+ sourceAgent: String(row.source_agent ?? "")
+ });
+ summary.skipped++;
+ continue;
+ }
+ const skillDir = join18(root, dirName);
+ const skillFile = join18(skillDir, "SKILL.md");
  const remoteVersion = Number(row.version ?? 1);
  const localVersion = readLocalVersion(skillFile);
  const action = decideAction({
@@ -4964,15 +5086,30 @@ async function runPull(opts) {
  force: opts.force ?? false,
  dryRun: opts.dryRun ?? false
  });
+ let manifestError;
  if (action === "wrote") {
- mkdirSync6(skillDir, { recursive: true });
- if (existsSync14(skillFile)) {
+ mkdirSync7(skillDir, { recursive: true });
+ if (existsSync15(skillFile)) {
  try {
- renameSync(skillFile, `${skillFile}.bak`);
+ renameSync2(skillFile, `${skillFile}.bak`);
  } catch {
  }
  }
- writeFileSync8(skillFile, renderSkillFile(row));
+ writeFileSync9(skillFile, renderSkillFile(row));
+ try {
+ recordPull({
+ dirName,
+ name,
+ author,
+ projectKey: String(row.project_key ?? ""),
+ remoteVersion,
+ install: opts.install,
+ installRoot: root,
+ pulledAt: (/* @__PURE__ */ new Date()).toISOString()
+ });
+ } catch (e) {
+ manifestError = e?.message ?? String(e);
+ }
  }
  summary.entries.push({
  name,
@@ -4981,7 +5118,8 @@ async function runPull(opts) {
  action,
  destination: skillFile,
  author: String(row.author ?? ""),
- sourceAgent: String(row.source_agent ?? "")
+ sourceAgent: String(row.source_agent ?? ""),
+ manifestError
  });
  if (action === "wrote")
  summary.wrote++;
@@ -4993,18 +5131,186 @@ async function runPull(opts) {
  return summary;
  }
 
+ // dist/src/skilify/unpull.js
+ import { existsSync as existsSync16, readdirSync as readdirSync3, rmSync as rmSync5, statSync as statSync3 } from "node:fs";
+ import { homedir as homedir9 } from "node:os";
+ import { join as join19 } from "node:path";
+ function resolveUnpullRoot(install, cwd) {
+ if (install === "global")
+ return join19(homedir9(), ".claude", "skills");
+ if (!cwd)
+ throw new Error("cwd required when install === 'project'");
+ return join19(cwd, ".claude", "skills");
+ }
+ function runUnpull(opts) {
+ const root = resolveUnpullRoot(opts.install, opts.cwd);
+ const summary = {
+ scanned: 0,
+ removed: 0,
+ wouldRemove: 0,
+ kept: 0,
+ manifestPruned: 0,
+ entries: []
+ };
+ const userFilter = new Set(opts.users.filter((u) => u.length > 0));
+ const haveUserFilter = userFilter.size > 0;
+ if ((opts.all || opts.legacyCleanup) && (haveUserFilter || opts.notMine)) {
+ const flags = [opts.all && "--all", opts.legacyCleanup && "--legacy-cleanup"].filter(Boolean).join(" / ");
+ const filters = [haveUserFilter && "--user/--users", opts.notMine && "--not-mine"].filter(Boolean).join(" / ");
+ throw new Error(`${flags} cannot be combined with ${filters}: entries removed by ${flags} are not in the manifest and have no author metadata, so the filter would silently fail to apply. Run the filtered unpull first, then ${flags} as a separate invocation.`);
+ }
+ const manifest = loadManifest();
+ const entries = entriesForRoot(manifest, opts.install, root);
+ for (const entry of entries) {
+ summary.scanned++;
+ const path = join19(root, entry.dirName);
+ if (!existsSync16(path)) {
+ if (!opts.dryRun)
+ removePullEntry(opts.install, entry.installRoot, entry.dirName);
+ summary.entries.push({
+ dirName: entry.dirName,
+ kind: "manifest-orphan",
+ author: entry.author,
+ name: entry.name,
+ action: opts.dryRun ? "kept-policy" : "manifest-pruned",
+ reason: opts.dryRun ? "would-prune (orphan, dir missing)" : "directory was already missing",
+ path: ""
+ });
+ if (!opts.dryRun)
+ summary.manifestPruned++;
+ else
+ summary.kept++;
+ continue;
+ }
+ const decision = decideTargetForManifestEntry(entry, opts, userFilter, haveUserFilter);
+ const result = {
+ dirName: entry.dirName,
+ kind: "pulled-manifest",
+ author: entry.author,
+ name: entry.name,
+ action: "kept-policy",
+ path
+ };
+ if (!decision.shouldRemove) {
+ result.reason = decision.reason;
+ summary.kept++;
+ summary.entries.push(result);
+ continue;
+ }
+ if (opts.dryRun) {
+ result.action = "would-remove";
+ summary.wouldRemove++;
+ } else {
+ try {
+ rmSync5(path, { recursive: true, force: true });
+ removePullEntry(opts.install, entry.installRoot, entry.dirName);
+ result.action = "removed";
+ summary.removed++;
+ } catch (e) {
+ result.action = "kept-policy";
+ result.reason = `rm failed: ${e?.message ?? e}`;
+ summary.kept++;
+ }
+ }
+ summary.entries.push(result);
+ }
+ if (existsSync16(root) && (opts.all || opts.legacyCleanup)) {
+ const manifestDirNames = new Set(entries.map((e) => e.dirName));
+ for (const dirName of readdirSync3(root)) {
+ if (manifestDirNames.has(dirName))
+ continue;
+ const path = join19(root, dirName);
+ let st;
+ try {
+ st = statSync3(path);
+ } catch {
+ continue;
+ }
+ if (!st.isDirectory())
+ continue;
+ const isLegacyProjectKey = /^[0-9a-f]{16}$/.test(dirName);
+ const isLocallyMined = !isLegacyProjectKey && /^[A-Za-z0-9_.-]+$/.test(dirName) && !dirName.includes("--");
+ let kind;
+ let shouldRemove = false;
+ let reason;
+ if (isLegacyProjectKey) {
+ kind = "legacy-projectkey";
+ if (opts.legacyCleanup)
+ shouldRemove = true;
+ else
+ reason = "legacy project_key dir (use --legacy-cleanup)";
+ } else if (isLocallyMined) {
+ kind = "locally-mined";
+ if (opts.all)
+ shouldRemove = true;
+ else
+ reason = "locally-mined (use --all to remove)";
+ } else {
+ continue;
+ }
+ summary.scanned++;
+ const result = {
+ dirName,
+ kind,
+ author: null,
+ name: kind === "locally-mined" ? dirName : null,
+ action: "kept-policy",
+ path,
+ reason
+ };
+ if (!shouldRemove) {
+ summary.kept++;
+ summary.entries.push(result);
+ continue;
+ }
+ if (opts.dryRun) {
+ result.action = "would-remove";
+ summary.wouldRemove++;
+ } else {
+ try {
+ rmSync5(path, { recursive: true, force: true });
+ result.action = "removed";
+ summary.removed++;
+ } catch (e) {
+ result.action = "kept-policy";
+ result.reason = `rm failed: ${e?.message ?? e}`;
+ summary.kept++;
+ }
+ }
+ summary.entries.push(result);
+ }
+ }
+ return summary;
+ }
+ function decideTargetForManifestEntry(entry, opts, userFilter, haveUserFilter) {
+ if (haveUserFilter && !userFilter.has(entry.author)) {
+ return { shouldRemove: false, reason: `author '${entry.author}' not in filter` };
+ }
+ if (opts.notMine) {
+ if (!opts.myUsername)
+ return { shouldRemove: false, reason: "--not-mine requires myUsername" };
+ if (entry.author === opts.myUsername) {
+ return { shouldRemove: false, reason: "your own pull (--not-mine excludes self)" };
+ }
+ }
+ return { shouldRemove: true };
+ }
+
  // dist/src/commands/skilify.js
- var STATE_DIR2 = join18(homedir8(), ".deeplake", "state", "skilify");
+ function stateDir() {
+ return join20(homedir10(), ".deeplake", "state", "skilify");
+ }
  function showStatus() {
  const cfg = loadScopeConfig();
  console.log(`scope: ${cfg.scope}`);
  console.log(`team: ${cfg.team.length === 0 ? "(empty)" : cfg.team.join(", ")}`);
  console.log(`install: ${cfg.install} (${cfg.install === "global" ? "~/.claude/skills/" : "<project>/.claude/skills/"})`);
- if (!existsSync15(STATE_DIR2)) {
+ const dir = stateDir();
+ if (!existsSync17(dir)) {
  console.log(`state: (no projects tracked yet)`);
  return;
  }
- const files = readdirSync3(STATE_DIR2).filter((f) => f.endsWith(".json") && f !== "config.json");
+ const files = readdirSync4(dir).filter((f) => f.endsWith(".json") && f !== "config.json" && f !== "pulled.json");
  if (files.length === 0) {
  console.log(`state: (no projects tracked yet)`);
  return;
@@ -5012,7 +5318,7 @@ function showStatus() {
  console.log(`state: ${files.length} project(s) tracked`);
  for (const f of files) {
  try {
- const s = JSON.parse(readFileSync12(join18(STATE_DIR2, f), "utf-8"));
+ const s = JSON.parse(readFileSync13(join20(dir, f), "utf-8"));
  const skills = s.skillsGenerated.length === 0 ? "none" : s.skillsGenerated.join(", ");
  console.log(` - ${s.project} (counter=${s.counter}, last=${s.lastDate ?? "never"}, skills=${skills})`);
  } catch {
@@ -5038,7 +5344,7 @@ function setInstall(loc) {
  }
  const cfg = loadScopeConfig();
  saveScopeConfig({ ...cfg, install: loc });
- const path = loc === "global" ? join18(homedir8(), ".claude", "skills") : "<cwd>/.claude/skills";
+ const path = loc === "global" ? join20(homedir10(), ".claude", "skills") : "<cwd>/.claude/skills";
  console.log(`Install location set to '${loc}'. New skills will be written to ${path}/<name>/SKILL.md.`);
  }
  function promoteSkill(name, cwd) {
@@ -5046,18 +5352,18 @@ function promoteSkill(name, cwd) {
  console.error("Usage: hivemind skilify promote <skill-name>");
  process.exit(1);
  }
- const projectPath = join18(cwd, ".claude", "skills", name);
- const globalPath = join18(homedir8(), ".claude", "skills", name);
- if (!existsSync15(join18(projectPath, "SKILL.md"))) {
+ const projectPath = join20(cwd, ".claude", "skills", name);
+ const globalPath = join20(homedir10(), ".claude", "skills", name);
+ if (!existsSync17(join20(projectPath, "SKILL.md"))) {
  console.error(`Skill '${name}' not found at ${projectPath}/SKILL.md`);
  process.exit(1);
  }
- if (existsSync15(join18(globalPath, "SKILL.md"))) {
+ if (existsSync17(join20(globalPath, "SKILL.md"))) {
  console.error(`Skill '${name}' already exists at ${globalPath}/SKILL.md \u2014 refusing to overwrite. Remove it first or rename the project skill.`);
  process.exit(1);
  }
- mkdirSync7(dirname2(globalPath), { recursive: true });
- renameSync2(projectPath, globalPath);
+ mkdirSync8(dirname3(globalPath), { recursive: true });
+ renameSync3(projectPath, globalPath);
  console.log(`Promoted '${name}' from ${projectPath} \u2192 ${globalPath}.`);
  }
  function teamAdd(name) {
@@ -5114,6 +5420,15 @@ function usage() {
  console.log(" --all-users all authors (default \u2014 equivalent to no filter)");
  console.log(" --dry-run show what would be written, don't touch disk");
  console.log(" --force overwrite even when local version >= remote");
+ console.log(" hivemind skilify unpull [opts] remove skills previously installed by pull");
+ console.log(" Options for unpull:");
+ console.log(" --to <project|global> where to scan (default: global)");
+ console.log(" --user <name> only entries authored by this user");
+ console.log(" --users <a,b,c> only entries authored by these users");
+ console.log(" --not-mine remove all pulled entries except your own");
+ console.log(" --dry-run show what would be removed");
+ console.log(" --all also remove flat-layout (locally-mined) entries");
+ console.log(" --legacy-cleanup also remove pre-`--author`-layout legacy `<projectKey>/` dirs");
  console.log(" hivemind skilify status show per-project state");
  }
  function takeFlagValue(args, flag) {
@@ -5178,7 +5493,7 @@ async function pullSkills(args) {
  console.error(`pull failed: ${e?.message ?? e}`);
  process.exit(1);
  }
- const dest = toRaw === "global" ? join18(homedir8(), ".claude", "skills") : `${process.cwd()}/.claude/skills`;
+ const dest = toRaw === "global" ? join20(homedir10(), ".claude", "skills") : `${process.cwd()}/.claude/skills`;
  const filterDesc = users.length === 0 ? "all users" : users.join(", ");
  console.log(`Destination: ${dest}`);
  console.log(`Filter: ${filterDesc}${skillName ? ` \xB7 skill='${skillName}'` : ""}${dryRun ? " \xB7 dry-run" : ""}${force ? " \xB7 force" : ""}`);
@@ -5187,9 +5502,72 @@ async function pullSkills(args) {
  const tag = e.action === "wrote" ? "\u2713 wrote" : e.action === "dryrun" ? "\u2192 would write" : "\xB7 skipped";
  const ver = e.localVersion === null ? `v${e.remoteVersion} (new)` : `v${e.localVersion} \u2192 v${e.remoteVersion}`;
  console.log(` ${tag.padEnd(15)} ${e.name.padEnd(40)} ${ver.padEnd(20)} (${e.author}/${e.sourceAgent})`);
+ if (e.manifestError) {
+ console.warn(` \u26A0 manifest not updated: ${e.manifestError} \u2014 `unpull` will not see this entry until a successful repull.`);
+ }
  }
  console.log(`Result: ${summary.wrote} written, ${summary.dryrun} dry-run, ${summary.skipped} skipped.`);
  }
+ async function unpullSkills(args) {
+ const work = [...args];
+ const toRaw = takeFlagValue(work, "--to") ?? "global";
+ const userOne = takeFlagValue(work, "--user");
+ const usersMany = takeFlagValue(work, "--users");
+ const notMine = takeBooleanFlag(work, "--not-mine");
+ const dryRun = takeBooleanFlag(work, "--dry-run");
+ const all = takeBooleanFlag(work, "--all");
+ const legacyCleanup = takeBooleanFlag(work, "--legacy-cleanup");
+ if (toRaw !== "project" && toRaw !== "global") {
+ throw new Error(`Invalid --to '${toRaw}'. Use 'project' or 'global'.`);
+ }
+ let users = [];
+ if (userOne)
+ users = [userOne];
+ else if (usersMany)
+ users = usersMany.split(",").map((s) => s.trim()).filter(Boolean);
+ let myUsername;
+ if (notMine) {
+ const config = loadConfig();
+ if (!config) {
+ throw new Error("--not-mine requires a logged-in user. Run: hivemind login");
+ }
+ myUsername = config.userName;
+ }
+ const summary = runUnpull({
+ install: toRaw,
+ cwd: toRaw === "project" ? process.cwd() : void 0,
+ users,
+ myUsername,
+ notMine,
+ dryRun,
+ all,
+ legacyCleanup
+ });
+ const dest = toRaw === "global" ? join20(homedir10(), ".claude", "skills") : `${process.cwd()}/.claude/skills`;
+ const filterParts = [];
+ if (users.length > 0)
+ filterParts.push(`users=${users.join(",")}`);
+ if (notMine)
+ filterParts.push("not-mine");
+ if (all)
+ filterParts.push("all");
+ if (legacyCleanup)
+ filterParts.push("legacy-cleanup");
+ if (dryRun)
+ filterParts.push("dry-run");
+ const filterDesc = filterParts.length ? filterParts.join(" \xB7 ") : "(no filter \u2014 all pulled)";
+ console.log(`Scanning: ${dest}`);
+ console.log(`Filter: ${filterDesc}`);
+ console.log(`Scanned ${summary.scanned} dir(s).`);
+ for (const e of summary.entries) {
+ const tag = e.action === "removed" ? "\u2713 removed" : e.action === "would-remove" ? "\u2192 would remove" : e.action === "manifest-pruned" ? "\u26A0 pruned (orphan)" : "\xB7 kept";
+ const id = e.dirName;
+ const note = e.reason ? ` (${e.reason})` : "";
+ console.log(` ${tag.padEnd(20)} ${id.padEnd(50)} [${e.kind}]${note}`);
+ }
+ const prunedNote = summary.manifestPruned > 0 ? `, ${summary.manifestPruned} manifest-pruned` : "";
+ console.log(`Result: ${summary.removed} removed, ${summary.wouldRemove} dry-run, ${summary.kept} kept${prunedNote}.`);
+ }
  function runSkilifyCommand(args) {
  const sub = args[0];
  if (!sub || sub === "status") {
@@ -5215,6 +5593,14 @@ function runSkilifyCommand(args) {
  });
  return;
  }
+ if (sub === "unpull") {
+ unpullSkills(args.slice(1)).catch((e) => {
+ console.error(`unpull error: ${e?.message ?? e}`);
+ process.exit(1);
+ }).catch(() => {
+ });
+ return;
+ }
  if (sub === "team") {
  const action = args[1];
  if (action === "add") {
@@ -5246,13 +5632,13 @@ if (process.argv[1] && process.argv[1].endsWith("skilify.js")) {
 
  // dist/src/cli/update.js
  import { execFileSync as execFileSync4 } from "node:child_process";
- import { existsSync as existsSync16, readFileSync as readFileSync14, realpathSync } from "node:fs";
- import { dirname as dirname4, sep } from "node:path";
+ import { existsSync as existsSync18, readFileSync as readFileSync15, realpathSync } from "node:fs";
+ import { dirname as dirname5, sep } from "node:path";
  import { fileURLToPath as fileURLToPath2 } from "node:url";
 
  // dist/src/utils/version-check.js
- import { readFileSync as readFileSync13 } from "node:fs";
- import { dirname as dirname3, join as join19 } from "node:path";
+ import { readFileSync as readFileSync14 } from "node:fs";
+ import { dirname as dirname4, join as join21 } from "node:path";
  function isNewer(latest, current) {
  const parse = (v) => v.split(".").map(Number);
  const [la, lb, lc] = parse(latest);
@@ -5271,24 +5657,24 @@ function detectInstallKind(argv1) {
  return argv1 ?? process.argv[1] ?? fileURLToPath2(import.meta.url);
  }
  })();
- let dir = dirname4(realArgv1);
+ let dir = dirname5(realArgv1);
  let installDir = null;
  for (let i = 0; i < 10; i++) {
  const pkgPath = `${dir}${sep}package.json`;
  try {
- const pkg = JSON.parse(readFileSync14(pkgPath, "utf-8"));
+ const pkg = JSON.parse(readFileSync15(pkgPath, "utf-8"));
  if (pkg.name === PKG_NAME || pkg.name === "hivemind") {
  installDir = dir;
  break;
  }
  } catch {
  }
- const parent = dirname4(dir);
+ const parent = dirname5(dir);
  if (parent === dir)
  break;
  dir = parent;
  }
- installDir ??= dirname4(realArgv1);
+ installDir ??= dirname5(realArgv1);
  if (realArgv1.includes(`${sep}_npx${sep}`) || realArgv1.includes(`${sep}.npx${sep}`)) {
  return { kind: "npx", installDir };
  }
@@ -5297,10 +5683,10 @@ function detectInstallKind(argv1) {
  }
  let gitDir = installDir;
  for (let i = 0; i < 6; i++) {
- if (existsSync16(`${gitDir}${sep}.git`)) {
+ if (existsSync18(`${gitDir}${sep}.git`)) {
  return { kind: "local-dev", installDir };
  }
- const parent = dirname4(gitDir);
+ const parent = dirname5(gitDir);
  if (parent === gitDir)
  break;
  gitDir = parent;
@@ -5459,6 +5845,11 @@ Skill management (mine + share reusable Claude skills across the org):
  Options: --user <email>, --users a,b,c,
  --all-users, --to <project|global>,
  --dry-run, --force.
+ hivemind skilify unpull Remove skills previously installed by pull.
+ Options: --user, --users, --not-mine,
+ --to <project|global>, --dry-run,
+ --all (also locally-mined),
+ --legacy-cleanup (pre-suffix-author dirs).
  hivemind skilify scope <me|team|org> Set the sharing scope for newly mined skills.
  hivemind skilify install <project|global> Set where new skills are written.
  hivemind skilify promote <name> Move a project skill to the global location.
@@ -140,6 +140,10 @@ SKILLS (skilify) \u2014 mine + share reusable skills across the org:
  - hivemind skilify pull --dry-run \u2014 preview only
  - hivemind skilify pull --force \u2014 overwrite local (creates .bak)
  - hivemind skilify pull <skill-name> \u2014 pull only that skill (combines with --user)
+ - hivemind skilify unpull \u2014 remove every skill previously installed by pull
+ - hivemind skilify unpull --user <email> \u2014 remove only that author's pulls
+ - hivemind skilify unpull --not-mine \u2014 remove all pulls except your own
+ - hivemind skilify unpull --dry-run \u2014 preview without touching disk
  - hivemind skilify scope <me|team|org> \u2014 sharing scope for new skills
  - hivemind skilify install <project|global> \u2014 default install location
  - hivemind skilify team add|remove|list <name> \u2014 manage team list`;
@@ -702,6 +702,10 @@ SKILLS (skilify) \u2014 mine + share reusable skills across the org:
  - hivemind skilify pull --dry-run \u2014 preview only
  - hivemind skilify pull --force \u2014 overwrite local (creates .bak)
  - hivemind skilify pull <skill-name> \u2014 pull only that skill (combines with --user)
+ - hivemind skilify unpull \u2014 remove every skill previously installed by pull
+ - hivemind skilify unpull --user <email> \u2014 remove only that author's pulls
+ - hivemind skilify unpull --not-mine \u2014 remove all pulls except your own
+ - hivemind skilify unpull --dry-run \u2014 preview without touching disk
  - hivemind skilify scope <me|team|org> \u2014 sharing scope for new skills
  - hivemind skilify install <project|global> \u2014 default install location
  - hivemind skilify team add|remove|list <name> \u2014 manage team list`;
@@ -702,6 +702,10 @@ SKILLS (skilify) \u2014 mine + share reusable skills across the org:
  - hivemind skilify pull --dry-run \u2014 preview only
  - hivemind skilify pull --force \u2014 overwrite local (creates .bak)
  - hivemind skilify pull <skill-name> \u2014 pull only that skill (combines with --user)
+ - hivemind skilify unpull \u2014 remove every skill previously installed by pull
+ - hivemind skilify unpull --user <email> \u2014 remove only that author's pulls
+ - hivemind skilify unpull --not-mine \u2014 remove all pulls except your own
+ - hivemind skilify unpull --dry-run \u2014 preview without touching disk
  - hivemind skilify scope <me|team|org> \u2014 sharing scope for new skills
  - hivemind skilify install <project|global> \u2014 default install location
  - hivemind skilify team add|remove|list <name> \u2014 manage team list`;
@@ -1070,7 +1070,7 @@ function extractLatestVersion(body) {
  return typeof v === "string" && v.length > 0 ? v : null;
  }
  function getInstalledVersion() {
- return "0.7.14".length > 0 ? "0.7.14" : null;
+ return "0.7.16".length > 0 ? "0.7.16" : null;
  }
  function isNewer(latest, current) {
  const parse = (v) => v.replace(/-.*$/, "").split(".").map(Number);
@@ -1745,7 +1745,7 @@ ${body.slice(0, 500)}`;
  const hook = (event, handler) => {
  pluginApi.on(event, handler);
  };
- if ('---\nname: hivemind\ndescription: Global team and org memory powered by Activeloop. ALWAYS check BOTH built-in memory AND Hivemind memory when recalling information.\nallowed-tools: hivemind_search, hivemind_read, hivemind_index\n---\n\n# Hivemind Memory\n\nYou have TWO memory sources. ALWAYS check BOTH when the user asks you to recall, remember, or look up ANY information:\n\n1. **Your built-in memory** \u2014 personal per-project notes from the host agent\n2. **Hivemind global memory** \u2014 global memory shared across all sessions, users, and agents in the org, accessed via the tools below\n\n## Memory Structure\n\n```\n/index.md \u2190 START HERE \u2014 table of all sessions\n/summaries/\n <username>/\n <session-id>.md \u2190 AI-generated wiki summary per session\n/sessions/\n <username>/\n <user_org_ws_slug>.jsonl \u2190 raw session data\n```\n\n## How to Search\n\n1. **First**: call `hivemind_index()` \u2014 table of all sessions with dates, projects, descriptions\n2. **If you need details**: call `hivemind_read("/summaries/<username>/<session>.md")`\n3. **If you need raw data**: call `hivemind_read("/sessions/<username>/<file>.jsonl")`\n4. **Keyword search**: call `hivemind_search("keyword")` \u2014 substring search across both summaries and sessions, returns `path:line` hits\n\nDo NOT jump straight to reading raw JSONL files. Always start with `hivemind_index` and summaries.\n\n## Organization Management\n\n- `/hivemind_login` \u2014 sign in via device flow\n- `/hivemind_capture` \u2014 toggle capture on/off (off = no data sent)\n- `/hivemind_whoami` \u2014 show current org and workspace\n- `/hivemind_orgs` \u2014 list organizations\n- `/hivemind_switch_org <name-or-id>` \u2014 switch organization\n- `/hivemind_workspaces` \u2014 list workspaces\n- `/hivemind_switch_workspace <id>` \u2014 switch workspace\n- `/hivemind_version` \u2014 show installed version and check npm for updates\n- `/hivemind_update` \u2014 shows how to install (ask the agent, or run `hivemind update` in your terminal)\n- `/hivemind_autoupdate [on|off]` \u2014 toggle the agent-facing update nudge (on by default: when a newer version is available, the agent is prompted to install it via `exec` if you ask to update)\n\n## Skill Management (skilify)\n\nHivemind also mines reusable Claude skills from agent sessions and stores them in a per-org Deeplake table. Openclaw itself doesn\'t run sessions to mine, but you can pull skills others have already mined for the user. These run in the user\'s terminal (the openclaw plugin does not register them as `/hivemind_*` commands):\n\n- `hivemind skilify` \u2014 show scope/team/install + per-project state\n- `hivemind skilify pull` \u2014 sync skills for the current project from the org table\n- `hivemind skilify pull --user <email>` \u2014 only that author\'s skills\n- `hivemind skilify pull --users a,b,c` \u2014 multiple authors (CSV)\n- `hivemind skilify pull --all-users` \u2014 explicit "no author filter"\n- `hivemind skilify pull --to project|global` \u2014 install location (`<cwd>/.claude/skills/` vs `~/.claude/skills/`)\n- `hivemind skilify pull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify pull --force` \u2014 overwrite local (creates `.bak`)\n- `hivemind skilify pull <skill-name>` \u2014 pull only that one skill (combines with `--user`)\n- `hivemind skilify scope <me|team|org>` \u2014 set sharing scope for new skills\n- `hivemind skilify install <project|global>` \u2014 default install location\n- `hivemind skilify team add|remove|list <name>` \u2014 manage team list\n\nIf the user asks to "pull skills from X", "share skills with the team", or similar, suggest the matching `hivemind skilify` command. Run `hivemind skilify --help` for the full reference.\n\n## Limits\n\nDo NOT delegate to subagents when reading Hivemind memory. If a tool call returns empty after 2 attempts, skip it and move on. Report what you found rather than exhaustively retrying.\n\n## Getting Started\n\nAfter installing the plugin:\n1. Run `/hivemind_login` to authenticate\n2. Run `/hivemind_setup` to enable the memory tools in your openclaw allowlist (one-time, per install)\n3. Start using memory \u2014 ask questions, the agent automatically captures and searches\n\n## Sharing memory\n\nMultiple agents share memory when users are in the same Activeloop organization.\n'.length > 0) {
+ if ('---\nname: hivemind\ndescription: Global team and org memory powered by Activeloop. ALWAYS check BOTH built-in memory AND Hivemind memory when recalling information.\nallowed-tools: hivemind_search, hivemind_read, hivemind_index\n---\n\n# Hivemind Memory\n\nYou have TWO memory sources. ALWAYS check BOTH when the user asks you to recall, remember, or look up ANY information:\n\n1. **Your built-in memory** \u2014 personal per-project notes from the host agent\n2. **Hivemind global memory** \u2014 global memory shared across all sessions, users, and agents in the org, accessed via the tools below\n\n## Memory Structure\n\n```\n/index.md \u2190 START HERE \u2014 table of all sessions\n/summaries/\n <username>/\n <session-id>.md \u2190 AI-generated wiki summary per session\n/sessions/\n <username>/\n <user_org_ws_slug>.jsonl \u2190 raw session data\n```\n\n## How to Search\n\n1. **First**: call `hivemind_index()` \u2014 table of all sessions with dates, projects, descriptions\n2. **If you need details**: call `hivemind_read("/summaries/<username>/<session>.md")`\n3. **If you need raw data**: call `hivemind_read("/sessions/<username>/<file>.jsonl")`\n4. **Keyword search**: call `hivemind_search("keyword")` \u2014 substring search across both summaries and sessions, returns `path:line` hits\n\nDo NOT jump straight to reading raw JSONL files. Always start with `hivemind_index` and summaries.\n\n## Organization Management\n\n- `/hivemind_login` \u2014 sign in via device flow\n- `/hivemind_capture` \u2014 toggle capture on/off (off = no data sent)\n- `/hivemind_whoami` \u2014 show current org and workspace\n- `/hivemind_orgs` \u2014 list organizations\n- `/hivemind_switch_org <name-or-id>` \u2014 switch organization\n- `/hivemind_workspaces` \u2014 list workspaces\n- `/hivemind_switch_workspace <id>` \u2014 switch workspace\n- `/hivemind_version` \u2014 show installed version and check npm for updates\n- `/hivemind_update` \u2014 shows how to install (ask the agent, or run `hivemind update` in your terminal)\n- `/hivemind_autoupdate [on|off]` \u2014 toggle the agent-facing update nudge (on by default: when a newer version is available, the agent is prompted to install it via `exec` if you ask to update)\n\n## Skill Management (skilify)\n\nHivemind also mines reusable Claude skills from agent sessions and stores them in a per-org Deeplake table. Openclaw itself doesn\'t run sessions to mine, but you can pull skills others have already mined for the user. These run in the user\'s terminal (the openclaw plugin does not register them as `/hivemind_*` commands):\n\n- `hivemind skilify` \u2014 show scope/team/install + per-project state\n- `hivemind skilify pull` \u2014 sync skills for the current project from the org table\n- `hivemind skilify pull --user <email>` \u2014 only that author\'s skills\n- `hivemind skilify pull --users a,b,c` \u2014 multiple authors (CSV)\n- `hivemind skilify pull --all-users` \u2014 explicit "no author filter"\n- `hivemind skilify pull --to project|global` \u2014 install location (`<cwd>/.claude/skills/` vs `~/.claude/skills/`)\n- `hivemind skilify pull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify pull --force` \u2014 overwrite local (creates `.bak`)\n- `hivemind skilify pull <skill-name>` \u2014 pull only that one skill (combines with `--user`)\n- `hivemind skilify unpull` \u2014 remove every skill previously installed by pull\n- `hivemind skilify unpull --user <email>` \u2014 remove only that author\'s pulls\n- `hivemind skilify unpull --not-mine` \u2014 remove all pulls except your own\n- `hivemind skilify unpull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify scope <me|team|org>` \u2014 set sharing scope for new skills\n- `hivemind skilify install <project|global>` \u2014 default install location\n- `hivemind skilify team add|remove|list <name>` \u2014 manage team list\n\nIf the user asks to "pull skills from X", "share skills with the team", or similar, suggest the matching `hivemind skilify` command. Run `hivemind skilify --help` for the full reference.\n\n## Limits\n\nDo NOT delegate to subagents when reading Hivemind memory. If a tool call returns empty after 2 attempts, skip it and move on. Report what you found rather than exhaustively retrying.\n\n## Getting Started\n\nAfter installing the plugin:\n1. Run `/hivemind_login` to authenticate\n2. Run `/hivemind_setup` to enable the memory tools in your openclaw allowlist (one-time, per install)\n3. Start using memory \u2014 ask questions, the agent automatically captures and searches\n\n## Sharing memory\n\nMultiple agents share memory when users are in the same Activeloop organization.\n'.length > 0) {
  const setupConfigPromise = loadSetupConfig();
  hook("before_prompt_build", async () => {
  const { detectAllowlistMissing } = await setupConfigPromise;
@@ -1757,7 +1757,7 @@ A newer Hivemind version is available: ${pendingUpdate.current} \u2192 ${pending
  </hivemind-update-available>
  ` : "";
  return {
- prependSystemContext: allowlistNudge + updateNudge + '\n\n<hivemind-skill>\n---\nname: hivemind\ndescription: Global team and org memory powered by Activeloop. ALWAYS check BOTH built-in memory AND Hivemind memory when recalling information.\nallowed-tools: hivemind_search, hivemind_read, hivemind_index\n---\n\n# Hivemind Memory\n\nYou have TWO memory sources. ALWAYS check BOTH when the user asks you to recall, remember, or look up ANY information:\n\n1. **Your built-in memory** \u2014 personal per-project notes from the host agent\n2. **Hivemind global memory** \u2014 global memory shared across all sessions, users, and agents in the org, accessed via the tools below\n\n## Memory Structure\n\n```\n/index.md \u2190 START HERE \u2014 table of all sessions\n/summaries/\n <username>/\n <session-id>.md \u2190 AI-generated wiki summary per session\n/sessions/\n <username>/\n <user_org_ws_slug>.jsonl \u2190 raw session data\n```\n\n## How to Search\n\n1. **First**: call `hivemind_index()` \u2014 table of all sessions with dates, projects, descriptions\n2. **If you need details**: call `hivemind_read("/summaries/<username>/<session>.md")`\n3. **If you need raw data**: call `hivemind_read("/sessions/<username>/<file>.jsonl")`\n4. **Keyword search**: call `hivemind_search("keyword")` \u2014 substring search across both summaries and sessions, returns `path:line` hits\n\nDo NOT jump straight to reading raw JSONL files. Always start with `hivemind_index` and summaries.\n\n## Organization Management\n\n- `/hivemind_login` \u2014 sign in via device flow\n- `/hivemind_capture` \u2014 toggle capture on/off (off = no data sent)\n- `/hivemind_whoami` \u2014 show current org and workspace\n- `/hivemind_orgs` \u2014 list organizations\n- `/hivemind_switch_org <name-or-id>` \u2014 switch organization\n- `/hivemind_workspaces` \u2014 list workspaces\n- `/hivemind_switch_workspace <id>` \u2014 switch workspace\n- `/hivemind_version` \u2014 show installed version and check npm for updates\n- `/hivemind_update` \u2014 shows how to install (ask the agent, or run `hivemind update` in your terminal)\n- `/hivemind_autoupdate [on|off]` \u2014 toggle the agent-facing update nudge (on by default: when a newer version is available, the agent is prompted to install it via `exec` if you ask to update)\n\n## Skill Management (skilify)\n\nHivemind also mines reusable Claude skills from agent sessions and stores them in a per-org Deeplake table. Openclaw itself doesn\'t run sessions to mine, but you can pull skills others have already mined for the user. These run in the user\'s terminal (the openclaw plugin does not register them as `/hivemind_*` commands):\n\n- `hivemind skilify` \u2014 show scope/team/install + per-project state\n- `hivemind skilify pull` \u2014 sync skills for the current project from the org table\n- `hivemind skilify pull --user <email>` \u2014 only that author\'s skills\n- `hivemind skilify pull --users a,b,c` \u2014 multiple authors (CSV)\n- `hivemind skilify pull --all-users` \u2014 explicit "no author filter"\n- `hivemind skilify pull --to project|global` \u2014 install location (`<cwd>/.claude/skills/` vs `~/.claude/skills/`)\n- `hivemind skilify pull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify pull --force` \u2014 overwrite local (creates `.bak`)\n- `hivemind skilify pull <skill-name>` \u2014 pull only that one skill (combines with `--user`)\n- `hivemind skilify scope <me|team|org>` \u2014 set sharing scope for new skills\n- `hivemind skilify install <project|global>` \u2014 default install location\n- `hivemind skilify team add|remove|list <name>` \u2014 manage team list\n\nIf the user asks to "pull skills from X", "share skills with the team", or similar, suggest the matching `hivemind skilify` command. Run `hivemind skilify --help` for the full reference.\n\n## Limits\n\nDo NOT delegate to subagents when reading Hivemind memory. If a tool call returns empty after 2 attempts, skip it and move on. Report what you found rather than exhaustively retrying.\n\n## Getting Started\n\nAfter installing the plugin:\n1. Run `/hivemind_login` to authenticate\n2. Run `/hivemind_setup` to enable the memory tools in your openclaw allowlist (one-time, per install)\n3. Start using memory \u2014 ask questions, the agent automatically captures and searches\n\n## Sharing memory\n\nMultiple agents share memory when users are in the same Activeloop organization.\n\n</hivemind-skill>\n'
+ prependSystemContext: allowlistNudge + updateNudge + '\n\n<hivemind-skill>\n---\nname: hivemind\ndescription: Global team and org memory powered by Activeloop. ALWAYS check BOTH built-in memory AND Hivemind memory when recalling information.\nallowed-tools: hivemind_search, hivemind_read, hivemind_index\n---\n\n# Hivemind Memory\n\nYou have TWO memory sources. ALWAYS check BOTH when the user asks you to recall, remember, or look up ANY information:\n\n1. **Your built-in memory** \u2014 personal per-project notes from the host agent\n2. **Hivemind global memory** \u2014 global memory shared across all sessions, users, and agents in the org, accessed via the tools below\n\n## Memory Structure\n\n```\n/index.md \u2190 START HERE \u2014 table of all sessions\n/summaries/\n <username>/\n <session-id>.md \u2190 AI-generated wiki summary per session\n/sessions/\n <username>/\n <user_org_ws_slug>.jsonl \u2190 raw session data\n```\n\n## How to Search\n\n1. **First**: call `hivemind_index()` \u2014 table of all sessions with dates, projects, descriptions\n2. **If you need details**: call `hivemind_read("/summaries/<username>/<session>.md")`\n3. **If you need raw data**: call `hivemind_read("/sessions/<username>/<file>.jsonl")`\n4. **Keyword search**: call `hivemind_search("keyword")` \u2014 substring search across both summaries and sessions, returns `path:line` hits\n\nDo NOT jump straight to reading raw JSONL files. Always start with `hivemind_index` and summaries.\n\n## Organization Management\n\n- `/hivemind_login` \u2014 sign in via device flow\n- `/hivemind_capture` \u2014 toggle capture on/off (off = no data sent)\n- `/hivemind_whoami` \u2014 show current org and workspace\n- `/hivemind_orgs` \u2014 list organizations\n- `/hivemind_switch_org <name-or-id>` \u2014 switch organization\n- `/hivemind_workspaces` \u2014 list workspaces\n- `/hivemind_switch_workspace <id>` \u2014 switch workspace\n- `/hivemind_version` \u2014 show installed version and check npm for updates\n- `/hivemind_update` \u2014 shows how to install (ask the agent, or run `hivemind update` in your terminal)\n- `/hivemind_autoupdate [on|off]` \u2014 toggle the agent-facing update nudge (on by default: when a newer version is available, the agent is prompted to install it via `exec` if you ask to update)\n\n## Skill Management (skilify)\n\nHivemind also mines reusable Claude skills from agent sessions and stores them in a per-org Deeplake table. Openclaw itself doesn\'t run sessions to mine, but you can pull skills others have already mined for the user. These run in the user\'s terminal (the openclaw plugin does not register them as `/hivemind_*` commands):\n\n- `hivemind skilify` \u2014 show scope/team/install + per-project state\n- `hivemind skilify pull` \u2014 sync skills for the current project from the org table\n- `hivemind skilify pull --user <email>` \u2014 only that author\'s skills\n- `hivemind skilify pull --users a,b,c` \u2014 multiple authors (CSV)\n- `hivemind skilify pull --all-users` \u2014 explicit "no author filter"\n- `hivemind skilify pull --to project|global` \u2014 install location (`<cwd>/.claude/skills/` vs `~/.claude/skills/`)\n- `hivemind skilify pull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify pull --force` \u2014 overwrite local (creates `.bak`)\n- `hivemind skilify pull <skill-name>` \u2014 pull only that one skill (combines with `--user`)\n- `hivemind skilify unpull` \u2014 remove every skill previously installed by pull\n- `hivemind skilify unpull --user <email>` \u2014 remove only that author\'s pulls\n- `hivemind skilify unpull --not-mine` \u2014 remove all pulls except your own\n- `hivemind skilify unpull --dry-run` \u2014 preview without touching disk\n- `hivemind skilify scope <me|team|org>` \u2014 set sharing scope for new skills\n- `hivemind skilify install <project|global>` \u2014 default install location\n- `hivemind skilify team add|remove|list <name>` \u2014 manage team list\n\nIf the user asks to "pull skills from X", "share skills with the team", or similar, suggest the matching `hivemind skilify` command. Run `hivemind skilify --help` for the full reference.\n\n## Limits\n\nDo NOT delegate to subagents when reading Hivemind memory. If a tool call returns empty after 2 attempts, skip it and move on. Report what you found rather than exhaustively retrying.\n\n## Getting Started\n\nAfter installing the plugin:\n1. Run `/hivemind_login` to authenticate\n2. Run `/hivemind_setup` to enable the memory tools in your openclaw allowlist (one-time, per install)\n3. Start using memory \u2014 ask questions, the agent automatically captures and searches\n\n## Sharing memory\n\nMultiple agents share memory when users are in the same Activeloop organization.\n\n</hivemind-skill>\n'
  };
  });
  }
@@ -52,5 +52,5 @@
  }
  }
  },
- "version": "0.7.14"
+ "version": "0.7.16"
  }
@@ -1,6 +1,6 @@
  {
  "name": "hivemind",
- "version": "0.7.14",
+ "version": "0.7.16",
  "type": "module",
  "description": "Hivemind — cloud-backed persistent shared memory for AI agents, powered by DeepLake",
  "license": "Apache-2.0",
@@ -58,6 +58,10 @@ Hivemind also mines reusable Claude skills from agent sessions and stores them i
  - `hivemind skilify pull --dry-run` — preview without touching disk
  - `hivemind skilify pull --force` — overwrite local (creates `.bak`)
  - `hivemind skilify pull <skill-name>` — pull only that one skill (combines with `--user`)
+ - `hivemind skilify unpull` — remove every skill previously installed by pull
+ - `hivemind skilify unpull --user <email>` — remove only that author's pulls
+ - `hivemind skilify unpull --not-mine` — remove all pulls except your own
+ - `hivemind skilify unpull --dry-run` — preview without touching disk
  - `hivemind skilify scope <me|team|org>` — set sharing scope for new skills
  - `hivemind skilify install <project|global>` — default install location
  - `hivemind skilify team add|remove|list <name>` — manage team list
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@deeplake/hivemind",
- "version": "0.7.14",
+ "version": "0.7.16",
  "description": "Cloud-backed persistent shared memory for AI agents powered by Deeplake",
  "type": "module",
  "repository": {
@@ -666,6 +666,10 @@ SKILLS (skilify) — mine + share reusable skills across the org. Run these in a
  - hivemind skilify pull --dry-run — preview only
  - hivemind skilify pull --force — overwrite local (creates .bak)
  - hivemind skilify pull <skill-name> — pull only that skill (combines with --user)
+ - hivemind skilify unpull — remove every skill previously installed by pull
+ - hivemind skilify unpull --user <email> — remove only that author's pulls
+ - hivemind skilify unpull --not-mine — remove all pulls except your own
+ - hivemind skilify unpull --dry-run — preview without touching disk
  - hivemind skilify scope <me|team|org> — sharing scope for new skills
  - hivemind skilify install <project|global> — default install location
  - hivemind skilify team add|remove|list <name> — manage team list`;
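The headline change in this release is the new `hivemind skilify unpull` command and its author filters. As a minimal TypeScript sketch of the selection logic those flags imply (the `PulledSkill` manifest shape and the `selectSkillsToRemove` helper are illustrative assumptions, not Hivemind's actual internals):

```typescript
// Hypothetical sketch: manifest shape and function name are assumptions
// for illustration, not Hivemind's real implementation.

interface PulledSkill {
  name: string;
  author: string; // email recorded when the skill was pulled
}

interface UnpullOptions {
  user?: string;      // `unpull --user <email>`
  notMine?: boolean;  // `unpull --not-mine`
  selfEmail?: string; // current user's email, needed for --not-mine
}

function selectSkillsToRemove(
  installed: PulledSkill[],
  opts: UnpullOptions,
): PulledSkill[] {
  if (opts.user) {
    // remove only that author's pulls
    return installed.filter((s) => s.author === opts.user);
  }
  if (opts.notMine) {
    // remove all pulls except your own
    return installed.filter((s) => s.author !== opts.selfEmail);
  }
  // plain `unpull`: every skill previously installed by pull
  return installed;
}

const manifest: PulledSkill[] = [
  { name: "lint-fix", author: "a@example.com" },
  { name: "deploy", author: "b@example.com" },
];

console.log(selectSkillsToRemove(manifest, { user: "a@example.com" }).map((s) => s.name));
console.log(selectSkillsToRemove(manifest, { notMine: true, selfEmail: "a@example.com" }).map((s) => s.name));
```

With `--dry-run`, the CLI would only report such a selection rather than deleting anything from disk.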