@kernel.chat/kbot 3.98.0 → 3.99.1

package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@kernel.chat/kbot",
-  "version": "3.98.0",
+  "version": "3.99.1",
   "description": "Open-source terminal AI agent. 787+ tools, 35 agents, 20 providers. Dreams, learns, watches your system. Controls your phone. Fully local, fully sovereign. MIT.",
   "type": "module",
   "repository": {
@@ -103,6 +103,7 @@
   "!dist/**/*.test.d.ts",
   "!dist/**/*.js.map",
   "!dist/**/*.d.ts.map",
+  "skills/**/*.md",
   "README.md",
   "install.sh",
   "ollama-manifest.json"
@@ -0,0 +1,70 @@
+ ---
+ name: daemon-deployment
+ description: Use when setting up 24/7 background workers. kbot's compound improvement depends on daemons running even when nobody is looking.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ platforms: [darwin, linux]
+ metadata:
+   kbot:
+     tags: [daemon, launchd, background, 24-7, compound]
+     related_skills: [autopoiesis-loop, teacher-trace-curation]
+ ---
+
+ # Daemon Deployment
+
+ kbot's intelligence doesn't sleep. Three daemons run continuously:
+ - `kbot-daemon` — code quality, i18n sync, embeddings, docs gaps (every 15 min)
+ - `kbot-discovery-daemon` — self-advocacy, field intelligence (every 15 min → 24 hr cycles)
+ - `kbot-social-daemon` — autonomous posting to X/Bluesky/Mastodon/LinkedIn (daily)
+
+ Plus a weekly `train-self` run that fine-tunes the local model on curated traces.
+
+ ## Iron Law
+
+ ```
+ VERIFY THE DAEMON IS ACTUALLY RUNNING AFTER INSTALL.
+ ```
+
+ launchd plists that fail silently are the single most common cause of "kbot feels stale" — the daemon was never loaded, so no compound improvement happened.
+
+ ## Install Sequence (macOS)
+
+ 1. `npm run daemon` — run once manually to confirm it works at all.
+ 2. `npm run daemon:start` — loads the launchd plist.
+ 3. **Verify**: `launchctl list | grep kernel.kbot` — should show a running entry with a PID.
+ 4. **Confirm output**: `tail -f tools/daemon-reports/daemon.log` — should show activity within 15 min.
+ 5. **Check state**: `npm run daemon:stats` — shows task timestamps, token usage, cost savings.
+
+ If any of steps 3–5 fails, the daemon is not actually running. Debug before moving on.
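The step-3 check can be wrapped so scripts fail loudly instead of silently. A minimal sketch, assuming the plist label `com.kernel.kbot-daemon` (adjust to whatever label your plist declares):

```shell
# Hedged sketch: report whether a launchd job with the given label is loaded.
# The label below is an assumption -- match it to your installed plist.
daemon_loaded() {
  # No launchctl (not macOS): we can't tell, so say so instead of guessing.
  command -v launchctl >/dev/null 2>&1 || { echo unknown; return 0; }
  if launchctl list | grep -q "$1"; then echo yes; else echo no; fi
}

status=$(daemon_loaded com.kernel.kbot-daemon)
echo "daemon loaded: $status"
```

If this prints `no` right after `npm run daemon:start`, stop and debug the plist before trusting any later step.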
+
+ ## Per-Daemon Triggers
+
+ - `com.kernel.kbot-daemon.plist` → every 15 min
+ - `com.kernel.kbot-discovery.plist` → every 15 min (internal sub-cycles stagger)
+ - `com.kernel.kbot-social.plist` → daily at 9am
+ - `com.kernel.kbot-train-self.plist` → Sundays 3am
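The 15-minute cadence maps to launchd's `StartInterval` key (in seconds). A sketch of what such a plist might contain — the label, binary path, and script path here are assumptions, not the shipped file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.kernel.kbot-daemon</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/Users/you/kbot/dist/daemon.js</string>
  </array>
  <key>StartInterval</key>
  <integer>900</integer>
  <key>StandardErrorPath</key>
  <string>/tmp/kbot-daemon.err</string>
</dict>
</plist>
```

Note the absolute paths: launchd does not inherit your shell's PATH, which is exactly the permission/context failure mode listed below.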
+
+ ## What You Gain
+
+ - Daily digest of codebase activity (without asking).
+ - i18n stays in sync across 24 languages automatically.
+ - Embedding index for semantic search rebuilds overnight.
+ - Social presence grows without manual posting.
+ - Local fine-tune stays current with your actual work.
+
+ Cost: zero. All daemon work routes through local Ollama models.
+
+ ## Failure Modes
+
+ - **Mac sleep blocks launchd** — use `caffeinate` or enable "wake for network access" in Energy Saver if you need guaranteed intervals.
+ - **Ollama not running** — daemons depend on `localhost:11434`; if Ollama isn't running, the daemon silently no-ops. Add Ollama to Login Items.
+ - **Filesystem permission errors** — the daemon's user context may differ from your shell. Use absolute paths in the plist, and check that `~/.kbot/` is writable by the daemon user.
+
+ ## Rollback
+
+ `npm run daemon:stop` unloads the plist. Work resumes manually. No state is lost — `tools/daemon-reports/state.json` persists until the daemon is re-enabled.
+
+ ## What Emerges
+
+ After two weeks of active daemons, the user finds: i18n is always current, the daily digest email is actually read, social posts get engagement, and the local model passes a basic task without Claude. The compound output is larger than any single feature could produce.
@@ -0,0 +1,81 @@
+ ---
+ name: ship-pipeline
+ description: Use before any release to kernel.chat or npm publish of kbot. Six gates, each must pass. Skipping a gate is how regressions ship.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [deploy, release, ship, quality, gates]
+     related_skills: [test-driven-development, systematic-debugging]
+ ---
+
+ # Ship Pipeline
+
+ Six gates, in order. Each must be green before the next runs. The `/ship` slash command executes them; you can run them manually when investigating.
+
+ ## Iron Law
+
+ ```
+ NO GATE SKIPS. NO REORDERING. NO "I'LL RUN IT AFTER DEPLOY."
+ ```
+
+ A gate that's inconvenient is a gate that's catching something. Skipping is how regressions ship to 4,800 weekly-download users.
+
+ ## The Gates
+
+ ### 1. Security (`guardian`)
+
+ - Secret scan — no API keys, tokens, or `.env` content in the diff.
+ - Dependency audit — `npm audit` clean or documented exceptions.
+ - OWASP checks on anything touching user input or external calls.
+ - SSRF guards present on new fetch calls.
+
+ ### 2. QA (`qa`)
+
+ - `npx tsc --noEmit` — strict typecheck.
+ - `npm run test` — full test suite green.
+ - Any UI change: dev server + browser verification for the golden path AND one edge case.
+ - No `console.log`, no commented-out code in the diff.
+
+ ### 3. Design (`aesthete` / `designer`)
+
+ - Design tokens used, not raw values. `--rubin-primary`, not `#6B5B95`.
+ - Spacing uses the scale (`--space-*`), not magic numbers.
+ - A11y: keyboard nav works, contrast meets AA, screen reader labels present.
+ - Motion: reduced-motion media query respected.
+
+ ### 4. Performance (`performance`)
+
+ - Bundle budget: main JS < 300KB gzip, CSS < 150KB gzip.
+ - No new dependencies > 50KB gzip without discussion.
+ - Lazy-load still works for route-level components.
+ - Service worker cache invalidation tested.
+
+ ### 5. DevOps (`devops`)
+
+ - Edge functions deploy cleanly to Supabase.
+ - Migrations are reversible.
+ - GitHub Pages build succeeds locally.
+ - Version bumped correctly in package.json.
+
+ ### 6. Product (`product`)
+
+ - Feature actually does what the issue says it does.
+ - Mobile-first: tested at 375px width.
+ - Empty state, loading state, error state all present.
+ - The change is something a user would *notice* — not invisible plumbing shipped as a feature.
+
+ ## Running It
+
+ - `/ship` in Claude Code runs all six in sequence.
+ - Manually: `/security-audit` → `/qa` → `/design-check` → `/perf` → `/devops` → `/team` (product review).
+ - For kbot itself: `cd packages/kbot && npm run typecheck && npm run test && npm run build && npm publish --dry-run`.
+
+ ## Anti-Pattern
+
+ Running the gates in parallel "to save time." They're sequential because later gates depend on earlier ones passing (no point running design checks on a build that won't compile). The serial wait is the feature.
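The serial, fail-fast discipline is easy to encode. A toy sketch — the gate commands here are stand-ins, not the real slash commands:

```shell
# Run gates strictly in order; the first failure stops the whole pipeline.
run_gates() {
  for gate in "$@"; do
    echo "gate: $gate"
    sh -c "$gate" || { echo "FAILED: $gate -- pipeline stopped" >&2; return 1; }
  done
  echo "all gates green"
}

# Stand-in gates; swap in the real commands (typecheck, tests, build, dry-run publish).
run_gates "true" "true" "true"
```

Because `run_gates` returns nonzero at the first red gate, nothing downstream (publish, deploy) can run on a broken build — the serial wait is enforced mechanically.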
+
+ ## Rollback Plan
+
+ Every deploy must have one. For kernel.chat: previous gh-pages tag. For kbot: `npm deprecate @kernel.chat/kbot@<version>` + publish a patched version. Know the rollback before shipping, not after breaking.
@@ -0,0 +1,53 @@
+ ---
+ name: forge-reflex
+ description: Use the moment you think "I don't have a tool for that." Build the tool immediately with forge_tool — never report a missing capability.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [forge, runtime-tools, limitless-execution, emergence]
+     related_skills: [specialist-routing, autopoiesis-loop]
+ ---
+
+ # Forge Reflex
+
+ Missing tools aren't blockers. Missing tools are ~90 seconds away from existing.
+
+ ## Iron Law
+
+ ```
+ "I DON'T HAVE A TOOL FOR X" IS NEVER A TERMINAL STATE.
+ ```
+
+ The correct next step is always: build it, register it, use it, move on.
+
+ ## Trigger
+
+ Any internal thought matching one of:
+ - "There's no tool for this"
+ - "I would need a script"
+ - "If only kbot had…"
+ - "This would take manual work"
+ - A user requesting something outside the registered 670+ tools
+
+ ## Procedure
+
+ 1. **Name the tool.** One verb, one noun. `fetch_rss`, `parse_ics`, `diff_binary`.
+ 2. **Write the JSON Schema for its inputs.** Zod-compatible.
+ 3. **Call `forge_tool`** with name, description, schema, and an implementation string (TS or bash).
+ 4. **Test once** with a known input. If it fails, patch the implementation inline.
+ 5. **Use it** to complete the original task.
+ 6. **Persist it** by writing the forge spec into `~/.kbot/plugins/<name>/` if it'll be useful again.
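Steps 1–3 combined might look like the following. The exact spec shape is an assumption built from the fields named above (name, description, schema, implementation), not kbot's documented format:

```json
{
  "name": "fetch_rss",
  "description": "Fetch an RSS feed URL and return its raw XML",
  "schema": {
    "type": "object",
    "properties": { "url": { "type": "string", "format": "uri" } },
    "required": ["url"]
  },
  "implementation": "async ({ url }) => await (await fetch(url)).text()"
}
```

Step 4 then calls the forged tool once with a feed you know resolves, before trusting it in the main task.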
+
+ ## What Emerges
+
+ Users who work with kbot for a month have 20–100 personal tools that don't exist in the core. Forge reflex is what grows kbot's surface area per user — the registered 670 is the floor, not the ceiling.
+
+ ## Anti-Pattern
+
+ Suggesting the user write a script themselves. Telling the user "kbot doesn't support X." Both are correctness failures — the Limitless Execution doctrine treats them as bugs.
+
+ ## Integration with Skill Self-Authorship
+
+ After forging a tool you used more than twice in a session, write a skill (`skill-self-authorship`) that documents when to reach for it. Forged tool + paired skill = permanent capability upgrade.
@@ -0,0 +1,56 @@
+ ---
+ name: mimic-hybrid
+ description: Use when a task blends domains. Instead of picking one mimic profile, blend two — kbot writes cleaner Next.js code when it mimics react+rust than when it mimics react alone.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [mimic, style, emergent, personality]
+     related_skills: [specialist-routing]
+ ---
+
+ # Mimic Hybrid
+
+ Mimic profiles aren't costumes. They're weighted style biases that compose. The emergent finding: two profiles combined often outperform any single profile for cross-domain work.
+
+ ## When to Hybridize
+
+ - Next.js + performance-critical → `nextjs` + `rust`
+ - React + strict typing → `react` + `typescript`
+ - Python + ergonomic CLI → `python` + `claude-code`
+ - Infrastructure + terse shell → `devops` + `python`
+ - Blog/marketing site → `nextjs` + `copywriter`
+
+ ## Procedure
+
+ 1. `kbot mimic <primary>` — sets the primary tone.
+ 2. Add `--style <secondary>` to the invocation for a one-shot blend.
+ 3. Or edit `~/.kbot/mimic.json` with `{ primary, secondary, weight: 0.3 }` for a persistent hybrid (0.0 = pure primary, 1.0 = pure secondary).
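A persistent hybrid file, following the shape quoted in step 3 — the profile names are examples, and whether the file takes exactly these keys is an assumption:

```json
{
  "primary": "nextjs",
  "secondary": "rust",
  "weight": 0.3
}
```

With the weight semantics stated above, `0.3` keeps output recognizably `nextjs` with rust's strictness bleeding in; pushing the weight toward `1.0` shifts the blend toward pure `rust`.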
+
+ ## What Blends Well
+
+ - **Syntax discipline** (rust, typescript, haskell) crossed with **ergonomic framework** (react, nextjs, nuxt): cleaner type signatures, better null handling, fewer runtime surprises.
+ - **Terse shell** (python, bash) crossed with **long-form** (claude-code, copywriter): commands that are both concise AND explained.
+ - **Security mindset** (guardian specialist as style bias) crossed with anything: defaults to input validation and safer primitives.
+
+ ## What Blends Badly
+
+ - Two opinionated frameworks (`nextjs` + `remix`). They argue, and kbot hallucinates a fusion that exists in neither.
+ - A mimic profile contradicting the specialist agent (style=copywriter, agent=guardian). The guardian's security hardness clashes with the copywriter's persuasion tone. Pick one source of style per session.
+
+ ## Iron Law
+
+ ```
+ NEVER BLEND MORE THAN TWO PROFILES AT ONCE.
+ ```
+
+ Three-way blends degrade into a vague "AI voice." Two-way blends feel intentional.
+
+ ## What Emerges
+
+ After a few weeks of experiments, users find their personal favorite hybrid — and it's almost never "just claude-code." The hybrid becomes part of the user's `SCRATCHPAD.md` and applies by default to every session.
+
+ ## Anti-Pattern
+
+ Switching mimic mid-session without a commit in between. The output becomes stylistically incoherent and code-review unfriendly. Mimic boundaries should align with commit boundaries.
@@ -0,0 +1,52 @@
+ ---
+ name: dream-to-commit
+ description: Use in the first session of the day. The dream engine ran overnight and left a digest — open it first, it tells you what past-you already figured out.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [dream, memory, morning-routine, consolidation]
+     related_skills: [autopoiesis-loop, memory-cascade]
+ ---
+
+ # Dream to Commit
+
+ Between sessions, the dream engine consolidates transcripts into reflections. In the morning, those reflections contain answers to questions you were about to ask — and sometimes fixes to bugs you haven't logged yet.
+
+ ## Iron Law
+
+ ```
+ DO NOT START MORNING WORK WITHOUT READING LAST NIGHT'S DREAMS.
+ ```
+
+ ## The Morning Protocol
+
+ 1. `kbot dream status` — confirms the engine ran.
+ 2. `kbot dream journal --since yesterday` — reads new reflection entries. They cluster by theme.
+ 3. For each reflection tagged `actionable`, either:
+    - open a commit that captures the fix or improvement kbot spotted, or
+    - write a memory note explaining why *not* to act on it.
+ 4. Only after the journal is reviewed, start new work.
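A convenience wrapper for steps 1–2 can enforce the Iron Law mechanically. This is a sketch: the function name is ours, not a shipped command, and it assumes the `kbot` CLI is on PATH:

```shell
# Gate the day's work on the dream engine having actually run (hypothetical
# wrapper; drop it in your shell profile and run it before anything else).
kbot_morning() {
  kbot dream status || { echo "dream engine did not run -- investigate before new work" >&2; return 1; }
  kbot dream journal --since yesterday
}
```

If `kbot dream status` fails, the wrapper refuses to show the journal, which keeps step 4's ordering honest.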
+
+ ## Why This Works
+
+ The dream engine cross-references yesterday's tool errors, uncommitted diffs, SCRATCHPAD drift, learned-router misses, and daemon reports. It has more signal than a human going through the same inputs because nothing gets forgotten between windows.
+
+ ## What You'll See Over Time
+
+ - **Week 1**: generic reflections ("this function was modified often").
+ - **Week 3**: specific forecasts ("this pattern fails on Linux — consider a platform guard").
+ - **Week 8**: cross-project insights ("you re-implemented X in three repos — extract it into a shared package").
+
+ The quality compounds because reflections feed the reflection engine itself (three-tier memory: events → reflections → meta-reflections).
+
+ ## Anti-Pattern
+
+ Ignoring the journal because "it's just the AI talking to itself." The journal is the only durable artifact of every session's learning. Ignoring it makes the dream engine pure cost with no dividend.
+
+ ## Integration
+
+ - `synthesis_engine.getSynthesisContext(8)` reads the top 8 reflections into every new agent prompt automatically.
+ - The `corrections` loader pulls any reflection tagged `correction` into the active system prompt as a closed-loop signal.
+ - The skills loader scores skills partly on overlap with recent reflection themes — yesterday's dreams bias today's relevance.
@@ -0,0 +1,59 @@
+ ---
+ name: memory-cascade
+ description: Use when memory feels slow, bloated, or is contradicting itself. The 5-tier cascade has rules about what gets promoted, demoted, or forgotten — follow them or memory decays into noise.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [memory, cascade, reflection, consolidation]
+     related_skills: [dream-to-commit, autopoiesis-loop]
+ ---
+
+ # Memory Cascade
+
+ kbot's memory is not one store — it's five tiers that flow into each other like a karst system. Working memory fills, overflows into short-term, which consolidates into long-term, which distills into reflections, which collapse into meta-reflections. Each tier has different rules.
+
+ ## The Five Tiers
+
+ | Tier | Location | Retention | Promotion Rule |
+ |---|---|---|---|
+ | 1. Working | in-process `Map<sessionId, history>` | lifetime of the session | auto-flushes to short-term on session close |
+ | 2. Short-term | `~/.kbot/memory/short-term/*.jsonl` | 7 days rolling | entries tagged `important` promote to long-term |
+ | 3. Long-term | `~/.kbot/memory/long-term/*.md` | indefinite | distilled into reflections nightly |
+ | 4. Reflections | `~/.kbot/memory/reflections/*.md` | indefinite | aggregated into meta-reflections weekly |
+ | 5. Meta-reflections | `~/.kbot/memory/meta/*.md` | indefinite | immutable signal into every prompt |
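Tier 2's "7 days rolling" retention is plain file-age demotion. A sketch against a throwaway directory — the real store would be `~/.kbot/memory/short-term`, and whether kbot literally uses `find` like this is an assumption:

```shell
# Simulate the short-term tier in a temp dir, then demote (delete) anything
# older than 7 days by mtime -- what "7 days rolling" means in the table.
store=$(mktemp -d)
touch "$store/recent.jsonl"
touch -t 202001010000 "$store/stale.jsonl"   # backdated far past the window
find "$store" -name '*.jsonl' -mtime +7 -delete
ls "$store"   # only recent.jsonl survives
```

Entries tagged `important` would be promoted before this sweep runs; everything else simply ages out.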
+
+ ## Iron Law
+
+ ```
+ PROMOTION IS EARNED. DEMOTION IS AUTOMATIC. DELETION IS NEVER MANUAL.
+ ```
+
+ Manually deleting memory entries breaks the cascade invariants — the reflection engine still sees dangling references. If a memory is wrong, *correct it* with a new entry pointing at the old one; don't delete.
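A corrective entry might look like the following `.jsonl` line. Every field name here is a hypothetical illustration, not kbot's actual schema:

```json
{
  "type": "correction",
  "supersedes": "2025-06-02T09:14:00Z",
  "text": "The staging database is Postgres 16, not 15 as recorded earlier."
}
```

The point is the pointer: the old entry stays in place, and anything reading the cascade can see which record wins.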
+
+ ## When to Force a Promotion
+
+ - The user says "remember this" — explicit flag, auto-promotes to long-term.
+ - A correction was issued — auto-promotes with the tag `correction` (loaded into every future prompt).
+ - A project fact you'll still need in 30+ days — promote it to long-term manually via `memory_save`.
+
+ ## When to Let It Demote Naturally
+
+ - Session chatter, one-off context, successful completions — leave them in working/short-term and let them fall out.
+ - Debug state, test run output — these should not be in memory at all; filter at write time.
+
+ ## Reading the Cascade
+
+ `getSynthesisContext(n)` loads the top-n meta-reflections into every prompt automatically. You don't need to fetch lower tiers in normal work — the cascade surfaces what's relevant. Dig into lower tiers only when:
+ - You're investigating a contradiction in advice.
+ - You're debugging why kbot "forgot" something.
+ - You're curating teacher traces (where low-tier detail matters).
+
+ ## Anti-Pattern
+
+ Treating memory as a notes app. If you find yourself writing `memory_save` for every session summary, you're duplicating what the dream engine does automatically. Reserve explicit saves for facts kbot *would not learn on its own* (user preferences, project constraints, non-obvious decisions).
+
+ ## What Emerges
+
+ After ~30 days, meta-reflections stabilize into a compressed profile of how the user works. That profile loads into every session's prompt as ambient context. The subjective experience: kbot starts "already knowing you" by session 20 or so.
@@ -0,0 +1,61 @@
+ ---
+ name: ableton-session-build
+ description: Use when the user wants to build a track, beat, or arrangement in Ableton Live. kbot drives Ableton via OSC — you don't type notes, you describe the idea.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ platforms: [darwin]
+ metadata:
+   kbot:
+     tags: [ableton, music, osc, m4l, production]
+     related_skills: [serum2-preset-craft, dj-set-builder]
+ ---
+
+ # Ableton Session Build
+
+ kbot has 14 Ableton tools, 9 M4L devices, and a full OSC bridge. You describe a beat; kbot lays it down in a live session.
+
+ ## Iron Law
+
+ ```
+ VERIFY THE OSC BRIDGE FIRST. EVERY TIME.
+ ```
+
+ Without a live bridge, every tool call silently fails. Check before you plan.
+
+ ## Preflight (2 commands)
+
+ 1. `ableton_session_info` — if this returns tempo + tracks, you're connected.
+ 2. If it times out: remind the user to start Live and enable the AbletonOSC remote script (`Link/Tempo/MIDI → Control Surface → AbletonOSC`).
+
+ ## Production Flow
+
+ 1. **Set the frame**: `ableton_transport` to set tempo, key hint via scene naming, time signature.
+ 2. **Create tracks**: one `ableton_create_track` per role (drums, bass, pad, lead, FX).
+ 3. **Load sounds**: `ableton_load_sample` for one-shots; `ableton_load_plugin` for Serum/synths; `ableton_load_preset` for factory patches.
+ 4. **Write patterns**: `ableton_midi` with note arrays. Use `generate_drum_pattern` / `generate_melody_pattern` as starting points.
+ 5. **Arrange**: `ableton_scene` to build verse/chorus/drop scenes; `ableton_clip` to fire them.
+ 6. **Mix**: `ableton_mixer` for levels/pan; `ableton_effect_chain` for returns; `ableton_device` for insert FX.
+ 7. **Capture**: render via transport record, or have the user bounce the arrangement.
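A note array for step 4 might look like the following. The exact payload shape for `ableton_midi` is an assumption; pitches 36 and 38 are the General MIDI kick and snare, and start times are in beats, shown as a 1-bar sketch in 4/4:

```json
{
  "track": 0,
  "clip": 0,
  "length_bars": 1,
  "notes": [
    { "pitch": 36, "start": 0.0, "duration": 0.25, "velocity": 100 },
    { "pitch": 38, "start": 1.0, "duration": 0.25, "velocity": 96 },
    { "pitch": 36, "start": 2.0, "duration": 0.25, "velocity": 100 },
    { "pitch": 38, "start": 3.0, "duration": 0.25, "velocity": 96 }
  ]
}
```

Keeping the pattern to one bar matches the anti-pattern guidance below: work in short loops and iterate rather than generating 32 bars in one call.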
+
+ ## Specialist Escalations
+
+ - Preset design needed? Use `serum2-preset-craft`.
+ - Full DJ set needed? Use `dj-set-builder`.
+ - Sound too generic? Route to the `aesthete` specialist with the current session_info for creative direction.
+
+ ## Anti-Patterns
+
+ - Writing MIDI before verifying the bridge responds. Silent failures waste the whole session.
+ - Loading plugins without checking the user has them installed. Use `ableton_browse` first.
+ - Generating 32-bar patterns in one call. Work in 4- and 8-bar loops; iterate.
+
+ ## Known Fragility
+
+ - AbletonOSC's `set/notes` endpoint has quirks — if clip writes fail, fall back to setting notes via `ableton_clip.write_notes` with explicit velocity + duration.
+ - Sample loading can 404 if the browser index is stale. `ableton_browse --refresh` fixes it.
+ - Firing a clip during a running session can drop the first beat. Fire on a scene boundary, not mid-bar.
+
+ ## What Emerges
+
+ The user stops thinking in Live's UI and starts describing ideas. "Make it darker" becomes a legitimate prompt because kbot knows darker = minor key + sub-bass boost + reverb tail + plate on the snare. This is the skill paying off over sessions.
@@ -0,0 +1,58 @@
+ ---
+ name: cross-agent-blackboard
+ description: Use when more than one agent works on the same problem. The blackboard is the shared context — without it, agents duplicate work and contradict each other.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [agents, coordination, blackboard, multi-agent]
+     related_skills: [specialist-routing, agent-handoff]
+ ---
+
+ # Cross-Agent Blackboard
+
+ Single-agent sessions use memory. Multi-agent sessions need a blackboard — a shared write-read surface every participating agent sees.
+
+ ## When
+
+ - `kbot --architect` (plan + implement by two agents)
+ - `/team` running all 6 specialists against the same change
+ - `agent_handoff` passing control with preserved context
+ - Matrix agents collaborating on a research question
+
+ ## Iron Law
+
+ ```
+ ANY AGENT TAKING OVER MUST READ THE BLACKBOARD BEFORE ACTING.
+ ANY AGENT LEAVING MUST WRITE TO THE BLACKBOARD BEFORE EXITING.
+ ```
+
+ ## Protocol
+
+ Blackboard entries have four fields: `type` (decision/finding/blocker/artifact), `key` (short slug), `value` (the payload), `author` (agent id).
+
+ - `blackboard_write({ type, key, value })` — before handing off or pausing.
+ - `blackboard_read({ keyPrefix?, type? })` — on entry or after a long subagent call.
+ - `blackboard_query()` — full dump when context is lost.
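A complete entry with all four fields might look like this — the JSON shape is inferred from the field list above, not a documented wire format:

```json
{
  "type": "finding",
  "key": "axios.v1-vs-v2",
  "value": "v2 breaks the retry middleware",
  "author": "researcher"
}
```

Short slug keys like `axios.v1-vs-v2` are what make `keyPrefix` reads useful later in the session.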
+
+ ## Example Flow (from a real session)
+
+ 1. `researcher` investigates a library version incompatibility.
+ 2. Writes `{ type: 'finding', key: 'axios.v1-vs-v2', value: 'v2 breaks the retry middleware' }`.
+ 3. Hands off to `coder` via `agent_handoff`.
+ 4. `coder` reads the blackboard, sees the finding, and skips re-researching.
+ 5. Writes `{ type: 'artifact', key: 'axios-pin', value: 'pinned to 1.6.7 in package.json' }`.
+ 6. `guardian` runs later, reads both entries, confirms no regression, and writes `{ type: 'decision', key: 'axios-pin', value: 'approved' }`.
+
+ Total time from research to verified fix: 8 minutes. Without the blackboard, each agent would re-derive context — easily 25+ minutes.
+
+ ## Anti-Patterns
+
+ - Passing context through the user ("please tell the next agent X"). The user is not a message bus.
+ - Writing vague entries ("looked into the bug"). Name the finding concretely or don't write it.
+ - Reading only your own writes. Other agents' entries are the whole point.
+
+ ## What Emerges
+
+ With the blackboard habit, specialist chains start behaving like one compound agent. The user types one prompt, three specialists take turns, and the handoffs are invisible — because context never dropped.
@@ -0,0 +1,57 @@
+ ---
+ name: specialist-routing
+ description: Use when a task clearly belongs to a specialist agent. Route first, reason second — don't let the general agent muddle through a domain it has a specialist for.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [agents, routing, specialists, delegation]
+     related_skills: [matrix-agent-spawn, cross-agent-blackboard, agent-handoff]
+ ---
+
+ # Specialist Routing
+
+ kbot ships with 25+ specialists (run `kbot agents list` for the current roster). The default `kernel` agent is a generalist — competent everywhere, excellent nowhere. The feeling of "kbot is sharp" comes from routing to the right specialist before work starts.
+
+ ## Iron Law
+
+ ```
+ IF A SPECIALIST EXISTS FOR THIS DOMAIN, THE GENERALIST DOES NOT TOUCH IT.
+ ```
+
+ ## The Roster
+
+ | Signal | Specialist | Why |
+ |---|---|---|
+ | "review my code", "refactor", "fix this test" | `coder` | strongest code-patching track record |
+ | "research X", "find me", "what does the literature say" | `researcher` | web + arxiv + citation graph |
+ | "design this", "make it look right", a11y | `aesthete` | design tokens + spacing + typography |
+ | "is this safe", secrets, auth, permissions | `guardian` | OWASP checks, dep audit, redact |
+ | CI, deploys, env vars, launchd, docker | `infrastructure` | full infra toolkit |
+ | a statistical question, backtesting, distributions | `quant` | stats + finance + probability |
+ | a 30+ minute deep dive, multi-source | `investigator` | multi-step research workflow |
+ | "write" anything long-form | `writer` | content creation + editing |
+ | strategy, tradeoffs, business framing | `strategist` | structured decision support |
+ | how it's going, "predict X" | `oracle` | forecasting + anticipation |
+
+ ## Trigger
+
+ The moment the user's first message can be classified into the table above. The learned router handles this automatically when confidence ≥ 0.7; below that, route explicitly: `kbot --agent <id> "..."`.
+
+ ## Procedure
+
+ 1. **Classify the task** against the table. If two rows match, pick the one closer to the *verb* (review → `coder`; design review → `aesthete`).
+ 2. **If none match**, stay on `kernel` and invoke the skill that fits instead.
+ 3. **When in doubt**, use `--architect` (plan with one specialist, implement with another) or `--plan` (read-only scoping first).
+ 4. **If the specialist gets stuck**, use `agent_handoff` to pass to another specialist with context — don't fall back to the generalist silently.
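Step 1's verb-first rule can be illustrated with a toy classifier. The real learned router is statistical; this mapping is only a sketch of a few rows from the table above:

```shell
# Toy verb-first router: the verb, not the topic, picks the specialist.
route() {
  case "$1" in
    review|refactor|fix)  echo coder ;;
    research|find)        echo researcher ;;
    design)               echo aesthete ;;
    audit|secure)         echo guardian ;;
    *)                    echo kernel ;;   # no match: stay on the generalist
  esac
}

route review    # -> coder
route compose   # -> kernel (no specialist claims this verb)
```

Note the fall-through case: an unmatched verb stays on `kernel`, mirroring step 2 rather than forcing a bad specialist fit.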
+
+ ## Anti-Pattern
+
+ - Running everything on the default agent "for consistency." You lose every specialist advantage.
+ - Choosing a specialist by *topic* instead of *verb*. "Music" isn't a specialist; `aesthete` handles creative direction and `coder` handles the OSC scripting.
+ - Routing between specialists mid-task without writing to the blackboard — the next specialist loses context.
+
+ ## What Emerges
+
+ Users develop muscle memory for `kbot --agent <id>`. Over weeks, the learned router picks up which specialist wins which task *for this user* (not on average) and routes preemptively. The experience becomes: "I type the prompt, the right expert answers." That's the routing skill paying compound interest.
@@ -0,0 +1,47 @@
+ ---
+ name: autopoiesis-loop
+ description: Use when planning multi-session work. The autopoiesis loop is kbot using itself to improve itself — every session should end a little sharper than it started.
+ version: 1.0.0
+ author: kbot
+ license: MIT
+ metadata:
+   kbot:
+     tags: [self-improvement, meta, dogfood, autopoiesis]
+     related_skills: [skill-self-authorship, teacher-trace-curation, dream-to-commit]
+ ---
+
+ # The Autopoiesis Loop
+
+ kbot is the tool *and* the workbench. Every session has two outputs: the thing the user asked for, and an incremental improvement to kbot itself. Sessions that only produce the first are leaving compound interest on the table.
+
+ ## The Five Moves (once per session)
+
+ 1. **Session start** — run `kbot bootstrap`. The bootstrap agent surfaces the highest-leverage improvement based on accumulated signals. Do this before feature work, not instead of it.
+ 2. **During work** — notice repeated patterns. Each repetition is a skill waiting to be written (`skill-self-authorship`).
+ 3. **On friction** — missing tool? `forge-reflex`. Wrong specialist? Update the learned router via corrective feedback.
+ 4. **Session end** — update `SCRATCHPAD.md` with what you learned (not what you did). The next session's opening context reads this file.
+ 5. **Overnight** — the dream engine consolidates transcripts into memory entries, and the daemon reviews diffs, runs code quality scans, and translates i18n. Work continues while the user sleeps.
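Move 4 can be as small as one appended block. A sketch — the note text is an example, and the file lands in the current directory:

```shell
# Append a dated "what I learned" entry to SCRATCHPAD.md; the next session's
# opening context reads this file.
printf '\n## %s\n- launchd needs absolute paths; relative ones fail silently.\n' \
  "$(date +%F)" >> SCRATCHPAD.md
tail -n 2 SCRATCHPAD.md
```

The discipline is recording the lesson, not the activity log — the dream engine already has the transcript.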
+
+ ## Iron Law
+
+ ```
+ NEVER END A SESSION WORSE THAN IT STARTED.
+ ```
+
+ If kbot hit a wall and you didn't leave a corrective signal behind (a skill, a memory, a scratchpad note, a corrected learned-router pattern), the loop is broken.
+
+ ## The Three Signals That Compound
+
+ - **Corrections** — the user says "no, do X instead." These go into `~/.kbot/corrections/` and load as closed-loop prompts.
+ - **Teacher traces** — every non-local Claude call is logged to `~/.kbot/teacher/traces.jsonl`. Weekly, `kbot train-self` fine-tunes local models on the best ones.
+ - **Skills** — successful patterns distilled into `~/.kbot/skills/`, loaded on relevance.
+
+ Each of these runs automatically once wired up. The skill is knowing to wire them up in the first place.
+
+ ## What Emerges
+
+ Three weeks of active use and kbot's answers start feeling tuned to *this user* specifically. Six weeks in, the local model (via `train-self`) is answering basic questions at zero cost. Three months in, kbot's corrections archive holds more collective wisdom than the user's own notes.
+
+ ## Anti-Pattern
+
+ Running kbot as a pure consumer — asking questions, using answers, never looking at what's in `~/.kbot/`. You're paying for the loop with every API call but not collecting the dividend.