@kernel.chat/kbot 3.98.0 → 3.99.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,70 @@
---
name: skill-self-authorship
description: Use at the end of any session where you solved something non-obvious or repeated a pattern 3+ times. Write a new skill so the next session doesn't re-derive it.
version: 1.0.0
author: kbot
license: MIT
metadata:
  kbot:
    tags: [skills, meta, autopoiesis, learning]
    related_skills: [forge-reflex, autopoiesis-loop, teacher-trace-curation]
---

# Skill Self-Authorship

The skills that ship with kbot cover known territory. The skills that make *your* kbot feel psychic are the ones it writes for itself during use.

## When to Write a New Skill

At least one of these must be true:
- You solved something that took 5+ tool calls and got it right.
- You repeated the same sequence of tool calls 3+ times this week.
- You corrected kbot twice on the same kind of mistake.
- A user gave explicit feedback "remember this" / "always do X" / "never do Y."
- A daemon surfaced a recurring failure kbot had to work around.

## Skill Anatomy

Use `~/.kbot/skills/<category>/<kebab-name>/SKILL.md`. Frontmatter:

```yaml
---
name: kebab-name
description: One sentence — "when to use."
version: 1.0.0
author: kbot-self
license: MIT
metadata:
  kbot:
    tags: [2-5 lowercase tags]
    related_skills: [other skills this connects to]
---
```

Body structure:
1. **Iron Law** — the one rule that, if broken, invalidates the skill.
2. **Trigger** — the exact situation that should make kbot reach for this.
3. **Procedure** — 3–6 numbered steps. Not prose. Commands and decisions.
4. **Anti-patterns** — the failure modes to refuse.
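
Put together, a skill following this anatomy can be very short. The example below is hypothetical; the name, trigger, and steps are invented for illustration:

```markdown
---
name: vitest-enoent-fix
description: Use when any vitest test fails with ENOENT on a fixture path.
version: 1.0.0
author: kbot-self
license: MIT
metadata:
  kbot:
    tags: [vitest, fixtures, enoent]
    related_skills: [test-driven-development]
---

# Vitest ENOENT Fix

## Iron Law
NEVER hardcode absolute fixture paths. Resolve them from `import.meta.url`.

## Trigger
A vitest run fails with `ENOENT` on a path under `fixtures/`.

## Procedure
1. Confirm the failing path with `npx vitest run <file>`.
2. Replace the relative path with one resolved from `import.meta.url`.
3. Re-run the affected test file.

## Anti-patterns
- Copying fixtures into `dist/` to make the path resolve.
```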

## The "When to Use" Line Is Everything

The relevance scorer reads `description` and `tags` first. If your description is "general debugging tips" the skill will never load. If it's "use when any vitest test fails with ENOENT" it loads exactly when needed. Write for the trigger, not the topic.

## Self-Patching

Use the `skill_manage` tool after a skill executes:
- `skill_manage({ action: "get", query: "<skill-id>" })` — check current success rate.
- If the skill failed, `skill_manage({ action: "patch", query: "<skill-id>", issue: "<what went wrong>" })` appends the issue.
- If a better step sequence was discovered, `skill_manage({ action: "patch", query: "<skill-id>", steps: "[...]" })` replaces the steps and bumps the version.
- `skill_manage({ action: "report" })` shows the success-rate distribution across all skills.

A skill with a 20% success rate after 10 executions is a bug report — rewrite or delete (`action: "delete"`).
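
The rewrite-or-delete threshold can be stated as a small triage rule. The sketch below is illustrative only; `triageSkill` and its thresholds are assumptions, not part of kbot's API:

```typescript
// Illustrative triage rule for self-authored skills (hypothetical helper,
// not a kbot API). Thresholds are assumptions, not documented values.
type Triage = "keep" | "patch" | "delete";

function triageSkill(successRate: number, executions: number): Triage {
  if (executions < 10) return "keep";      // not enough signal yet
  if (successRate <= 0.2) return "delete"; // a bug report: rewrite from scratch or remove
  if (successRate < 0.7) return "patch";   // salvageable: fix the steps
  return "keep";                           // working as intended
}

console.log(triageSkill(0.15, 12)); // → "delete"
```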

## What Emerges

After 30 days of active use, `~/.kbot/skills/` reflects the user's actual work more precisely than any config file. The skills library is the durable shadow of every problem kbot helped solve.

## Anti-Pattern

Writing a skill for every clever thing you did. Skills have a maintenance cost — each one competes for token budget and relevance-scoring attention. Write one skill per *repeated* problem, not per *interesting* problem.
@@ -0,0 +1,54 @@
---
name: teacher-trace-curation
description: Use weekly. The teacher log captures every non-local Claude call — curate the best ones into training data so the local model learns from your actual work.
version: 1.0.0
author: kbot
license: MIT
metadata:
  kbot:
    tags: [training, fine-tuning, mlx, local-model, dataset]
    related_skills: [autopoiesis-loop, skill-self-authorship]
---

# Teacher Trace Curation

Every time kbot calls Claude, the prompt + response is written to `~/.kbot/teacher/traces.jsonl`. Left alone, this is just a log. Curated, it's the dataset that teaches the local model to answer *your* questions without touching the API.
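
The trace format itself is not documented in this skill. Purely as an illustration, a line in `traces.jsonl` might carry fields like these (every field name here is an assumption):

```json
{"ts": "2025-06-02T14:31:07Z", "prompt": "How do I deploy the worker?", "response": "Use the staging pipeline first...", "session": "a1b2c3", "corrected": false}
```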

## Iron Law

```
ONLY SUCCESSFUL, CORRECTED, AND USER-APPROVED TRACES ENTER THE DATASET.
```

Failed traces teach the model to fail. Garbage in is not "more data."

## The Weekly Ritual

1. `kbot train-self --mode default --max-examples 500 --iters 200 --num-layers 8` — curates + fine-tunes in one pass. The curator runs first, scores traces, and writes `~/.kbot/teacher/dataset-default.jsonl`.
2. Review the top 50 entries in the dataset file. Skim titles + first 200 chars.
3. Remove anything you wouldn't want the local model to imitate:
   - Responses you corrected mid-session.
   - Hallucinated library names or APIs.
   - Advice you later decided was wrong.
4. Re-run step 1 with the cleaned dataset if you made significant deletions.
5. Test: `ollama run kernel-self:<timestamp>` on a task from the last week. Compare against the Claude baseline.

For longer cycles of evaluation + retraining, use `kbot train-cycle`, which chains curate → train → evaluate → merge across multiple iterations.

## The Quality Signal That Matters Most

Was this answer *used* without correction? The curator scores partly on: no correction in the next 5 turns, no follow-up question asking for clarification, no user rephrasing. Approved-by-silence is the strongest endorsement.
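
That heuristic can be sketched as a scoring function. Everything below is illustrative: `scoreTrace`, the turn shape, and the weights are assumptions, not kbot's actual curator logic:

```typescript
// Illustrative "approved-by-silence" scorer (hypothetical, not kbot's
// actual curator). A trace loses points for each noise signal in the
// five turns that follow it; an untouched trace keeps the full score.
interface Turn {
  isCorrection: boolean;    // user corrected the answer
  isClarification: boolean; // user asked a follow-up for clarification
  isRephrasing: boolean;    // user re-asked the same question differently
}

function scoreTrace(nextTurns: Turn[]): number {
  const window = nextTurns.slice(0, 5); // only the next 5 turns matter
  let score = 1.0;
  for (const t of window) {
    if (t.isCorrection) score -= 0.5;
    if (t.isClarification) score -= 0.25;
    if (t.isRephrasing) score -= 0.25;
  }
  return Math.max(0, score); // 1.0 means approved by silence
}
```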

## What You're Actually Building

A local model that answers "how do I deploy this?" using *your* deploy flow, not Anthropic's generic best practice. Your infrastructure, your naming, your conventions, your past decisions. That's what the local model becomes over weeks.

## Anti-Pattern

Training on everything. A large, noisy dataset fine-tunes a *worse* model than a small, curated one. 200 excellent examples beat 2,000 mediocre ones every time.

## Integration

- `KBOT_TEACHER_LOG=1` in `~/.zshrc` keeps the logger always-on.
- A `launchd` plist at `~/Library/LaunchAgents/com.kernel.kbot-train-self.plist` runs the curation + training weekly.
- The trained model gets tagged `kernel-self:<timestamp>` in Ollama and becomes available to every kbot command via `--model kernel-self:latest`.
@@ -0,0 +1,86 @@
---
name: systematic-debugging
description: Use when any test fails, any bug surfaces, or any behavior is unexpected. Enforces root-cause analysis before code changes.
version: 1.0.0
author: kbot
license: MIT
metadata:
  kbot:
    tags: [debugging, root-cause, investigation, tdd]
    related_skills: [test-driven-development, ship-pipeline, specialist-routing]
---

# Systematic Debugging

Random fixes waste time and create new bugs. Before writing a single line of fix, you must understand *why* it broke.

## Iron Law

```
NO FIX WITHOUT A NAMED ROOT CAUSE.
```

If you can't finish the sentence "It broke because ___", you are not ready to edit.

## When to Use

- Any failing test
- Any exception in a log
- Any "this used to work"
- Any UI regression
- Any flaky behavior
- Any build failure

Especially use this under time pressure — "quick fix" almost always means "new bug tomorrow."

## Four Phases

### Phase 1 — Reproduce

- Run the failing case *exactly*. Don't paraphrase the command.
- Capture the full error: message, stack, file, line.
- If you can't reproduce, you don't understand the bug yet — stop and gather more evidence.

Tools: `bash` to run the command, `kbot_read` to open the file at the stack-trace line, `git_diff` to see what changed recently.

### Phase 2 — Isolate

- Binary-search the change set. Use `git log --oneline -20` and check the last commit that touched the area.
- Trace the data flow upstream from the symptom. The bug almost never lives where the exception fires.
- Find a *working* similar case in the same codebase. What's different?

Tools: `grep` for the bad value, `git_log` for history, a subagent with Explore to map the call graph.

### Phase 3 — Hypothesize

- Write down one sentence: "I think this is caused by X because Y."
- Predict what the smallest possible reproduction looks like.
- Predict what the smallest possible fix looks like.

Do **not** write code yet. Share the hypothesis first if the user is watching.

### Phase 4 — Fix

1. Write a failing test that captures the bug (see `test-driven-development`).
2. Make the minimal change that turns it green.
3. Run the full suite. No regressions.
4. Commit with a message that names the root cause, not the symptom.

## The Rule of Three

If your third fix attempt failed, **stop**. Don't try a fourth. The bug is not where you think it is, or the architecture is wrong. Escalate — use `kbot_plan` or hand off to a specialist agent.

## Red Flags (stop and restart)

- "Let me just try this and see"
- "I'll comment this out for now"
- Changing more than one thing per commit
- Adding a try/except to make the error go away
- "It works on my machine"

## How kbot Helps

- `kbot --thinking` shows reasoning — use it when debugging complex issues.
- `kbot --agent coder` routes to the specialist with the strongest code-patching track record.
- `kbot --plan` forces read-only investigation mode for Phases 1 & 2.
- `forge_tool` builds a throwaway diagnostic script when stdlib tools aren't enough.
@@ -0,0 +1,74 @@
---
name: test-driven-development
description: Use when adding any new behavior or fixing any bug. Red → Green → Refactor, enforced.
version: 1.0.0
author: kbot
license: MIT
metadata:
  kbot:
    tags: [tdd, testing, quality, vitest]
    related_skills: [systematic-debugging, ship-pipeline]
---

# Test-Driven Development

## Iron Law

```
THE FIRST COMMIT ON ANY BEHAVIOR CHANGE IS A FAILING TEST.
```

No exceptions. Not "I'll add tests after." Not "this is too small." The failing test proves you understood the behavior before you wrote it.

## Why

- Writing the test first forces you to design the interface, not the implementation.
- A failing test that turns green is the only honest proof the fix works.
- Tests written after the fact confirm what you already did, not what you should have done.
- Coverage that grows alongside code never has a "we should add tests" backlog.

## The Cycle

### RED

1. Decide what the new behavior is, in one sentence.
2. Write the smallest test that would fail if the behavior is missing.
3. Run it. Confirm it fails for the *right reason* (not a typo, not a missing import).

### GREEN

1. Write the minimum code that makes the test pass.
2. No extra features. No refactoring yet. No "while I'm here."
3. Run the test. Confirm green.

### REFACTOR

1. Now — and only now — clean up the implementation.
2. Rename, extract, deduplicate. Tests stay green the whole time.
3. If a refactor turns a test red, you changed behavior — revert and think.

## kbot Toolchain

- Framework: **Vitest** (`.test.ts` next to source).
- Runner: `npx vitest run` (one-shot) or `npx vitest` (watch).
- Typecheck gate: `npx tsc --noEmit` before every commit. Strict mode is non-negotiable.
- UI tests: `@testing-library/react`.
- E2E: Playwright (`npm run test:e2e`).

## Mocking Policy

- Mock Supabase. Never call the real API in tests.
- Mock the Claude proxy. Never call the real API in tests.
- Do NOT mock your own pure functions — test them directly.
- Do NOT mock file I/O — use `tmpdir()` fixtures.
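
One way to honor this policy without a mocking framework is to inject the external client, so a test passes a fake in place of the real API. The names below (`DeployClient`, `announceDeploy`) are hypothetical illustrations, not kbot code:

```typescript
// Hypothetical sketch of the mocking policy: inject the external client
// (e.g. a Supabase or Claude-proxy wrapper) so tests substitute a fake
// instead of calling the real API. All names here are illustrative.
interface DeployClient {
  notify(message: string): Promise<void>;
}

async function announceDeploy(client: DeployClient, version: string): Promise<string> {
  const message = `deployed ${version}`;
  await client.notify(message); // external call: the part tests fake out
  return message;               // pure output: asserted directly
}

// In a test, the fake just records calls. No network, no mocking library.
const calls: string[] = [];
const fake: DeployClient = { notify: async (m) => { calls.push(m); } };
```

A test then awaits `announceDeploy(fake, "1.2.0")`, asserts the returned message, and checks `calls` to verify the notification was sent.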

## Anti-Patterns

- Tests that assert implementation details (internal method calls) instead of observable behavior.
- Tests that share state between cases. Each test starts clean.
- Test names like `it('works')`. Name the behavior: `it('rejects unauthenticated requests')`.
- "Fix the test" when the test correctly catches a regression. Fix the code.

## When kbot Should Run Tests Autonomously

After any edit under `src/` or `packages/kbot/src/`, kbot runs `npm run typecheck` and `npx vitest run <affected>` before reporting success. No UI claim without a browser check (see `ship-pipeline`).