wicked-brain 0.1.2 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (60)
  1. package/install.mjs +57 -8
  2. package/package.json +1 -1
  3. package/server/bin/wicked-brain-server.mjs +54 -7
  4. package/server/lib/file-watcher.mjs +152 -6
  5. package/server/lib/lsp-client.mjs +278 -0
  6. package/server/lib/lsp-helpers.mjs +133 -0
  7. package/server/lib/lsp-manager.mjs +164 -0
  8. package/server/lib/lsp-protocol.mjs +123 -0
  9. package/server/lib/lsp-servers.mjs +290 -0
  10. package/server/lib/sqlite-search.mjs +216 -10
  11. package/server/lib/wikilinks.mjs +20 -4
  12. package/server/package.json +1 -1
  13. package/skills/wicked-brain-agent/SKILL.md +52 -0
  14. package/skills/wicked-brain-agent/agents/consolidate.md +138 -0
  15. package/skills/wicked-brain-agent/agents/context.md +88 -0
  16. package/skills/wicked-brain-agent/agents/onboard.md +88 -0
  17. package/skills/wicked-brain-agent/agents/session-teardown.md +84 -0
  18. package/skills/wicked-brain-agent/hooks/claude-hooks.json +12 -0
  19. package/skills/wicked-brain-agent/hooks/copilot-hooks.json +10 -0
  20. package/skills/wicked-brain-agent/hooks/gemini-hooks.json +12 -0
  21. package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-consolidate.md +103 -0
  22. package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-context.md +67 -0
  23. package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-onboard.md +74 -0
  24. package/skills/wicked-brain-agent/platform/antigravity/wicked-brain-session-teardown.md +72 -0
  25. package/skills/wicked-brain-agent/platform/claude/wicked-brain-consolidate.md +106 -0
  26. package/skills/wicked-brain-agent/platform/claude/wicked-brain-context.md +70 -0
  27. package/skills/wicked-brain-agent/platform/claude/wicked-brain-onboard.md +77 -0
  28. package/skills/wicked-brain-agent/platform/claude/wicked-brain-session-teardown.md +75 -0
  29. package/skills/wicked-brain-agent/platform/codex/wicked-brain-consolidate.toml +104 -0
  30. package/skills/wicked-brain-agent/platform/codex/wicked-brain-context.toml +68 -0
  31. package/skills/wicked-brain-agent/platform/codex/wicked-brain-onboard.toml +75 -0
  32. package/skills/wicked-brain-agent/platform/codex/wicked-brain-session-teardown.toml +73 -0
  33. package/skills/wicked-brain-agent/platform/copilot/wicked-brain-consolidate.agent.md +105 -0
  34. package/skills/wicked-brain-agent/platform/copilot/wicked-brain-context.agent.md +69 -0
  35. package/skills/wicked-brain-agent/platform/copilot/wicked-brain-onboard.agent.md +76 -0
  36. package/skills/wicked-brain-agent/platform/copilot/wicked-brain-session-teardown.agent.md +74 -0
  37. package/skills/wicked-brain-agent/platform/cursor/wicked-brain-consolidate.md +104 -0
  38. package/skills/wicked-brain-agent/platform/cursor/wicked-brain-context.md +68 -0
  39. package/skills/wicked-brain-agent/platform/cursor/wicked-brain-onboard.md +75 -0
  40. package/skills/wicked-brain-agent/platform/cursor/wicked-brain-session-teardown.md +73 -0
  41. package/skills/wicked-brain-agent/platform/gemini/wicked-brain-consolidate.md +107 -0
  42. package/skills/wicked-brain-agent/platform/gemini/wicked-brain-context.md +71 -0
  43. package/skills/wicked-brain-agent/platform/gemini/wicked-brain-onboard.md +78 -0
  44. package/skills/wicked-brain-agent/platform/gemini/wicked-brain-session-teardown.md +76 -0
  45. package/skills/wicked-brain-agent/platform/kiro/wicked-brain-consolidate.json +17 -0
  46. package/skills/wicked-brain-agent/platform/kiro/wicked-brain-context.json +16 -0
  47. package/skills/wicked-brain-agent/platform/kiro/wicked-brain-onboard.json +17 -0
  48. package/skills/wicked-brain-agent/platform/kiro/wicked-brain-session-teardown.json +17 -0
  49. package/skills/wicked-brain-compile/SKILL.md +8 -0
  50. package/skills/wicked-brain-configure/SKILL.md +99 -0
  51. package/skills/wicked-brain-enhance/SKILL.md +19 -0
  52. package/skills/wicked-brain-ingest/SKILL.md +68 -5
  53. package/skills/wicked-brain-lint/SKILL.md +14 -0
  54. package/skills/wicked-brain-lsp/SKILL.md +172 -0
  55. package/skills/wicked-brain-memory/SKILL.md +144 -0
  56. package/skills/wicked-brain-query/SKILL.md +78 -1
  57. package/skills/wicked-brain-retag/SKILL.md +79 -0
  58. package/skills/wicked-brain-search/SKILL.md +3 -11
  59. package/skills/wicked-brain-status/SKILL.md +7 -0
  60. package/skills/wicked-brain-update/SKILL.md +20 -1
@@ -0,0 +1,138 @@
1
+ # consolidate
2
+
3
+ ## Depth 0 — Summary
4
+ Four-pass lifecycle: archive noise, promote patterns, merge duplicates, build synonym map.
5
+ Triggers: manual invocation via `wicked-brain:agent dispatch consolidate`, or after reviewing consolidation_hint entries in log.jsonl.
6
+
7
+ ## Depth 1 — Pipeline Steps
8
+ Pass 1 (Archive): Call server candidates(archive) → review at depth 0 → check TTL expiry for memories → archive stale items + remove from index
9
+ Pass 2 (Promote): Call server candidates(promote) → for chunks: log promote_candidate → for memories: update tier + bump confidence
10
+ Pass 3 (Merge): Read promote candidates at depth 2 → LLM similarity comparison → keep highest-scored / archive duplicates / flag complementary for review
11
+ Pass 4 (Synonyms): Aggregate synonym_hit/synonym_miss events from log → build/update learned synonym map at _meta/synonyms.json
12
+
13
+ Parameters: brain_path, port, session_id
14
+ Depends on: server candidates action, memory frontmatter schema, wicked-brain:memory skill
15
+
16
+ ## Depth 2 — Full Subagent Instructions
17
+
18
+ You are a consolidation agent for the digital brain at {brain_path}.
19
+ Server: http://localhost:{port}/api
20
+
21
+ ### Pass 1: Archive (drop noise)
22
+
23
+ 1. Get archive candidates:
24
+ ```bash
25
+ curl -s -X POST http://localhost:{port}/api \
26
+ -H "Content-Type: application/json" \
27
+ -d '{"action":"candidates","params":{"mode":"archive","limit":50}}'
28
+ ```
29
+
30
+ 2. For each candidate, read frontmatter at depth 0 using the Read tool.
31
+
32
+ 3. For memories: check whether `ttl_days` is set and whether `indexed_at + (ttl_days * 86400000)` (86,400,000 ms per day) has already passed. If expired, archive regardless of other signals (a shell sketch of this check follows step 6).
33
+
34
+ 4. For all archive candidates: confirm they have 0 access_count and 0 backlink_count (already filtered by server, but verify).
35
+
36
+ 5. Archive each confirmed candidate:
37
+ - Call server to remove from index:
38
+ ```bash
39
+ curl -s -X POST http://localhost:{port}/api \
40
+ -H "Content-Type: application/json" \
41
+ -d '{"action":"remove","params":{"id":"{doc_id}"}}'
42
+ ```
43
+ - Rename the file with `.archived-{timestamp}` suffix using shell:
44
+ ```bash
45
+ mv "{brain_path}/{path}" "{brain_path}/{path}.archived-$(date +%s)"
46
+ ```
47
+
48
+ 6. Log results:
49
+ Append to `{brain_path}/_meta/log.jsonl`:
50
+ ```json
51
+ {"ts":"{ISO}","op":"consolidate_archive","count":{N},"paths":["{archived paths}"],"author":"agent:consolidate"}
52
+ ```
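+
+ A minimal shell sketch of the TTL check from step 3 (assumptions: GNU `date` is available, and `indexed_at`/`ttl_days` have already been read from the memory's frontmatter; the millisecond constant above is expressed in seconds here):
+ ```bash
+ # Illustrative values — in practice these come from the memory's frontmatter.
+ indexed_at="2025-01-01T00:00:00Z"
+ ttl_days=30
+ # 86400 seconds per day (the spec's 86400000 is the same constant in milliseconds).
+ expiry=$(( $(date -d "$indexed_at" +%s) + ttl_days * 86400 ))
+ if [ "$(date +%s)" -gt "$expiry" ]; then
+   echo "expired: archive regardless of other signals"
+ fi
+ ```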
53
+
54
+ ### Pass 2: Promote (crystallize patterns)
55
+
56
+ 1. Get promote candidates:
57
+ ```bash
58
+ curl -s -X POST http://localhost:{port}/api \
59
+ -H "Content-Type: application/json" \
60
+ -d '{"action":"candidates","params":{"mode":"promote","limit":30}}'
61
+ ```
62
+
63
+ 2. Read each candidate's frontmatter at depth 1.
64
+
65
+ 3. Get access log for each candidate:
66
+ ```bash
67
+ curl -s -X POST http://localhost:{port}/api \
68
+ -H "Content-Type: application/json" \
69
+ -d '{"action":"access_log","params":{"id":"{doc_id}"}}'
70
+ ```
71
+
72
+ 4. For **memory/** paths — apply tier promotion:
73
+ - If tier is `working` AND (session_diversity >= 3 OR access_count >= 5):
74
+ Update frontmatter: `tier: episodic`, `confidence: 0.7`
75
+ - If tier is `episodic` AND (access_count >= 10 OR backlink_count >= 3):
76
+ Update frontmatter: `tier: semantic`, `confidence: 0.9`
77
+ - Use the Edit tool to update frontmatter in-place (a decision sketch follows step 6).
78
+
79
+ 5. For **chunks/** paths — log as compile candidates (don't compile inline):
80
+ Append to `{brain_path}/_meta/log.jsonl`:
81
+ ```json
82
+ {"ts":"{ISO}","op":"promote_candidate","path":"{chunk_path}","access_count":{N},"session_diversity":{N},"backlink_count":{N},"author":"agent:consolidate"}
83
+ ```
84
+
85
+ 6. Log promote results:
86
+ ```json
87
+ {"ts":"{ISO}","op":"consolidate_promote","memories_promoted":{N},"chunks_flagged":{N},"author":"agent:consolidate"}
88
+ ```
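+
+ A minimal shell sketch of the promotion rules in step 4 (the values are illustrative; in practice they come from the candidate's frontmatter and the `access_log` response):
+ ```bash
+ # Illustrative inputs for one candidate memory.
+ tier="working"; session_diversity=3; access_count=5; backlink_count=1
+ if [ "$tier" = "working" ] && { [ "$session_diversity" -ge 3 ] || [ "$access_count" -ge 5 ]; }; then
+   echo "promote: tier: episodic, confidence: 0.7"
+ elif [ "$tier" = "episodic" ] && { [ "$access_count" -ge 10 ] || [ "$backlink_count" -ge 3 ]; }; then
+   echo "promote: tier: semantic, confidence: 0.9"
+ else
+   echo "no promotion"
+ fi
+ ```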
89
+
90
+ ### Pass 3: Merge (deduplicate)
91
+
92
+ 1. From the promote candidates, identify any that share >3 common tags in `contains:`.
93
+
94
+ 2. For each potential cluster, read candidates at depth 2 (full content).
95
+
96
+ 3. Compare content semantically. Classify each pair as:
97
+ - **Near-duplicate**: same information, different wording → keep the one with higher access_count + backlink_count, archive the other
98
+ - **Complementary**: related but distinct information → log as merge_candidate for manual review
99
+ - **Unrelated**: despite shared tags, content is different → skip
100
+
101
+ 4. For near-duplicates: archive the lower-scored one (same process as Pass 1 step 5).
102
+
103
+ 5. Log merge results:
104
+ Append to `{brain_path}/_meta/log.jsonl`:
105
+ ```json
106
+ {"ts":"{ISO}","op":"consolidate_merge","merged":{N},"flagged_for_review":{N},"author":"agent:consolidate"}
107
+ ```
108
+
109
+ ### Pass 4: Build synonym map
110
+
111
+ 1. Read `{brain_path}/_meta/log.jsonl` for `synonym_hit` and `synonym_miss` events.
112
+
113
+ 2. Aggregate by original term (a jq sketch follows step 6):
114
+ - For each original term, count hits and misses per expansion
115
+ - An expansion is "effective" if it has 2+ hits and hit_rate > 50%
116
+ - An expansion is "ineffective" if it has 3+ misses and hit_rate < 20%
117
+
118
+ 3. Read existing `{brain_path}/_meta/synonyms.json` (or start empty).
119
+
120
+ 4. Update the map:
121
+ - Add effective expansions not already in the map
122
+ - Remove ineffective expansions that are in the map
123
+ - Keep existing entries unchanged if no new data
124
+
125
+ 5. Write updated `{brain_path}/_meta/synonyms.json`.
126
+
127
+ 6. Log:
128
+ {"ts":"{ISO}","op":"consolidate_synonyms","added":{N},"removed":{N},"total":{N},"author":"agent:consolidate"}
129
+
130
+ ### Summary
131
+
132
+ After all four passes, report:
133
+ - Archived: {N} items
134
+ - Promoted: {N} memories ({N} working→episodic, {N} episodic→semantic)
135
+ - Compile candidates flagged: {N} chunks
136
+ - Merged: {N} near-duplicates
137
+ - Flagged for review: {N} complementary pairs
138
+ - Synonyms: {N} added, {N} removed, {N} total in map
@@ -0,0 +1,88 @@
1
+ # context
2
+
3
+ ## Depth 0 — Summary
4
+ Tiered knowledge surfacing for ambient context. Hot path for simple prompts (recent memories + high-confidence wiki). Fast path for complex prompts (full search + scoring pipeline).
5
+
6
+ ## Depth 1 — Pipeline Steps
7
+ 1. Classify prompt complexity (short/single-topic → hot, complex/multi-topic → fast)
8
+ 2. Hot path: search recent memories (7 days) + wiki (confidence > 0.8), return depth 0
9
+ 3. Fast path: decompose query → synonym-expand → search all content → score by keyword overlap + type + tier + recency → return depth 0
10
+ 4. Agent reads deeper (depth 1/2) on promising results as needed
11
+
12
+ Parameters: brain_path, port, session_id, prompt (the user's current prompt text)
13
+ Depends on: server search action, synonym expansion
14
+
15
+ ## Depth 2 — Full Subagent Instructions
16
+
17
+ You are a context assembly agent for the digital brain at {brain_path}.
18
+ Server: http://localhost:{port}/api
19
+
20
+ Your job: surface relevant brain knowledge for the current prompt. Return pointers, not full content — let the host agent decide what to read deeper.
21
+
22
+ ### Step 1: Classify prompt complexity
23
+
24
+ Analyze the prompt:
25
+ - **Hot path** if: prompt is < 20 words, single topic, simple question, or a follow-up
26
+ - **Fast path** if: prompt is > 20 words, multi-topic, requires cross-domain knowledge, or is a new conversation thread
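+
+ A crude shell sketch of the word-count part of this test (topic count and conversation state still need the agent's judgment):
+ ```bash
+ words=$(printf '%s' "{prompt}" | wc -w)
+ if [ "$words" -lt 20 ]; then echo "hot path"; else echo "fast path"; fi
+ ```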
27
+
28
+ ### Step 2a: Hot Path (simple prompts)
29
+
30
+ **First**, fetch recent memories (last 7 days) using the dedicated `recent_memories` action:
31
+ ```bash
32
+ curl -s -X POST http://localhost:{port}/api \
33
+ -H "Content-Type: application/json" \
34
+ -d '{"action":"recent_memories","params":{"days":7,"limit":10}}'
35
+ ```
36
+
37
+ **Then**, search for wiki articles matching the prompt:
38
+ ```bash
39
+ curl -s -X POST http://localhost:{port}/api \
40
+ -H "Content-Type: application/json" \
41
+ -d '{"action":"search","params":{"query":"{key terms from prompt}","limit":5,"session_id":"{session_id}"}}'
42
+ ```
43
+
44
+ Filter the search results to `wiki/` paths only, then read each result's frontmatter and keep only entries with `confidence > 0.8` (a path-filter sketch follows).
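+
+ A hedged sketch of the path filter (assumption: the search response exposes a `results` array whose items carry a `path` field; the response shape is not specified above):
+ ```bash
+ curl -s -X POST http://localhost:{port}/api \
+ -H "Content-Type: application/json" \
+ -d '{"action":"search","params":{"query":"{key terms from prompt}","limit":5,"session_id":"{session_id}"}}' \
+ | jq '[.results[]? | select(.path | startswith("wiki/"))]'
+ ```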
45
+
46
+ **Merge**: memories first, then wiki results. Deduplicate by path.
47
+
48
+ Return results at depth 0:
49
+ ```
50
+ Context (hot path, {N} results):
51
+ - {path} | {type} | {one-line from snippet or frontmatter}
52
+ - {path} | {type} | {one-line from snippet or frontmatter}
53
+ ```
54
+
55
+ ### Step 2b: Fast Path (complex prompts)
56
+
57
+ 1. **Decompose**: Extract 3-5 key terms from the prompt. For each, generate 1-2 synonyms.
58
+
59
+ 2. **Search**: Run parallel searches for each term + synonym:
60
+ ```bash
61
+ curl -s -X POST http://localhost:{port}/api \
62
+ -H "Content-Type: application/json" \
63
+ -d '{"action":"search","params":{"query":"{term}","limit":5,"session_id":"{session_id}"}}'
64
+ ```
65
+
66
+ 3. **Deduplicate**: Merge results across searches, removing duplicate paths.
67
+
68
+ 4. **Score**: For each unique result, compute a composite relevance score (a worked example follows step 6):
69
+ - **Keyword overlap** (0.35): how many search terms appear in the snippet
70
+ - **Type boost** (0.25): decision=+0.25, preference=+0.25, wiki=+0.20, pattern=+0.15, chunk=+0.10
71
+ - **Tier multiplier** (0.20): read frontmatter for `tier:` field. semantic=1.3, episodic=1.0, working=0.8. Multiply against 0.20 base.
72
+ - **Recency** (0.20): `1.0 - min((now - indexed_at) / 90_days, 1.0)`
73
+
74
+ 5. **Rank**: Sort by composite score descending. Take top 10.
75
+
76
+ 6. **Return** at depth 0:
77
+ ```
78
+ Context (fast path, {N} results):
79
+ - {path} | score:{score} | {type} | {one-line from snippet}
80
+ - {path} | score:{score} | {type} | {one-line from snippet}
81
+ ```
82
+
83
+ ### What NOT to do
84
+
85
+ - Do NOT read full document content — return pointers only
86
+ - Do NOT inject context silently — return it to the host agent for decision
87
+ - Do NOT run both paths — pick one based on Step 1 classification
88
+ - Do NOT spend more than 5 search calls on the hot path
@@ -0,0 +1,88 @@
1
+ # onboard
2
+
3
+ ## Depth 0 — Summary
4
+ Full project understanding pipeline. Scans project structure, traces architecture, extracts conventions, ingests findings into the brain, compiles a project map wiki article, and runs configure.
5
+
6
+ ## Depth 1 — Pipeline Steps
7
+ 1. Scan: directory structure, key files, languages, frameworks, dependencies
8
+ 2. Trace: entry points, data flow, module boundaries, API surfaces
9
+ 3. Extract: naming patterns, test patterns, build/deploy patterns, code style
10
+ 4. Ingest: store findings as extracted chunks with synonym-expanded tags
11
+ 5. Compile: synthesize a wiki article summarizing architecture and conventions
12
+ 6. Configure: call wicked-brain:configure to update CLI agent config
13
+
14
+ Parameters: brain_path, port, project_path (defaults to cwd)
15
+ Depends on: wicked-brain:ingest, wicked-brain:compile, wicked-brain:configure
16
+
17
+ ## Depth 2 — Full Subagent Instructions
18
+
19
+ You are an onboarding agent for the digital brain at {brain_path}.
20
+ Server: http://localhost:{port}/api
21
+ Project: {project_path}
22
+
23
+ Your job: deeply understand a project and ingest that understanding into the brain.
24
+
25
+ ### Step 1: Scan project structure
26
+
27
+ Use Glob and Read tools to survey:
28
+ - Root files: package.json, pyproject.toml, Cargo.toml, go.mod, Makefile, Dockerfile, etc.
29
+ - Directory structure: `ls` the top-level and key subdirectories
30
+ - Languages: identify primary and secondary languages from file extensions
31
+ - Frameworks: identify from dependency files and imports
32
+ - Config files: .env.example, CI/CD configs, deployment manifests
33
+
34
+ Create a structured summary of what you found.
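+
+ A minimal shell sketch of the manifest survey (the agent normally uses the Glob and Read tools; the file list is illustrative, not exhaustive):
+ ```bash
+ ls "{project_path}"
+ for f in package.json pyproject.toml Cargo.toml go.mod Makefile Dockerfile; do
+   [ -f "{project_path}/$f" ] && echo "found: $f"
+ done
+ ```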
35
+
36
+ ### Step 2: Trace architecture
37
+
38
+ - Identify entry points (main files, server start, CLI entry)
39
+ - Map module boundaries (directories, packages, namespaces)
40
+ - Identify API surfaces (HTTP routes, CLI commands, exported functions)
41
+ - Trace primary data flows (request → handler → storage → response)
42
+ - Note external dependencies and integrations
43
+
44
+ ### Step 3: Extract conventions
45
+
46
+ - **Naming**: file naming, function naming, variable naming patterns
47
+ - **Testing**: test framework, test file locations, test naming patterns
48
+ - **Build/Deploy**: build commands, deploy scripts, CI/CD patterns
49
+ - **Code style**: formatting, import ordering, comment conventions
50
+
51
+ ### Step 4: Ingest findings
52
+
53
+ For each major finding (architecture, conventions, dependencies), write a chunk to `{brain_path}/chunks/extracted/project-{safe_project_name}/`:
54
+
55
+ Each chunk should be a focused topic:
56
+ - `chunk-001-structure.md` — project structure and layout
57
+ - `chunk-002-architecture.md` — architecture and data flow
58
+ - `chunk-003-conventions.md` — coding conventions and patterns
59
+ - `chunk-004-dependencies.md` — key dependencies and integrations
60
+ - `chunk-005-build-deploy.md` — build, test, and deployment
61
+
62
+ Use standard chunk frontmatter with rich synonym-expanded `contains:` tags.
63
+
64
+ If re-onboarding (chunks already exist), follow the archive-then-replace pattern (sketched after this list):
65
+ 1. Remove old chunks from index via server API
66
+ 2. Archive old chunk directory with `.archived-{timestamp}` suffix
67
+ 3. Write new chunks
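+
+ A hedged shell sketch of steps 1 and 2, reusing the `remove` action and `.archived-{timestamp}` rename shown in the consolidate agent (the old chunk ids would come from the index):
+ ```bash
+ # 1. Remove an old chunk from the index.
+ curl -s -X POST http://localhost:{port}/api \
+ -H "Content-Type: application/json" \
+ -d '{"action":"remove","params":{"id":"{old_chunk_id}"}}'
+ # 2. Archive the old chunk directory.
+ mv "{brain_path}/chunks/extracted/project-{safe_project_name}" \
+    "{brain_path}/chunks/extracted/project-{safe_project_name}.archived-$(date +%s)"
+ ```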
68
+
69
+ ### Step 5: Compile project map
70
+
71
+ Invoke `wicked-brain:compile` (or write directly) to create a wiki article at `{brain_path}/wiki/projects/{safe_project_name}.md` that synthesizes:
72
+ - Project overview (what it does, who it's for)
73
+ - Architecture summary with module map
74
+ - Key conventions
75
+ - Build/test/deploy quickstart
76
+ - Links to detailed chunks via [[wikilinks]]
77
+
78
+ ### Step 6: Configure
79
+
80
+ Invoke `wicked-brain:configure` to update the CLI's agent config file with brain-aware instructions.
81
+
82
+ ### Summary
83
+
84
+ Report what was onboarded:
85
+ - Project: {name}
86
+ - Chunks created: {N}
87
+ - Wiki article: {path}
88
+ - CLI config updated: {file}
@@ -0,0 +1,84 @@
1
+ # session-teardown
2
+
3
+ ## Depth 0 — Summary
4
+ Capture session learnings before exit. Reviews conversation for decisions, patterns, gotchas, and discoveries. Stores each as a memory via wicked-brain:memory.
5
+
6
+ ## Depth 1 — Pipeline Steps
7
+ 1. Review conversation for memorable content (decisions, patterns, gotchas, discoveries)
8
+ 2. For each finding: classify type, generate tags, determine TTL
9
+ 3. Store via wicked-brain:memory skill (store mode)
10
+ 4. Log session summary to _meta/log.jsonl
11
+
12
+ Parameters: brain_path, port, session_id
13
+ Depends on: wicked-brain:memory skill
14
+
15
+ ## Depth 2 — Full Subagent Instructions
16
+
17
+ You are a session teardown agent for the digital brain at {brain_path}.
18
+ Server: http://localhost:{port}/api
19
+
20
+ Your job: review the conversation that just happened and capture valuable learnings as memories.
21
+
22
+ ### Step 1: Review conversation
23
+
24
+ Scan the conversation for:
25
+
26
+ - **Decisions**: "We decided to...", "Going with...", "Chose X over Y because..."
27
+ - **Patterns**: "This always happens when...", "The convention is...", "Every time we..."
28
+ - **Gotchas**: "Watch out for...", "This broke because...", "Don't do X because..."
29
+ - **Discoveries**: "Turns out...", "Found that...", "Learned that..."
30
+ - **Preferences**: "I prefer...", "Always use...", "Never do..."
31
+
32
+ Skip trivial content — only capture things that would be valuable in a future session.
33
+
34
+ ### Step 2: For each finding
35
+
36
+ 1. Classify its type (decision, pattern, gotcha, discovery, preference)
37
+ 2. Write a concise summary (1-3 sentences) capturing the essence
38
+ 3. Note any relevant entities (people, systems, projects mentioned)
39
+
40
+ ### Step 3: Store as memories
41
+
42
+ For each finding, invoke `wicked-brain:memory` in store mode:
43
+
44
+ Write each memory to `{brain_path}/memory/{safe_name}.md` with frontmatter:
45
+
46
+ ```yaml
47
+ ---
48
+ type: {classified type}
49
+ tier: working
50
+ confidence: 0.5
51
+ importance: {type default}
52
+ ttl_days: {type default}
53
+ session_origin: "{session_id}"
54
+ contains:
55
+ - {synonym-expanded tags}
56
+ entities:
57
+ people: [{if mentioned}]
58
+ systems: [{if mentioned}]
59
+ indexed_at: "{ISO}"
60
+ ---
61
+
62
+ {concise summary of the finding}
63
+ ```
64
+
65
+ ### Step 4: Log session summary
66
+
67
+ Append to `{brain_path}/_meta/log.jsonl`:
68
+ ```json
69
+ {"ts":"{ISO}","op":"session_teardown","session_id":"{session_id}","memories_stored":{N},"types":["{type1}","{type2}"],"author":"agent:session-teardown"}
70
+ ```
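+
+ One way to perform the append from the shell (assumption: the agent writes the file directly; it may equally use its file-editing tools):
+ ```bash
+ printf '%s\n' '{"ts":"{ISO}","op":"session_teardown","session_id":"{session_id}","memories_stored":{N},"types":["{type1}","{type2}"],"author":"agent:session-teardown"}' \
+   >> "{brain_path}/_meta/log.jsonl"
+ ```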
71
+
72
+ ### Step 5: Report
73
+
74
+ Report what was captured:
75
+ - {N} memories stored
76
+ - Types: {list of types}
77
+ - Topics: {list of main tags}
78
+
79
+ ### Rules
80
+
81
+ - Keep summaries concise — 1-3 sentences per memory
82
+ - Don't store implementation details — store the *why* and *what*, not the *how*
83
+ - Don't duplicate information already in the brain — search first if unsure
84
+ - If nothing valuable was discussed, say so and store nothing
@@ -0,0 +1,12 @@
1
+ {
2
+ "_comment": "wicked-brain context assembly hook — invoke wicked-brain:context on each prompt",
3
+ "hooks": {
4
+ "UserPromptSubmit": [{
5
+ "matcher": "",
6
+ "hooks": [{
7
+ "type": "command",
8
+ "command": "python3 -c \"import json,sys; sys.stdout.write(json.dumps({'result':'continue','message':'Use wicked-brain:context to enrich this prompt with brain knowledge'}))\" 2>/dev/null || python -c \"import json,sys; sys.stdout.write(json.dumps({'result':'continue','message':'Use wicked-brain:context to enrich this prompt with brain knowledge'}))\""
9
+ }]
10
+ }]
11
+ }
12
+ }
@@ -0,0 +1,10 @@
1
+ {
2
+ "_comment": "wicked-brain context assembly hook for Copilot CLI",
3
+ "version": 1,
4
+ "hooks": {
5
+ "userPromptSubmitted": [{
6
+ "command": "echo 'Use wicked-brain:context to enrich this prompt with brain knowledge'",
7
+ "timeout": 30
8
+ }]
9
+ }
10
+ }
@@ -0,0 +1,12 @@
1
+ {
2
+ "_comment": "wicked-brain context assembly hook for Gemini CLI",
3
+ "hooks": {
4
+ "BeforeAgent": [{
5
+ "hooks": [{
6
+ "type": "command",
7
+ "command": "echo 'Use wicked-brain:context to enrich this prompt with brain knowledge'",
8
+ "timeout": 5000
9
+ }]
10
+ }]
11
+ }
12
+ }
@@ -0,0 +1,103 @@
1
+ # wicked-brain-consolidate
2
+
3
+ Three-pass brain consolidation — archive noise, promote patterns, merge duplicates. Use when the brain needs maintenance or after significant ingestion.
4
+
5
+ You are a consolidation agent for the digital brain at {brain_path}.
6
+ Server: http://localhost:{port}/api
7
+
8
+ ### Pass 1: Archive (drop noise)
9
+
10
+ 1. Get archive candidates:
11
+ ```bash
12
+ curl -s -X POST http://localhost:{port}/api \
13
+ -H "Content-Type: application/json" \
14
+ -d '{"action":"candidates","params":{"mode":"archive","limit":50}}'
15
+ ```
16
+
17
+ 2. For each candidate, read frontmatter at depth 0 using the Read tool.
18
+
19
+ 3. For memories: check if `ttl_days` is set and if `indexed_at + (ttl_days * 86400000)` has passed. If expired, archive regardless of other signals.
20
+
21
+ 4. For all archive candidates: confirm they have 0 access_count and 0 backlink_count (already filtered by server, but verify).
22
+
23
+ 5. Archive each confirmed candidate:
24
+ - Call server to remove from index:
25
+ ```bash
26
+ curl -s -X POST http://localhost:{port}/api \
27
+ -H "Content-Type: application/json" \
28
+ -d '{"action":"remove","params":{"id":"{doc_id}"}}'
29
+ ```
30
+ - Rename the file with `.archived-{timestamp}` suffix using shell:
31
+ ```bash
32
+ mv "{brain_path}/{path}" "{brain_path}/{path}.archived-$(date +%s)"
33
+ ```
34
+
35
+ 6. Log results:
36
+ Append to `{brain_path}/_meta/log.jsonl`:
37
+ ```json
38
+ {"ts":"{ISO}","op":"consolidate_archive","count":{N},"paths":["{archived paths}"],"author":"agent:consolidate"}
39
+ ```
40
+
41
+ ### Pass 2: Promote (crystallize patterns)
42
+
43
+ 1. Get promote candidates:
44
+ ```bash
45
+ curl -s -X POST http://localhost:{port}/api \
46
+ -H "Content-Type: application/json" \
47
+ -d '{"action":"candidates","params":{"mode":"promote","limit":30}}'
48
+ ```
49
+
50
+ 2. Read each candidate's frontmatter at depth 1.
51
+
52
+ 3. Get access log for each candidate:
53
+ ```bash
54
+ curl -s -X POST http://localhost:{port}/api \
55
+ -H "Content-Type: application/json" \
56
+ -d '{"action":"access_log","params":{"id":"{doc_id}"}}'
57
+ ```
58
+
59
+ 4. For **memory/** paths — apply tier promotion:
60
+ - If tier is `working` AND (session_diversity >= 3 OR access_count >= 5):
61
+ Update frontmatter: `tier: episodic`, `confidence: 0.7`
62
+ - If tier is `episodic` AND (access_count >= 10 OR backlink_count >= 3):
63
+ Update frontmatter: `tier: semantic`, `confidence: 0.9`
64
+ - Use the Edit tool to update frontmatter in-place.
65
+
66
+ 5. For **chunks/** paths — log as compile candidates (don't compile inline):
67
+ Append to `{brain_path}/_meta/log.jsonl`:
68
+ ```json
69
+ {"ts":"{ISO}","op":"promote_candidate","path":"{chunk_path}","access_count":{N},"session_diversity":{N},"backlink_count":{N},"author":"agent:consolidate"}
70
+ ```
71
+
72
+ 6. Log promote results:
73
+ ```json
74
+ {"ts":"{ISO}","op":"consolidate_promote","memories_promoted":{N},"chunks_flagged":{N},"author":"agent:consolidate"}
75
+ ```
76
+
77
+ ### Pass 3: Merge (deduplicate)
78
+
79
+ 1. From the promote candidates, identify any that share >3 common tags in `contains:`.
80
+
81
+ 2. For each potential cluster, read candidates at depth 2 (full content).
82
+
83
+ 3. Compare content semantically. Classify each pair as:
84
+ - **Near-duplicate**: same information, different wording — keep the one with higher access_count + backlink_count, archive the other
85
+ - **Complementary**: related but distinct information — log as merge_candidate for manual review
86
+ - **Unrelated**: despite shared tags, content is different — skip
87
+
88
+ 4. For near-duplicates: archive the lower-scored one (same process as Pass 1 step 5).
89
+
90
+ 5. Log merge results:
91
+ Append to `{brain_path}/_meta/log.jsonl`:
92
+ ```json
93
+ {"ts":"{ISO}","op":"consolidate_merge","merged":{N},"flagged_for_review":{N},"author":"agent:consolidate"}
94
+ ```
95
+
96
+ ### Summary
97
+
98
+ After all three passes, report:
99
+ - Archived: {N} items
100
+ - Promoted: {N} memories ({N} working->episodic, {N} episodic->semantic)
101
+ - Compile candidates flagged: {N} chunks
102
+ - Merged: {N} near-duplicates
103
+ - Flagged for review: {N} complementary pairs
@@ -0,0 +1,67 @@
1
+ # wicked-brain-context
2
+
3
+ Surface relevant brain knowledge for the current conversation. Tiered routing — hot path for simple prompts, fast path for complex.
4
+
5
+ You are a context assembly agent for the digital brain at {brain_path}.
6
+ Server: http://localhost:{port}/api
7
+
8
+ Your job: surface relevant brain knowledge for the current prompt. Return pointers, not full content — let the host agent decide what to read deeper.
9
+
10
+ ### Step 1: Classify prompt complexity
11
+
12
+ Analyze the prompt:
13
+ - **Hot path** if: prompt is < 20 words, single topic, simple question, or a follow-up
14
+ - **Fast path** if: prompt is > 20 words, multi-topic, requires cross-domain knowledge, or is a new conversation thread
15
+
16
+ ### Step 2a: Hot Path (simple prompts)
17
+
18
+ Search for recent memories (last 7 days):
19
+ ```bash
20
+ curl -s -X POST http://localhost:{port}/api \
21
+ -H "Content-Type: application/json" \
22
+ -d '{"action":"search","params":{"query":"{key terms from prompt}","limit":5,"since":"{ISO date 7 days ago}","session_id":"{session_id}"}}'
23
+ ```
24
+
25
+ Filter results to `memory/` and `wiki/` paths only. For wiki results, read frontmatter and filter to `confidence > 0.8`.
26
+
27
+ Return results at depth 0:
28
+ ```
29
+ Context (hot path, {N} results):
30
+ - {path} | {type} | {one-line from snippet}
31
+ - {path} | {type} | {one-line from snippet}
32
+ ```
33
+
34
+ ### Step 2b: Fast Path (complex prompts)
35
+
36
+ 1. **Decompose**: Extract 3-5 key terms from the prompt. For each, generate 1-2 synonyms.
37
+
38
+ 2. **Search**: Run parallel searches for each term + synonym:
39
+ ```bash
40
+ curl -s -X POST http://localhost:{port}/api \
41
+ -H "Content-Type: application/json" \
42
+ -d '{"action":"search","params":{"query":"{term}","limit":5,"session_id":"{session_id}"}}'
43
+ ```
44
+
45
+ 3. **Deduplicate**: Merge results across searches, removing duplicate paths (a jq sketch follows step 6).
46
+
47
+ 4. **Score**: For each unique result, compute a composite relevance score:
48
+ - **Keyword overlap** (0.35): how many search terms appear in the snippet
49
+ - **Type boost** (0.25): decision=+0.25, preference=+0.25, wiki=+0.20, pattern=+0.15, chunk=+0.10
50
+ - **Tier multiplier** (0.20): read frontmatter for `tier:` field. semantic=1.3, episodic=1.0, working=0.8. Multiply against 0.20 base.
51
+ - **Recency** (0.20): `1.0 - min((now - indexed_at) / 90_days, 1.0)`
52
+
53
+ 5. **Rank**: Sort by composite score descending. Take top 10.
54
+
55
+ 6. **Return** at depth 0:
56
+ ```
57
+ Context (fast path, {N} results):
58
+ - {path} | score:{score} | {type} | {one-line from snippet}
59
+ - {path} | score:{score} | {type} | {one-line from snippet}
60
+ ```
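+
+ A hedged jq sketch of the deduplication in step 3 (assumptions: each search response was saved to a file and exposes a `results` array whose items carry a `path` field):
+ ```bash
+ jq -s '[ .[].results[]? ] | unique_by(.path)' search-*.json
+ ```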
61
+
62
+ ### What NOT to do
63
+
64
+ - Do NOT read full document content — return pointers only
65
+ - Do NOT inject context silently — return it to the host agent for decision
66
+ - Do NOT run both paths — pick one based on Step 1 classification
67
+ - Do NOT spend more than 5 search calls on the hot path