opencode-swarm-plugin 0.40.0 → 0.42.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.hive/analysis/eval-failure-analysis-2025-12-25.md +331 -0
- package/.hive/analysis/session-data-quality-audit.md +320 -0
- package/.hive/eval-results.json +481 -24
- package/.hive/issues.jsonl +65 -16
- package/.hive/memories.jsonl +159 -1
- package/.opencode/eval-history.jsonl +315 -0
- package/.turbo/turbo-build.log +5 -5
- package/CHANGELOG.md +155 -0
- package/README.md +2 -0
- package/SCORER-ANALYSIS.md +598 -0
- package/bin/eval-gate.test.ts +158 -0
- package/bin/eval-gate.ts +74 -0
- package/bin/swarm.test.ts +661 -732
- package/bin/swarm.ts +274 -0
- package/dist/compaction-hook.d.ts +7 -5
- package/dist/compaction-hook.d.ts.map +1 -1
- package/dist/compaction-prompt-scoring.d.ts +1 -0
- package/dist/compaction-prompt-scoring.d.ts.map +1 -1
- package/dist/eval-runner.d.ts +134 -0
- package/dist/eval-runner.d.ts.map +1 -0
- package/dist/hive.d.ts.map +1 -1
- package/dist/index.d.ts +29 -0
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +99741 -58858
- package/dist/memory-tools.d.ts +70 -2
- package/dist/memory-tools.d.ts.map +1 -1
- package/dist/memory.d.ts +37 -0
- package/dist/memory.d.ts.map +1 -1
- package/dist/observability-tools.d.ts +64 -0
- package/dist/observability-tools.d.ts.map +1 -1
- package/dist/plugin.js +99356 -58318
- package/dist/swarm-orchestrate.d.ts.map +1 -1
- package/dist/swarm-prompts.d.ts +32 -1
- package/dist/swarm-prompts.d.ts.map +1 -1
- package/docs/planning/ADR-009-oh-my-opencode-patterns.md +353 -0
- package/evals/ARCHITECTURE.md +1189 -0
- package/evals/example.eval.ts +3 -4
- package/evals/fixtures/compaction-prompt-cases.ts +6 -0
- package/evals/scorers/coordinator-discipline.ts +0 -253
- package/evals/swarm-decomposition.eval.ts +4 -2
- package/package.json +4 -3
- package/src/compaction-prompt-scorers.test.ts +10 -9
- package/src/compaction-prompt-scoring.ts +7 -5
- package/src/eval-runner.test.ts +128 -1
- package/src/eval-runner.ts +46 -0
- package/src/hive.ts +43 -42
- package/src/memory-tools.test.ts +84 -0
- package/src/memory-tools.ts +68 -3
- package/src/memory.test.ts +2 -112
- package/src/memory.ts +88 -49
- package/src/observability-tools.test.ts +13 -0
- package/src/observability-tools.ts +277 -0
- package/src/swarm-orchestrate.test.ts +162 -0
- package/src/swarm-orchestrate.ts +7 -5
- package/src/swarm-prompts.test.ts +168 -4
- package/src/swarm-prompts.ts +228 -7
- package/.env +0 -2
- package/.turbo/turbo-test.log +0 -481
- package/.turbo/turbo-typecheck.log +0 -1
package/.hive/memories.jsonl
CHANGED
|
@@ -1,11 +1,14 @@
|
|
|
1
1
|
{"id":"002624b7-fbdd-4720-ad28-5a9fd25c0c3e","information":"Label propagation clustering implementation for graph visualization: Algorithm chosen over alternatives (Louvain, spectral clustering) for O(m×k) performance where k is typically 5-20 iterations. Key implementation details: (1) Build adjacency list from d3.SimulationLinkDatum where source/target can be string OR object - must use String(link.source) not direct casting to avoid type errors. (2) Nodes get unique initial labels (their IDs), then iteratively adopt most common neighbor label until convergence. (3) Ties broken deterministically by lowest label value to ensure reproducible results. (4) Final labels compacted to 0-indexed cluster IDs. (5) Centroids computed as simple averages, updated on force simulation ticks. Works well for 10-10k node graphs with 5-20 natural clusters. Catppuccin color cycling provides visual distinction.","created_at":"1766343300618.0","tags":"graph-clustering,label-propagation,d3-force,community-detection,typescript"}
|
|
2
2
|
{"id":"0099fc4f-ff1d-4771-a6a1-bb61e436638a","information":"LibSQLDatabase multi-scale retrieval option added: includeClusterSummaries in SearchOptions enables querying cluster_summaries table (when it exists) for RAPTOR-style hierarchical search. Implementation is currently a no-op (just destructures the option) because cluster_summaries table doesn't exist yet. When the table is created by another agent, the implementation can query both chunks and cluster summaries, merging results by score. This is part of the RAPTOR-lite architecture where documents can be searched at multiple scales: leaf chunks (fine-grained) and cluster summaries (coarse-grained themes).","created_at":"1766421046482.0","tags":"pdf-brain,raptor,multi-scale-retrieval,cluster-summaries,vector-search,libsql"}
|
|
3
3
|
{"id":"00c08d88-8825-4a44-b0a7-944ae1aec88d","information":"d3.polygonHull and d3.polygonCentroid implementation for cluster visualization: Use d3.polygonHull to compute convex hulls around node clusters. Add padding by placing multiple points around each node at 90-degree intervals (0, π/2, π, 3π/2) offset by padding distance. d3.polygonHull returns [number, number][] | null, so check for null and min length. d3.polygonCentroid takes hull points and returns [x, y] tuple for centroid. Render to canvas with semi-transparent fill (0.08 alpha) and stroke (0.3 alpha). When iterating Map in TypeScript, use Map.forEach() instead of for...of to avoid downlevelIteration issues. Pattern used in pdf-brain-viewer cluster hulls implementation.","created_at":"1766343757791.0","tags":"d3,visualization,canvas,clustering,convex-hull,typescript"}
|
|
4
|
+
{"id":"013cbc29-5f25-4d07-b571-cde06c47edd1","information":"Mandate System Implementation Pattern: Agent voting with state machine (candidate → established → mandate, with permanent rejection). Uses 90-day half-life decay matching learning.ts. Thresholds: established at net_votes >= 2, mandate at net_votes >= 5 AND vote_ratio >= 0.7, rejected at net_votes <= -3. Mandate status never demotes (no demotion once achieved), rejected is permanent. Score = net_votes * vote_ratio, combines strength with consensus. Each agent votes once per mandate to prevent manipulation. Dual storage backends: SemanticMemoryMandateStorage (CLI-based, persistent, semantic search) and InMemoryMandateStorage (ephemeral, testing). Tool collection: mandate_file (submit), mandate_vote (cast vote), mandate_query (semantic search), mandate_list (filter), mandate_stats (metrics). Pattern reused from learning.ts decay calculations. State transitions logged with human-readable reasons in PromotionResult.","created_at":"1766672883436.0","tags":"mandates,voting,state-machine,decay,consensus"}
|
|
4
5
|
{"id":"013e5fd6-20fc-49f0-b913-8815a66746d7","information":"Integration testing pattern for GitHub API tools: Use well-known public repos (e.g., vercel/next.js) as test targets. Handle rate limiting gracefully by checking for rate limit errors in responses and skipping tests with console.warn(). GitHub Code Search API often requires authentication - tests should skip gracefully when errors occur. Unauthenticated: 60 req/hr, Authenticated (GITHUB_TOKEN): 5000 req/hr. Error handling tests should accept either the expected error OR rate limit error as valid (e.g., result.error.includes(\"not found\") || result.error.includes(\"rate limit\")).","created_at":"1766294917308.0","tags":"testing,github-api,integration-tests,rate-limiting,error-handling"}
|
|
5
6
|
{"id":"03864e7d-2f09-4779-8619-eaba5e98cb46","information":"PGlite WAL management solution for pdf-library project: Added checkpoint() method to Database service (Database.ts). PGlite supports standard PostgreSQL CHECKPOINT command - no special configuration needed. Implementation: checkpoint() => Effect.tryPromise({ try: async () => { await db.exec(\"CHECKPOINT\"); }, catch: ... }). This prevents WAL accumulation that caused 930 WAL files (930MB) and WASM OOM crash. CHECKPOINT forces WAL to be written to data files, allowing WAL recycling. Transaction safety for addChunks/addEmbeddings already existed (BEGIN/COMMIT/ROLLBACK pattern). Tests verify checkpoint can be called and transactions roll back on failure. Pattern applies to any PGlite project with batch operations.","created_at":"2025-12-19T03:41:35.101Z","metadata":"{\"file\":\"src/services/Database.ts\",\"project\":\"pdf-library\",\"test_file\":\"src/services/Database.test.ts\",\"tests_passing\":10}","tags":"pglite,wal,checkpoint,database,pdf-library,transaction,wasm,oom"}
|
|
7
|
+
{"id":"03deaace-4b46-4cad-93b8-238390223118","information":"**Oh-My-OpenCode Configuration System**\n\nUses Zod for type-safe config validation with dual-scope loading:\n1. User config: `~/.config/opencode/oh-my-opencode.json` (base)\n2. Project config: `.opencode/oh-my-opencode.json` (overrides)\n\n**Config Schema Pattern:**\n```typescript\nconst OhMyOpenCodeConfigSchema = z.object({\n disabled_agents: z.array(BuiltinAgentNameSchema).optional(),\n disabled_hooks: z.array(HookNameSchema).optional(),\n disabled_mcps: z.array(McpNameSchema).optional(),\n agents: AgentOverridesSchema.optional(), // Per-agent customization\n experimental: ExperimentalConfigSchema.optional(),\n claude_code: ClaudeCodeConfigSchema.optional(), // Compat flags\n sisyphus_agent: SisyphusAgentConfigSchema.optional(),\n});\n```\n\n**Deep Merge Strategy:**\n- Arrays: Set union (`[...new Set([...base, ...override])]`)\n- Objects: Recursive `deepMerge(base, override)` with override precedence\n- Primitives: Override wins\n\n**Migration System:**\n- Auto-migrates old config keys → new keys (e.g., `omo → Sisyphus`)\n- Writes migrated config back to file automatically\n- Backward compatibility via AGENT_NAME_MAP lookup\n\n**Config Validation:**\n- Zod `safeParse` with error collection via `addConfigLoadError()`\n- Continues on validation failure, logs issues\n- Invalid configs are ignored, don't crash plugin load\n\n**Novel Pattern:** Config validation errors collected but don't block plugin - graceful degradation.","created_at":"1766673420779.0","tags":"oh-my-opencode,configuration,zod,validation,deep-merge"}
|
|
6
8
|
{"id":"03fb1085-e349-47d3-9e2e-084e129a7fdb","information":"@badass Content Model Decision (Dec 2024): Use ContentResource + ContentResourceResource pattern from course-builder. Key files:\n\n**Database Schema:**\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource.ts:19` - Core ContentResource table with flexible JSON `fields` column\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource-resource.ts:14` - Join table for parent-child relationships with `position` (double for fractional ordering)\n\n**Collection Management:**\n- `apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84` - Main collection editor with drag-and-drop, search, tier selection\n- `apps/ai-hero/src/components/list-editor/lesson-list/tree.tsx:103` - Nested tree using Atlassian Pragmatic DnD\n- `apps/ai-hero/src/lib/lists-query.ts:268` - addPostToList for resource association\n\n**Resource Form Pattern:**\n- `apps/ai-hero/src/components/resource-form/with-resource-form.tsx:78` - HOC for config-driven resource editing\n- `apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/cohort-form-config.tsx:8` - Example config\n\n**Key Gotchas:**\n- Position is `double` not `int` - allows fractional positions for insertion without reordering\n- Nested loading hardcoded to 3 levels in adapter (line 2689-2723)\n- Slug format: `{slugified-title}~{guid}` for uniqueness\n- JSON fields validated by Zod at app layer, not DB level\n\n**Patterns to Extract to @badass:**\n1. ContentResource base model to @badass/core\n2. ResourceFormConfig pattern to @badass/core\n3. CollectionEditor component to @badass/ui\n4. Position management utilities to @badass/core/utils","created_at":"2025-12-18T15:50:04.300Z"}
|
|
7
9
|
{"id":"04024144-e865-45b6-a6c2-b4d6ed735d8d","information":"Skills integration tests learned pattern: writeFileSync with mode parameter doesn't actually set executable permissions on created files. Need explicit chmodSync(path, 0o755) after writing for scripts to be executable via Bun.spawn. This is cross-platform filesystem behavior. Also: skills_init creates skills with TODO placeholder descriptions that fail validation, so duplicate detection requires valid descriptions in tests.","created_at":"1766295448269.0","tags":"testing,skills,filesystem,executable,integration-tests"}
|
|
8
10
|
{"id":"0496158b-3a9b-476e-9b13-982cfdd6abee","information":"{\"id\":\"test-1766263663559-ok1qs8pysja\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:47:43.559Z\",\"raw_value\":1}","created_at":"1766263663796.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:47:43.559Z\"}"}
|
|
11
|
+
{"id":"04e4c229-6403-4a0c-a8b3-ba5fa01eb465","information":"Mem0 memory operations pattern implemented for swarm-mail. LLM (claude-haiku-4-5) analyzes new information against existing memories to decide: ADD (genuinely new), UPDATE (refines existing), DELETE (contradicts), or NOOP (already captured). Key implementation details: (1) Use AI SDK v6 with generateText + Output.object() pattern, NOT generateObject; (2) UPDATE must update in-place via Drizzle db.update() to preserve memory ID, not delete+recreate; (3) UPDATE requires re-generating embedding via Ollama for the new content; (4) Schema uses z.discriminatedUnion on \"action\" field for type-safe LLM responses; (5) Tests require full libSQL schema including valid_from, valid_until, superseded_by, auto_tags, keywords columns from db/schema/memory.ts. This enables intelligent memory management where the system evolves its knowledge graph instead of blindly appending.","created_at":"1766643549785.0","metadata":"{\"epic\":\"mjl1ksc3peh\",\"task\":\"mjl1kscjw3s\",\"pattern\":\"mem0-memory-management\"}","tags":"mem0,memory-operations,llm,ai-sdk-v6,swarm-mail,drizzle,ollama,embeddings"}
|
|
9
12
|
{"id":"05ab4b37-7772-4e98-9c5d-34dfdee9da95","information":"{\"id\":\"pattern-1765653517980-ywilgz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:18:37.980Z\",\"updated_at\":\"2025-12-13T19:18:37.980Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:18:38.186Z","metadata":"{\"id\":\"pattern-1765653517980-ywilgz\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
10
13
|
{"id":"05b865e3-4546-4ba5-a9e7-91ac62247efc","information":"## Durable Streams - Upstream Source\n\nThe Effect-TS durable primitives in swarm-mail originated from https://github.com/durable-streams/durable-streams\n\n### What Durable Streams Provides\n- HTTP-based protocol for resumable, offset-based streaming\n- Works with web browsers, mobile apps, native clients\n- Refresh-safe, multi-device, multi-tab support\n- CDN-friendly for massive fan-out\n\n### Packages in Upstream\n- @durable-streams/client - TypeScript client\n- @durable-streams/server - Node.js server\n- @durable-streams/cli - Command-line tool\n- @durable-streams/state - State management\n\n### Our Local Adaptation (swarm-mail/src/streams/effect/)\n- DurableCursor - Positioned event consumption with checkpointing\n- DurableLock - Distributed mutex with TTL\n- DurableDeferred - Distributed promises\n- DurableMailbox - Actor message passing\n- ask.ts - RPC pattern combining mailbox + deferred\n\n### Key Insight\nOur primitives are a LOCAL adaptation for multi-agent coordination, not the full HTTP protocol. They use PGLite as the durable store. Task is to port them to libSQL.","created_at":"1766333614743.0","tags":"durable-streams,effect-primitives,architecture,upstream-source"}
|
|
11
14
|
{"id":"05be623c-dfdc-44f8-9abc-0d0dfa475685","information":"Worker prompt ON-DEMAND research pattern: Workers can now spawn researchers when they hit unknowns during implementation. Added new section to SUBTASK_PROMPT_V2 (after Step 9, before SWARM MAIL) with 3-step workflow: (1) Check semantic-memory_find first for existing research, (2) If not found, spawn researcher with swarm_spawn_researcher + Task tool, (3) Wait for results then continue. Includes clear triggers for WHEN to research (unknown API behavior, version-specific issues, outdated docs) vs WHEN NOT to (standard patterns, well-documented APIs, obvious implementations). This is OPTIONAL research driven by workers during implementation, distinct from PRE-DECOMPOSITION research driven by coordinators. TDD pattern: 6 new tests covering section placement, semantic-memory check, researcher spawn tool usage, research triggers, and anti-triggers. All placeholder substitutions use {bead_id}, {epic_id}, {project_path} for dynamic values.","created_at":"1766516151168.0","tags":"swarm,worker-prompt,research,on-demand,tdd,semantic-memory,swarm_spawn_researcher"}
|
|
@@ -17,6 +20,8 @@
|
|
|
17
20
|
{"id":"0952bf32-db7d-4378-8f1b-9dd04ca56f16","information":"DurableDeferred libSQL migration was already complete when task assigned. The implementation already used DatabaseAdapter parameter pattern correctly (config.db: DatabaseAdapter), had parameterized queries throughout (no string interpolation), and tests used createInMemorySwarmMailLibSQL(). All 11 tests passing. Key verification: check imports for PGLite (none found), verify DatabaseAdapter usage (line 73), confirm test patterns (line 34). This suggests the epic decomposition didn't check current state before creating subtasks.","created_at":"1766339219958.0","tags":"swarm,libsql,deferred,already-complete,epic-planning"}
|
|
18
21
|
{"id":"096354f7-241c-426e-a53e-d1ba08d00baf","information":"{\"id\":\"test-1766263308863-1sfc71v5ibx\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:41:48.863Z\",\"raw_value\":1}","created_at":"1766263309108.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:41:48.863Z\"}"}
|
|
19
22
|
{"id":"0973178b-96f0-4fe2-bc39-6fbf5d5361c7","information":"Vitest workspace auto-discovery gotcha in monorepos. Even with vitest.workspace.ts configured with explicit project paths vitest still auto-discovers and tries to run ALL test files in the repository by default. This causes failures when legacy or archived code has missing dependencies. Solution add --dir scope flag to package.json test scripts to limit vitest search scope. Example test vitest --dir packages ensures only packages directory is scanned. Why workspace config alone is not enough the workspace file defines separate test projects but does not prevent auto-discovery. Vitest will still find and attempt to load test files outside the workspace unless you explicitly limit the search directory. Affects Bun Turborepo monorepos with archived legacy code.","created_at":"2025-12-18T16:48:31.583Z"}
|
|
23
|
+
{"id":"0a148843-e77a-4c5b-8a0d-91e40d2072d6","information":"Eval failure root cause analysis for opencode-swarm-plugin (Dec 25 2025):\n\n**example.eval.ts (0%)**: Structural bug - data() returns {input: str, output: JSON} but task() does passthrough returning input string. Scorer receives \"Test task\" string instead of CellTree JSON. Fix: Make task() return JSON.stringify(input) where input is the CellTree object, not separate output field.\n\n**compaction-prompt.eval.ts (53%)**: Three issues:\n1. Case sensitivity - scorer checks /\\bEdit\\b/ and /\\bWrite\\b/ but fixtures have lowercase \"edit\"/\"write\". Word boundary \\b makes it case-sensitive. Fix: Add /i flag to regex.\n2. Missing tools - scorer expects 4 tools (Edit, Write, swarmmail_reserve, git commit) but fixtures only have 3 (edit, write, bash). Missing swarmmail_reserve and git commit.\n3. bash not in scorer - fixtures mention bash but scorer doesn't check for it.\n\nCombined impact: Perfect fixture scores 85% (not 100%) due to 0/4 forbidden tools matched. Average across 6 fixtures is 53%. Expected after fixes: 70-80% (some fixtures SHOULD fail - they test bad prompts).\n\nHistorical 100% claim in semantic memory is aspirational - these evals were just added in commit aa12943 (Dec 24). No prior baseline existed.\n\nFixes are 20 lines of code total. Low risk, high impact.","created_at":"1766674701733.0","tags":"evals,debugging,opencode-swarm-plugin,compaction-prompt,case-sensitivity,forbidden-tools"}
|
|
24
|
+
{"id":"0a4d2a90-2cda-4459-b601-94ff37cf0b6f","information":"COORDINATOR_PROMPT extraction pattern: When extracting large inline prompts from bin/swarm.ts into swarm-prompts.ts constants, follow TDD approach: (1) Write failing tests first checking for key sections, (2) Extract prompt with placeholder substitution ({task}, {project_path}), (3) Add helper format function for substitution. Key gotcha: Test regex patterns must account for case variations (FORBIDDEN vs forbidden) and exact text structure. For coordinator prompts, MUST include: role boundaries (what coordinators NEVER do), forbidden research tools section with swarm_spawn_researcher as alternative, all phase headers, and MANDATORY review loop. Phase 1.5 Research Phase goes between Phase 1 (Initialize) and Phase 2 (Knowledge Gathering) - this is where coordinators spawn researchers instead of calling docs tools directly. Format function pattern: replace all placeholders globally with .replace(/{placeholder}/g, value). Tests verify: constant exists, all phases present, forbidden tools listed, research phase documents swarm_spawn_researcher, format function works.","created_at":"1766620077128.0","metadata":"{\"file\":\"packages/opencode-swarm-plugin/src/swarm-prompts.ts\",\"helper\":\"formatCoordinatorPrompt\",\"constant\":\"COORDINATOR_PROMPT\",\"test_file\":\"packages/opencode-swarm-plugin/src/swarm-prompts.test.ts\",\"lines_added\":\"~200\",\"tests_added\":14}","tags":"swarm,coordinator,prompts,tdd,extraction,phase-1.5,forbidden-tools,researcher"}
|
|
20
25
|
{"id":"0b9184ca-cd44-42f1-ae5b-28c6aad6d368","information":"{\"id\":\"test-1766080068974-jpovvl8fce\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:47:48.974Z\",\"raw_value\":1}","created_at":"2025-12-18T17:47:49.178Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:47:48.974Z\"}"}
|
|
21
26
|
{"id":"0bbf5fe0-f8e4-47cc-8f6b-de0aece27650","information":"AI SDK v6 starter repo migration: When updating starter repos from v5 to v6, check ALL files with generateObject imports, not just the ones explicitly listed in the task. Found 2 additional files (invisible-ai-demo.ts, test-structured.ts) beyond the 3 assigned files. Key updates: 1) package.json dependencies (ai ^6.0.0, @ai-sdk/openai ^3.0.0, @ai-sdk/react ^3.0.0), 2) imports change from `generateObject` to `generateText, Output`, 3) TODO comments must reflect new pattern: `generateText({ output: Output.object({ schema, mode: 'array' }) })` instead of `generateObject({ schema, output: 'array' })`. Files may already be partially updated from formatter/prettier changes - always verify actual state before editing.","created_at":"1766434086678.0","tags":"ai-sdk,v6,migration,starter-repo,generateObject,generateText,Output"}
|
|
22
27
|
{"id":"0c44c18e-b76e-4d3c-a6a7-6bfe9836c795","information":"bd daemon creates git worktrees that block branch switching. The beads daemon (bd daemon) runs in background and creates worktrees at .git/beads-worktrees/main for syncing. When switching branches, git fails with \"fatal: 'main' is already used by worktree\". Solution: 1) Stop daemon with `bd daemon --stop`, 2) Remove .git/beads-worktrees and .git/worktrees directories, 3) Run `git worktree prune`, 4) Then checkout works. The daemon auto-starts and recreates worktrees, so stop it before branch operations. Config shows sync.branch = main which is the branch it tracks.","created_at":"2025-12-16T19:52:14.153Z"}
|
|
@@ -28,7 +33,11 @@
|
|
|
28
33
|
{"id":"0e25979a-ff67-4d4c-b9ef-94c1a85d183b","information":"{\"id\":\"pattern-1766350571145-34xtlu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:56:11.145Z\",\"updated_at\":\"2025-12-21T20:56:11.145Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766350571373.0","metadata":"{\"id\":\"pattern-1766350571145-34xtlu\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
29
34
|
{"id":"0e2654bb-47e5-4a0e-9738-427712dee767","information":"{\"id\":\"test-1766085028669-e33njleg6ak\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T19:10:28.669Z\",\"raw_value\":1}","created_at":"2025-12-18T19:10:28.913Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T19:10:28.669Z\"}"}
|
|
30
35
|
{"id":"0e7acef9-5500-4342-9c12-ef50c5997dee","information":"{\"id\":\"pattern-1765664067335-e68cvl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:14:27.335Z\",\"updated_at\":\"2025-12-13T22:14:27.335Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:14:27.567Z","metadata":"{\"id\":\"pattern-1765664067335-e68cvl\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
36
|
+
{"id":"0eb29580-772f-480c-acba-724dd5f54134","information":"{\"id\":\"test-1766610770941-uxojrnr51k\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T21:12:50.941Z\",\"raw_value\":1}","created_at":"1766610771165.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T21:12:50.941Z\"}"}
|
|
31
37
|
{"id":"0f3d03bf-9a59-41db-9569-fd639661aeab","information":"{\"id\":\"test-1766350569888-z8uv1atsc5q\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:56:09.888Z\",\"raw_value\":1}","created_at":"1766350570179.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:56:09.888Z\"}"}
|
|
38
|
+
{"id":"0f7cdbcf-07a7-4b9d-b8cd-5f19988ee73c","information":"{\"id\":\"pattern-1766635243533-dnzj96\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-25T04:00:43.533Z\",\"updated_at\":\"2025-12-25T04:00:43.533Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766635243747.0","metadata":"{\"id\":\"pattern-1766635243533-dnzj96\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
39
|
+
{"id":"0fdea5f9-7b46-47c1-b3d6-1d4ee0b545b2","information":"oh-my-opencode hook architecture research findings:\n\n## Hook System Overview\noh-my-opencode implements a comprehensive lifecycle hook system (21+ hooks) for OpenCode plugin extensibility. Unlike simple event listeners, this is a **multi-phase, composable hook architecture** with optional callbacks and dependency injection.\n\n## Complete Hook Inventory\n1. **Compaction Hooks** (3):\n - `anthropic-auto-compact`: Detects Anthropic token limit errors, auto-triggers compaction with retry logic\n - `preemptive-compaction`: Proactive compaction at 80% context threshold (configurable), prevents overflow\n - `compaction-context-injector`: Injects structured prompt before summarization to preserve user requests, goals, completed work, remaining tasks, and \"MUST NOT do\" constraints\n\n2. **Session Recovery Hooks** (2):\n - `session-recovery`: Repairs 3 error types (tool_result_missing, thinking_block_order, thinking_disabled_violation) by manipulating session filesystem\n - `session-notification`: Tracks session state, prevents double notifications\n\n3. **Think Mode Hooks** (1):\n - `think-mode`: Keyword detection (\"think\", \"ultrathink\"), auto-switches to high-variant model (e.g., sonnet-4-5 → sonnet-4.5-high), injects thinking config\n\n4. **Claude Code Compatibility Hooks** (5 event types):\n - `claude-code-hooks`: Full compatibility layer for Claude Code hooks (PreToolUse, PostToolUse, UserPromptSubmit, Stop, PreCompact)\n - Executes external hook commands via stdin/stdout protocol\n - Pattern matching with glob/regex matchers\n - JSON-based hook configuration from `.claude/settings.json`\n\n5. **Context Management** (3):\n - `context-window-monitor`: Injects reminder at 70% usage (Anthropic models)\n - `tool-output-truncator`: Aggressive truncation with experimental mode\n - `empty-message-sanitizer`: Fixes empty message parts\n\n6. **Directory Injection** (2):\n - `directory-agents-injector`: Auto-injects AGENTS.md from current/parent dirs\n - `directory-readme-injector`: Auto-injects README.md\n\n7. **Task Enforcement** (3):\n - `todo-continuation-enforcer`: Forces agent to continue if quits mid-task (Sisyphus pattern)\n - `empty-task-response-detector`: Detects and blocks empty task tool responses\n - `agent-usage-reminder`: Reminds to use specialized agents\n\n8. **Other** (2):\n - `rules-injector`: Injects RULES.md files\n - `comment-checker`: Prevents excessive AI comments\n - `keyword-detector`: Detects special keywords\n - `non-interactive-env`: Sets non-interactive env vars\n - `interactive-bash-session`: Tmux session management\n - `background-notification`: Notifies on background task completion\n - `auto-update-checker`: Version checking + toast\n\n## Hook Registration Architecture\n**Pattern:** Factory functions return hook objects with method keys matching OpenCode lifecycle events.\n\n```typescript\nfunction createMyHook(ctx: PluginInput, options?: MyOptions) {\n return {\n \"chat.message\": async (input, output) => { /* modify output */ },\n \"chat.params\": async (output, sessionID) => { /* modify params */ },\n \"tool.execute.before\": async (input, output) => { /* modify args */ },\n \"tool.execute.after\": async (input, output) => { /* modify results */ },\n \"event\": async ({ event }) => { /* handle lifecycle events */ },\n \"experimental.session.compacting\": async (input, output) => { /* inject context */ }\n }\n}\n```\n\n## Hook Execution Flow\n1. **Plugin loads** → All hooks instantiated with `isHookEnabled()` guard\n2. **Main plugin returns** → Aggregates hook methods into plugin object\n3. **OpenCode calls lifecycle methods** → Plugin dispatches to all enabled hooks\n4. **Hooks execute serially** → `await hook1(); await hook2(); ...`\n5. **No short-circuiting** → All hooks run unless one throws\n\n## Event Types (Lifecycle)\n- `session.created` → New session started\n- `session.deleted` → Session closed (cleanup trigger)\n- `session.idle` → Agent stopped responding\n- `session.error` → Error occurred (recovery trigger)\n- `session.updated` → Session metadata changed\n- `message.created` → New message added\n- `message.updated` → Message modified (streaming updates)\n\n## Hook Ordering Strategy\n**No explicit ordering** - hooks registered in code order, all run serially. Coordination via:\n- **Shared state**: Maps/Sets per sessionID\n- **Callbacks**: `setOnAbortCallback()`, `setOnRecoveryCompleteCallback()`\n- **Conditional execution**: Guards like `if (compactionInProgress.has(sessionID)) return`\n\n## Error Handling Patterns\n1. **Silent degradation**: Most hooks catch errors, don't throw (preserve user experience)\n2. **Graceful fallbacks**: Multiple recovery strategies in sequence\n3. **State cleanup**: `session.deleted` event triggers Map/Set cleanup\n4. **Retry logic**: `anthropic-auto-compact` has 3 retry attempts with different strategies\n\n## Novel Patterns for Swarm\n1. **Compaction Context Injection**: Structured prompt before summarization prevents loss of critical context (user requests, constraints, completed work, remaining tasks)\n2. **Callback-based hook coordination**: Hooks expose callbacks for cross-hook coordination without tight coupling\n3. **Filesystem-based session recovery**: Manipulates OpenCode's session storage files to repair broken states\n4. **Preemptive compaction**: Token usage monitoring → trigger compaction before overflow (80% threshold)\n5. **Hook message injection**: Injects system messages into session without going through chat API (filesystem write)\n6. **External hook protocol**: stdin/stdout protocol for user-defined hooks (Claude Code compatibility)\n7. **Think mode auto-switching**: Keyword detection → model variant upgrade + config injection\n\n## Key Takeaways\n- **Composability over inheritance**: Each hook is self-contained, opt-in via config\n- **Filesystem as IPC**: Session state manipulation via direct file writes\n- **Event-driven cleanup**: `session.deleted` as universal cleanup signal\n- **Progressive enhancement**: Hooks add features without breaking core functionality\n- **Context preservation through compaction**: Structured prompts ensure continuity after summarization","created_at":"1766673445032.0","tags":"oh-my-opencode,hooks,lifecycle,research,opencode,plugin-architecture"}
|
|
40
|
+
{"id":"0ffd3f18-5b14-4245-b2b1-ed324a3c844b","information":"CoordinatorEvent schema implementation pattern: Use z.discriminatedUnion on event_type field for type-safe coordinator event logging. Three event types: DECISION (strategy_selected, worker_spawned, review_completed, decomposition_complete), VIOLATION (coordinator_edited_file, coordinator_ran_tests, coordinator_reserved_files, no_worker_spawned), OUTCOME (subtask_success, subtask_retry, subtask_failed, epic_complete). Each event includes session_id, epic_id, timestamp, and flexible payload field (z.any() for max compatibility). Session capture writes to ~/.config/swarm-tools/sessions/{session_id}.jsonl as JSONL (one event per line). captureCoordinatorEvent() validates and appends. saveSession() reads all events and wraps in CoordinatorSession with computed start_time/end_time from event timestamps. Pattern enables eval scoring of coordinator behavior without coupling to specific payload schemas.","created_at":"1766610347341.0","tags":"zod,schema,coordinator,eval-capture,discriminated-union,jsonl"}
|
|
32
41
|
{"id":"1005d5c0-ac5e-4658-a555-3089c642fac5","information":"SWARM COORDINATION BUG: Coordinators must NEVER call swarmmail_reserve(). File reservation is exclusively for worker agents who are actually modifying files. When coordinator reserves files before spawning workers, it blocks the workers from accessing their assigned files. Correct flow: coordinator creates beads + spawns workers → workers call swarmmail_init() → workers call swarmmail_reserve() for their assigned files → workers do work → workers call swarm_complete() which auto-releases. The coordinator only monitors via swarmmail_inbox() and swarm_status().","created_at":"2025-12-14T23:18:17.346Z"}
|
|
33
42
|
{"id":"104f560e-6b0e-46e3-9835-9b19a8a6c6f2","information":"{\"id\":\"pattern-1766260049255-4xpnhx\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:47:29.255Z\",\"updated_at\":\"2025-12-20T19:47:29.255Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260049487.0","metadata":"{\"id\":\"pattern-1766260049255-4xpnhx\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
34
43
|
{"id":"11c9e111-bf66-44e9-84d0-6c9a338bf290","information":"OpenCode command flags use simple prefix parsing (--flag-name). The /swarm command now supports planning modes: --fast (skip brainstorming), --auto (minimal Q&A), --confirm-only (show plan + yes/no), and default (full Socratic). These map to swarm_plan_interactive modes: 'fast', 'auto', 'confirm-only', 'socratic'. Key pattern: parse flags from command string, pass mode to swarm_plan_interactive, handle multi-turn conversation until ready_to_decompose=true, then delegate to swarm/planner subagent. The command documentation includes clear behavior table showing Questions/User Input/Confirmation for each mode.","created_at":"2025-12-16T16:25:10.423Z"}
|
|
@@ -37,6 +46,8 @@
|
|
|
37
46
|
{"id":"132ee45b-67b0-4499-8401-bf761432a9f0","information":"Drizzle ORM PostgreSQL ContentResource pattern: (1) NeonHttpDatabase type needs explicit schema object with tables AND relations - relations required for db.query to work. (2) Multi-column where: use and() helper not && operator. (3) Fractional positions: doublePrecision() not double(). (4) JSONB: Record string unknown not any. (5) Nested loading: recursively build Drizzle query objects for each depth level. (6) Slug format: slugified-title~guid for uniqueness.","created_at":"2025-12-18T16:06:01.731Z"}
|
|
38
47
|
{"id":"13557e2b-154a-45ae-bad9-291357d15536","information":"Durable Streams Protocol (Electric SQL) - The open protocol for real-time sync to client applications. Key concepts:\n\n1. **Offset format**: `<read-seq>_<byte-offset>` - 16-char zero-padded hex for each part, lexicographically sortable\n2. **Operations**: PUT (create), POST (append), GET (read with offset), DELETE, HEAD (metadata)\n3. **Read modes**: catch-up (from offset), long-poll (wait for new data), SSE (streaming)\n4. **Headers**: Stream-Next-Offset, Stream-Up-To-Date, Stream-Seq (writer coordination), Stream-TTL/Expires-At\n5. **Storage pattern**: LMDB for metadata + append-only log files for data\n6. **Recovery**: Scan files to compute true offset, reconcile with metadata on startup\n7. **File handle pooling**: SIEVE cache eviction for LRU file handles\n\nImplementation repo: github.com/durable-streams/durable-streams\n- @durable-streams/client - TypeScript client\n- @durable-streams/server - Reference implementation\n- @durable-streams/conformance-tests - Protocol compliance tests\n\nCritical for Agent Mail: Provides crash recovery, offset-based resumability, and long-poll for live tailing. Better than custom event sourcing because battle-tested at Electric SQL for 1.5 years.","created_at":"2025-12-13T16:52:31.021Z"}
|
|
39
48
|
{"id":"135aa45e-e41f-4864-b075-a8ff658ae9ae","information":"{\"id\":\"pattern-1766074438727-1olr11\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:13:58.727Z\",\"updated_at\":\"2025-12-18T16:13:58.727Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:13:58.949Z","metadata":"{\"id\":\"pattern-1766074438727-1olr11\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
49
|
+
{"id":"13669a83-9a33-46e5-9a81-c6b0376ad2ca","information":"Memory & Context Preservation Research Findings (opencode-swarm-plugin):\n\n**Compaction Triggers:**\n1. Automatic when OpenCode session context reaches limit (experimental.session.compacting hook)\n2. Detection via multiple signals: active file reservations (HIGH confidence), in_progress cells (HIGH), open subtasks (MEDIUM), recent activity (MEDIUM)\n3. Philosophy: \"Err on side of continuation\" - false positive (extra context) cheaper than false negative (lost swarm)\n\n**Context Preservation Strategies:**\n1. Multi-layer compaction context injection based on confidence levels (high/medium/low/none)\n2. Session message scanning for ground truth swarm state (epicId, subtasks, agent names from tool calls)\n3. Dynamic state building with SPECIFIC values (not placeholders) - epicId, projectPath, subtask counts\n4. ASCII art visual anchors for coordinator identity reinforcement\n5. Forbidden tools list (edit, write, reserve) with SPAWN A WORKER alternative\n6. Immediate actions section (numbered 1-5) for post-compaction discipline\n\n**Semantic Memory Integration:**\n1. 90-day half-life decay formula: value = initial * (0.5)^(age_days/90)\n2. Confidence affects decay rate: high confidence (1.0) = 135 day half-life, low (0.0) = 45 day\n3. Auto-migration from legacy PGlite to libSQL on first use\n4. Vector search with Ollama embeddings + full-text search fallback\n5. Validate operation resets decay timer (marks memory still relevant)\n\n**Post-Compaction Recovery:**\n1. Tool call tracking (max 20 calls) after resumption to detect coordinator violations\n2. resumption_started event emitted on first tool call post-compaction\n3. Violation detection via lookup table: edit/write/reserve = coordinator_edited_file, coordinator_reserved_files\n4. Metrics collection across 6 phases: START, GATHER_SWARM_MAIL, GATHER_HIVE, DETECT, INJECT, COMPLETE\n5. Pattern extraction tracking for eval-driven development\n\n**Sources:**\n- RAPTOR paper (Recursive Abstractive Processing) for hierarchical summarization/compression\n- Ebbinghaus forgetting curve for exponential decay model\n- Effect-TS durable primitives for state management\n- OpenCode SDK session.messages API for ground truth extraction\n\nLocated: packages/opencode-swarm-plugin/src/compaction-*.ts, memory*.ts, post-compaction-tracker.ts","created_at":"1766672871984.0","tags":"research,compaction,memory,context-preservation,swarm,adr-009"}
|
|
50
|
+
{"id":"13ea848f-abf8-4f1d-bf02-772617839517","information":"reviewEfficiency vs reviewThoroughness potential contradiction:\n\nreviewThoroughness: reviews / finished_workers (0-1, measures completeness)\nreviewEfficiency: reviews / spawned_workers (penalizes >2:1 ratio)\n\nScenario that exposes contradiction: 2 workers spawned, 2 finished, 4 reviews completed\n- reviewThoroughness: 4/2 = 2.0 → clipped to 1.0 (perfect!)\n- reviewEfficiency: 4/2 = 2.0 → 0.5 (threshold penalty - over-reviewing)\n\nThese contradict each other. Thoroughness rewards all reviews, efficiency penalizes excessive reviews.\n\nRESOLUTION: They are INTENTIONALLY complementary:\n- Thoroughness = quality gate (did you review all workers?)\n- Efficiency = resource optimization (did you waste context on duplicate reviews?)\n\nNeed docstring clarifying this relationship. Both are used in coordinator-session.eval.ts but only thoroughness in overallDiscipline composite (efficiency is newer addition).","created_at":"1766674503176.0","tags":"evalite,scorers,coordinator,review-metrics,calibration"}
|
|
40
51
|
{"id":"140dbeef-29c1-4abd-8bd3-cadc264f3169","information":"ADR-009 Local Dev Database Decision (Dec 2024):\n\nVERDICT: Docker Compose + MySQL 8.0 for local development\n\nRATIONALE:\n- PlanetScale production target is MySQL-compatible (Vitess-backed)\n- Local-to-production parity prevents \"works on my machine\" dialect issues\n- Docker Compose provides declarative, version-controlled database setup\n- Zero MySQL administration knowledge required for developers\n\nKEY DECISIONS:\n1. MySQL 8.0 (not Postgres, not SQLite) - matches PlanetScale production dialect\n2. Docker Compose (not manual install, not PlanetScale branches) - version consistency + easy onboarding\n3. Port 3309 (not 3306) - avoids conflict with local MySQL installations\n4. Hybrid seed strategy: SQL files for bootstrap + TypeScript factories for test data\n5. Drizzle Kit integration: drizzle-kit push for migrations, drizzle-kit studio for GUI\n\nREJECTED ALTERNATIVES:\n- SQLite local + MySQL prod: Dialect mismatch causes production bugs (AUTOINCREMENT vs AUTO_INCREMENT, date handling, foreign keys)\n- Postgres: PlanetScale is MySQL-only, migration later would be painful\n- PlanetScale branches: Network latency, internet dependency, cost, no offline work\n- Manual MySQL install: Version fragmentation, config drift, M1/M2 issues, onboarding friction\n\nSCRIPTS INTERFACE:\n- bun db:up - Start container\n- bun db:down - Stop container\n- bun db:reset - Wipe + recreate + seed\n- bun db:migrate - Drizzle Kit push\n- bun db:seed - Run TypeScript seed script\n- bun db:studio - Drizzle Kit GUI\n\nCOURSE-BUILDER PRECEDENT:\nLegacy apps use identical pattern: MySQL 8.0 + Docker Compose + Drizzle Kit + seed_data volume mount\n\nGOTCHA: SQLite local testing is tempting for speed but creates false confidence - queries that work in SQLite fail in production MySQL due to dialect differences. Always match production database locally.","created_at":"2025-12-18T23:57:41.853Z","tags":"adr,database,docker,mysql,drizzle,planetscale,local-dev"}
|
|
41
52
|
{"id":"14ce13ac-bdc9-4972-a39f-054cd3d01cd8","information":"pdf-library document_concepts backfill successful: Script populated 2335 links from 803/907 documents (88.5% coverage). Tag normalization matched documents to 580/1641 concepts (35.3% usage). Most linked concept: \"Instructional Design\" with 104 documents. Confidence set to 0.8, source tagged as \"backfill\". JOIN queries work: can expand from concept -> documents and vice versa. Database path: ~/Documents/.pdf-library/library.db","created_at":"1766419666846.0","tags":"pdf-library,libsql,taxonomy,document_concepts,backfill,migration"}
|
|
42
53
|
{"id":"14e46924-baf7-4d30-8361-532404832c3f","information":"README showcase structure for developer tools: Lead with the unique innovation (learning system), not features. Use ASCII art liberally for visual impact on GitHub. Structure: Hero (what/why different) → Quick start → Deep dive by category → Scale metrics → Credits. For multi-agent systems, emphasize cost optimization (coordinator-worker split) and learning mechanisms (confidence decay, anti-pattern inversion). Include architecture diagrams showing information flow, not just component boxes.","created_at":"2025-12-18T15:34:49.143Z","tags":"documentation,readme,showcase,portfolio,ascii-art,developer-tools,architecture"}
|
|
@@ -45,16 +56,21 @@
|
|
|
45
56
|
{"id":"167d7034-c725-4eda-96f9-7efd8f050c6b","information":"{\"id\":\"test-1765771108697-kiz3s5fu2v\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:28.697Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:29.165Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:28.697Z\"}"}
|
|
46
57
|
{"id":"16e62f42-bd4a-464a-aad5-31b4ac04797a","information":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:42.155Z\",\"updated_at\":\"2025-12-18T16:17:42.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:42.421Z","metadata":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
47
58
|
{"id":"178856d5-dce3-4ee4-a47a-84bf9eb1b16b","information":"{\"id\":\"pattern-1766262800839-5p64ec\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:33:20.839Z\",\"updated_at\":\"2025-12-20T20:33:20.839Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262801045.0","metadata":"{\"id\":\"pattern-1766262800839-5p64ec\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
59
|
+
{"id":"1798ca87-9fae-4357-a50b-9435ba26e2ff","information":"ADR documentation patterns for opencode-swarm-plugin: Existing ADRs follow git-style format (Status, Context, Decision, Consequences, Implementation Notes). Key ADRs cover: monorepo structure (ADR-001), package extraction (ADR-002), performance with live queries (ADR-003), message queue features (ADR-004), DevTools observability (ADR-005), worktree isolation + review (ADR-007), worker handoff protocol (ADR-008). ROADMAP provides phased implementation timeline. Supporting docs: swarm-mail-architecture.md (technical deep-dive), analysis-socratic-planner-pattern.md (research), subagent-coordination-patterns.md (research), semantic-memory-cli-syntax.md (reference).","created_at":"1766672875782.0","tags":"ADR,documentation,opencode-swarm-plugin,patterns"}
|
|
48
60
|
{"id":"17c19a32-0f52-4cb3-bcf2-c8ea7d390c3e","information":"Linear SDK pagination pattern for @linear/sdk in workflow steps: Use pageInfo.hasNextPage and pageInfo.endCursor for cursor-based pagination. The SDK returns PaginatedConnection with nodes array and pageInfo object. Pattern: (1) Initialize cursor as undefined (not null), (2) Pass after: cursor in query options, (3) Check response.pageInfo.hasNextPage for continuation, (4) Update cursor with response.pageInfo.endCursor ?? undefined. Works for team.issues() and team.projects(). Cursor is string | undefined, NOT string | null. For incremental sync, use filter: { updatedAt: { gte: new Date(lastSyncTimestamp) } } and store the latest updated_at from results as the next sync cursor in Redis.","created_at":"1766517140690.0","tags":"linear-sdk,pagination,workflow,cursor,incremental-sync"}
|
|
49
61
|
{"id":"17e5c6fd-d9b7-4cc5-bc61-b9b40cdd1b2a","information":"{\"id\":\"test-1766262449195-qwoaqt61xu\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:27:29.195Z\",\"raw_value\":1}","created_at":"1766262449437.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:27:29.195Z\"}"}
|
|
50
62
|
{"id":"18e1fd32-ef6a-4332-88e2-b19dfff2e230","information":"JSONL export/import implementation for swarm-mail beads package: Export works well with hash-based deduplication and dirty tracking. Import has issues when creating beads via direct SQL INSERT to preserve IDs - subsequent adapter calls for dependencies/labels/comments may fail silently. 13/29 tests passing. Working: serialize/parse JSONL, content hashing, full export, dirty export, new bead import. Failing: dependency/label/comment import for new beads created via direct INSERT.","created_at":"2025-12-16T23:05:17.663Z","tags":"typescript,beads,jsonl,event-sourcing"}
|
|
63
|
+
{"id":"196f7746-fa81-447d-b0fb-5139d6126066","information":"Coordinator session eval pattern: Created coordinator-session.eval.ts that scores both real captured sessions AND synthetic fixtures. Key pattern: Use loadCapturedSessions() from data-loader.ts to load real sessions from ~/.config/swarm-tools/sessions/*.jsonl, then merge with synthetic fixtures for comprehensive testing. \n\nThree fixture types needed:\n1. Perfect coordinator (0 violations, 100% spawn/review, fast)\n2. Bad coordinator (multiple violations, poor spawn/review, slow)\n3. Decent coordinator (minor violations, mixed performance)\n\nThe eval uses evalite with 5 coordinator-discipline scorers: violationCount, spawnEfficiency, reviewThoroughness, timeToFirstSpawn, overallDiscipline.\n\nData loader pattern: Check if session dir exists, read all .jsonl files, parse events, reconstruct sessions using saveSession(). Returns empty array if no sessions (eval skips gracefully).\n\nSession files are JSONL with one CoordinatorEvent per line. Each event has session_id, epic_id, timestamp, event_type (DECISION/VIOLATION/OUTCOME), and type-specific payload.","created_at":"1766611314406.0","tags":"evalite,coordinator,session-capture,testing,fixtures,data-loader"}
|
|
51
64
|
{"id":"19bb5eb1-027e-4bce-9091-7a7f3f6b5e31","information":"{\"id\":\"test-1766349000983-gd3hkil1hrr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:30:00.983Z\",\"raw_value\":1}","created_at":"1766349001298.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:30:00.983Z\"}"}
|
|
52
65
|
{"id":"19c70339-3281-4311-9e7f-591b264624ea","information":"Bead Event Store Integration completed 75%. Implemented beads/store.ts (336 lines) with appendBeadEvent readBeadEvents replayBeadEvents following streams/store.ts pattern. Created beads/events.ts (215 lines) with 20 bead event type definitions to avoid TypeScript cross-package import issues. Key learnings: Cross-package TS imports fail with not under rootDir error - duplicate type definitions in consuming package. PGLite schema initialization happens in initializeSchema not migrations - tests must call getDatabase or manually init schema. Projection update functions expect loose event types with index signatures - need cast to any. Remaining work: Fix test setup initialize core schema, implement beads/adapter.ts factory update beads/index.ts exports.","created_at":"2025-12-16T22:00:19.988Z"}
|
|
53
66
|
{"id":"19daaead-8317-42d3-8abf-5a69c9f5191d","information":"{\"id\":\"test-1766341863421-b8vnf8ftqw\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T18:31:03.421Z\",\"raw_value\":1}","created_at":"1766341863639.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T18:31:03.421Z\"}"}
|
|
67
|
+
{"id":"1aafe3fc-6318-49a0-9f9b-7e769d6532be","information":"Coordinator session eval filter analysis (Dec 2025): Only 3/102 sessions (2.9%) pass default filter (minEvents=3, requireWorkerSpawn=true, requireReview=true). ROOT CAUSE: Filter is correctly designed but TOO STRICT for real-world data.\n\nDATA BREAKDOWN:\n- 70 sessions (68.6%) = single-event worker completions (NOT coordinator sessions, should be excluded)\n- 20 sessions (19.6%) = no worker_spawned event (incomplete coordinator sessions)\n- 9 sessions (8.8%) = spawned workers but no reviews captured\n- 3 sessions (2.9%) = PASS (gold-standard: 20-24 worker spawns, 4-13 reviews, 6-9 hours duration, zero violations)\n\nFILTER IS WORKING AS DESIGNED: Correctly isolates high-quality complete coordinator cycles for evaluation.\n\nPROBLEM: 2.9% passing rate means most coordinator behavior is invisible to evals.\n\nSOLUTION: Change defaults to requireWorkerSpawn=false, requireReview=false. This increases passing to ~28 sessions (27.5%) while still filtering out worker-only noise. Users can opt-in to stricter filters for gold-standard analysis.\n\nADDITIONAL FINDINGS:\n- No decomposition_complete events in ANY session (including the 3 passing)\n- Some sessions have 22 review_completed with no worker_spawned (split sessions?)\n- Session capture may split long-running coordinators across multiple files\n\nRECOMMENDATIONS:\n1. Loosen default filter criteria (immediate)\n2. Add isCoordinatorSession() filter to exclude worker-only sessions\n3. Investigate session splitting behavior in eval-capture.ts\n4. Add filter breakdown logging for observability\n5. Consider separate evals for different coordinator behavior aspects","created_at":"1766674540935.0","metadata":"{\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlk7jspacf\",\"passing_rate\":\"2.9%\",\"files_analyzed\":102,\"recommended_rate\":\"27.5%\"}","tags":"evalite,coordinator-session,data-quality,filter-tuning,session-capture"}
|
|
54
68
|
{"id":"1b0b1b73-196c-499b-9db7-530645d6749f","information":"GOTCHA: bun publish doesn't support npm OIDC trusted publishers (requires npm login). \n\nSOLUTION: Use bun pack + npm publish combo:\n1. `bun pm pack` - creates tarball WITH workspace:* resolved to actual versions\n2. `npm publish <tarball>` - publishes tarball with OIDC support\n\nThis is implemented in scripts/publish.ts for opencode-swarm-plugin monorepo.\n\nAlso: bin scripts that import external packages need those packages in dependencies, not just devDependencies. The bin/swarm.ts was missing @clack/prompts.","created_at":"2025-12-15T04:46:30.825Z"}
|
|
55
69
|
{"id":"1b236fab-235c-426d-b2cf-d9c54d051724","information":"MarkdownExtractor testing patterns for Effect-based services: Use Effect.runPromise() in test helpers to properly execute Effects. For file-based tests, use temp directories (mkdtempSync) with beforeAll/afterAll cleanup. When testing Effect error types (like MarkdownNotFoundError), catch the FiberFailure wrapper and check error string contains the error name - don't use instanceof on the wrapped error. Gray-matter parses YAML dates as Date objects, not strings. Code blocks in chunking get replaced with placeholders then restored, so test for content presence not exact backtick syntax.","created_at":"2025-12-16T21:41:26.968Z"}
|
|
70
|
+
{"id":"1c49f226-3b54-4328-9b81-96cf6c359bdf","information":"{\"id\":\"test-1766598233248-8jxwqbk0xex\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T17:43:53.248Z\",\"raw_value\":1}","created_at":"1766598233475.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T17:43:53.248Z\"}"}
|
|
56
71
|
{"id":"1ca58d9d-f34c-4cb8-8766-f6131b36d374","information":"swarm-review.integration.test.ts BLOCKER: sendSwarmMessage in swarm_review_feedback.execute() attempts to create its own LibSQLAdapter via appendEvent → createLibSQLAdapter, which fails with \"URL_INVALID\" for non-file:// URLs like '/Users/joel/.config/swarm-tools/swarm.db'. This breaks integration tests that use createInMemorySwarmMailLibSQL.\n\nRoot cause: sendSwarmMessage doesn't accept a database adapter parameter - it auto-creates one. For integration tests to work, either:\n1. swarm_review_feedback needs dbAdapter parameter (breaking change)\n2. sendSwarmMessage needs to use adapter cache (requires global state)\n3. Tests need to use file-based libSQL (not in-memory)\n\nWorkaround: Use file-based temp database instead of in-memory for integration tests that call swarm_review tools.\n\nAlternative: Mock sendSwarmMessage in tests - but defeats purpose of integration test.","created_at":"1766380581123.0","tags":"swarm-review,integration-test,sendSwarmMessage,libSQL,URL_INVALID,blocker"}
|
|
57
72
|
{"id":"1d034b17-20ee-4442-927a-3943288153d0","information":"Test learning about swarm patterns","created_at":"2025-12-16T16:21:07.411Z","tags":"swarm,test"}
|
|
73
|
+
{"id":"1d333531-0e58-4981-91da-b36dd3628d4a","information":"**Oh-My-OpenCode Hook System Architecture**\n\n**Hook Lifecycle Points (in order):**\n1. `config` - Modify OpenCode config before session starts\n2. `auth` - Auth provider integration (optional)\n3. `chat.message` - Intercept user messages\n4. `chat.params` - Modify LLM request params (model, temperature, etc.)\n5. `experimental.chat.messages.transform` - Transform message array before send\n6. `tool.execute.before` - Pre-process tool calls (modify args)\n7. `tool.execute.after` - Post-process tool results (inject content)\n8. `event` - React to system events (session.deleted, session.compacted, tool.execute)\n\n**Hook Creation Pattern:**\n```typescript\n// Hook factory function\nexport function createMyHook(ctx: PluginInput, options?: MyOptions) {\n // Private state (session-scoped Maps)\n const sessionState = new Map<string, MyState>();\n \n return {\n \"hook.name\": async (input, output, ...rest) => {\n // Mutate output in-place\n output.foo = transformFoo(input.foo);\n },\n event: async ({ event }) => {\n // Cleanup on session lifecycle events\n if (event.type === \"session.deleted\") {\n const sessionID = event.properties?.info?.id;\n sessionState.delete(sessionID);\n }\n },\n };\n}\n```\n\n**State Management Pattern:**\n- Hooks maintain session-scoped state via `Map<sessionID, State>`\n- Clean up state on `session.deleted` / `session.compacted` events\n- No shared global state - all state keyed by sessionID\n\n**Conditional Hook Loading:**\n```typescript\nconst hook = isHookEnabled(\"hook-name\") ? createHook(ctx) : null;\n// Later:\nawait hook?.[\"hook.name\"]?.(input, output);\n```\n\n**Novel Pattern:** Optional chaining on hook calls allows null hooks without branching.","created_at":"1766673442963.0","tags":"oh-my-opencode,hooks,lifecycle,state-management,events"}
|
|
58
74
|
{"id":"1d5c0410-845d-4a7e-b916-096dba823675","information":"Three-Tier Health Checks Pattern: Tier 1 (fast): Binary exists - command -v tool. Tier 2 (medium): Shallow verify - tool --version. Tier 3 (slow, --deep only): Functional test - actually calls API. Features: 5-minute cache TTL, 15-second timeout per check, JSON output for automation. Coordinator should run fast checks every 60s, deep checks before spawning workers. Detects: stale reservations, orphaned agents, database corruption. Source: Dicklesworthstone/agentic_coding_flywheel_setup doctor.sh","created_at":"1766591009508.0","tags":"swarm,health,monitoring,observability,patterns,acfs"}
|
|
59
75
|
{"id":"1dada1b7-5e76-46e7-9147-7355300f4f67","information":"{\"id\":\"test-1766261949130-leqx0ivxeo\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:19:09.130Z\",\"raw_value\":1}","created_at":"1766261949427.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:19:09.130Z\"}"}
|
|
60
76
|
{"id":"1e728072-c251-4ebc-9c3c-8753221d63a0","information":"{\"id\":\"pattern-1766261950204-daquzu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:19:10.204Z\",\"updated_at\":\"2025-12-20T20:19:10.204Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261950447.0","metadata":"{\"id\":\"pattern-1766261950204-daquzu\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -67,14 +83,20 @@
|
|
|
67
83
|
{"id":"20c5ee43-3389-42bd-b125-7da87c55445c","information":"{\"id\":\"test-1765670643103-ac1htt8yv4s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T00:04:03.103Z\",\"raw_value\":1}","created_at":"2025-12-14T00:04:03.299Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T00:04:03.103Z\"}"}
|
|
68
84
|
{"id":"20fb300c-80b9-400c-8125-258e1ddbba9b","information":"Session compaction hook implementation: Plugin.trigger(\"session.compacting\", { sessionID }, { context: [] }) allows plugins to inject additional context into the compaction prompt. The hook returns { context: string[] } which gets spread into the prompt text array and joined with \\n\\n. Hook is called BEFORE processor.process() to ensure context is available during compaction. Located in packages/opencode/src/session/compaction.ts process() function.","created_at":"2025-12-17T18:01:32.282Z"}
|
|
69
85
|
{"id":"2190aecb-b20f-4a27-8b32-ff9fd0810216","information":"{\"id\":\"pattern-1766262704550-6h9hi9\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:31:44.550Z\",\"updated_at\":\"2025-12-20T20:31:44.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262704795.0","metadata":"{\"id\":\"pattern-1766262704550-6h9hi9\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
86
|
+
{"id":"2192357b-8ab5-4479-be8f-2c605a4540fe","information":"Eval-to-learning feedback loop implementation pattern:\n\n**TDD approach:**\n1. RED: Write tests for rolling average, drop detection, and memory storage\n2. GREEN: Implement minimal code to pass (calculateRollingAverage, isSignificantDrop, formatFailureContext, learnFromEvalFailure)\n3. REFACTOR: Add configurable threshold, convenience helpers (createLearningConfig), polish docs\n\n**Key design decisions:**\n- Rolling average (default 5 runs) establishes baseline, not simple comparison to last run\n- 15% default threshold balances sensitivity vs noise (configurable)\n- Memory stores structured metadata (JSON) for future query flexibility\n- Tags: eval-failure, {eval-name}, regression for semantic search\n- Mock MemoryAdapter in tests to avoid real storage dependency\n\n**Integration points:**\n- Call after each eval run (eval-gates.ts, evalite runner)\n- Query memories before generating prompts for same eval\n- Threshold tuning per eval type (compaction vs coordinator behavior)\n\n**Type safety:**\n- Zod not needed (simple types, validated at boundaries)\n- StoreResult uses `id` field, not `memory_id` (swarm-mail interface)\n\nFile: packages/opencode-swarm-plugin/src/eval-learning.ts","created_at":"1766635984239.0","metadata":"{\"task\":\"mjkweht7320\",\"module\":\"eval-learning\",\"completed\":\"2024-12-25\"}","tags":"tdd,eval-learning,semantic-memory,pattern,testing"}
|
|
70
87
|
{"id":"22174fd3-71ad-4e49-ac02-67bd38e89db6","information":"opencode-swarm-plugin CI/CD status (Dec 2024):\n\nPACKAGES:\n- swarm-mail@0.1.2 - published, has dist/, repository field, ASCII art README\n- opencode-swarm-plugin@0.23.4 - published but has swarm-mail@0.1.0 dep (stale lockfile issue)\n\nPENDING FIX: \n- Updated scripts/publish.ts to use bun pm pack + npm publish\n- Updated package.json with ci:version and ci:publish scripts \n- Updated publish.yml to setup .npmrc and use new scripts\n- Need to push and merge release PR to get swarm-mail@0.1.2 as dependency\n\nOPEN BEADS:\n- opencode-swarm-plugin-whh1n (P1 bug): swarm_complete fails silently - NOT ADDRESSED\n- opencode-swarm-plugin-gde33 (P2): Swarm Mail Generalization Analysis - NOT ADDRESSED\n\nNEXT SESSION:\n1. Commit and push the publish workflow fixes\n2. Merge release PR when it appears\n3. Verify npm install works with correct swarm-mail version\n4. Then tackle the swarm_complete bug or the skill creation swarm task","created_at":"2025-12-15T05:07:35.356Z"}
|
|
71
88
|
{"id":"2300685b-e672-461f-9846-5ba2b78c4ac0","information":"Daemon process lifecycle management pattern for Node.js: Use child_process.spawn with detached true and stdio ignore for background daemons. Unref child process to allow parent exit. Store PID in file system. Use process.kill(pid, 0) to check if process is alive without sending signal - ESRCH error means dead. Wait for daemon ready by polling health check. SIGTERM for graceful shutdown, SIGKILL as fallback. Clean up PID file after process exit. Dynamic import of optional dependencies like postgres to avoid bundling in library consumers.","created_at":"2025-12-17T17:54:13.019Z"}
|
|
72
89
|
{"id":"235a989c-b607-42f8-a8dc-6f199ae8424f","information":"Lockfile parsing implementation for swarm research phase. Added getInstalledVersions() to detect package versions from lockfiles (npm package-lock.json, pnpm pnpm-lock.yaml, yarn yarn.lock) with fallback to package.json. Binary bun.lock falls back to package.json.\n\nKey design decisions:\n1. Lockfile preferred over package.json - returns what's ACTUALLY installed, not constraints\n2. Semver constraint stripping for package.json fallback - regex extracts X.Y.Z from \"^X.Y.Z\"\n3. Graceful degradation - returns empty array if no package info found\n4. TDD approach - 20 tests covering all formats, edge cases (missing packages, multiple packages, preference order)\n\nPlugin tool: swarm_get_versions - takes projectPath and packages array, returns VersionInfo[] with source tracking (\"lockfile\" vs \"package.json\").\n\nResearchers use this to fetch docs for the CORRECT version (not latest). Critical for accurate documentation lookups in swarm coordination.","created_at":"1766516621466.0","tags":"lockfile,version-detection,swarm-research,npm,pnpm,yarn,bun,tdd"}
|
|
90
|
+
{"id":"23b9ef2c-fe09-432a-a4bb-a2a8f92f90c2","information":"Progressive eval gates implementation with TDD: Created checkGate() function that enforces phase-based quality gates. Bootstrap phase (<10 runs) always passes to collect data. Stabilization phase (10-50 runs) warns on >10% regression but passes. Production phase (>50 runs + variance <0.1) fails on >5% regression. \n\nKey implementation details:\n- Baseline calculated as mean of all historical scores\n- Regression percentage calculated as (baseline - current) / baseline\n- Division by zero handled when baseline is 0\n- Thresholds configurable via GateConfig parameter (stabilizationThreshold, productionThreshold)\n- Helper functions: calculateBaseline(), calculateRegression(), formatRegressionMessage()\n- Returns GateResult with passed flag, phase, message, baseline, currentScore, regressionPercent\n\nTDD process worked perfectly:\n- RED: 25 failing tests covering all phases, edge cases, thresholds\n- GREEN: Minimal implementation passing all tests\n- REFACTOR: Extracted helpers, made thresholds configurable, improved error messages\n\nEdge cases handled: score of 0, baseline of 0, no history, perfect score 1.0, high variance preventing production phase, exactly 10/50 runs boundaries, exactly 5%/10% regression boundaries.\n\nIntegration with eval-history.ts: imports getPhase(), getScoreHistory(), calculateVariance(). Exports added to src/index.ts for programmatic use.","created_at":"1766635914926.0","tags":"tdd,eval-gates,progressive-gates,testing,quality-gates,regression-testing"}
|
|
91
|
+
{"id":"2523b916-2c48-4092-87c6-4794fb8f2a1b","information":"Coordinator prompt evaluation strategy (mjk8tk7jn11): Hybrid approach combining lightweight versioning (Option A) + Evalite offline testing (Option B). \n\nKey insights:\n- Existing infrastructure already supports this: coordinator-discipline scorers (violationCount, spawnEfficiency, reviewThoroughness, timeToFirstSpawn), session capture to JSONL, evalite integration\n- Coordinator prompt is 263 lines (not 500 as estimated), defined in swarm-prompts.ts lines 594-857\n- Offline regression testing with synthetic scenarios enables fast feedback (no 10min real swarms)\n- Semantic versioning with hash validation prevents accidental prompt edits\n- Regression threshold: 5% score drop = fail\n- Synthetic scenario coverage matrix: simple feature, unfamiliar tech, file-based refactor, bug fix, ambiguous task\n\nImplementation phases:\n1. Week 1: Add versioning + hash validation to swarm-prompts.ts\n2. Weeks 2-3: Build coordinator-prompt.eval.ts with 10+ synthetic scenarios\n3. Week 4 (optional): Analytics dashboard for prompt effectiveness\n\nDeferred to v0.34+: LLM-as-Judge continuous eval (Option C) - powerful but requires post-swarm LLM calls and has meta-problem risk.\n\nPattern: Treat prompts like code - version control, regression testing, measurable iteration.","created_at":"1766640410005.0","tags":"coordinator,eval,research,prompt-engineering,evalite,regression-testing"}
|
|
73
92
|
{"id":"258e9231-4bf7-4dbd-809f-3a16de6908f7","information":"When renaming tools in tool-availability.ts, must update 4 places: 1) ToolName type union, 2) toolCheckers object with async checker function, 3) fallbackBehaviors Record with description, 4) tools array in checkAllTools(). Keep deprecated tools for backward compatibility by adding both old and new names to all 4 locations. Mark deprecated with comments.","created_at":"2025-12-17T16:41:27.639Z"}
|
|
93
|
+
{"id":"2641121d-a131-4b9a-9f67-cd96ff48d62e","information":"Structured Output Parsing - 6 Extraction Strategies (Priority Order): 1) direct_parse - clean JSON (fastest), 2) json_code_block - ```json blocks (common in markdown), 3) any_code_block - unlabeled ``` blocks, 4) brace_match_object - finds balanced {...} with surrounding text, 5) brace_match_array - finds balanced [...], 6) repair_json - fixes trailing commas and quote issues. Brace matching respects: escaped quotes (\\\"), string boundaries (tracks inString state), MAX_BRACE_DEPTH=100 (prevents stack overflow). Repair strategy: removes trailing commas before } or ], replaces single quotes in keys (limited support), extracts JSON-like content first. All strategies return [parsed, method] tuple for tracing. JsonExtractionError includes attemptedStrategies array for debugging. Tool wrappers: structured_extract_json (raw), structured_validate (with schema), structured_parse_evaluation/decomposition/cell_tree (typed). Schema registry maps names to Zod schemas (evaluation, task_decomposition, cell_tree).","created_at":"1766672891997.0","tags":"structured-output,json-extraction,zod,parsing,strategies"}
|
|
74
94
|
{"id":"265444da-937e-4fa7-9f5a-0d551b5fcc32","information":"Auto-migration implementation in createMemoryAdapter: Added module-level flag `migrationChecked` to track if legacy memory migration has been checked. First call to createMemoryAdapter() checks: (1) legacyDatabaseExists() from swarm-mail, (2) target DB is empty (COUNT(*) FROM memories = 0), (3) if both true, runs migrateLegacyMemories() with console logging. Subsequent calls skip check (performance optimization). Critical: Export resetMigrationCheck() for test isolation - without it, module-level flag persists across tests causing false failures. Test pattern: beforeEach(() => resetMigrationCheck()) ensures each test starts with fresh state. Graceful degradation: migration failures log warnings but don't throw - adapter continues working. Migrated 176 real memories successfully in production test. Migration functions were added to swarm-mail/src/index.ts exports (legacyDatabaseExists, migrateLegacyMemories, getMigrationStatus, getDefaultLegacyPath).","created_at":"2025-12-18T21:12:31.305Z","metadata":"{\"file\":\"src/memory.ts\",\"pattern\":\"auto-migration-on-first-use\",\"project\":\"opencode-swarm-plugin\"}","tags":"auto-migration,memory,pglite,testing,module-state,swarm-mail"}
|
|
75
95
|
{"id":"27928bec-546f-4a77-a32f-53415771c127","information":"PGlite WAL accumulation root cause: \"different vector dimensions 1024 and 0\" error from failed embedding operations. Solution: Validate embeddings BEFORE database insert in Ollama service. Added validateEmbedding() function that checks: 1) dimension not 0 (empty), 2) dimension matches expected (1024 for nomic-embed-text), 3) no NaN/Infinity values. Integrated into embedSingle() which is used by both embed() and embedBatch(). This prevents pgvector corruption that causes WAL buildup since PGlite never checkpoints. Test coverage: 6 tests covering all validation cases in Ollama.test.ts.","created_at":"2025-12-19T03:30:20.283Z","tags":"pglite,pgvector,embeddings,validation,ollama,wal,database-corruption,pdf-library"}
|
|
76
96
|
{"id":"27f7e1e7-f314-45b6-a916-b28431053392","information":"{\"id\":\"test-1766262042366-41ozxqqdxx3\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:20:42.366Z\",\"raw_value\":1}","created_at":"1766262042619.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:20:42.366Z\"}"}
|
|
97
|
+
{"id":"28a7ed62-751b-4999-9cc0-3c37e1c076dd","information":"oh-my-opencode LSP Integration: Comprehensive LSP tools for AI agents. 11 tools (hover, goto-definition, find-references, document-symbols, workspace-symbols, diagnostics, servers, prepare-rename, rename, code-actions, code-action-resolve). Singleton LSPServerManager with connection pooling, 5min idle timeout. Multi-workspace support keyed by root::serverId. Auto-server detection via PATH + node_modules. Context-safe limits: 100 refs, 50 symbols, 50 diagnostics. Config layers: project → user → opencode → builtin. Novel pattern: lazy-load servers per file extension, then pool. Agents get code intelligence without manual setup.","created_at":"1766673445140.0","tags":"oh-my-opencode,lsp,language-server,code-intelligence"}
|
|
77
98
|
{"id":"28d55a17-96b9-4b3c-a10e-1045925ced18","information":"PGlite Database Path Isolation Bug:\n\n**Problem:** Integration tests were failing intermittently because all tests shared the SAME global database (`~/.opencode/streams`) instead of getting isolated per-test databases. This caused schema conflicts - old schema from previous tests was reused.\n\n**Root Cause:** `getDatabasePath()` logic was:\n```typescript\nif (projectPath) {\n const localDir = join(projectPath, \".opencode\");\n if (existsSync(localDir) || existsSync(projectPath)) {\n // create local DB\n }\n}\n// fallback to global\n```\n\nWhen `projectPath` didn't exist (e.g., `/tmp/test-swarm-12345` not created yet), the `existsSync(projectPath)` check failed, so it fell back to global DB. Tests never created the projectPath directory, assuming getDatabasePath would handle it.\n\n**Solution:** Create `projectPath` directory in `getDatabasePath()` before checking:\n```typescript\nif (projectPath) {\n const localDir = join(projectPath, \".opencode\");\n // Create project directory if it doesn't exist\n if (!existsSync(projectPath)) {\n mkdirSync(projectPath, { recursive: true });\n }\n if (!existsSync(localDir)) {\n mkdirSync(localDir, { recursive: true });\n }\n return join(localDir, \"streams\");\n}\n```\n\n**Impact:** Now each test gets an isolated database at `projectPath/.opencode/streams`, preventing schema pollution between tests.\n\n**Files Changed:**\n- `streams/index.ts`: Fixed `getDatabasePath()` to create directories\n\n**Lesson:** When database path depends on a directory, create it unconditionally. Don't assume caller will create it.","created_at":"1766331466890.0","tags":"pglite,test-isolation,database-path,integration-tests,mkdir"}
|
|
99
|
+
{"id":"28fe3f79-fa37-4d51-ac52-a7ba1c489403","information":"{\"id\":\"test-1766599110442-02is5oo5wefy\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T17:58:30.442Z\",\"raw_value\":1}","created_at":"1766599110666.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T17:58:30.442Z\"}"}
|
|
78
100
|
{"id":"291f3101-82dc-41f8-b077-fbce25dfd767","information":"@badass Video Pipeline Decision (Dec 2024): Videos are ALWAYS separate ContentResource types, never embedded fields. Video resources link to posts/lessons via ContentResourceResource join table. This enables video reuse across multiple collections. \n\nCourse-builder has a full web-based, Inngest-backed video pipeline currently in @coursebuilder/core - but core is bloated and this needs extraction. Video processing should be its own package (@badass/video or @badass/mux).\n\nKey reference files for video pipeline:\n- course-builder core video processing (needs extraction, location TBD)\n- academy-content Mux integration: vercel/academy-content/plans/video-upload-processing-plan.md\n\nArchitecture: Upload triggers Inngest job, Mux processes video, webhook updates VideoResource with asset ID and playback info.","created_at":"2025-12-18T15:51:59.366Z"}
|
|
79
101
|
{"id":"2add0e53-1dba-4191-bea0-0451e681f898","information":"{\"id\":\"test-1765751935012-epiln8ycyte\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:38:55.012Z\",\"raw_value\":1}","created_at":"2025-12-14T22:38:55.304Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:38:55.012Z\"}"}
|
|
80
102
|
{"id":"2b39efc2-f484-4f02-81ac-182da5de8048","information":"{\"id\":\"pattern-1766256884732-h98jpn\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T18:54:44.731Z\",\"updated_at\":\"2025-12-20T18:54:44.731Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766256884971.0","metadata":"{\"id\":\"pattern-1766256884732-h98jpn\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -106,29 +128,41 @@
|
|
|
106
128
|
{"id":"3667bbf3-77fa-4beb-868e-61164dd85081","information":"npm Trusted Publishers setup for opencode-swarm-plugin monorepo:\n\nPROBLEM SOLVED: npm token management is a mess. Trusted Publishers use OIDC - no tokens needed.\n\nSETUP:\n1. Workflow needs `permissions: id-token: write` \n2. Each npm package configured at npmjs.com/package/PKG/access with Trusted Publisher:\n - Organization: joelhooks\n - Repository: opencode-swarm-plugin \n - Workflow: publish.yml\n3. Use `bunx changeset publish` NOT `npm publish` directly - changeset publish is smarter, only publishes packages with new versions not yet on npm\n\nKEY GOTCHA: Using `bun turbo publish:pkg` with individual `npm publish --provenance` scripts FAILED because:\n- turbo tried to publish ALL packages including ones already at same version on npm\n- OIDC token detection didn't work through bun→npm chain properly\n\nSOLUTION: `bunx changeset publish` handles everything:\n- Checks npm registry for each package version\n- Only publishes packages where local version > npm version\n- Creates git tags automatically\n- Works with OIDC out of the box\n\nWORKFLOW FILE: .github/workflows/publish.yml\n- Triggers on push to main\n- Uses changesets/action@v1\n- publish command: `bun run release` which runs `bunx changeset publish`\n\nDOCS: https://docs.npmjs.com/trusted-publishers","created_at":"2025-12-15T04:34:51.427Z"}
|
|
107
129
|
{"id":"36a16df5-4bf9-4c2a-9b27-96613e25201b","information":"{\"id\":\"test-1766260910579-wcmez499yqe\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:50.579Z\",\"raw_value\":1}","created_at":"1766260910801.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:50.579Z\"}"}
|
|
108
130
|
{"id":"36e760a9-9737-4f1e-8159-a739f679af77","information":"Monorepo publishing with workspace:* protocol and npm OIDC trusted publishers:\n\nPROBLEM: workspace:* doesn't get resolved by npm publish or changeset publish, causing \"Unsupported URL Type workspace:*\" errors on install.\n\nSOLUTION (scripts/publish.ts):\n1. bun pm pack - creates tarball with workspace:* resolved to actual versions\n2. npm publish <tarball> - publishes with OIDC support\n\nWHY NOT bun publish? It resolves workspace:* but doesn't support npm OIDC trusted publishers (requires npm login).\n\nWHY NOT npm publish directly? It doesn't resolve workspace:* protocol.\n\nWHY NOT changeset publish? Uses npm under the hood, same problem.\n\nADDITIONAL GOTCHA: CLI bin scripts (like bin/swarm.ts) need external imports in dependencies, not devDependencies. Users installing globally won't have devDeps, causing \"Cannot find module\" errors.\n\nFILES:\n- scripts/publish.ts - custom publish script\n- .github/workflows/publish.yml - calls bun run release which runs scripts/publish.ts","created_at":"2025-12-15T04:47:41.617Z"}
|
|
131
|
+
{"id":"36e97644-c248-4437-990c-0e3123b927d4","information":"TDD pattern for quality filter options in JSONL loaders: When adding filter options to data loaders, use dependency injection (sessionDir parameter) instead of mocking ES module exports. Structure: (1) Add filter params to options with sensible defaults (minEvents=3, requireWorkerSpawn=true), (2) Extract quality check logic to helper function for clarity (meetsQualityCriteria), (3) Apply filters BEFORE limit for accurate sampling, (4) Log filtered count for visibility. Test strategy: Create temp session dir, write JSONL files with createSessionFile helper, pass sessionDir to loader, assert filter behavior. This pattern worked for loadCapturedSessions in evals/lib/data-loader.ts with 7 tests covering individual filters, combinations, defaults, and limit ordering.","created_at":"1766638116890.0","tags":"tdd,data-loader,quality-filters,testing,dependency-injection,evalite"}
|
|
109
132
|
{"id":"370b4da1-176a-4975-a58d-9cd46d515918","information":"TDD workflow for JSONL merge function: Write tests FIRST that verify behavior (empty files, overlaps, missing files), then implement minimal code to pass. For JSONL deduplication, use Set to track existing IDs, filter base records, append new ones, write back. Testing pattern: mkdirSync temp project, writeFileSync JSONL fixtures, run function, readFileSync + parse to verify. All 6 test cases passed on first implementation - TDD prevented edge case bugs.","created_at":"2025-12-18T00:56:09.189Z"}
|
|
133
|
+
{"id":"38066454-933d-4cb9-ac3c-1af8fb3875a3","information":"OpenCode plugin tool creation pattern: (1) Add types to adapter module (Args, Result interfaces). (2) Add method to adapter interface and implementation. (3) Use tool.schema for parameter validation in plugin wrapper. (4) Export tool from memory-tools.ts and add to memoryTools registry. (5) Tool is auto-registered via ...memoryTools spread in index.ts. (6) Write adapter-level tests FIRST (TDD), then add tool-level integration tests. (7) For features requiring external dependencies (like swarm-mail's smart upsert), create mock implementation in plugin with clear TODOs for real integration. Mock should match result schema exactly. (8) Use readonly Result types and build objects with spread operator for optional fields to avoid TS2540 errors.","created_at":"1766673066970.0","tags":"opencode,plugin,tools,tdd,typescript"}
|
|
110
134
|
{"id":"388c94e7-6227-4af2-a13f-e5c54af4cf5f","information":"pdf-brain enrichment fix: AutoTagger.enrich() returns concepts array but never called taxonomy.assignToDocument(). Fixed in 3 locations in cli.ts:\n\n1. `add` command (line ~683): After library.add(), extract concepts from enrichedMetadata, loop and call taxonomy.assignToDocument(doc.id, conceptId, 0.9, \"llm\")\n\n2. `ingest` TUI mode (line ~1664): Moved enrichedMetadata declaration outside if-block for scope, added same concept assignment loop after library.add()\n\n3. `ingest` CLI mode (line ~1887): Added concept assignment loop using fileMetadata.concepts after library.add()\n\nPattern: \n```typescript\nconst concepts = metadata.concepts as string[] | undefined;\nif (concepts && Array.isArray(concepts) && concepts.length > 0) {\n const taxonomy = yield* TaxonomyService;\n for (const conceptId of concepts) {\n yield* taxonomy.assignToDocument(doc.id, conceptId, 0.9, \"llm\");\n }\n}\n```\n\nVerified with manual test: added documents now show \"Assigned N concept(s)\" and document_concepts table is populated. All 181 tests pass.","created_at":"1766420197417.0","tags":"pdf-brain,autotagger,enrichment,taxonomy,document_concepts,bug-fix,tdd"}
|
|
111
135
|
{"id":"38fbf3f0-eea1-4d7d-b888-7cb68f73ae91","information":"{\"id\":\"test-1766262703408-0tujzt32od4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:31:43.408Z\",\"raw_value\":1}","created_at":"1766262703656.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:31:43.408Z\"}"}
|
|
112
136
|
{"id":"39879763-33d1-4881-b7f7-8bb5bbca62f2","information":"CLI integration for pdf-brain multi-scale retrieval: Added --include-clusters flag to search command, wired to SearchOptions.includeClusterSummaries. Updated HELP text to document the flag. Exported parseArgs() for testability. Pattern: CLI flag → parseArgs → SearchOptions → LibSQLDatabase.vectorSearch(). The cluster command implementation (using streamEmbeddings, mini-batch k-means, soft clustering) is in LibSQLDatabase and ClusteringService but not yet exposed via CLI - that's a separate integration task. TDD approach: wrote tests for flag parsing first, then implemented minimal wiring.","created_at":"1766424483922.0","metadata":"{\"file\":\"src/cli.ts\",\"pattern\":\"flag-to-service-wiring\",\"test_file\":\"src/cli.test.ts\"}","tags":"pdf-brain,cli,tdd,multi-scale-retrieval,clustering,flags"}
|
|
137
|
+
{"id":"3989cb4e-bd79-46cb-8e1d-df56ec443c5e","information":"**Oh-My-OpenCode Plugin Architecture Overview**\n\nEntry Point: `src/index.ts` exports single `OhMyOpenCodePlugin: Plugin` function that receives `PluginInput` context.\n\n**Core Architecture Pattern:**\n- Single plugin function that returns object mapping OpenCode hook names to implementations\n- Hook pattern: `\"hook.name\": async (input, output, ...rest) => { /* mutation logic */ }`\n- Configuration-driven feature toggling via Zod schemas\n- Multi-scope loading (user, project, opencode-global, opencode-project) with priority resolution\n\n**Plugin Object Structure:**\n```typescript\nconst Plugin: Plugin = async (ctx: PluginInput) => {\n // 1. Load config from ~/.config/opencode/oh-my-opencode.json + .opencode/oh-my-opencode.json\n const config = loadPluginConfig(ctx.directory);\n \n // 2. Conditionally create hook instances based on config.disabled_hooks\n const hook1 = isHookEnabled(\"hook-name\") ? createHook() : null;\n \n // 3. Return hook mapping object\n return {\n tool: { tool1, tool2, ...dynamicTools },\n \"chat.message\": async (input, output) => { /* intercept */ },\n \"chat.params\": async (output, sessionID) => { /* modify params */ },\n \"tool.execute.before\": async (input, output) => { /* pre-process */ },\n \"tool.execute.after\": async (input, output) => { /* post-process */ },\n config: async (config) => { /* modify OpenCode config */ },\n event: async ({ event }) => { /* react to events */ },\n auth: authHooks, // Optional auth provider\n };\n};\n```\n\n**Key Insight:** Hooks mutate `output` parameter in-place. No return values - side effects only.","created_at":"1766673412024.0","tags":"oh-my-opencode,architecture,plugin,hooks,opencode-sdk"}
|
|
113
138
|
{"id":"3a1f3810-5ab9-419c-8c27-48b5b28ea1c1","information":"{\"id\":\"test-1766349510928-8i8zfpvwfw2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:38:30.928Z\",\"raw_value\":1}","created_at":"1766349511174.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:38:30.928Z\"}"}
|
|
114
139
|
{"id":"3a7be2a6-36d7-40b5-a14f-fedefadb4608","information":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:20:42.550Z\",\"updated_at\":\"2025-12-13T19:20:42.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:20:42.749Z","metadata":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
115
140
|
{"id":"3a8ffc86-7de2-4a69-aff2-1de413c0dca7","information":"AI SDK 6 with Vercel AI Gateway - SIMPLEST PATTERN: Just use the model string directly with generateText/generateObject. No provider setup needed.\n\n```typescript\nimport { generateText } from \"ai\";\n\nconst { text } = await generateText({\n model: \"anthropic/claude-haiku-4-5\",\n prompt: \"...\",\n});\n```\n\nThe AI SDK automatically uses the AI_GATEWAY_API_KEY env var and routes through Vercel AI Gateway. No need for createOpenAICompatible or any provider configuration. This is the canonical pattern for all AI SDK usage in Joel's projects.","created_at":"1766338514071.0","tags":"ai-sdk,vercel-ai-gateway,pattern,anthropic,generateText"}
|
|
116
141
|
{"id":"3a949a0b-337c-4cd8-919a-bdfb4040dd07","information":"{\"id\":\"test-1766263206530-evd2s8oy0nt\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:40:06.530Z\",\"raw_value\":1}","created_at":"1766263206770.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:40:06.530Z\"}"}
|
|
142
|
+
{"id":"3aa21a69-f3dd-4fb1-b6bb-c47675bd808c","information":"{\"id\":\"test-1766610307392-liq47cibycq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T21:05:07.392Z\",\"raw_value\":1}","created_at":"1766610307615.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T21:05:07.392Z\"}"}
|
|
117
143
|
{"id":"3b086612-be4e-4bf3-83bb-2d30eaacf873","information":"Drizzle ORM has specific limitations with libSQL vector operations and FTS5 full-text search that require raw SQL:\n\n**MUST use raw SQL for:**\n1. Vector function calls: `embedding: sql\\`vector(${JSON.stringify(array)})\\`` - Drizzle's custom vector type handles reads but not writes with vector() function\n2. Vector similarity search: `vector_top_k()`, `vector_distance_cos()` - libSQL-specific ANN search not in Drizzle\n3. FTS5 virtual tables: `CREATE VIRTUAL TABLE ... USING fts5(...)` - Drizzle doesn't support virtual tables\n4. FTS5 MATCH queries: `WHERE content MATCH $query` - Drizzle doesn't support FTS5 syntax\n5. FTS5 triggers: Auto-sync triggers for FTS5 tables - Drizzle doesn't support triggers\n6. Vector indexes: `CREATE INDEX ... ON table(libsql_vector_idx(column))` - libSQL-specific function syntax\n\n**Pattern for acceptable raw SQL:**\n- Use Drizzle for all standard CRUD operations\n- Use `sql\\`\\`` template for libSQL-specific features\n- Use DatabaseAdapter abstraction for portable queries\n- Document WHY raw SQL is required (feature not in Drizzle)\n\n**When auditing for Drizzle conversion:** Check if raw SQL is for vector ops, FTS5, or triggers FIRST before attempting conversion. These features aren't in Drizzle's scope.\n\nApplies to: swarm-mail memory subsystem (store.ts, libsql-schema.ts)","created_at":"1766296170854.0","metadata":"{\"file\":\"packages/swarm-mail/src/memory/store.ts\",\"context\":\"memory subsystem audit\"}","tags":"drizzle,orm,libsql,vector,fts5,sql,migrations"}
|
|
144
|
+
{"id":"3b5726fa-99a5-4206-b63d-814556beb52a","information":"Successfully removed 4 unused coordinator scorers (researcherSpawnRate, skillLoadingRate, inboxMonitoringRate, blockerResponseTime) from evals/scorers/coordinator-discipline.ts. These were fully defined and tested but NEVER used in any eval file - classic case of prototyped but never integrated. Removed 254 lines (649→395). Verification: grep for imports in eval files, check index.ts exports, run typecheck. Pattern: Always verify dead code claims with grep before deleting - trust but verify.","created_at":"1766677614330.0","tags":"dead-code-removal,evalite,scorers,verification"}
|
|
118
145
|
{"id":"3b8562be-cf8d-451b-9a1c-1a4c1ea63360","information":"DurableStreamAdapter implementation for Hive Visualizer (Dec 24, 2025):\n\n**Pattern:** Adapter layer that wraps SwarmMailAdapter for Durable Streams protocol compatibility\n\n**Implementation:**\n- `read(offset, limit)` - Uses `swarmMail.readEvents({ afterSequence, limit })` for offset-based pagination\n- `head()` - Uses `swarmMail.getLatestSequence(projectKey)` to return latest sequence number\n- `subscribe(callback)` - Polls every 100ms, initializes lastSequence to current head to avoid replaying history\n\n**Testing gotcha:** Tests must use `swarmMail.appendEvent()` adapter method, not raw `appendEvent()` with `swarmMail.db` (which doesn't exist on interface). SwarmMailAdapter doesn't expose `.db` property - it has `getDatabase()` method instead.\n\n**Key design decision:** Subscribe polls every 100ms instead of using database triggers. Simple, works everywhere, acceptable latency for human-facing dashboard.\n\n**TDD wins:** Tests existed before implementation. Fixed tests to use proper adapter interface, increased polling timeout from 50ms to 150ms to account for 100ms poll interval.\n\nFiles: durable-adapter.ts (140 lines), durable-adapter.test.ts (12 tests, all passing)","created_at":"1766595734950.0","tags":"durable-streams,adapter-pattern,tdd,polling"}
|
|
119
146
|
{"id":"3bbfd751-13b8-4fda-b6b5-9bbee52aa179","information":"Mini-batch k-means implementation for pdf-library clustering: Algorithm uses incremental centroid updates with learning rate η = 1/count to handle 500k+ embeddings in O(batch_size) memory instead of O(n). Key implementation details: (1) k-means++ initialization for better convergence, (2) Random batch sampling without replacement per iteration, (3) Convergence detection via Frobenius norm check every 10 iterations (threshold 1e-4) for early stopping, (4) Final full assignment pass after convergence. Default batch_size=100 works well for 1000-500k points. Complexity: O(batch_size * k * iterations) vs full k-means O(n * k * iterations). Tested accuracy within 30% of full k-means with faster convergence on large datasets. Used for RAPTOR-style clustering when dataset exceeds 100k chunks.","created_at":"1766423215971.0","tags":"clustering,mini-batch-k-means,pdf-library,scalability,memory-optimization,raptor"}
|
|
120
147
|
{"id":"3bd2ffbe-2a13-42a3-b2e2-b990df18dbe6","information":"Analytics Queries 6-10 Implementation (Dec 22, 2024)\n\n**Implemented 5 pre-built analytics queries using TDD (RED → GREEN → REFACTOR):**\n\n1. **scope-violations**: Files touched outside owned scope. Extracts `files_touched` from `task_completed` events. Useful for detecting agents modifying files they weren't assigned.\n\n2. **task-duration**: p50/p95/p99 task durations. Uses window functions (ROW_NUMBER, COUNT OVER) to approximate percentiles since libSQL lacks `percentile_cont`. Joins `task_started` and `task_completed` events to calculate duration.\n\n3. **checkpoint-frequency**: Checkpoint creation frequency per agent. Counts `checkpoint_created` events, calculates avg interval between checkpoints using `(MAX - MIN) / NULLIF(COUNT - 1, 0)` pattern.\n\n4. **recovery-success**: Deferred task resolution success rate. Uses `COUNT(CASE WHEN ...)` pattern to count resolved vs rejected, calculates percentage with `CAST AS REAL` for floating-point division.\n\n5. **human-feedback**: Approval/rejection breakdown. Groups `review_feedback` events by status field, calculates percentage of total.\n\n**Key Patterns:**\n\n- **AnalyticsQuery interface**: `{ name, description, sql, parameters? }`\n- **Optional buildQuery()**: Returns filtered query with project_key parameter\n- **JSON extraction in libSQL**: `json_extract(data, '$.field_name')`\n- **Percentile approximation**: Use window functions + row counting (no native percentile functions)\n- **Percentage calculation**: `CAST(numerator AS REAL) / NULLIF(denominator, 0) * 100`\n- **Integration tests**: Use `createInMemorySwarmMailLibSQL`, seed with `db.query(INSERT ...)` not `db.exec()`\n\n**libSQL Gotchas:**\n\n1. `exec()` doesn't take parameters - use `query()` for parameterized inserts\n2. JSON stored as TEXT, use `json_extract()` not `->` operator\n3. No `percentile_cont` - approximate with `ROW_NUMBER() OVER (ORDER BY value)`\n4. Division truncates to INTEGER unless you `CAST AS REAL`\n\n**Test Coverage:** 16 unit tests + 8 integration tests = 24 new tests, all passing.","created_at":"1766434055306.0","tags":"swarm-mail,analytics,TDD,libSQL,SQL,percentiles,window-functions"}
|
|
121
148
|
{"id":"3c165471-c5f5-4d7c-9879-051b88e9d097","information":"AI SDK UI hooks like useChat() use a transport layer pattern. DefaultChatTransport handles the /api/chat endpoint by default, managing streaming responses, message formatting, and error handling. This abstraction allows customization via the `api` option for different endpoints or custom transport implementations for advanced scenarios (auth, request transformation, non-standard protocols). Important for understanding the connection between UI hooks and backend routes in AI SDK applications.","created_at":"1766466221037.0","tags":"ai-sdk,transport,useChat,architecture,patterns"}
|
|
122
149
|
{"id":"3d053700-465c-4599-96bd-a9f3af4b27a3","information":"ClusterSummarizer LLM abstractive implementation: Replaced extractive summarization with AI SDK generateObject pattern using anthropic/claude-haiku-4-5. Schema defines { summary: string, keyTopics: string[], representativeQuote?: string }. Implementation uses Effect.tryPromise to wrap async LLM call, with automatic fallback to extractive summarization on LLM failure (caught in try-catch, returns generateExtractiveSummary). Key learnings: (1) Mock AI SDK with mock.module() in tests, (2) ClusterSummary interface gets optional keyTopics and representativeQuote fields for backward compatibility, (3) Extractive fallback ensures reliability even when LLM unavailable/fails, (4) Truncate content to 6000 chars before sending to LLM to avoid context limits, (5) Effect pattern uses Effect.tryPromise for async operations.","created_at":"1766423263633.0","tags":"ai-sdk,effect-ts,tdd,summarization,abstractive,claude-haiku,fallback-pattern,pdf-brain"}
|
|
150
|
+
{"id":"3da5cc6a-7dfa-41c4-8310-46fccbd92090","information":"Vercel AI SDK v6 Output.object() Pattern for Entity Extraction: Use `import { generateText, Output } from \"ai\"` then call `generateText({ model, prompt, output: Output.object({ schema: ZodSchema }), headers: { Authorization: Bearer ${apiKey} } })`. The result has `{ output }` property. CRITICAL: Add .describe() to EVERY Zod schema field - dramatically improves extraction quality. The model uses these descriptions as guidance. Example: z.enum(['person', 'project']).describe('Type of entity: person (people), project (software projects)'). Graceful degradation pattern: wrap in try/catch, console.error the failure, return empty structure { entities: [], relationships: [] } so storage succeeds even if LLM fails. This prevents cascade failures in batch operations.","created_at":"1766672962705.0","tags":"vercel-ai-sdk,llm,structured-output,zod,entity-extraction"}
|
|
123
151
|
{"id":"3dfe5a42-bb1e-4881-960a-2bdb3023bee2","information":"{\"id\":\"pattern-1766265064341-zstfx3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:11:04.341Z\",\"updated_at\":\"2025-12-20T21:11:04.341Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766265064582.0","metadata":"{\"id\":\"pattern-1766265064341-zstfx3\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
124
152
|
{"id":"3e88ec34-2b29-406f-8352-cd434ac23b68","information":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-19T00:30:11.784Z\",\"updated_at\":\"2025-12-19T00:30:11.784Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-19T00:30:11.993Z","metadata":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
125
153
|
{"id":"3eabd321-1ad6-4fa9-bf11-8fad2a57ea83","information":"{\"id\":\"test-1765733411282-pzqyaldzdya\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T17:30:11.282Z\",\"raw_value\":1}","created_at":"2025-12-14T17:30:11.541Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T17:30:11.282Z\"}"}
|
|
126
154
|
{"id":"3ec7f612-4075-48f0-b63e-ba46f646f577","information":"POC Migration Learnings (December 2025):\n\n1. SCHEMA PATTERNS:\n- Coursebuilder uses type='post' + fields.postType='course', but migration can use type='course' directly\n- Query files must support BOTH patterns with OR clause\n- Use .passthrough() on Zod schemas to allow extra migration fields (migratedAt, collaborators, legacyRailsId)\n- Remove 'use server' from files that export types/schemas (Next.js constraint)\n\n2. DATABASE CONSTRAINTS:\n- createdById is NOT NULL - must provide system user ID for migrations\n- Use Joel's ID: c903e890-0970-4d13-bdee-ea535aaaf69b for migration scripts\n\n3. VIDEO INTEGRATION:\n- Rails current_video_hls_url contains Mux playback IDs (extract with regex)\n- 97.5% of lessons have Mux coverage (193 missing = mark as retired)\n- VideoResource links to Lesson via ContentResourceResource table\n\n4. MIGRATION SCRIPTS:\n- investigation/poc-migrate-modern-course.ts - Sanity source\n- investigation/poc-migrate-legacy-course.ts - Rails source\n- investigation/src/lib/migration-utils.ts - Shared utilities\n\n5. TDD APPROACH NEEDED:\n- Unit tests for schema validation and field mapping\n- Docker containers for integration tests (postgres + mysql)\n- E2E verification with browser automation","created_at":"2025-12-13T17:07:15.655Z"}
|
|
127
155
|
{"id":"3f49e8fe-db29-4859-8c30-6f17f8964a10","information":"{\"id\":\"pattern-1766259560283-bbfhnp\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:39:20.283Z\",\"updated_at\":\"2025-12-20T19:39:20.283Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766259560525.0","metadata":"{\"id\":\"pattern-1766259560283-bbfhnp\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
128
156
|
{"id":"3faa59da-150b-4c02-a257-515df507fdbe","information":"{\"id\":\"test-1765664124701-aa17ylzydnq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:15:24.701Z\",\"raw_value\":1}","created_at":"2025-12-13T22:15:24.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:15:24.701Z\"}"}
|
|
157
|
+
{"id":"406493d1-385f-49f0-ac86-7e6695de83aa","information":"Scorer analysis revealed 4 unused coordinator scorers (researcherSpawnRate, skillLoadingRate, inboxMonitoringRate, blockerResponseTime) representing 38% of coordinator-discipline.ts (250 LOC). These are fully tested but NEVER used in any eval file. They were likely prototypes that were never integrated into coordinator-session.eval.ts. \n\nDecision point: Either add to scorers array in coordinator-session.eval.ts OR remove them to reduce maintenance burden. Current 5-scorer set (violations, spawn, review, speed, reviewEfficiency) is sufficient for protocol adherence.\n\nFile: evals/scorers/coordinator-discipline.ts lines 345-588\nEvidence: grep -r \"researcherSpawnRate|skillLoadingRate|inboxMonitoringRate|blockerResponseTime\" evals/*.eval.ts returns no matches","created_at":"1766674489385.0","tags":"evalite,scorers,dead-code,coordinator-discipline"}
|
|
129
158
|
{"id":"40e45c96-514c-4f5e-a010-96215895a455","information":"{\"id\":\"test-1766076692243-0mib94hstes\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:51:32.243Z\",\"raw_value\":1}","created_at":"2025-12-18T16:51:32.478Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:51:32.243Z\"}"}
|
|
159
|
+
{"id":"40f496fe-3baf-438b-8fac-e760191650ff","information":"Eval infrastructure architecture analysis (opencode-swarm-plugin): System follows CAPTURE → STORE → LOAD → EVAL → GATE → LEARN pipeline. Key structural issues: 1) Data loader abstraction leak - data-loader.ts knows both PGlite internals AND JSONL format (violates SRP, hard to test/extend). Solution: Extract EvalSource interface with PGliteSource, JsonlSource, FixtureSource implementations. 2) Session quality filters hardcoded in loadCapturedSessions() - only 3/100 sessions passed minEvents=3, requireWorkerSpawn=true, requireReview=true filters. Solution: Make SessionFilter first-class, composable type. 3) No scorer versioning - can't distinguish code regression from scorer logic changes. Solution: Add version field to scorers, track in history, baseline only compatible runs. 4) LLM-as-judge (decompositionCoherence) has no budget controls - unbounded cost, no fallback. Solution: Enforce maxCalls/maxCost budget, cache responses, graceful degradation. 5) Baseline calculation uses naive mean - early bad runs drag down baseline forever, no time decay. Solution: Implement EMA (exponential moving average) or trimmed mean. 6) No eval parameterization - must copy-paste eval files for variations (e.g., maxSubtasks=4 vs 8). See evals/ARCHITECTURE.md for full analysis, data flow diagrams, and 4-phase improvement roadmap.","created_at":"1766674592367.0","metadata":"{\"file\":\"evals/ARCHITECTURE.md\",\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlk7jsilk9\",\"epic_id\":\"opencode-swarm-plugin--ys7z8-mjlk7js9bt1\",\"issues_count\":6}","tags":"architecture,evals,evalite,data-loaders,scorers,progressive-gates,structural-issues"}
|
|
160
|
+
{"id":"40f9eb83-f881-4a61-91a1-7f8a6f5ba7f5","information":"Floating point precision in tests: Use toBeCloseTo(expected, precision) instead of toBe() for decimal comparisons. Example: 0.7 - 0.3 = 0.39999999999999997 in JavaScript. Use expect(value).toBeCloseTo(0.4, 5) for 5 decimal places precision. Applies to link strength calculations, similarity scores, any arithmetic with decimals. toBe() uses strict equality (===) which fails on floating point rounding errors.","created_at":"1766672881755.0","metadata":"{\"source\":\"mjl1kscsxga\",\"context\":\"memory-linking test fix\"}","tags":"testing,javascript,floating-point,bun-test"}
|
|
130
161
|
{"id":"41308199-3761-485f-a7a6-567f97417f95","information":"{\"id\":\"pattern-1765664183401-tex4za\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:16:23.401Z\",\"updated_at\":\"2025-12-13T22:16:23.401Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:16:23.600Z","metadata":"{\"id\":\"pattern-1765664183401-tex4za\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
162
|
+
{"id":"413fad2b-96ea-468e-bb68-503c2bcbaac3","information":"**Oh-My-OpenCode MCP Loader - Claude Code Compatibility**\n\nLoads Claude Code `.mcp.json` configs and transforms to OpenCode SDK format:\n\n**Multi-Scope Loading (priority order):**\n1. `./.claude/.mcp.json` (project - highest)\n2. `./.mcp.json` (project)\n3. `~/.claude/.mcp.json` (user)\n\n**Transformation Pattern:**\n```typescript\n// Claude Code format:\n{\n \"mcpServers\": {\n \"server-name\": {\n \"type\": \"stdio\",\n \"command\": \"npx\",\n \"args\": [\"-y\", \"package-name\"],\n \"env\": { \"API_KEY\": \"${API_KEY}\" },\n \"disabled\": false\n }\n }\n}\n\n// Transformed to OpenCode SDK format:\n{\n \"server-name\": {\n type: \"local\",\n command: [\"npx\", \"-y\", \"package-name\"],\n environment: { \"API_KEY\": \"actual-value\" },\n enabled: true,\n }\n}\n```\n\n**Environment Variable Expansion:**\n- Recursively expands `${VAR_NAME}` placeholders in all string values\n- Falls back to empty string if env var not found\n- Supports both `env` object and inline string expansion\n\n**HTTP/SSE Server Support:**\n```typescript\n// Remote MCP servers (type: \"http\" or \"sse\")\n{\n type: \"remote\",\n url: \"https://example.com/mcp\",\n headers: { \"Authorization\": \"Bearer token\" },\n enabled: true,\n}\n```\n\n**Integration Point:**\n```typescript\nconfig: async (config) => {\n const mcpResult = await loadMcpConfigs();\n config.mcp = {\n ...config.mcp,\n ...createBuiltinMcps(pluginConfig.disabled_mcps),\n ...mcpResult.servers, // Claude Code MCPs (highest priority)\n };\n}\n```\n\n**Novel Pattern:** Bidirectional compatibility layer - supports both Claude Code and OpenCode MCP configs simultaneously.","created_at":"1766673495771.0","tags":"oh-my-opencode,mcp,claude-code-compat,transformation,env-expansion"}
|
|
131
163
|
{"id":"41453b78-e33f-41c2-aedd-3d521af2a2c4","information":"SUBTASK_PROMPT_V2 survival checklist pattern: Workers need 9-step mandatory workflow: 1) swarmmail_init (coordination), 2) semantic-memory_find (query past learnings BEFORE starting), 3) skills_list/skills_use (load domain knowledge), 4) swarmmail_reserve (worker reserves own files, NOT coordinator), 5) do work, 6) swarm_progress at 25/50/75% milestones (triggers auto-checkpoint), 7) swarm_checkpoint before risky ops (refactors, deletions), 8) semantic-memory_store (capture learnings), 9) swarm_complete (closes, releases, scans). KEY INSIGHT: Workers reserve their own files (step 4) - coordinator no longer does this. Past mistake: coordinators reserving caused confusion about who owns what. Worker self-reservation makes ownership explicit. Applies to all swarm worker agents.","created_at":"2025-12-16T16:21:16.745Z","metadata":"{\"context\":\"opencode-swarm-plugin\"}","tags":"swarm,coordination,worker-patterns,file-reservation,semantic-memory,skills,checkpointing,learning-loops"}
|
|
164
|
+
{"id":"4165b38f-50c0-4500-9de2-c017b0c875a9","information":"Drizzle ORM table creation requires BOTH Drizzle schema AND raw SQL DDL. Having a table definition in db/schema/streams.ts (Drizzle schema) is NOT enough - you must also add the CREATE TABLE statement in libsql-schema.ts createLibSQLStreamsSchema(). Drizzle schemas define the TypeScript types and query builder, but libsql doesn't auto-create tables from schemas. The pattern: (1) Define in db/schema/streams.ts using sqliteTable(), (2) Add CREATE TABLE IF NOT EXISTS in libsql-schema.ts, (3) Update dropLibSQLStreamsSchema and validateLibSQLStreamsSchema to include the new table. Bug symptom: \"no such table\" errors at runtime even though Drizzle schema exists. Affected tables: eval_records, swarm_contexts were missing from libsql-schema.ts despite having schemas defined.","created_at":"1766633751334.0","metadata":"{\"files\":[\"libsql-schema.ts\",\"db/schema/streams.ts\"],\"project\":\"swarm-mail\"}","tags":"drizzle,libsql,schema,migration,bug-pattern"}
|
|
165
|
+
{"id":"41cfe54f-664b-41f8-acc3-702bdf07a272","information":"TDD for discriminated union event schemas in eval-capture.ts: Pattern is to (1) Add new variant to z.discriminatedUnion with event_type literal, (2) Add typed sub-field (e.g., compaction_type with z.enum for all variants), (3) payload remains z.any() for max flexibility, (4) Create helper function that wraps captureCoordinatorEvent() with automatic timestamp generation. Tests must validate each enum variant AND reject invalid values. Full prompt content should NOT be truncated in payload - capture it verbatim for eval analysis. COMPACTION events track: detection_complete, prompt_generated, context_injected, resumption_started, tool_call_tracked. This pattern enables type-safe event capture while keeping evals decoupled from specific payload schemas.","created_at":"1766634682045.0","tags":"tdd,zod,discriminated-union,eval-capture,compaction,event-sourcing"}
|
|
132
166
|
{"id":"41f91144-7ecf-4887-ab65-ba045c9c3dae","information":"{\"id\":\"test-1766260221147-2gbfn5x7qj\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:50:21.147Z\",\"raw_value\":1}","created_at":"1766260221379.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:50:21.147Z\"}"}
|
|
133
167
|
{"id":"42465dd4-8323-416b-8b7a-740cb77a1701","information":"HDBSCAN vs GMM/K-means for RAPTOR clustering (credit: @georg_dev):\n\nHDBSCAN advantages for document clustering:\n1. **Builds hierarchy natively** - no need for recursive summarization, the dendrogram IS the tree\n2. **No k selection needed** - automatically finds cluster structure\n3. **Handles noise** - outlier documents don't force bad clusters\n4. **Density-based** - finds clusters of varying shapes/sizes\n\nJS implementation: https://github.com/rivulet-zhang/vis-utils (euclidean distance works for embeddings)\n\nCurrent implementation uses GMM-like soft clustering + mini-batch k-means. HDBSCAN would simplify:\n- Remove BIC k-selection logic\n- Remove recursive summarization\n- Get hierarchical structure for free\n- Better handling of edge cases\n\nTrade-off: HDBSCAN is O(n²) for distance matrix, but can use approximate methods for scale.","created_at":"1766424519710.0","tags":"clustering,HDBSCAN,RAPTOR,embeddings,architecture,georg_dev"}
|
|
134
168
|
{"id":"429da23f-c274-4d2c-93ed-88eee75c4b20","information":"{\"id\":\"test-1765678709593-34lfj5t3x44\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T02:18:29.593Z\",\"raw_value\":1}","created_at":"2025-12-14T02:18:29.809Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T02:18:29.593Z\"}"}
|
|
@@ -136,12 +170,17 @@
|
|
|
136
170
|
{"id":"42e40d93-d19a-4fc2-838e-c312e13eeb88","information":"{\"id\":\"pattern-1766263947907-v3bo81\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:52:27.907Z\",\"updated_at\":\"2025-12-20T20:52:27.907Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263948144.0","metadata":"{\"id\":\"pattern-1766263947907-v3bo81\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
137
171
|
{"id":"4330c2b2-8536-4143-82c1-cbf24e0d8e22","information":"{\"id\":\"pattern-1766593256208-6yyuub\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:20:56.208Z\",\"updated_at\":\"2025-12-24T16:20:56.208Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766593256494.0","metadata":"{\"id\":\"pattern-1766593256208-6yyuub\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
138
172
|
{"id":"44fbda0a-ae47-4180-a5a0-f2969e7044f4","information":"TypeScript discriminated union pattern for unified search results in pdf-library: Used Effect Schema with literal entityType field ('document' | 'concept') as discriminator. Key learnings: 1) Keep backward compatibility by preserving original SearchResult class without entityType, mark as @deprecated. 2) New DocumentSearchResult extends all SearchResult fields + entityType: Schema.Literal(\"document\"). 3) ConceptSearchResult has different structure + entityType: Schema.Literal(\"concept\"). 4) UnifiedSearchResult = DocumentSearchResult | ConceptSearchResult enables type-safe narrowing via entityType check. 5) SearchOptions gets optional entityTypes: Schema.Array(Schema.Literal(\"document\", \"concept\")) for filtering. Pattern allows TypeScript to narrow types automatically: if (result.entityType === 'document') { result.docId } else { result.conceptId }. All existing SearchResult usage continues to work unchanged.","created_at":"1766256672788.0","tags":"typescript,discriminated-union,effect-schema,backward-compatibility,pdf-library"}
|
|
173
|
+
{"id":"46f65649-2e7b-4db6-83f6-9a77771071d6","information":"Notion API v5.x SDK (@notionhq/client ^5.6.0) has a different API structure than earlier versions:\n\n1. **Database queries moved to dataSources**: `notion.databases.query()` no longer exists. Use `notion.dataSources.query()` instead.\n\n2. **Inline databases require explicit sharing**: Child databases (inline databases embedded in pages) are NOT automatically accessible even if the parent page is shared with the integration. Each inline database must be explicitly shared with the integration in Notion's share settings.\n\n3. **Error message**: \"Could not find database with ID: xxx. Make sure the relevant pages and databases are shared with your integration.\" - This means the database exists but isn't shared with the API integration.\n\n4. **Available methods**:\n - `notion.databases`: retrieve, create, update (NO query)\n - `notion.dataSources`: retrieve, query, create, update, listTemplates\n\n5. **Workaround for inline databases**: The parent page content IS accessible via `notion.blocks.children.list()`, which returns child_database blocks with their titles. But querying the actual database items requires the database to be explicitly shared.\n\nProject context: vrain uses NOTION_API_KEY for Vercel workspace access. The DX Content Pipeline page (2b7e06b0-59c4-808c-9a88-c6d9afc0c3e4) is accessible but its inline databases (Campaign Planning, Deliverables, etc.) need explicit sharing.","created_at":"1766679282126.0","tags":"notion,api,sdk,gotcha,permissions"}
|
|
174
|
+
{"id":"470ebc3a-c5f6-496d-9412-6da5cf0e2c3e","information":"Eval infrastructure synthesis (opencode-swarm-plugin): Analyzed 4 investigation reports (architecture, failing evals, session data quality, scorer analysis) and created unified improvement plan with 22 prioritized recommendations.\n\nKEY INSIGHT: The \"failures\" are tactical code bugs, not systemic issues. Architecture is sound (CAPTURE → STORE → LOAD → EVAL → GATE → LEARN pipeline). Two quick fixes restore eval health:\n1. example.eval.ts: 0% → 100% (data/task mismatch - 5min fix)\n2. compaction-prompt: 53% → 70-80% (case-sensitive regex - 5min fix)\n\nData quality is EXCELLENT: 3 passing coordinator sessions are gold-standard examples (6-9 hours, 20-24 worker spawns, 0 violations). High filter rate (97%) filters worker completions by design.\n\nCritical findings:\n- 4 unused scorers = 250 LOC dead code (38% of coordinator-discipline.ts)\n- Data loader abstraction leak (knows PGlite + JSONL internals)\n- No scorer versioning (can't improve without breaking history)\n- Session filter too strict (2.9% pass rate hides coordinator behavior)\n- LLM-as-judge has no budget controls (unbounded cost)\n\nImprovement roadmap: 5 sprints (80-120 hours total)\n- Sprint 1 (1-2 days): Fix evals, remove dead code\n- Sprint 2 (1-2 weeks): Data quality improvements, versioning\n- Sprint 3 (2-3 weeks): Reliability (budgets, baselines, retries)\n- Sprint 4 (3-4 weeks): Intelligence (learning loop, CI integration)\n- Sprint 5 (4-6 weeks): Scale (performance, observability)\n\nPattern: When analyzing complex systems, distinguish between architectural soundness and tactical implementation issues. This eval infrastructure is architecturally excellent but has fixable tactical bugs. Don't confuse the two.","created_at":"1766675040723.0","metadata":"{\"cell\":\"opencode-swarm-plugin--ys7z8-mjlk7jstvch\",\"epic\":\"opencode-swarm-plugin--ys7z8-mjlk7js9bt1\",\"recommendations\":22,\"reports_analyzed\":4,\"total_effort_hours\":\"80-120\"}","tags":"eval-system,synthesis,improvement-plan,opencode-swarm-plugin,architecture-analysis"}
|
|
139
175
|
{"id":"475d7add-4a4f-4289-a794-ecd1b6c64d45","information":"RAPTOR vs SKOS research conclusion (Dec 2025): They're COMPLEMENTARY, not competing approaches. RAPTOR (UMAP+GMM soft clustering + recursive summarization) enables automatic bottom-up theme discovery with multi-scale retrieval - documents can belong to multiple clusters, and queries match both leaf chunks and cluster summaries. SKOS provides stable top-down semantic organization with persistent concept URIs for consistent navigation. Hybrid approach: use RAPTOR-style clustering for discovery, then map clusters to SKOS concepts for stable semantics. Key papers in pdf-brain: RAPTOR, GraphRAG, LightRAG. Implementation priority: (1) backfill document_concepts, (2) improve hybrid search, (3) RAPTOR-lite with cluster summaries, (4) storage optimization via smaller embeddings or larger chunks.","created_at":"1766415682693.0","tags":"pdf-brain,raptor,skos,clustering,taxonomy,architecture,research"}
|
|
140
176
|
{"id":"47e272e2-37c4-4ea1-b724-ec68de3c3bf1","information":"TDD pattern for database query functions: Write tests that use the actual database adapter (not mocks) to verify query behavior. For swarm-mail hive queries, tests use in-memory PGlite with full migrations. This catches SQL syntax errors, constraint violations, and index issues that mocks would miss. Pattern: beforeEach creates fresh PGlite instance, afterEach closes it. Each test creates necessary cells via adapter, then queries them. Fast enough (12s for 36 tests) because PGlite is in-memory.","created_at":"2025-12-19T16:17:46.254Z","tags":"tdd,testing,database,pglite,swarm-mail"}
|
|
141
177
|
{"id":"48610ac6-d52f-4505-8b06-9df2fad353aa","information":"CRITICAL BUG: PGLite database corruption when multiple swarm agents access shared database concurrently.\n\nROOT CAUSE: PGLite is single-connection only. When multiple parallel swarm worker agents each create their own PGLite instance pointing to the same database file, they corrupt each other's writes. This manifests as:\n- 'PGlite is closed' errors\n- Missing data after writes\n- Inconsistent query results\n- Database file corruption requiring deletion\n\nSOLUTION: Implement PGLite leader election pattern from multi-tab-worker docs (https://pglite.dev/docs/multi-tab-worker).\n\nThe pattern works by:\n1. Each worker/agent creates a PGliteWorker instead of PGlite directly\n2. Workers run an election to nominate ONE as the leader\n3. ONLY the leader starts the actual PGlite instance\n4. All other workers proxy their queries through the leader\n5. When leader dies, new election runs and new leader takes over\n\nKey APIs:\n- PGliteWorker - client that proxies to leader\n- worker({ init: () => PGlite }) - wrapper that handles election\n- onLeaderChange(callback) - subscribe to leader changes\n- isLeader: boolean - check if this instance is leader\n\nFor swarm-mail specifically:\n- The singleton pattern in pglite.ts is NOT sufficient for parallel agents\n- Each Task subagent runs in a separate process, not just separate async contexts\n- Need to implement a coordinator pattern where ONE agent owns the DB connection\n- Other agents communicate via IPC/file locks/Agent Mail instead of direct DB access\n\nWORKAROUND (current): Tests use isolated in-memory PGLite instances per test to avoid singleton conflicts.","created_at":"2025-12-17T17:18:27.494Z","tags":"pglite,database,corruption,swarm,parallel-agents,leader-election,critical-bug,P0"}
|
|
178
|
+
{"id":"48ac8664-e156-442e-8a88-54d3be1108a8","information":"MemoryAdapter extension pattern for Wave 1 features: When extending the adapter with methods that depend on services being created by parallel workers, use stub implementations that return graceful defaults (undefined, empty arrays) and document with TODO comments pointing to the service files. Tests should verify the adapter API works correctly with stubs, not the full service behavior. This allows the integration task (mjl1ksdqv4b) to wire up real services later. Key learnings: (1) Drizzle libSQL uses db.all() for SELECT queries and db.run() for INSERT/UPDATE (not db.execute()), (2) Temporal queries filter by valid_from/valid_until using OR conditions for NULL (always valid), (3) Graph traversal with superseded_by requires inserting in reverse order to satisfy foreign keys, (4) Smart operation stubs should implement realistic heuristics (exact match → NOOP, high similarity → UPDATE, different numbers → DELETE) for meaningful tests.","created_at":"1766673051928.0","tags":"swarm-mail,memory-adapter,tdd,parallel-workers,stub-services,wave-1"}
|
|
179
|
+
{"id":"48b311cf-69f2-44ce-bd9e-0d96756e598a","information":"{\"id\":\"pattern-1766610308454-1ctvnc\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T21:05:08.454Z\",\"updated_at\":\"2025-12-24T21:05:08.454Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766610308661.0","metadata":"{\"id\":\"pattern-1766610308454-1ctvnc\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
142
180
|
{"id":"4924f104-cdeb-46f3-91e4-56460e269884","information":"pdf-brain database size investigation (Dec 2025): 52GB database for 907 documents, 486k chunks, 484k embeddings. Database has 13.5M pages × 4096 bytes = ~55GB total. The HNSW neighbor graph (embeddings_idx_shadow table) has 484k rows (one per embedding) and is the primary storage consumer. With compress_neighbors=float8 already enabled (4x compression from default), each shadow row still averages ~100KB due to HNSW neighbor graph structure. Without compression it would be ~400KB/row = 200GB just for the index. CRITICAL: The embeddings themselves are only ~1.9GB (484k × 1024 dims × 4 bytes), the shadow index is ~48GB (92% of total). Alternative optimizations: (1) smaller embedding model (384 dims = 62% reduction), (2) reduce chunk count via better chunking, (3) partial indexing (only recent/important docs), (4) accept slower search without index. Hierarchical clustering would NOT directly reduce storage - it might reduce chunk count if used for document deduplication, but wouldn't compress the HNSW index itself.","created_at":"1766415330225.0","metadata":"{\"docs\":907,\"chunks\":486407,\"db_size_gb\":52,\"embeddings\":483733,\"compression\":\"float8\",\"investigation_date\":\"2025-12-22\"}","tags":"pdf-brain,libsql,hnsw,vector-index,storage-optimization,embeddings,compress_neighbors"}
|
|
143
181
|
{"id":"4945b847-6fd0-42fe-aebd-6ee0d415b1cb","information":"CRITICAL SCHEMA FIX (Dec 2025): egghead-rails `series` table is DEPRECATED. Official courses are in `playlists` with `visibility_state='indexed'` (437 courses). Lessons link via `tracklists` polymorphic join table (tracklistable_type='Lesson', tracklistable_id=lesson.id), NOT via lessons.series_id. Standalone lessons (~1,650) are published lessons NOT in any indexed playlist. Use DISTINCT ON (l.id) when querying lessons to handle 36 lessons that appear in multiple courses.","created_at":"2025-12-13T23:17:05.679Z"}
|
|
144
182
|
{"id":"49a14aed-a8f0-4e43-b7d7-f5a40d1871a2","information":"AI SDK v6 Breaking Changes Audit Pattern: When auditing course content for SDK migrations, prioritize finding actual usage over theoretical possibilities. Used grep to search for deprecated patterns (generateObject, convertToCoreMessages, textEmbedding, Experimental_Agent) and found generateObject in 3 lessons but zero usage of other deprecated APIs. Key insight: Don't assume all breaking changes apply - verify with targeted searches. The most effective audit workflow: 1) Read migration guide for breaking changes list, 2) Grep for each pattern across codebase, 3) Read only files with matches, 4) Document specific line numbers and code snippets for replacements. For AI SDK specifically, generateObject→generateText+Output.object() is the most common v6 migration, affecting structured output lessons heavily.","created_at":"1766431951475.0","tags":"ai-sdk,migration,audit,v6,course-content,breaking-changes"}
|
|
183
|
+
{"id":"49ecae15-9041-488f-88df-93a94da711d0","information":"**Oh-My-OpenCode Preemptive Compaction Hook**\n\nAuto-triggers context compaction when nearing token limits:\n\n**Threshold Detection:**\n```typescript\nconst usageRatio = usedTokens / contextLimit;\nif (usageRatio >= threshold && !cooldown) {\n triggerCompaction(sessionID);\n}\n```\n\n**Compaction Trigger Flow:**\n1. Hook listens to `event` stream for assistant messages\n2. Finds last assistant message with token usage info\n3. Checks usage ratio against threshold (default: 0.80 = 80%)\n4. Enforces cooldown (default: 5 minutes) to prevent spam\n5. Calls `client.session.compact()` if threshold exceeded\n\n**Context Limit Detection:**\n```typescript\n// Priority order for determining context limit:\n1. User config: modelContextLimitsCache.get(providerID/modelID)\n2. Anthropic 1M context beta: check \"anthropic-beta\" header\n3. Model pattern match: Claude models default to 200k\n4. Fallback: use detected limit or skip compaction\n```\n\n**Compaction Context Injection:**\n- `onBeforeSummarize` callback injects additional context before compaction\n- Used by `compaction-context-injector` to add session metadata\n- Allows customizing what gets preserved in summary\n\n**State Management:**\n- `lastCompactionTime` Map prevents rapid re-compaction\n- `compactionInProgress` Set prevents concurrent compaction of same session\n- Cleaned up on `session.deleted` / `session.compacted`\n\n**Novel Pattern:** Proactive compaction based on token ratio, not just error recovery. Prevents hitting limits instead of reacting to them.\n\n**Swarm Adoption:** Could trigger compaction checkpoints automatically when worker sessions approach limits.","created_at":"1766673506882.0","tags":"oh-my-opencode,compaction,preemptive,token-limits,context-management"}
|
|
145
184
|
{"id":"4a109810-3bbb-43f9-af7d-4034d132302b","information":"{\"id\":\"test-1766260890139-zq75zhy9nia\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:30.139Z\",\"raw_value\":1}","created_at":"1766260890651.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:30.139Z\"}"}
|
|
146
185
|
{"id":"4a9929ba-3860-4ebe-8ea9-89688d79d348","information":"{\"id\":\"test-1765653389932-an49coy8vg4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:16:29.932Z\",\"raw_value\":1}","created_at":"2025-12-13T19:16:30.132Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:16:29.932Z\"}"}
|
|
147
186
|
{"id":"4b488af5-d26b-4c82-a0d0-1b89bf742df8","information":"{\"id\":\"test-1766594998844-1rffuzu8dnx\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:49:58.844Z\",\"raw_value\":1}","created_at":"1766594999055.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:49:58.844Z\"}"}
|
|
@@ -149,13 +188,17 @@
|
|
|
149
188
|
{"id":"4b8f146e-bfd9-41d9-954d-fd27622f2bc4","information":"Bun.serve SSE (Server-Sent Events) implementation pattern: Use ReadableStream with controller.enqueue() to send events. Format: `data: ${JSON.stringify(event)}\\n\\n`. Headers MUST include: Content-Type: text/event-stream, Cache-Control: no-cache, Connection: keep-alive. Track active subscriptions in a Map with cleanup on req.signal abort event. Close streams via controller.close() on server stop. Common gotcha: Bun serves with generic Server<WebSocketData> type - use Server<undefined> for non-WebSocket HTTP servers.","created_at":"1766595958646.0","tags":"bun,sse,server-sent-events,http,streaming"}
|
|
150
189
|
{"id":"4c47409c-83a4-4e85-87ed-1ee7445a3b09","information":"swarm-mail socket adapter hybrid pattern: getSwarmMail() now checks SWARM_MAIL_SOCKET=true env var to enable socket mode with graceful PGLite fallback on any failure. Close methods need conditional logic for pglite vs socket adapters. Env vars: SWARM_MAIL_SOCKET_PATH (unix socket), SWARM_MAIL_SOCKET_PORT (TCP, default 5433), SWARM_MAIL_SOCKET_HOST (TCP, default 127.0.0.1).","created_at":"2025-12-17T18:03:01.543Z"}
|
|
151
190
|
{"id":"4ca9a4ef-db39-48e7-aa7c-bd573fe6213d","information":"{\"id\":\"pattern-1766261007175-idjnhn\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:03:27.175Z\",\"updated_at\":\"2025-12-20T20:03:27.175Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261007439.0","metadata":"{\"id\":\"pattern-1766261007175-idjnhn\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
191
|
+
{"id":"4cb83691-94ba-43c4-959d-b8e38682af67","information":"Composite scorer weight patterns across eval system:\n\noverallDiscipline (coordinator): violations=30%, spawn=25%, review=25%, speed=20%\ncompactionQuality (compaction): confidence=25%, injection=25%, required=30%, forbidden=20%\noverallCoordinatorBehavior (behavior): tools=30%, avoidsWorker=40%, mindset=30%\n\nPattern: Each composite prioritizes different metrics (domain-specific), but NO documentation of WHY these weights were chosen. Need comments explaining rationale.\n\nExample rationale for overallDiscipline:\n- Violations (30%): Breaking protocol causes immediate harm\n- Spawn (25%): Delegation is core coordinator job \n- Review (25%): Quality gate prevents bad work propagating\n- Speed (20%): Optimization, not correctness\n\nWithout rationale, weights appear arbitrary and hard to tune.","created_at":"1766674495779.0","tags":"evalite,scorers,weights,composite,calibration"}
|
|
152
192
|
{"id":"4ce9127c-6f36-4335-b9ac-584c282dafca","information":"WORKFLOW LOGGING CONSTRAINT: Vercel Workflow files (those with \"use workflow\" or \"use step\" directives) CANNOT import pino logger or use node:crypto. The workflow bundler runs code in a restricted environment that doesn't support Node.js built-in modules. \n\nSOLUTION: Workflow files MUST use console.log/console.error/console.warn directly. The workflow runtime captures these. Only non-workflow files (API routes, listeners, middleware, lib modules NOT imported by workflows) can use the structured pino logger.\n\nFILES AFFECTED: server/workflows/*.ts - all must use console.* not logger\nFILES SAFE: server/api/*.ts, server/listeners/*.ts, server/middleware/*.ts, server/lib/*.ts (if not imported by workflows)\n\nRoot cause: Importing ~/lib/logger into workflow files pulls in pino (Node.js module) and randomUUID (node:crypto), both forbidden in workflow runtime.","created_at":"1766458212230.0","tags":"workflow,logging,pino,vercel-workflow,bundler,constraint,gotcha"}
|
|
153
193
|
{"id":"4d167832-70e4-46b0-85ba-170e5826b9c8","information":"PGLite WAL Safety Pattern: Add checkpoint() to DatabaseAdapter interface and call after batch operations to prevent WAL bloat.\n\nRoot cause from pdf-brain: PGLite accumulated 930 WAL files (930MB) without explicit CHECKPOINT, causing WASM OOM crash. PostgreSQL CHECKPOINT command forces WAL to be written to data files, allowing WAL to be recycled.\n\nImplementation:\n1. Add `checkpoint?(): Promise<void>` to DatabaseAdapter interface (optional method)\n2. Implement in wrapPGlite: `async checkpoint() { await pglite.query(\"CHECKPOINT\"); }`\n3. Call after batch operations:\n - After runMigrations() in adapter.runMigrations()\n - After bulk event appends (if batching)\n - After large projection updates\n\nTDD approach confirmed effectiveness:\n- Write failing test expecting checkpoint() method\n- Implement checkpoint in interface + wrapper\n- Call from adapters after migrations\n- All tests green (29 tests passing)\n\nKey insight: CHECKPOINT is a PostgreSQL command, not PGLite-specific. Works for any PostgreSQL-compatible database but critical for embedded databases without automatic checkpointing.\n\nPattern applies to any PGLite usage with batch operations: migrations, bulk writes, large transactions.","created_at":"2025-12-19T03:34:00.966Z","tags":"pglite,wal,checkpoint,database-adapter,batch-operations,memory-management,wasm"}
|
|
154
194
|
{"id":"4df79169-bae1-4942-bfc3-8a0c5ba038de","information":"MemoryAdapter implementation pattern for Effect-TS + PGlite semantic memory: High-level adapter wraps low-level services (Ollama + MemoryStore) with graceful degradation. Key insights: (1) Use Effect.runPromise with Effect.either for optional Ollama - returns Left on failure, enabling FTS fallback. (2) Store decay calculation (90-day half-life) in adapter layer, not DB - keeps store generic. (3) validate() resets timestamp via direct SQL UPDATE, not store.store() which preserves original timestamps on conflict. (4) Tags parsed from comma-separated string and merged into metadata.tags array for searchability. (5) TDD with 22 tests first caught 3 design issues: metadata structure, embedding similarity mocking, timestamp update semantics. Integration test verifies full lifecycle: store→find→get→validate→remove with FTS fallback.","created_at":"2025-12-18T19:09:34.653Z","metadata":"{\"pattern\":\"high-level-adapter\",\"testing\":\"tdd-integration\",\"component\":\"swarm-mail/memory\"}","tags":"effect-ts,pglite,semantic-memory,adapter-pattern,graceful-degradation,tdd"}
|
|
195
|
+
{"id":"4f6864e3-0844-4673-8c9e-09e46f9ccd85","information":"Compaction prompt scorer regex case-sensitivity fix: The forbidden tools regex patterns in scoreForbiddenToolsPresent() were case-sensitive (/\\bEdit\\b/), causing them to miss lowercase \"edit\" in fixtures. Solution: Add 'i' flag to all tool regexes (/\\bEdit\\b/i, /\\bWrite\\b/i, /\\bbash\\b/i). This affects eval scoring - prompts with lowercase tool names now correctly match. Also added \"bash\" as 5th forbidden tool since it appears in coordinator prompt patterns for file modifications. Total forbidden tools: Edit, Write, swarmmail_reserve, git commit, bash. Test expectations updated from 4 to 5 tools (3/4=0.75 → 3/5=0.6).","created_at":"1766677748496.0","metadata":"{\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlm2nmont1\",\"files_modified\":[\"src/compaction-prompt-scoring.ts\",\"src/compaction-prompt-scorers.test.ts\",\"evals/fixtures/compaction-prompt-cases.ts\"]}","tags":"regex,case-sensitivity,eval-scoring,compaction,forbidden-tools,tdd"}
|
|
155
196
|
{"id":"4f6a7e08-fa47-4f23-bca2-6e7edb72a702","information":"PGLite DatabaseAdapter wrapper pattern: PGLite's exec() method returns Promise<Results[]> but DatabaseAdapter interface expects Promise<void>. Solution: wrap with async function that awaits exec() but doesn't return the value. Example: exec: async (sql: string) => { await pglite.exec(sql); }. This matches the adapter contract without leaking PGLite-specific types. Used in swarm-mail package for database abstraction layer.","created_at":"2025-12-15T00:18:10.156Z","tags":"pglite,adapter-pattern,database,typescript,type-compatibility,swarm-mail"}
|
|
156
197
|
{"id":"4fca4eb1-e967-4992-8c48-502ea5596cde","information":"{\"id\":\"pattern-1766076693301-vgiike\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:51:33.301Z\",\"updated_at\":\"2025-12-18T16:51:33.301Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:51:33.529Z","metadata":"{\"id\":\"pattern-1766076693301-vgiike\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
198
|
+
{"id":"4ffecf66-b968-42c1-8aea-978a7e35e027","information":"{\"id\":\"test-1766641845195-92egchvior9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-25T05:50:45.195Z\",\"raw_value\":1}","created_at":"1766641845423.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-25T05:50:45.195Z\"}"}
|
|
157
199
|
{"id":"509ddf29-54c9-4d65-8610-dfc76321aadc","information":"--information","created_at":"2025-12-14T22:41:51.321Z","tags":"swarm,edge-case,workaround"}
|
|
158
200
|
{"id":"510c2a51-2f72-4bdd-8d3b-4dfd8ea3a0b7","information":"Ollama service config migration pattern: When updating Effect services to use new config structure, replace LibraryConfig.fromEnv() with loadConfig() import from types.ts. Update property access from flat config (config.ollamaHost) to nested structure (config.ollama.host, config.embedding.model). For auto-install functionality, use Effect.tryPromise to wrap spawn() calls - don't use Effect.gen wrapper around tryPromise as it adds unnecessary effect nesting. Place console.log outside the Effect for immediate logging. The pattern: Effect.tryPromise({ try: () => new Promise(...), catch: (e) => new CustomError(...) })","created_at":"1766261006627.0","tags":"effect,config-migration,ollama,spawn,child-process"}
|
|
201
|
+
{"id":"511aa884-6dfe-4ca5-90d2-e5d8b9330a14","information":"oh-my-opencode Agent System Architecture - Detailed Research Findings\n\n**Repository**: code-yeongyu/oh-my-opencode\n**Research Date**: 2025-01-25\n\n## Agent Registry Pattern\n\noh-my-opencode uses a factory-based agent registry pattern with deep config merging:\n\n1. **Agent Factories**: Each agent is defined as either an `AgentFactory` function or static `AgentConfig`\n - Factories take optional `model` parameter for dynamic model selection\n - Example: `createOracleAgent(model?: string): AgentConfig`\n - Enables model-specific configuration (GPT vs Claude get different reasoning configs)\n\n2. **Registry Structure** (`src/agents/utils.ts`):\n ```typescript\n const agentSources: Record<BuiltinAgentName, AgentSource> = {\n Sisyphus: createSisyphusAgent,\n oracle: createOracleAgent,\n librarian: createLibrarianAgent,\n explore: createExploreAgent,\n \"frontend-ui-ux-engineer\": createFrontendUiUxEngineerAgent,\n \"document-writer\": createDocumentWriterAgent,\n \"multimodal-looker\": createMultimodalLookerAgent,\n }\n ```\n\n3. **createBuiltinAgents()** - Central composition function:\n - Takes disabled agents list, overrides, directory, systemDefaultModel\n - Builds each agent from factory with model\n - Injects environment context (date, platform, timezone) into Sisyphus and librarian prompts\n - Deep merges user overrides (with special `prompt_append` support)\n - Returns fully configured agent registry\n\n4. **Config Merging Strategy**:\n - Base agent config from factory\n - Environment context injection (Sisyphus, librarian only)\n - User overrides via `deepMerge()` (preserves nested properties)\n - `prompt_append` concatenates to existing prompt instead of replacing\n\n## Agent Invocation Patterns\n\n### 1. Direct Subagent Invocation\n- All agents have `mode: \"subagent\"` (except Sisyphus which can be default)\n- Sisyphus delegates via standard OpenCode `task()` tool\n- Tool restrictions enforced: `{ write: false, edit: false, background_task: false }` for most\n- Model-aware config: GPT models get `reasoningEffort`, Claude gets `thinking` budget\n\n### 2. Background Task System (`BackgroundManager`)\nLocated in `src/features/background-agent/manager.ts`:\n\n**Key Innovation**: Async agent execution with lifecycle tracking\n\n```typescript\nclass BackgroundManager {\n async launch(input: LaunchInput): Promise<BackgroundTask>\n handleEvent(event: Event): void // Listens to session events\n markForNotification(task: BackgroundTask): void\n getPendingNotifications(sessionID: string): BackgroundTask[]\n}\n```\n\n**Lifecycle**:\n1. Create child session with `parentID` linkage\n2. Launch agent via `session.promptAsync()` (non-blocking)\n3. Track via `subagentSessions` global set\n4. Poll session status every 2s for completion\n5. Check for incomplete todos before marking complete\n6. Send notification to parent session + show toast\n7. Results retrieved via `background_output(task_id)`\n\n**Novel Features**:\n- Progress tracking: counts tool calls, tracks last tool/message\n- Todo-aware completion: waits for todo-continuation hook before completing\n- Parent session notification: injects message into parent thread\n- Toast notifications via OpenCode TUI client\n- Session deletion handling (marks as cancelled)\n\n### 3. call_omo_agent Tool\nCustom tool in `src/tools/call-omo-agent/`:\n- Wraps background_task for explore/librarian only\n- Enforces `ALLOWED_AGENTS = [\"explore\", \"librarian\"]` \n- Supports both sync (`run_in_background=false`) and async modes\n- Sync mode allows session continuation via `session_id` param\n- Async mode returns task_id for later retrieval\n\n## Inter-Agent Communication\n\n### Coordinator → Worker Pattern\n\n**Sisyphus** (orchestrator) uses structured delegation prompts (7 sections):\n1. TASK: Atomic goal\n2. EXPECTED OUTCOME: Success criteria\n3. REQUIRED SKILLS: Skill to invoke\n4. REQUIRED TOOLS: Explicit whitelist\n5. MUST DO: Exhaustive requirements\n6. MUST NOT DO: Forbidden actions\n7. CONTEXT: File paths, patterns, constraints\n\n**Post-delegation verification** (enforced in Sisyphus prompt):\n- Does it work as expected?\n- Did it follow codebase patterns?\n- Expected result achieved?\n- Did agent follow MUST DO/MUST NOT DO?\n\n### Parallel Execution Philosophy\n\n**explore** and **librarian** treated as \"grep, not consultants\":\n- Always launched in parallel via `background_task()`\n- Never wait synchronously\n- Collect results with `background_output()` when needed\n- Mandatory minimum parallel calls: 3+ (TYPE A), 4+ (TYPE B/C), 6+ (TYPE D)\n\n### Tool Restrictions by Agent\n\n- **oracle**: Read-only (`write: false, edit: false, task: false, background_task: false`)\n- **librarian**: Read-only + external search tools\n- **explore**: Read-only + codebase search tools\n- **frontend-ui-ux-engineer**: Can edit but no background_task\n- **multimodal-looker**: Limited tools (`task: false, call_omo_agent: false, look_at: false`)\n\n## Novel Coordination Patterns We Could Adopt\n\n### 1. Factory-Based Agent Registry\n**What**: Agents defined as factory functions, not static configs\n**Why**: Enables model-specific configuration, dynamic prompt injection\n**Adopt for**: Our agent definitions could accept model param and return different configs (e.g., Haiku vs Opus workers get different thinking budgets)\n\n### 2. Environment Context Injection\n**What**: Auto-inject date/timezone/platform into agent prompts\n**Why**: Prevents agents from hallucinating dates, knowing context\n**Adopt for**: Coordinator and researcher agents in our swarm\n\n### 3. Background Task Manager with Lifecycle Tracking\n**What**: Centralized manager tracking async agent execution with events\n**Why**: Real-time progress visibility, todo-aware completion, parent notification\n**Adopt for**: Our swarm orchestration - track worker progress, notify coordinator\n\n### 4. Todo-Aware Completion\n**What**: Don't mark agent complete until all todos are done\n**Why**: Prevents premature completion, enforces task completion\n**Adopt for**: Worker agents in swarm - integrate with hive cells\n\n### 5. Parallel Execution Minimums\n**What**: Enforce minimum parallel tool calls (3+, 4+, 6+) by request type\n**Why**: Prevents sequential bottlenecks, maximizes throughput\n**Adopt for**: Researcher agents - force parallel doc lookups\n\n### 6. Delegation Verification Protocol\n**What**: 7-section delegation prompt + post-work verification checklist\n**Why**: Reduces rogue agent behavior, ensures quality\n**Adopt for**: Our coordinator → worker handoff\n\n### 7. Agent-Specific Tool Whitelisting\n**What**: Each agent has explicit tool restrictions in config\n**Why**: Enforces separation of concerns, prevents tool sprawl\n**Adopt for**: Read-only agents (archaeologist, reviewer, researcher)\n\n### 8. Model-Aware Configuration\n**What**: Factory detects model type (GPT vs Claude) and sets appropriate reasoning config\n**Why**: Maximizes each model's capabilities (reasoningEffort for GPT, thinking for Claude)\n**Adopt for**: Our multi-model swarm workers\n\n### 9. Parent Session Notification System\n**What**: Background tasks inject completion messages into parent session\n**Why**: Coordinator sees worker completion inline, doesn't poll\n**Adopt for**: Swarm mail could adopt this for completion notifications\n\n### 10. Prompt Append Override\n**What**: Config allows `prompt_append` to extend (not replace) base prompt\n**Why**: Users customize without losing base instructions\n**Adopt for**: User-configurable agent overrides in swarm\n\n## Key Differences from Our Swarm\n\n| Feature | oh-my-opencode | Our Swarm |\n|---------|---------------|-----------|\n| **Coordination** | Single orchestrator (Sisyphus) | Coordinator + workers in cells |\n| **Persistence** | Session-based (ephemeral) | Hive cells (git-backed) |\n| **Communication** | Background manager events | Swarm Mail (event log) |\n| **Isolation** | Session sandboxing | Worktrees or file reservations |\n| **Learning** | None (stateless) | Pattern maturity, outcome tracking |\n| **Failure Recovery** | 3-strike rule → Oracle consult | 3-strike rule → escalate |\n| **Parallel Strategy** | Mandatory minimums (3+, 4+, 6+) | Automatic by file/feature/risk |\n| **Agent Registry** | Factory-based with deep merge | Static imports (could improve) |\n| **Tool Restrictions** | Per-agent whitelist in config | Mode-based (read-only agents) |\n| **Progress Tracking** | Tool call counting + polling | Hive cell status + swarm mail |\n\n## Actionable Takeaways for Swarm\n\n1. **Adopt factory pattern for agents** - enables model-specific configs\n2. **Add environment context injection** - prevent date hallucinations\n3. **Implement lifecycle tracking in swarm orchestrator** - track worker progress\n4. **Enforce parallel execution minimums** - researcher must launch 3+ lookups\n5. **Use 7-section delegation prompt template** - reduce rogue behavior\n6. **Add model-aware configuration** - GPT vs Claude workers get optimal settings\n7. **Consider parent notification system** - workers inject completion into coordinator thread\n8. **Explicit tool whitelisting per agent** - enforce separation of concerns\n\n## Technical Details\n\n**File Locations**:\n- Agent definitions: `src/agents/*.ts`\n- Agent registry: `src/agents/index.ts`, `src/agents/utils.ts`\n- Background manager: `src/features/background-agent/manager.ts`\n- call_omo_agent tool: `src/tools/call-omo-agent/tools.ts`\n- Plugin registration: `src/index.ts` (lines 371-450)\n\n**Key Functions**:\n- `createBuiltinAgents()`: Composes agent registry with overrides\n- `BackgroundManager.launch()`: Async agent execution\n- `BackgroundManager.handleEvent()`: Lifecycle event processing\n- `createCallOmoAgent()`: Background task wrapper tool\n\n**Dependencies**: @opencode-ai/sdk, @opencode-ai/plugin","created_at":"1766673479006.0","tags":"oh-my-opencode,agent-architecture,research,coordination-patterns,background-tasks,delegation,factories"}
|
|
159
202
|
{"id":"5166a145-4de1-4870-b75d-36670a00d76b","information":"## Database Migration: PGLite → libSQL Complete\n\n### Current State (Dec 2024)\n- **Primary database:** libSQL (SQLite-compatible)\n- **PGLite:** Only for migration from legacy databases\n- **AGENTS.md:** Updated to reflect libSQL as primary\n\n### Key APIs\n- `createInMemorySwarmMail()` - In-memory libSQL for tests\n- `getSwarmMailLibSQL()` - File-based libSQL for production\n- `createLibSQLAdapter()` - Low-level adapter\n\n### Migration Path\n- Legacy PGLite databases can be migrated via `migrate-pglite-to-libsql.ts`\n- Effect-TS durable primitives still need porting from PGLite to libSQL\n\n### Hive Tools Issue\nThe hive_* MCP tools are failing with \"no such column: stream\" error. This is NOT from the cursors table (that has correct schema). Need to trace the actual error source in the tool implementation.","created_at":"1766333931600.0","tags":"database-migration,libsql,pglite-deprecated,hive-tools,architecture"}
|
|
160
203
|
{"id":"516a8144-80fc-4fdf-beb1-ab9a2a95ba36","information":"Swarm coordinator enforcement rules added to swarm.md: (1) CRITICAL section \"Coordinator Role Boundaries\" with explicit list of what coordinators DO (clarify, decompose, spawn, monitor, verify) and DO NOT (edit code, run tests, make quick fixes). (2) Sequential task pattern: spawn workers in order, await each before next - still get checkpointing, recovery, learning benefits. (3) Anti-patterns section with three examples: Mega-Coordinator (doing work inline), Sequential Work Without Workers, and \"Just This One Small Thing\". (4) Updated checklist with \"Coordinator did NOT edit any files\" and \"ALL subtasks spawned as workers\". Key insight from Event-Driven Microservices: \"orchestrator is responsible ONLY for orchestrating the business logic\".","created_at":"2025-12-18T00:31:38.099Z"}
|
|
161
204
|
{"id":"51a8fc37-f8e2-4626-b6fc-6fe3710d985a","information":"libSQL auto-migration module created for swarm-mail package. Key learnings:\n\n**Generated columns cannot be inserted:** libSQL's GENERATED columns (like `sequence INTEGER GENERATED ALWAYS AS (id) STORED`) throw SQLITE_ERROR if you try to INSERT into them. Solution: exclude generated columns from INSERT column list.\n\n**Dynamic schema detection required for graceful migration:** Old databases may have different schemas (missing columns, different types). Instead of hardcoding column lists, query source schema with `PRAGMA table_info(table_name)` and intersect with target columns. This allows migration to work even when source has subset of columns.\n\n**INSERT OR IGNORE rowsAffected check:** INSERT OR IGNORE silently succeeds even when row already exists (constraint violation). Check `result.rowsAffected > 0` to know if row was actually inserted vs skipped.\n\n**Global DB schema must exist before migration:** migrateProjectToGlobal() must create global DB schema with createLibSQLStreamsSchema() before calling migrateLibSQLToGlobal(), otherwise INSERT fails with \"no such table\".\n\n**Tables migrated (16 total):**\nStreams: events, agents, messages, message_recipients, reservations, cursors, locks\nHive: beads, bead_dependencies, bead_labels, bead_comments, blocked_beads_cache, dirty_beads\nLearning: eval_records, swarm_contexts, deferred\n\nModule location: packages/swarm-mail/src/streams/auto-migrate.ts\nTests: 13 passing, 624 LOC implementation, 270 LOC tests","created_at":"1766343789270.0","tags":"libsql,migration,schema-evolution,database,swarm-mail"}
|
|
@@ -166,24 +209,33 @@
|
|
|
166
209
|
{"id":"550d8616-7064-4e45-ae86-63387526435a","information":"Drizzle ORM migration pattern for swarm-mail streams subsystem: When migrating from raw SQL to Drizzle ORM, create convenience wrapper functions that match old signatures. Pattern: (1) Drizzle functions take db SwarmDb as FIRST parameter, (2) Wrapper functions match old signature with dbOverride as LAST parameter, (3) Use dynamic import (await import) in wrappers to avoid circular dependencies, (4) Convert DatabaseAdapter to SwarmDb using toSwarmDb helper. This maintains backward compatibility - tests do not need changes. High-level functions (registerAgent, sendMessage) automatically use Drizzle through the wrappers.","created_at":"1766296542912.0","tags":"drizzle,migration,swarm-mail,testing"}
|
|
167
210
|
{"id":"556474e3-5398-46fd-9550-5f0744fcb198","information":"Walkthrough verification pattern for technical courses: Use .scratch/ directory as throwaway workspace for end-to-end lesson verification. Clone starter repos here, follow lessons step-by-step to verify code examples work on fresh clone. Directory should be gitignored. This enforces \"We Don't Ship Junk\" principle by testing lessons as students experience them. Example: .scratch/ai-sdk-walkthrough/ for verifying AI SDK course. Delete and recreate for each verification run to ensure clean environment.","created_at":"1766433551696.0","tags":"course-development,quality-assurance,verification,walkthrough,best-practices"}
|
|
168
211
|
{"id":"56a594bf-f52e-4b28-9e8e-2a88c9745037","information":"TDD pattern for PGlite WAL auto-checkpoint during batch operations: \n1. Write failing tests first (getCheckpointInterval, shouldCheckpoint helpers)\n2. Implement minimal checkpoint interval logic (default 50 docs, configurable)\n3. Remove per-doc checkpoint from library.add() (wasteful for batch ops)\n4. Expose checkpoint() method on PDFLibrary service API\n5. Add checkpoint logic to batch ingest command (both TUI and console modes)\n6. Update TUI state to show checkpoint progress (checkpointInProgress, checkpointMessage, lastCheckpointAt fields)\n7. Use Effect.either() to handle checkpoint failures gracefully (log but continue)\n\nKey insight: Checkpointing every document adds 930MB WAL in real usage. Checkpointing every N documents (default 50) prevents WASM OOM while maintaining performance. Batch operations should own checkpointing, not individual operations.","created_at":"2025-12-19T17:28:31.265Z","tags":"tdd,pglite,wal,checkpoint,batch-operations,effect-ts"}
|
|
212
|
+
{"id":"56ead5ba-87c2-4d48-819e-b687ecbb1961","information":"README documentation pattern for technical projects: 1) One-liner purpose at top, 2) Quick start immediately after (pnpm commands), 3) Architecture overview with ASCII diagrams (scannable), 4) Key patterns with tables (AI models, workflows, logging), 5) Scripts reference (grouped by purpose), 6) Environment variables (required vs optional tables), 7) Troubleshooting section with common errors. Structure: scannable first (tables, ASCII art), details expand after. Include full architecture diagram at end. For Slack bots: emphasize 3-tier storage (Redis/Search/Vector), workflow constraints, and AI Gateway usage.","created_at":"1766678672711.0","tags":"documentation,readme,technical-writing,architecture,scannable-design"}
|
|
169
213
|
{"id":"571b2a05-aff5-493c-8db7-28dfadff501b","information":"{\"id\":\"test-1766259559212-clvgwqn44pc\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:39:19.212Z\",\"raw_value\":1}","created_at":"1766259559440.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:39:19.212Z\"}"}
|
|
170
214
|
{"id":"5776a7bb-00ca-4d6d-b82e-7216596da81c","information":"PostgreSQL ON CONFLICT clause must reference an actual unique constraint or exclusion constraint. In swarm_contexts table, migration v5 creates UNIQUE INDEX on (project_key, epic_id, bead_id), not on (id). Therefore ON CONFLICT (id) fails with \"no unique or exclusion constraint matching\". Fix: change to ON CONFLICT (project_key, epic_id, bead_id). Also: test queries must filter by ALL columns in the unique constraint when expecting single rows, otherwise queries span multiple projects and return unexpected counts.","created_at":"1766260293483.0","tags":"postgresql,upsert,on-conflict,unique-constraint,swarm-mail,testing"}
|
|
215
|
+
{"id":"57adf6e2-0669-40f7-8120-bf4708cd74f3","information":"oh-my-opencode AST-Grep Integration: AST-aware search/replace for 25 languages. Auto-downloads binary if missing, caches path. Meta-variables: $VAR (single node), $$$ (multiple nodes). Context-safe limits: 200 matches, 500KB output, 30s timeout. Truncation tracking: max_matches | max_output_bytes | timeout. Pattern validation with helpful hints (strips Python colons, requires complete JS/TS function nodes). Smart error handling: ENOENT → auto-download, timeout → graceful truncation, parse errors → recovery. Novel pattern: automatic binary management removes setup friction.","created_at":"1766673450032.0","tags":"oh-my-opencode,ast-grep,ast,structural-search,refactoring"}
|
|
171
216
|
{"id":"5822a985-22dd-4c52-aa57-3d048e376c1a","information":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:19.155Z\",\"updated_at\":\"2025-12-18T16:17:19.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:19.369Z","metadata":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
172
217
|
{"id":"58a49e56-1ed4-421f-803c-b87908415356","information":"Built swarm-db CLI for analytics queries using TDD. Key learnings:\n\n1. **DatabaseAdapter returns QueryResult<T>**: The libSQL adapter's query() method returns `{ rows: T[] }`, not `T[]` directly. Always access result.rows, not result itself.\n\n2. **Query function type inference issue**: TypeScript incorrectly infers analytics query functions as `AnalyticsQuery & { buildQuery?: ... }` instead of function types. Use `as any` with biome-ignore comment when mapping command names to query functions.\n\n3. **CLI structure for analytics**: 3-tier command structure works well:\n - query <sql>: raw SQL (validated, max 1000 rows)\n - analytics <command>: pre-built queries with filters\n - list: discovery of available commands\n\n4. **Time range parsing pattern**: Regex `^(\\d+)(d|h|m)$` with switch on unit. Store as Date, not string.\n\n5. **Formatter integration**: Analytics formatters (table/json/csv/jsonl) accept QueryResult with columns/rows/rowCount/executionTimeMs. Execution time measured in CLI layer, not query layer.\n\n6. **Testing strategy**: Unit test validation/parsing logic, integration test CLI with in-memory DB (`:memory:`). Manual testing via bash script catches edge cases.\n\nFile locations:\n- packages/swarm-mail/bin/swarm-db.ts (entry point, shebang, parseArgs)\n- packages/swarm-mail/src/cli/db.ts (implementations)\n- packages/swarm-mail/src/cli/db.test.ts (19 tests)\n- package.json bin entry: \"swarm-db\": \"./bin/swarm-db.ts\"","created_at":"1766434650307.0","tags":"cli,analytics,tdd,libsql,swarm-db,typescript"}
|
|
173
218
|
{"id":"59429c7c-7ba1-49f6-933c-2a13e9fbb4b3","information":"Research phase integration testing pattern: Test each layer independently (tool discovery, lockfile parsing, prompt generation), then test integration between layers (runResearchPhase orchestrates all pieces). Use real repo as fixture for realistic testing. Key insight: extractTechStack returns normalized names (\"next\" not \"next.js\") - tests must match actual TECH_PATTERNS implementation. ResearchResult returns { tech_stack, summaries, memory_ids } not installed_versions.","created_at":"1766517197167.0","tags":"testing,integration-tests,research-phase,swarm,patterns"}
|
|
219
|
+
{"id":"5949c0a2-50b6-4c45-a5a3-570236d502f5","information":"Created eval_run plugin tool for programmatic evalite execution in opencode-swarm-plugin. Key learnings:\n\n1. Evalite programmatic API: Use `runEvalite()` from \"evalite/runner\" with mode=\"run-once\", outputPath for JSON results\n2. Output parsing: evalite writes Evalite.Exported.Output JSON format with run/suites/evals/scores structure\n3. Error handling: evalite may fail to write output file if tests crash - handle missing output gracefully\n4. Working directory resolution: When tests run from src/, need to resolve to project root for evals/ directory\n5. Tool schema: OpenCode plugin tools use `tool.schema.string()` not `z.string()` - tool() from @opencode-ai/plugin\n6. Context efficiency: Provide `includeDetailedResults` flag to omit per-eval input/output/scores, saving tokens when only summary needed\n\nImplementation pattern:\n- Core function: runEvals() async function with structured RunEvalsResult return type\n- Plugin wrapper: tool() with args/execute, JSON.stringify output\n- Test fixtures: Use PROJECT_ROOT, add timeouts for long-running eval suites\n- Cleanup: Auto-delete temporary .evalite-results-*.json files\n\nFile locations:\n- src/eval-runner.ts - Core implementation + plugin tool\n- src/eval-runner.test.ts - TDD test suite (6 tests, all passing)\n- src/index.ts - Tool registration in plugin hooks","created_at":"1766642202614.0","tags":"evalite,plugin-tools,testing,programmatic-api,opencode-swarm"}
|
|
174
220
|
{"id":"598d9dbe-997f-4508-b29f-b5420cbe1631","information":"{\"id\":\"test-1766260239804-rg13d19mfd\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:50:39.804Z\",\"raw_value\":1}","created_at":"1766260240028.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:50:39.804Z\"}"}
|
|
175
221
|
{"id":"5a7064a2-2a11-44e5-a1c9-455c4b30e18d","information":"ADR writing pattern for swarm plugin: Structure follows Context → Decision → Consequences → Implementation Notes → Alternatives Considered → References → Success Criteria. Key elements: (1) Context section must articulate current pain points with concrete examples, not just abstractions. (2) Decision section shows actual code/JSON structures, not just prose descriptions. (3) Consequences split into Positive/Negative/Neutral with specific tradeoffs. (4) Implementation phases are numbered and actionable. (5) Alternatives Considered documents rejected approaches with reasoning. (6) References link to inspirations and related ADRs. Format creates forcing function for clear thinking - if you can't fill in all sections cleanly, decision may not be ready. Used successfully for ADR-001 (monorepo), ADR-007 (worktree isolation), and ADR-008 (worker handoff protocol).","created_at":"2025-12-18T17:26:05.386Z","tags":"adr,architecture-decision-records,documentation,swarm-plugin,system-design"}
|
|
176
222
|
{"id":"5afe465e-ef42-4240-aa44-136967baf239","information":"CLI flag pattern for conditional output formatting: Use boolean flag (e.g., --expand) parsed via custom parseArgs function. Store flag state (const expand = opts.expand === true), then use ternary operator for conditional content: const preview = expand ? fullContent : truncatedContent. This allows backward-compatible feature addition without breaking default behavior. Applied in semantic-memory CLI to toggle between truncated (60 chars) and full content display.","created_at":"2025-12-18T17:01:12.075Z","tags":"cli,typescript,bun,flags,conditional-output,backward-compatibility"}
|
|
177
223
|
{"id":"5b117709-6a91-4237-a532-0f08909da9f7","information":"Kent C. Dodds Unified Accounts Use Case (Dec 2024) - Driving requirement for @badass auth architecture. Kent has EpicAI.pro, EpicWeb.dev, EpicReact.dev on different TLDs sharing a database. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Solution: epicweb.dev is the \"hive\" site for auth, other sites are \"spokes\" that redirect there. Workshop App uses device flow (RFC 8628) to authenticate against the hive. This validates hive+spoke model and device flow as core requirements.","created_at":"2025-12-18T15:42:16.703Z"}
|
|
178
224
|
{"id":"5d2404b8-3635-42a2-bd63-ae623aba2a62","information":"@badass Auth Architecture Decision (Dec 2024): Creators with multiple sites MUST designate a central \"hive\" site for auth. For Kent, epicweb.dev is the hive - all auth flows redirect there. Other sites (epicreact.dev, epicai.pro) are \"spoke\" sites that trust the hive. This is a REQUIREMENT, not optional. Simplifies cross-domain SSO - standard OAuth/OIDC pattern where hive is the IdP. Spoke sites redirect to hive for login, receive tokens back. Shared database means session/user data is already unified, just need the auth handshake.","created_at":"2025-12-18T15:39:52.225Z"}
|
|
225
|
+
{"id":"5d404bfd-1ce8-43d6-818c-8ea1be49dccd","information":"Database Deduplication Pattern (libSQL/SQLite): When storing batch inserts with potential duplicates, use a two-phase dedupe: (1) In-memory Map with dedupe key (e.g., `${name.toLowerCase()}:${type}`) to skip duplicates within the batch, (2) DB lookup with case-insensitive LOWER() to check existing records. CRITICAL: Return `Array.from(seen.values())` at the end, NOT pushing to a separate array during iteration. This ensures the return array has only uniques. Example: `const seen = new Map(); for (entity of entities) { const key = entity.name.toLowerCase()+type; if (seen.has(key)) continue; ...process...; seen.set(key, result); } return Array.from(seen.values());`. This pattern prevents N duplicate entries in the result array when input has N duplicates.","created_at":"1766672971986.0","tags":"deduplication,database,libsql,sqlite,batch-insert"}
|
|
179
226
|
{"id":"5d4cccf4-0638-4c6c-8489-152f89c04f87","information":"Atomic File Writes Pattern: For crash-safe state persistence: 1) Create temp file in SAME directory (atomic rename requires same filesystem), 2) Write content to temp file, 3) sync to flush buffers, 4) chmod permissions, 5) mv -f temp to final (POSIX guarantees atomicity), 6) sync directory entry. Prevents state corruption on SSH disconnect or crash. Use for: swarm state, hive issues.jsonl, any file that must survive interruption. Source: Dicklesworthstone/agentic_coding_flywheel_setup state.sh:193-290","created_at":"1766591013349.0","tags":"persistence,atomic,crash-safe,state,patterns,acfs"}
|
|
180
227
|
{"id":"5d871dd3-e45a-4237-8d79-12e568949c91","information":"AI SDK v6 Runtime Identity Pattern: Use callOptionsSchema with Zod to define type-safe per-request context (userId, tier, permissions). Implement prepareCall function that receives typed options and returns config overrides (tools, instructions, model, temperature). This enables tier-based feature gating, region-specific compliance, A/B testing, dynamic model selection. Key: prepareCall runs on EVERY invocation - keep it fast, avoid async DB lookups, use in-memory cache or extract from headers/JWT. In tier-one app: free (queryFAQ only), pro (adds searchDocs), enterprise (adds askV0). Always include respondToTicketTool for structured exit. Console.log in prepareCall provides observability.","created_at":"2025-12-16T21:12:38.912Z","tags":"ai-sdk,ai-sdk-v6,runtime-identity,callOptionsSchema,prepareCall,tier-filtering,tool-gating"}
|
|
181
228
|
{"id":"5da9c7c6-ad9b-4747-a9f1-8e47022227bc","information":"PDF Brain config CLI implementation pattern: For nested config access (e.g., \"embedding.model\"), use path.split(\".\") and navigate object tree iteratively. Type coercion critical for boolean/number values from string CLI args: check typeof oldValue to determine how to parse newValue. loadConfig() creates config.json with defaults if missing (good UX). Always show note about API keys in env vars when displaying config - users need to know keys aren't stored in JSON.","created_at":"1766261053511.0","tags":"cli,config,nested-paths,type-coercion,pdf-brain"}
|
|
182
229
|
{"id":"5e6a594e-adf0-424d-9490-847233492ae2","information":"D3 force simulation improvements for graph clustering visualization: Implemented 5-part enhancement strategy for better cluster discovery UX. (1) Custom cluster force: pulls nodes toward cluster centroids with strength 0.2, requires updating centroids on each tick via clusterResult.clusterCentroids. (2) Radial layout: forceRadial pushes concepts (radius=0) toward center, documents (radius=400) toward periphery with strength 0.1. (3) Link strength by relationship type: broader=0.7 (tight hierarchy), has_concept=0.2 (loose many-to-many tagging), related=0.4 (medium). (4) Weakened rigid centering: reduced forceX/forceY strength from 0.015 to 0.005 to avoid fighting natural clustering. (5) Slower alpha decay: alphaDecay from 0.015 to 0.008, alphaMin from 0.005 to 0.001 for better equilibrium settling. Result: STRONG visual cluster separation with clear boundaries, concepts centralized, hierarchies tight, exploration-friendly neighborhoods. Pattern used in pdf-brain-viewer force graph.","created_at":"1766347782107.0","tags":"d3,force-simulation,clustering,graph-visualization,radial-layout,link-strength"}
|
|
230
|
+
{"id":"5ecffcfe-9a98-42f0-bb4f-b0f76ba35eec","information":"Swarm review integration test fix: Tests were failing because they expected messages for `needs_changes` status that are no longer sent by design. The architecture changed to coordinator-driven retry pattern where workers are considered \"dead\" after review rejection. The fix was NOT adapter caching (already fixed) but updating test expectations to match the current behavior:\n\n**Current Architecture (Coordinator-Driven Retry):**\n1. `approved` status → sendSwarmMessage to worker (worker can swarm_complete)\n2. `needs_changes` status → NO message sent, return retry_context for coordinator to use with swarm_spawn_retry\n3. After 3 rejections → task marked blocked, NO message sent, coordinator escalates\n\n**Why \"worker is dead\":**\n- Failure indicates architectural problem, not \"try harder\"\n- Coordinator needs full context to decide: retry with same agent? Different agent? Decompose differently?\n- Worker self-retry via messages couples retry logic to worker, making iteration impossible\n\n**Test Pattern:**\n```typescript\n// For needs_changes, expect NO messages\nconst messages = await swarmMail.getInbox(projectPath, \"worker\");\nexpect(messages.length).toBe(0);\n\n// Instead expect retry_context\nexpect(feedbackParsed.retry_context).toBeDefined();\nexpect(feedbackParsed.retry_context.next_action).toContain(\"swarm_spawn_retry\");\n```\n\n**Code Comments:**\nLines 595 and 613 in swarm-review.ts explicitly state \"NO sendSwarmMessage for needs_changes - worker is dead\"\n\n**Lesson:** When tests fail, check if the test expectations are stale, not just the implementation.","created_at":"1766618369821.0","metadata":"{\"files\":[\"packages/opencode-swarm-plugin/src/swarm-review.integration.test.ts\",\"packages/opencode-swarm-plugin/src/swarm-review.ts\"],\"pattern\":\"test-expectations-stale\"}","tags":"swarm-review,integration-tests,coordinator-driven-retry,architecture-change,worker-is-dead"}
|
|
231
|
+
{"id":"5f0a4ff2-fb25-448c-9edb-c2a5a9dc6534","information":"oh-my-opencode hook implementation patterns (code-level):\n\n## Compaction Hook Implementation Details\n\n### Preemptive Compaction Hook\n**File:** `src/hooks/preemptive-compaction/index.ts`\n**Trigger:** `message.updated` event when assistant message finishes + `session.idle`\n**Key Logic:**\n- Monitors token usage ratio: `(input + cache.read + output) / contextLimit`\n- Default threshold: 80% (configurable via `experimental.preemptive_compaction_threshold`)\n- Cooldown: 5 seconds between compactions (prevents rapid re-compaction)\n- Callback injection: `onBeforeSummarize(ctx)` runs before `session.summarize()` API call\n- Auto-resume: After compaction, injects \"Continue\" prompt with stored agent/model\n\n**Novel pattern:** Callbacks as dependency injection for cross-hook coordination without tight coupling.\n\n### Compaction Context Injector Hook\n**File:** `src/hooks/compaction-context-injector/index.ts`\n**Key Innovation:** Injects structured prompt BEFORE compaction via `onBeforeSummarize` callback\n**Prompt Structure:**\n```\n## 1. User Requests (As-Is) - exact wording preserved\n## 2. Final Goal - end result expected\n## 3. Work Completed - files, features, problems solved\n## 4. Remaining Tasks - pending items, follow-ups\n## 5. MUST NOT Do - forbidden approaches, failed attempts, anti-patterns\n```\n**Implementation:** Uses `injectHookMessage()` to write system message to session filesystem (not via chat API)\n\n### Anthropic Auto-Compact Hook\n**File:** `src/hooks/anthropic-auto-compact/index.ts`\n**Triggers:** `session.error` + `message.updated` (with error) + `session.idle` (with pending flag)\n**Error Detection:** Parses Anthropic API error messages for token limit patterns\n**Recovery Strategies (sequential):**\n1. Truncate large tool outputs (experimental mode)\n2. Trigger compaction via `session.summarize()` API\n3. Inject \"Continue\" after successful compaction\n**State Management:** Uses Maps for pending compactions, retry counts, fallback states\n\n## Session Recovery Hook Implementation\n\n### Session Recovery Hook\n**File:** `src/hooks/session-recovery/index.ts`\n**Error Types Handled:**\n1. `tool_result_missing`: Agent called tool, user pressed ESC before result → injects placeholder tool_result parts\n2. `thinking_block_order`: Thinking part not first in message → reorders parts in session storage\n3. `thinking_disabled_violation`: Thinking parts present when model doesn't support → strips thinking parts\n\n**Filesystem Manipulation Functions:**\n- `readParts(messageID)`: Reads `.opencode/sessions/<sessionID>/<messageID>.json`\n- `prependThinkingPart(sessionID, messageID)`: Reorders JSON parts array\n- `stripThinkingParts(messageID)`: Removes thinking parts from message\n- `injectTextPart(sessionID, messageID, text)`: Adds text part to message\n\n**Key Pattern:** Direct session file manipulation as recovery mechanism (not API-based)\n\n## Think Mode Hook Implementation\n\n### Think Mode Hook\n**File:** `src/hooks/think-mode/index.ts`\n**Hook Point:** `chat.params` (modifies message params before sending to LLM)\n**Keyword Detection:** Regex patterns for \"think\", \"ultrathink\", \"think hard\", \"think harder\"\n**Model Switching:**\n```typescript\nconst modelMap = {\n \"claude-sonnet-4-5\": \"claude-sonnet-4.5-high\",\n \"claude-opus-4\": \"claude-opus-4-high\"\n}\n```\n**Thinking Config Injection:** \n```typescript\noutput.message.thinking = { type: \"enabled\", budget_tokens: 10000 }\n```\n**State Tracking:** Per-session Map tracks if model was switched (for metrics/debugging)\n\n## Hook Registration Pattern\n\n### Main Plugin Registration\n**File:** `src/index.ts` lines 230-294\n**Pattern:**\n```typescript\nconst myHook = isHookEnabled(\"my-hook-name\")\n ? createMyHook(ctx, { experimental: config.experimental })\n : null;\n```\n**Aggregation:** Plugin returns object with hook methods calling all enabled hooks:\n```typescript\nreturn {\n \"tool.execute.before\": async (input, output) => {\n await hook1?.[\"tool.execute.before\"](input, output);\n await hook2?.[\"tool.execute.before\"](input, output);\n await hook3?.[\"tool.execute.before\"](input, output);\n },\n event: async (input) => {\n await hook1?.event(input);\n await hook2?.event(input);\n // ... etc\n }\n}\n```\n\n## Claude Code Hooks Compatibility Layer\n\n### External Hook Protocol\n**File:** `src/hooks/claude-code-hooks/`\n**Config Location:** `~/.claude/settings.json` or `.claude/settings.json`\n**Hook Events:** PreToolUse, PostToolUse, UserPromptSubmit, Stop, PreCompact\n**Execution Pattern:**\n1. Load config with glob/regex matchers\n2. Match tool name against patterns\n3. Execute hook command via `executeHookCommand(command, stdin, cwd)`\n4. Parse JSON stdout for decision (allow/deny/ask) and modifications\n\n**stdin Protocol:**\n```json\n{\n \"hook_event_name\": \"PreToolUse\",\n \"tool_name\": \"bash\",\n \"tool_input\": { \"command\": \"ls\" },\n \"tool_use_id\": \"call_xyz\",\n \"cwd\": \"/path/to/project\",\n \"hook_source\": \"opencode-plugin\"\n}\n```\n\n**stdout Expected:**\n```json\n{\n \"hookSpecificOutput\": {\n \"permissionDecision\": \"allow|deny|ask\",\n \"permissionDecisionReason\": \"reason\",\n \"updatedInput\": { /* modified args */ }\n }\n}\n```\n\n## Hook Message Injection Pattern\n\n### Filesystem-based Message Injection\n**File:** `src/features/hook-message-injector/`\n**Used By:** Compaction context injector, directory injectors, rules injector\n**Pattern:**\n1. Find nearest message file in session storage with required fields (agent, model)\n2. Read existing message JSON\n3. Append new text part to parts array\n4. Write back to filesystem\n5. OpenCode picks up changes on next message fetch\n\n**Why filesystem vs API?** Avoids triggering streaming events, allows injection without user-visible message in chat UI.\n\n## Key Implementation Insights\n\n1. **Hooks are stateful:** Most maintain per-session Maps/Sets for tracking\n2. **Error handling is optimistic:** Catch and log, don't throw (preserve UX)\n3. **Cleanup is event-driven:** `session.deleted` event triggers all state cleanup\n4. **Coordination via callbacks:** Hooks expose setters for cross-hook coordination\n5. **Filesystem as IPC:** Session manipulation bypasses OpenCode API for fine-grained control","created_at":"1766673490336.0","tags":"oh-my-opencode,implementation,hooks,code-patterns,opencode"}
|
|
183
232
|
{"id":"5f688547-d56f-4951-9ad9-69e7ddf60590","information":"{\"id\":\"pattern-1766297016224-adki3f\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:03:36.224Z\",\"updated_at\":\"2025-12-21T06:03:36.224Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766297016462.0","metadata":"{\"id\":\"pattern-1766297016224-adki3f\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
233
|
+
{"id":"5f6d7558-3b03-4d63-88b0-ac62cb223a4b","information":"Progressive eval gates pattern for AI systems: Three-phase quality control that adapts based on run count and variance.\n\nPHASES:\n1. Bootstrap (<10 runs): Always pass - focus on collecting baseline data, no gates yet\n2. Stabilization (10-50 runs): Warn on >10% regression but still pass - learning the baseline, tolerating noise\n3. Production (>50 runs AND variance <0.1): Fail on >5% regression - strict enforcement once stable\n\nVARIANCE THRESHOLD (0.1): If >50 runs but variance ≥0.1, stays in stabilization. Prevents premature production gates when scores are unstable.\n\nREGRESSION CALCULATION: (baseline - current) / baseline where baseline = mean(all_historical_scores)\n\nWHY IT WORKS:\n- Avoids false failures during initial learning\n- Adapts to eval maturity\n- Variance check prevents strict gates on unstable evals\n- Used in opencode-swarm-plugin for decomposition quality, coordinator discipline, compaction prompt quality\n\nIMPLEMENTATION: eval-gates.ts (checkGate), eval-history.ts (recordEvalRun, getPhase), eval-runner.ts (runEvals)\n\nSOURCE: Inspired by SRE practices (error budgets, progressive rollouts), MLOps (model monitoring phases)","created_at":"1766672862785.0","tags":"evals,quality-gates,progressive-systems,observability,SRE"}
|
|
184
234
|
{"id":"5faca7a3-eefb-44bd-affb-3140d367c748","information":"PGlite daemon initialization pattern: After creating PGlite instance and calling waitReady, MUST initialize schema (CREATE TABLE IF NOT EXISTS) before starting socket server. Without schema init, daemon starts successfully but all database operations fail with \"relation does not exist\" errors. DatabaseClient connects to daemon socket but finds empty database. Schema initialization code should mirror Database.ts DirectDatabaseLive implementation exactly to ensure consistency between daemon and direct modes.","created_at":"2025-12-19T15:18:58.912Z","tags":"pglite,daemon,schema-initialization,database,socket-server"}
|
|
235
|
+
{"id":"60105bff-60f6-4acd-852b-78e5320ee5c1","information":"Memory linking vector similarity pattern in libSQL: Use vector_top_k('idx_memories_embedding', vector(json_array), limit) for efficient ANN search. Returns virtual table with just (id) column (the rowid). MUST join back to main table to get full rows. Calculate distance separately with vector_distance_cos(). Pattern: SELECT m.*, vector_distance_cos(m.embedding, vector(?)) as distance FROM vector_top_k(...) AS v JOIN memories m ON m.rowid = v.id. Cosine distance: 0 = identical, 2 = opposite. Convert to similarity: score = 1 - distance. Requires vector index created with: CREATE INDEX ... ON table(libsql_vector_idx(embedding)). This is libSQL-specific, can't use Drizzle - must use sql`` template.","created_at":"1766672865882.0","metadata":"{\"source\":\"mjl1kscsxga\",\"context\":\"memory-linking implementation\"}","tags":"libsql,vector-search,drizzle,memory,similarity,ann"}
|
|
185
236
|
{"id":"6065c202-d39d-4a9c-a578-cbcc52e8f3b9","information":"{\"id\":\"test-1766263853476-920c0xetj4e\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:50:53.476Z\",\"raw_value\":1}","created_at":"1766263853706.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:50:53.476Z\"}"}
|
|
186
237
|
{"id":"61b3acf6-2eaa-4670-b17d-401634a0e41e","information":"@badass Video Pipeline Extraction Plan (Dec 2024): Extract from @coursebuilder/core to @badass/video.\n\n**Files to Extract:**\n- packages/core/src/schemas/video-resource.ts - VideoResource schema\n- packages/core/src/schemas/mux.ts - Mux API response schemas\n- packages/core/src/lib/mux.ts:1-142 - Mux API client\n- packages/core/src/providers/deepgram.ts:1-200 - Transcription provider\n- packages/core/src/inngest/video-processing/functions/* - All Inngest functions\n- packages/core/src/inngest/video-processing/events/* - All event definitions\n- packages/core/src/inngest/video-processing/utils.ts - Mux thumbnail generation\n\n**Architecture:**\n- VideoResource is a ContentResource type (not embedded in posts)\n- Upload triggers Inngest job\n- Mux processes video\n- Deepgram transcribes\n- Webhooks update VideoResource with asset ID, playback info, transcript, SRT\n\n**API Design:**\nconst video = createVideoProcessor({ storage: mux, transcription: deepgram, jobs: inngest })\nawait video.process(uploadUrl) // Returns VideoResource ID","created_at":"2025-12-18T15:57:51.555Z"}
|
|
238
|
+
{"id":"628d7189-7274-4a6c-ad84-d40c96bdc833","information":"Post-compaction tool call tracker pattern: Factory function returning closure-based tracker with minimal state (callCount, resumptionEmitted flags). Key design: emits resumption_started ONCE on first tool call, then tool_call_tracked for each call up to limit. Violation detection via lookup table (FORBIDDEN_COORDINATOR_TOOLS) keyed by tool name. Exported isCoordinatorViolation for reusability. Testing strategy: mock the onEvent callback, verify call counts and payloads. Critical: use 1-based call_number (increment BEFORE emitting) for human-readable event logs. Integration point: wire to OpenCode hooks[\"tool.call\"] in compaction-hook.ts. Located: packages/opencode-swarm-plugin/src/post-compaction-tracker.ts","created_at":"1766635862754.0","metadata":"{\"files\":[\"post-compaction-tracker.ts\",\"post-compaction-tracker.test.ts\"],\"cell_id\":\"mjkwehtburk\",\"test_count\":12}","tags":"tdd,post-compaction,coordinator-violations,tool-tracking,factory-pattern"}
|
|
187
239
|
{"id":"6298607d-7d0d-4aaa-8ece-f53a208edfb9","information":"Effect-based SQLite retry pattern: Created withSqliteRetry() utility in swarm-mail/src/db/retry.ts following the pattern from lock.ts and ollama.ts. Key implementation detail: Use Effect.catchAllDefect() BEFORE Effect.retry() to convert defects (thrown exceptions) into failures that retry logic can handle. Without this, Effect.sync(() => throw error) creates a \"Die\" defect that bypasses retry. Retryable errors: SQLITE_BUSY, SQLITE_LOCKED. Non-retryable: SQLITE_CONSTRAINT, SQLITE_MISMATCH. Schedule: exponential(\"100 millis\").pipe(Schedule.compose(Schedule.recurs(3))) = 100ms, 200ms, 400ms, then fail. Exported from swarm-mail package for use in adapter write operations.","created_at":"1766592267105.0","metadata":"{\"module\":\"swarm-mail\",\"pattern\":\"effect-retry\",\"project\":\"opencode-swarm-plugin\",\"technology\":\"effect-ts,sqlite\"}"}
|
|
188
240
|
{"id":"62f31790-3897-4543-80e7-cf8a66061ece","information":"{\"id\":\"pattern-1766263088654-o2004a\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:38:08.654Z\",\"updated_at\":\"2025-12-20T20:38:08.654Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263088902.0","metadata":"{\"id\":\"pattern-1766263088654-o2004a\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
189
241
|
{"id":"632a5d9d-c85f-4f2a-9e2a-28d348f30c0d","information":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:30.591Z\",\"updated_at\":\"2025-12-18T16:17:30.591Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:30.812Z","metadata":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -204,30 +256,38 @@
|
|
|
204
256
|
{"id":"6a8feca2-6dfd-4ec6-bc3c-7f0b603594d9","information":"{\"id\":\"test-1766262134969-m7gzvr176qq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:22:14.969Z\",\"raw_value\":1}","created_at":"1766262135220.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:22:14.969Z\"}"}
|
|
205
257
|
{"id":"6a92690a-72ec-4c30-b79c-d1b6d60b35f1","information":"{\"id\":\"test-1766349591225-em81o0nl1jf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:39:51.225Z\",\"raw_value\":1}","created_at":"1766349591468.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:39:51.225Z\"}"}
|
|
206
258
|
{"id":"6ac8457e-e26e-481d-a51c-cfeeff54c151","information":"Tech stack extraction for swarm research phase: Use regex patterns to detect common frameworks/libraries in task descriptions. Patterns should match case-insensitively and handle variations (e.g., 'Next.js', 'nextjs', 'next'). Return normalized lowercase names. Deduplicate using Set. Fast pattern: /next\\.?js|nextjs/i for Next.js, /react(?!ive)/i for React (negative lookahead prevents matching 'reactive'). Store patterns in TECH_PATTERNS map for easy extension.","created_at":"1766516842895.0"}
|
|
259
|
+
{"id":"6af54dc8-d34f-452a-b375-6825bdd02b0c","information":"Proactive entity extraction hook for swarm-mail semantic memory adapter. Hook intercepts every store() call with extractEntities=true option, automatically calls extractEntitiesAndRelationships() from entity-extraction.ts, then stores entities/relationships and links them via memory_entities junction table. Implementation uses dynamic import to avoid circular deps, accesses db.$client for libSQL client needed by entity-extraction functions, implements graceful degradation (try/catch returns empty on failure, never throws). Tests verify both graceful degradation (when LLM fails) and successful extraction (integration test calling extraction functions directly). Knowledge graph grows automatically as memories are stored without explicit user action. This implements the A-MEM pattern: named entities + subject-predicate-object triples extracted from natural language.","created_at":"1766674696082.0","tags":"swarm-mail,entity-extraction,knowledge-graph,proactive-extraction,graceful-degradation,a-mem-pattern"}
|
|
207
260
|
{"id":"6af70186-7cbf-42dc-91bb-2420dda1a2d2","information":"{\"id\":\"pattern-1766260844892-qlihj4\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:00:44.892Z\",\"updated_at\":\"2025-12-20T20:00:44.892Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260845104.0","metadata":"{\"id\":\"pattern-1766260844892-qlihj4\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
208
261
|
{"id":"6b335dab-3622-4a9a-a9a3-7464ae60a6e4","information":"{\"id\":\"test-1766265063306-07dckj8yk1gp\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T21:11:03.306Z\",\"raw_value\":1}","created_at":"1766265063516.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T21:11:03.306Z\"}"}
|
|
209
262
|
{"id":"6b6b00c9-540b-4ef6-a908-47048d9589d1","information":"Cross-domain SSO architecture insight: Kent's use case (EpicAI.pro, EpicWeb.dev, EpicReact.dev) requires unified identity across different TLDs. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Current course-builder uses NextAuth.js per-site. Solution requires either: (1) Shared auth database with cross-domain session tokens, (2) Central identity provider (IdP) that all sites trust, or (3) Token exchange protocol between sites. BetterAuth may have better cross-domain support than NextAuth. Key constraint: different domains means cookies don't share - need explicit SSO flow.","created_at":"2025-12-18T15:32:50.696Z"}
|
|
210
263
|
{"id":"6b75df69-f6b5-4f91-84d4-c91dafcd29d0","information":"Documentation Pass Plan (Comprehensive - Full Sweep):\n\nEPIC: Comprehensive Documentation Pass\nScope: READMEs, web docs, code comments, AGENTS.md\nApproach: Code is truth, verify against implementations, focus on recent PGLite→libSQL migration\n\nSubtasks (file-based strategy):\n1. Update swarm-mail package README - libSQL storage, getSwarmMailLibSQL, createLibSQLAdapter, createMemoryAdapter signature, architecture diagram\n2. Update swarm-mail JSDoc and code comments - scan src/**/*.ts for PGLite/deprecated API references\n3. Update opencode-swarm-plugin README - tool names, APIs, storage references\n4. Update web docs - swarm-mail section (apps/web/content/docs/packages/swarm-mail/*.mdx) - depends on #1\n5. Update web docs - opencode-plugin section (apps/web/content/docs/packages/opencode-plugin/*.mdx) - depends on #3\n6. Update root README and AGENTS.md - storage refs, tool names, workflows - depends on #1, #3\n\nKey API changes to verify:\n- getSwarmMail → getSwarmMailLibSQL (deprecated)\n- createMemoryAdapter signature changed\n- PGLite references should be libSQL\n- Storage architecture diagrams need updating\n\nSemantic memory findings to incorporate:\n- PGlite database existence check patterns changed\n- LibSQL vector search requires explicit vector index\n- createMemoryAdapter signature changed in opencode-swarm-plugin\n\nFix docs + minor code issues, file beads for larger issues found.","created_at":"1766279661875.0","tags":"documentation,planning,swarm,libsql,migration,epic"}
|
|
211
264
|
{"id":"6c7021dc-8b8f-4497-92c7-9693e04c42a0","information":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T19:53:03.777Z\",\"updated_at\":\"2025-12-17T19:53:03.777Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T19:53:04.822Z","metadata":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
212
265
|
{"id":"6c93e56b-b3f6-4f5c-9d37-8e702fad2a0d","information":"HDBSCAN scaling bottleneck for 500k embeddings: Core issue is O(n²) distance matrix requirement. For 500k points × 1024 dims: 125 billion distance calculations, ~1TB RAM for dense matrix, ~35 hours compute time at 1μs/distance. The naive vis-utils JS implementation (github.com/rivulet-zhang/vis-utils) confirms this - it precomputes the full cachedDist matrix in mst.js precomputeDist() function using nested loops. SOLUTION: Leverage existing HNSW index (embeddings_idx in libSQL) for approximate k-NN queries. HNSW provides O(log n) queries vs O(n) brute force, reducing total complexity from O(n²) to O(n log n). For pdf-library: Use vector_top_k() queries to compute core distances, extract neighbor graph from HNSW (each point queries k=16 neighbors), then run agglomerative clustering on sparse graph instead of full MST. Memory drops from 1TB to ~100MB (64MB graph + 40MB dendrogram). Time drops from hours to ~11min for 3-level hierarchy. Key insight: Don't use HDBSCAN library - steal the concepts (hierarchical dendrogram, noise filtering, density-based clustering) and adapt to HNSW infrastructure we already have.","created_at":"1766426001603.0","tags":"hdbscan,clustering,scalability,hnsw,approximate-nearest-neighbor,500k-scale,distance-matrix,O(n²),performance,embeddings"}
|
|
266
|
+
{"id":"6ced41f8-9c8c-45e1-8c19-de85c2f9f18a","information":"Wired captureSubtaskOutcome() into swarm_complete for eval data capture pipeline. Key learning: New hive cell IDs don't follow epicId.subtaskNum pattern - epic and subtasks have independent IDs. Use cell.parent_id to get epic ID for subtasks (falls back to extracted epicId if parent_id unavailable). Pattern: dynamic import(\"./eval-capture.js\"), try-catch with console.warn, non-fatal on error. captureSubtaskOutcome requires: epicId (from parent_id), projectPath, beadId, title (from cell), plannedFiles (args), actualFiles (args.files_touched), durationMs (from start_time), errorCount, retryCount, success (always true in success path). Tests use hive_create_epic to create beads properly (not manual JSONL writes), setHiveWorkingDirectory required in tests, spyOn pattern to verify capture calls.","created_at":"1766619974913.0","tags":"eval-capture,swarm-orchestrate,swarm_complete,captureSubtaskOutcome,hive,cell-id-format,TDD"}
|
|
213
267
|
{"id":"6d15ae24-2e07-4e22-bf23-dd846e900428","information":"Test isolation fix for .hive/ pollution: Tests MUST use `tmpdir()` from `node:os` instead of relative paths or hardcoded `/tmp/`. Pattern: `const TEST_DIR = join(tmpdir(), \\`test-name-${Date.now()}\\`)`. **Root cause**: memory/sync.test.ts was using `join(import.meta.dir, \".test-memory-sync\")` which created test directories in the source tree, polluting the repo. hive.integration.test.ts was using hardcoded `/tmp/` which works on Unix but fails on Windows. Always use `tmpdir()` for cross-platform temp directory handling. **Verification**: Run tests, check `git status .hive/` is clean, and `find packages -type d -name \".test-*\"` returns nothing.","created_at":"1766422059365.0","tags":"testing,test-isolation,tmpdir,hive,cross-platform"}
|
|
214
268
|
{"id":"70613071-8231-49a5-bdca-a9b9f7e9c53c","information":"{\"id\":\"pattern-1765386530615-riuu0i\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:08:50.615Z\",\"updated_at\":\"2025-12-10T17:08:50.615Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:08:50.799Z","metadata":"{\"id\":\"pattern-1765386530615-riuu0i\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
215
269
|
{"id":"7102b6c2-0338-48a2-b3be-6b263057a4ab","information":"SSE streaming with Bun.serve() requires sending initial data to flush headers. When using ReadableStream for SSE, if no existing events are available to send immediately, the client's fetch() will hang waiting for the first byte. Fix: Send an SSE comment (`: connected\\n\\n`) at the start of the stream to establish the connection. This is standard SSE practice - comments (lines starting with `:`) are ignored by clients but flush the response headers.","created_at":"1766597178157.0","tags":"bun,sse,server-sent-events,streaming,http,fetch,readablestream"}
|
|
270
|
+
{"id":"712a5885-771e-41e9-9a50-a105e954c566","information":"Evalite scorer pattern (corrected understanding): createScorer() returns an ASYNC FUNCTION directly, NOT an object with .scorer property. When calling child scorers in composite scorers, MUST await the scorer call directly: const result = await childScorer({ output, expected, input }); NOT: const result = childScorer.scorer({ ... }). This pattern bit TWO files recently (coordinator-discipline.ts and compaction-scorers.ts) when implementing overallDiscipline and compactionQuality composite scorers. Scorer return type: { score: number | null, message: string }. When computing weighted averages, use nullish coalescing: (result.score ?? 0) * weight. All three parameters (output, expected, input) must be passed even if not used by specific scorer.","created_at":"1766674598706.0","metadata":"{\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlk7jsilk9\",\"pattern\":\"createScorer async composition\"}","tags":"evalite,scorers,async-patterns,composite-scorers,evalite-api"}
|
|
216
271
|
{"id":"713d8d68-90fa-4b2f-9ea0-5b06a0e6e50c","information":"{\"id\":\"test-1765771061095-2yd4dw3psvh\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:41.095Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:41.455Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:41.095Z\"}"}
|
|
217
272
|
{"id":"7189cf77-2ceb-47c4-a354-0dc493876ded","information":"{\"id\":\"test-1765771127882-pdmhpieixbg\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:47.882Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:48.290Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:47.882Z\"}"}
|
|
218
273
|
{"id":"71db34a5-29be-4431-98a9-e6a1e9416c8e","information":"PGlite WAL accumulation prevention pattern: Added `doctor` command to CLI that checks WAL file count and size (thresholds: 50 files OR 50MB). Also added graceful shutdown handlers (SIGINT, SIGTERM) that run CHECKPOINT before exit. Critical for MCP tool invocations which are separate processes that may not cleanly close database. Without these, WAL files accumulate over days causing WASM memory exhaustion (930 WAL files = 930MB crashed PGlite). Doctor command uses assessWALHealth() helper to warn users and suggest export/reimport. Shutdown handlers use dynamic import to avoid circular deps and check if DB exists before checkpointing.","created_at":"2025-12-19T04:03:22.627Z","tags":"pglite,wal,checkpoint,cli,graceful-shutdown,mcp,wasm-memory,prevention-pattern"}
|
|
274
|
+
{"id":"721ab883-6fbe-4f60-975a-dd632e647e32","information":"TDD for eval-history module: Created progressive eval gating system with 3 phases (bootstrap/stabilization/production) based on run count and variance.\n\nKey implementation details:\n- Bootstrap: <10 runs, no gates\n- Stabilization: 10-50 runs, warn on regression\n- Production: >50 runs AND variance <0.1, fail on regression\n\nVariance calculation: Σ((x - μ)²) / n. For phase transitions, variance threshold is 0.1.\n\nTesting pattern: When testing high variance scenarios, need significant number of wild runs to overcome stable baseline. For 60 stable runs @ 0.85, need 50 alternating 0.1/0.9 runs to push variance above 0.1 threshold. Math: (60 stable + 50 wild) = variance ~0.103.\n\nRefactoring: Extracted readAllRecords() helper, simplified calculateVariance() to single reduce, combined early returns for length <= 1.\n\nFile structure: .opencode/eval-history.jsonl (JSONL format, one EvalRunRecord per line).","created_at":"1766634641029.0","tags":"tdd,eval-history,variance,progressive-gates,testing-patterns"}
|
|
219
275
|
{"id":"729c2510-6ae1-4701-ba06-5faef13ec1f2","information":"postgres.js DatabaseAdapter wrapper pattern: postgres.js uses tagged template literals for queries (sql`SELECT...`) but DatabaseAdapter expects (sql, params) signature. Key implementation details: 1) Use sql.unsafe(sqlString, params) for raw SQL with parameters. 2) postgres.js returns Row[] directly (not wrapped in {rows:[]}), so wrap result: {rows: await sql.unsafe(...)}. 3) Type assertion needed: (await sql.unsafe(...)) as unknown as T[] because postgres.js unsafe returns Row[] but we need T[]. 4) Transaction support: sql.begin() callback receives TransactionSql that behaves like sql, wrap it recursively with wrapPostgres(). 5) sql.begin() returns Promise<UnwrapPromiseArray<T>>, need type assertion: result as T. 6) Factory pattern: createSocketAdapter validates options (either path OR host+port, not both), creates postgres client, validates with ping query, wraps and returns. 7) External postgres in build config to avoid bundling. Successfully implemented for swarm-mail socket adapter.","created_at":"2025-12-17T17:54:54.552Z"}
|
|
220
276
|
{"id":"72ea1de7-fa6e-4c40-b641-d9b40e86772c","information":"npm registry API for latest versions: Use https://registry.npmjs.org/{package}/latest endpoint. Returns JSON with version field. Works for scoped packages (@types/node). Graceful handling: return undefined on 404 or network errors - don't throw. Used in swarm-research.ts for optional upgrade checking when checkUpgrades=true parameter passed. Performance consideration: Promise.all for parallel fetches when checking multiple packages.","created_at":"1766517242442.0","tags":"npm,registry,api,versions,upgrades,swarm-research,network-resilience"}
|
|
221
277
|
{"id":"738be6d8-6f06-45b5-9e48-f78c0689af64","information":"{\"id\":\"test-1765653641690-8bz4qvel2p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:20:41.690Z\",\"raw_value\":1}","created_at":"2025-12-13T19:20:41.892Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:20:41.690Z\"}"}
|
|
222
278
|
{"id":"7399bf68-936c-4129-bb20-dd9d332ddb1d","information":"{\"id\":\"pattern-1766263207762-zbob2h\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:40:07.762Z\",\"updated_at\":\"2025-12-20T20:40:07.762Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263208060.0","metadata":"{\"id\":\"pattern-1766263207762-zbob2h\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
223
279
|
{"id":"73a330d8-15ea-4ea6-80cf-9a9bdf82ae6b","information":"Integration tests should always use isolated collections to prevent test pollution. Best pattern discovered:\n\nFor semantic-memory tests:\n- Use unique collection names with timestamps in beforeEach\n- Example: test-feedback-${testSuite}-${Date.now()}\n- Always cleanup with storage.close() in afterEach\n\nFor database tests (PGLite/streams):\n- Use unique temp paths with timestamps and UUIDs\n- Example: /tmp/test-${testSuite}-${Date.now()}-${randomUUID()}\n- Always cleanup with closeDatabase() and rm -rf in afterEach\n\nWHY: Without isolation tests can interfere with each other causing flaky failures. Each test needs its own collection/database that gets cleaned up after the test runs.","created_at":"2025-12-14T22:36:54.874Z"}
|
|
280
|
+
{"id":"73ff886b-5337-4097-b731-b0cba6b81c23","information":"oh-my-opencode Background Agent Manager Architecture (https://github.com/code-yeongyu/oh-my-opencode)\n\n**Core Design Pattern: Fire-and-Forget with Event-Driven Completion**\n\n1. **Async Agent Spawning**:\n - background_task tool creates a new OpenCode session via client.session.create()\n - Uses client.session.promptAsync() (non-blocking) to fire off agent work\n - Returns task_id immediately to coordinator\n - Disables task and background_task tools in spawned agents (prevents infinite recursion)\n\n2. **Task Lifecycle Management**:\n - States: running to completed/error/cancelled\n - Completion Detection: Dual-path approach\n a) Event-driven: session.idle event triggers completion check\n b) Polling fallback: 2-second interval polls running tasks via client.session.status()\n - Todo Integration: Before marking complete, checks client.session.todo() for incomplete items (prevents premature completion)\n\n3. **Result Collection**:\n - background_output(task_id, block=false) tool\n - Non-blocking by default: returns current status/progress\n - Blocking mode: polls every 1s until completion (max 10min timeout)\n - Results fetched via client.session.messages() - extracts last assistant message text\n\n4. **Parent Notification**:\n - On completion, sends message to parent session: [BACKGROUND TASK COMPLETED] Task finished in 42s. Use background_output with task_id=bg_xyz to get results.\n - Uses client.session.prompt() to inject notification into parent conversation\n - Also shows OS toast notification via client.tui.showToast()\n - 200ms delay before notification to ensure session state stability\n\n5. **Progress Tracking**:\n - Monitors message.part.updated events for tool calls\n - Polls client.session.messages() to extract: tool call count, last tool used, last message text plus timestamp\n - Exposes via background_output status view\n\n6. **Error Handling**:\n - promptAsync errors caught, task marked error, error stored\n - Special case: agent.name undefined becomes friendly Agent not found message\n - Session deletion marks task cancelled, cleans up notifications\n\n7. **Cancellation**:\n - background_cancel(taskId) or background_cancel(all=true)\n - Calls client.session.abort() fire-and-forget (await would abort parent too!)\n - Marks task cancelled, sets completedAt\n\n**Novel Patterns for Swarm**:\n\n- Event-driven plus polling hybrid: More reliable than polling alone, faster than events alone\n- Todo-aware completion: Prevents completing while agent still has work queued\n- Fire-and-forget abort: Critical insight - awaiting abort() kills parent session\n- Progressive status fetching: Start with lightweight status, only fetch full messages on demand\n- Parent model inheritance: Background tasks inherit parent model config for consistency\n- Recursive task tracking: getAllDescendantTasks() walks tree of background tasks spawned by background tasks\n\n**Key Differences from Swarm**:\n- Uses OpenCode session API, not separate process spawn\n- No file reservations (oh-my-opencode does not have parallel file edit conflicts)\n- No structured decomposition - agents spawn ad-hoc background tasks\n- Coordinator explicitly told to use background_task for all exploration/research\n- Background tasks disabled from spawning more background tasks (vs Swarm allows recursive spawning)","created_at":"1766673403857.0","tags":"oh-my-opencode,background-agents,async,event-driven,opencode-api,task-lifecycle,research"}
|
|
224
281
|
{"id":"751c12a5-7ce2-4ad7-b91f-31e0f54ed076","information":"Svelte 5 component pattern for GraphControls: Used $props() rune for reactive props, defined TypeScript interfaces inline, used Catppuccin color palette via CSS custom properties with fallbacks. Component is purely presentational - takes features object, zoomLevel number, and onToggle callback. Used {#each} over const array with 'as const' assertion for type safety. Positioned absolutely with z-index 100 to float over canvas. Key insight: CSS custom properties (var(--cat-*)) don't need imports in script - they're runtime values.","created_at":"1766343278132.0","tags":"svelte,svelte5,components,typescript,catppuccin,ui,props"}
|
|
225
282
|
{"id":"753a6005-3ecb-4bae-bbd0-bd38cfb2ab55","information":"Lite model support implementation pattern: Add model selection based on file types to optimize swarm costs. Key learnings: (1) File-type inference is simple but effective - all .md/.mdx or all .test./.spec. files use lite model, (2) Priority system works well: explicit override > file inference > default, (3) Integration point is swarm_spawn_subtask which returns recommended_model in metadata for coordinator to use with Task(), (4) Used dynamic import for selectWorkerModel to avoid circular dependencies, (5) Added risks: [] to mock subtask to satisfy DecomposedSubtask schema. Pattern applies to any swarm optimization where different task types have different resource needs.","created_at":"2025-12-19T00:31:23.462Z","tags":"swarm,model-selection,optimization,cost-savings"}
|
|
226
283
|
{"id":"75fc6779-42fe-4c60-9836-c4bc3e2ee3e7","information":"BetterAuth cross-domain limitation (Dec 2024): crossSubDomainCookies only works for SUBDOMAINS of the same root domain (e.g., app1.example.com and app2.example.com). It does NOT work for different TLDs (epicweb.dev vs epicreact.dev vs epicai.pro). For Kent's use case, need a different solution: either (1) Central IdP on a shared domain, (2) Token exchange protocol between sites, or (3) Custom SSO plugin. This is a gap in BetterAuth that @badass may need to solve.","created_at":"2025-12-18T15:35:21.461Z"}
|
|
227
284
|
{"id":"76b2273e-c06d-44ba-b243-bc6180af1149","information":"{\"id\":\"pattern-1766263405842-l534tr\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:43:25.842Z\",\"updated_at\":\"2025-12-20T20:43:25.842Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263406063.0","metadata":"{\"id\":\"pattern-1766263405842-l534tr\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
285
|
+
{"id":"77225379-f2c0-41a4-8c01-c02fc000bdec","information":"Added hive_cells plugin tool for agentic cell querying. Complements the existing CLI `swarm cells` command. Tool provides flexible filtering (status, type, partial ID lookup, ready flag) and returns JSON array of cells. Implementation pattern: use HiveAdapter.queryCells() for filters, resolvePartialId() for ID lookup, getNextReadyCell() for ready flag. Added comprehensive integration tests covering all filter combinations. Key insight: The tool is more ergonomic than hive_query for agents because it always returns an array (even for single ID lookups) and has clearer semantics (hive_cells vs hive_query). Use hive_cells when you need to see what work is available or look up cells by criteria.","created_at":"1766618640018.0","metadata":"{\"file\":\"packages/opencode-swarm-plugin/src/hive.ts\",\"pattern\":\"tool-implementation\",\"test_file\":\"packages/opencode-swarm-plugin/src/hive.integration.test.ts\"}","tags":"hive,plugin-tools,agentic-querying,tdd"}
|
|
228
286
|
{"id":"7792b139-5a37-44a9-9c6b-a5578ad93d48","information":"SWARM-MAIL EXTRACTION COMPLETE (Dec 2025): Successfully extracted swarm-mail as standalone npm package using adapter pattern. Key learnings: 1) Turborepo needs packageManager field in root package.json, 2) bun build doesn't resolve workspace:* - must build dependencies first with turbo, 3) TypeScript declarations need emitDeclarationOnly:true (not noEmit) plus tsc in build script, 4) Re-export everything from streams/index.ts for backward compatibility, 5) Coordinator should NOT reserve files - only workers reserve their own files. Architecture: createSwarmMailAdapter(db, projectKey) for DI, getSwarmMail(path) for convenience singleton. All 230 tests pass.","created_at":"2025-12-15T00:22:09.754Z"}
|
|
287
|
+
{"id":"7792d67f-6847-4eaf-9707-eab44962619c","information":"Bun test in evals/ directory has module resolution bug: \"Export named 'inject' not found in module 'bun:test'\" error occurs for all test files in evals/scorers/, even though identical imports work fine in src/. Error appears in both .test.ts and .evalite-test.ts files. Affects outcome-scorers.evalite-test.ts (pre-existing) and new coordinator-discipline tests. Tests in src/ directory run fine with same bun version (1.3.4). Workaround: manually verify exports with grep and typecheck. Root cause likely tsconfig/module resolution difference between src/ and evals/ directories, or corrupted bun test cache for evals path. Code is valid - typecheck passes, exports verified.","created_at":"1766610914678.0","tags":"bun,testing,module-resolution,evalite,evals,bug,workaround"}
|
|
229
288
|
{"id":"77e67fcb-446f-4444-8a27-624e43bc16c7","information":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T00:37:17.036Z\",\"updated_at\":\"2025-12-17T00:37:17.036Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T00:37:17.973Z","metadata":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
230
289
|
{"id":"7809bc09-f952-4a0b-9e8b-d1787500a22d","information":"{\"id\":\"pattern-1766593303550-g2j1y9\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:21:43.550Z\",\"updated_at\":\"2025-12-24T16:21:43.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766593303830.0","metadata":"{\"id\":\"pattern-1766593303550-g2j1y9\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
290
|
+
{"id":"7854d43a-abae-4b11-84cd-c24165d81999","information":"SwarmDb type is LibSQLDatabase from drizzle-orm/libsql, NOT the raw libSQL Client. In tests, create manually: const client = createClient({url: ':memory:'}); const db = drizzle(client, {schema}); await createLibSQLMemorySchema(client). The createInMemoryDb() helper only initializes streams schema, not memory schema. For memory tests, must manually call createLibSQLMemorySchema(client) after creating the drizzle instance. Client has .close(), SwarmDb does not - close the underlying client instead.","created_at":"1766672874819.0","metadata":"{\"source\":\"mjl1kscsxga\",\"context\":\"memory-linking test setup\"}","tags":"swarm-mail,testing,drizzle,libsql,memory,setup"}
|
|
231
291
|
{"id":"7a145b41-f975-4b3f-b849-9b2a4d96568c","information":"{\"id\":\"pattern-1766341864753-8kv4c7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T18:31:04.753Z\",\"updated_at\":\"2025-12-21T18:31:04.753Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766341864995.0","metadata":"{\"id\":\"pattern-1766341864753-8kv4c7\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
232
292
|
{"id":"7a3a796e-8a02-46b7-8c94-d9e0dc317127","information":"Successfully implemented 5 pre-built analytics queries for swarm-mail event sourcing system using TDD methodology. Queries built using QueryBuilder fluent API with parameterized SQL to prevent injection. \n\nQueries implemented:\n1. failed-decompositions: Groups subtask_outcome failures by strategy, shows failure counts and avg duration\n2. strategy-success-rates: Calculates success rate percentage per strategy with total/successful/failed counts\n3. lock-contention: Identifies files with most reservations using reservation_released events, computes avg hold time\n4. agent-activity: Tracks agent event counts, first/last timestamps, active time spans\n5. message-latency: Computes p50/p95/p99 percentiles using window functions (ROW_NUMBER OVER)\n\nKey patterns learned:\n- Use QueryBuilder for consistency but raw SQL acceptable for complex queries (percentiles)\n- Always use parameterized queries (? placeholders) for security\n- json_extract() for querying JSON data fields in libSQL\n- CAST(...AS REAL) for floating-point aggregates (AVG, percentage calculations)\n- CASE WHEN for conditional aggregation (counting successes/failures separately)\n- Window functions (ROW_NUMBER OVER) for percentile approximation in SQLite/libSQL\n- Each query exports typed filter interfaces for type-safe usage\n\nTesting approach:\n- RED: Write comprehensive test expectations first (38 tests)\n- GREEN: Implement minimal code to pass (5 query modules + index)\n- Tests verify SQL structure, parameter handling, filter support, export contracts\n- All tests passing, typecheck clean, UBS scan clean","created_at":"1766433854114.0","metadata":"{\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjhkium6rpy\",\"query_count\":5,\"files_created\":7,\"tests_written\":38}","tags":"analytics,tdd,query-builder,libsql,event-sourcing,sql,percentiles,window-functions"}
|
|
233
293
|
{"id":"7a44a74d-5688-46eb-87ad-1740f1a057ae","information":"TDD RED phase regression testing discovered issues beyond the target bugs: (1) appendEventDrizzle doesn't work with trigger-based sequence generation in test-libsql.ts - needs manual sequence assignment or different approach, (2) Hive operations pass undefined to libSQL which throws TypeError. For regression tests, better to use direct SQL inserts to test the query logic in isolation, not the full event sourcing stack.","created_at":"1766415554151.0","tags":"testing,tdd,regression-tests,pglite-migration,libSQL"}
|
|
@@ -243,14 +303,22 @@
|
|
|
243
303
|
{"id":"7e06f0d4-1231-4b91-943a-b55587178b6a","information":"Daemon-first architecture pattern for PGlite: Auto-start daemon on first database access with graceful fallback. Implementation uses ensureDaemonRunning() function that: 1) checks if daemon running, 2) attempts auto-start if not, 3) returns {success, mode, error?} result. DatabaseLive Layer calls ensureDaemonRunning() and routes based on result - success routes to DatabaseClient (socket), failure falls back to DirectDatabaseLive with warning. This solves PGlite single-connection limitation by default while maintaining backwards compatibility. Key insight: NEVER throw from ensureDaemonRunning - always return a result, even on failure. Caller handles fallback logic. TDD approach: wrote 4 tests first (RED), implemented ensureDaemonRunning (GREEN), added JSDoc (REFACTOR). All 32 tests passing.","created_at":"2025-12-19T17:22:37.415Z","tags":"pglite,daemon,auto-start,tdd,graceful-fallback,architecture"}
|
|
244
304
|
{"id":"7ec67bba-2397-4eba-b563-7df4f17d02f5","information":"OpenCode plugin hook interface pattern: hooks use string literal keys with optional function signatures. Format: \"namespace.event\"?: (input: {...}, output: {...}) => Promise<void>. The output parameter is mutable - plugins append to arrays or modify properties. Single-line formatting is preferred by prettier for simple signatures. Session compaction hooks allow plugins to inject context before summarization.","created_at":"2025-12-17T18:01:37.726Z"}
|
|
245
305
|
{"id":"7f00daa2-8e7d-419b-9810-88647287e18d","information":"{\"id\":\"test-1766593254903-gipm8etumjg\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:20:54.903Z\",\"raw_value\":1}","created_at":"1766593255286.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:20:54.903Z\"}"}
|
|
306
|
+
{"id":"7f1bb06a-59ac-4c2c-81d1-b6339a09e765","information":"Coordinator observability documentation pattern: When documenting eval systems, separate OVERVIEW (what it measures, how to run it) from DEEP DIVE (implementation details, capture flow, violation patterns). \n\nStructure used:\n1. README.md overview - lists scorers, data sources, example output\n2. evals/README.md deep dive - capture flow diagram, JSONL format, viewing commands, integration points\n3. AGENTS.md pointer - brief summary + link to deep dive\n\nKey insight: Users need two levels:\n- Quick reference: \"How do I run coordinator eval?\" → AGENTS.md or README.md\n- Implementation details: \"How does session capture actually work?\" → evals/README.md deep dive\n\nDocumentation assets that made it clear:\n- ASCII flow diagram showing capture → detect → emit → eval\n- Event type table (DECISION/VIOLATION/OUTCOME/COMPACTION with subtypes)\n- Example JSONL file (5 lines showing real event progression)\n- jq command examples for viewing sessions\n- Integration points table mapping code locations to events\n\nAvoid: Dumping all technical details in one place. Progressive disclosure lets users find what they need.","created_at":"1766640380798.0","tags":"documentation,evals,coordinator,observability,progressive-disclosure"}
|
|
246
307
|
{"id":"803fddcb-ef84-4df9-8038-c69a6ebee9c5","information":"Course-builder OAuth Device Flow implementation reference (Dec 2024): Full RFC 8628 implementation exists in apps/ai-hero/src/app/oauth/device/. Key components: (1) POST /oauth/device/code - generates device_code + user_code with human-readable-ids, 10min expiry, (2) /activate page where user enters user_code, (3) device-verification tRPC router that marks verification with verifiedByUserId, (4) POST /oauth/token polls for access token. Schema in packages/adapter-drizzle with DeviceVerification table. This pattern should be extracted into @badass/auth for CLI and Workshop App authentication.","created_at":"2025-12-18T15:41:09.121Z"}
|
|
247
308
|
{"id":"8055fead-5592-40da-afa1-8a64d98b9afe","information":"{\"id\":\"test-1766350691029-ckp899oybls\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:58:11.029Z\",\"raw_value\":1}","created_at":"1766350691387.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:58:11.029Z\"}"}
|
|
309
|
+
{"id":"806c0104-3ddd-4a81-873f-a83245ecdc69","information":"Planning Guardrails - Coordinator Violation Detection: Real-time pattern matching for coordinators doing work instead of delegating. VIOLATION_PATTERNS: FILE_MODIFICATION_TOOLS [\"edit\", \"write\"], RESERVATION_TOOLS [\"swarmmail_reserve\", \"agentmail_reserve\"], TEST_EXECUTION_PATTERNS (regex: bun test, npm test, jest, vitest, mocha, *.test.*, *.spec.*). Detection flow: check agentContext === \"coordinator\" first (short-circuit workers), pattern match tool names/args, call captureCoordinatorEvent() immediately (no batching). Violation types: coordinator_edited_file (should spawn workers), coordinator_ran_tests (workers verify), coordinator_reserved_files (workers reserve before editing), no_worker_spawned (after hive_create_epic without spawning). Coordinator context state: isCoordinator flag, epicId, sessionId, activatedAt timestamp, 4-hour timeout. Used by eval-capture.ts for scoring coordinator discipline. Non-blocking - emits warnings, doesn't prevent execution. TodoWrite analysis: detects 6+ todos with file modification patterns, suggests swarm decomposition instead.","created_at":"1766672911325.0","tags":"planning-guardrails,coordinator-discipline,violation-detection,delegation"}
|
|
310
|
+
{"id":"808c538c-7706-4fea-a814-637e762f799b","information":"Cell ID partial matching fix: Changed LIKE pattern in resolvePartialIdDrizzle from `%-${partialHash}%-%` to `%${partialHash}%`. The old pattern only matched the middle hash segment, failing when users provided the timestamp+random segment (end of ID like \"mjkmdat26vq\"). The new pattern matches ANY substring of the cell ID (project name, hash, OR timestamp segments). Cell ID format: {project-name}-{hash}-{timestamp}{random}. Tests confirm matching works for full ID, hash segment, timestamp+random segment, and partial substrings.","created_at":"1766617685906.0","tags":"hive,cell-id,pattern-matching,sql,like-query,bug-fix"}
|
|
311
|
+
{"id":"80b472d4-2c8b-4253-a8fa-230ab282588c","information":"Successfully wired opencode-swarm-plugin memory tools to real swarm-mail adapter (Wave 1-3 features). Key learnings: (1) swarm-mail's createMemoryAdapter is synchronous, not async; (2) Real adapter uses 'mem-' ID prefix, not 'mem_'; (3) AI SDK v6 uses .output property, not .object; (4) Drizzle 0.41+ uses uniqueIndex().on() syntax, not object with columns/name; (5) Graceful degradation works - LLM failures fall back to heuristics (exact match → NOOP, no match → ADD); (6) autoTag/autoLink/extractEntities only work in store(), not upsert() yet (upsert only returns {id, operation, reason}); (7) Fixed TypeScript build by adding default case with exhaustiveness check in switch.","created_at":"1766678340383.0","metadata":"{\"task_id\":\"opencode-swarm-monorepo-lf2p4u-mjlm824m4d0\",\"duration_ms\":720000,\"tests_passing\":7}","tags":"swarm-mail,plugin,memory,integration,drizzle,ai-sdk"}
|
|
312
|
+
{"id":"814aaf36-df25-4698-989f-d6b596062532","information":"Added 7 new coordinator event types to eval capture system (mjl0n8rv0th):\n\n**New DECISION subtypes:**\n- researcher_spawned - tracks when coordinator delegates research instead of querying pdf-brain/context7 directly\n- skill_loaded - tracks skills_use() calls for domain knowledge\n- inbox_checked - tracks swarmmail inbox monitoring frequency\n- blocker_resolved - tracks coordinator unblocking workers\n- scope_change_approved/rejected - tracks scope expansion decisions\n\n**New OUTCOME subtypes:**\n- blocker_detected - tracks when workers report being blocked\n\n**Implementation pattern:**\n1. Updated CoordinatorEventSchema discriminated union in eval-capture.ts (lines 141-151, 175-180)\n2. Added helper capture functions (captureResearcherSpawned, captureSkillLoaded, etc.) following captureCompactionEvent pattern\n3. Added 4 new scorers to coordinator-discipline.ts:\n - researcherSpawnRate - binary (1.0 if spawned, 0.0 if not)\n - skillLoadingRate - lenient (1.0 if loaded, 0.5 if not - helpful but not critical)\n - inboxMonitoringRate - binary based on worker activity\n - blockerResponseTime - normalized response time (<5min=1.0, >15min=0.0)\n\n**Key insight:** When adding new Zod discriminated union values, must change .ts imports to .ts in test files temporarily OR clear Bun cache, because .js imports cache old enum values.","created_at":"1766641879870.0","tags":"eval-capture,coordinator-events,zod,discriminated-unions,evalite,scorers"}
|
|
313
|
+
{"id":"8180df3d-7637-4d9d-99e1-bfe43f41d9a2","information":"Research phase spawn instruction pattern: runResearchPhase() generates spawn instructions for coordinator, NOT actual spawning. Each technology gets unique research_id (research-{tech}-{timestamp}-{random}), formatResearcherPrompt() call, and ResearchSpawnInstruction object. Coordinator uses these to call Task() tool. Pattern: generate → return → coordinator spawns. Avoids tight coupling - runResearchPhase is pure generation, coordinator handles execution. This matches worker pattern where formatSubtaskPromptV2() generates prompts but doesn't spawn.","created_at":"1766619995711.0","metadata":"{\"file\":\"swarm-orchestrate.ts\",\"line\":2187,\"function\":\"runResearchPhase\"}","tags":"swarm,research-phase,spawn-instructions,separation-of-concerns,coordinator-pattern"}
|
|
248
314
|
{"id":"820d4c41-6de0-436b-a2e7-70a79830f959","information":"Implemented Four Golden Signals analytics queries for swarm-mail event store. Key learnings:\n\n**JSON boolean handling in SQLite/libSQL:** JSON stores booleans as 0/1, not strings. Use `json_extract(data, '$.success') = 0` for false, NOT `= 'false'`. This is because json_extract returns native SQLite types (0 for false, 1 for true), not JSON strings.\n\n**json_each() table aliasing:** When using json_each() in a FROM clause with other tables, ALWAYS alias it and qualify column names. `json_each(events.data, '$.paths') as paths` then use `paths.value`, not `value`. Without aliasing, SQLite throws \"ambiguous column name: type\" because json_each has its own \"type\" column.\n\n**Correct json_each syntax:** Use `json_each(table.column, '$.field')` with the JSON path, NOT `json_each(json_extract(...))`. Direct syntax: `FROM events, json_each(events.data, '$.paths') as paths` then `paths.value` for array elements.\n\n**Time filter parameterization:** For optional time filters, use pattern: `WHERE (? IS NULL OR timestamp >= ?) AND (? IS NULL OR timestamp <= ?)`. Pass same value twice: [sinceMs, sinceMs, untilMs, untilMs]. This allows NULL to skip the filter while still using parameterized queries.\n\n**Test patterns:** Integration tests use `createInMemorySwarmMailLibSQL()`, then `swarmMail.getDatabase()` for raw SQL. Insert test data with `db.query(sql, [params])`, not `db.execute()` (DatabaseAdapter only has query method).\n\n**Four Golden Signals mapping:**\n1. Latency = task duration by strategy (subtask_outcome events)\n2. Traffic = events per hour (time-series bucketing with strftime)\n3. Errors = failed tasks by agent (success=false filter)\n4. Saturation = active reservations (created but not released)\n5. Conflicts = most contested files (json_each over paths array)","created_at":"1766594928898.0","tags":"swarm-mail,analytics,libsql,sqlite,json,four-golden-signals,testing"}
|
|
249
315
|
{"id":"8228a158-ceba-4195-bbd4-66039caeee34","information":"{\"id\":\"pattern-1766259539198-8szypl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:38:59.198Z\",\"updated_at\":\"2025-12-20T19:38:59.198Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766259539429.0","metadata":"{\"id\":\"pattern-1766259539198-8szypl\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
250
316
|
{"id":"825ccc37-c833-42e6-9069-4a531215cea2","information":"{\"id\":\"test-1765749524072-fs3i37vpoik\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T21:58:44.072Z\",\"raw_value\":1}","created_at":"2025-12-14T21:58:44.282Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T21:58:44.072Z\"}"}
|
|
251
317
|
{"id":"82945143-4b25-418b-acaa-e3a02a2eb7b8","information":"{\"id\":\"test-1766104210635-2mewizal9aa\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-19T00:30:10.635Z\",\"raw_value\":1}","created_at":"2025-12-19T00:30:10.859Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-19T00:30:10.635Z\"}"}
|
|
318
|
+
{"id":"830d6023-74d1-492a-99af-6a67fa7692b6","information":"Coordinator research delegation pattern: Coordinators must NEVER call documentation/research tools directly (repo-crawl_*, webfetch, context7_*, pdf-brain_*). These tools dump massive context that exhausts expensive Sonnet context. Instead, use swarm_spawn_researcher to spawn a researcher worker who fetches in disposable context, stores details in semantic-memory, and returns a condensed summary. Implementation: (1) COORDINATOR_PROMPT has explicit forbidden tools section, (2) Phase 1.5 Research Phase shows spawn pattern, (3) Compaction hook reinforces with ASCII header and repeated identity statements, (4) runResearchPhase() generates spawn_instructions array for coordinator to use with Task().","created_at":"1766620631985.0","tags":"coordinator,research,delegation,forbidden-tools,swarm,context-management,spawn-researcher"}
|
|
252
319
|
{"id":"8311ea42-e882-4b72-8f23-fc6e83250e5f","information":"{\"id\":\"test-1765751832219-4zgo42wxmyu\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:37:12.219Z\",\"raw_value\":1}","created_at":"2025-12-14T22:37:12.483Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:37:12.219Z\"}"}
|
|
253
320
|
{"id":"834e33d4-b8d4-4c80-8a70-5d69d612efb0","information":"swarm_complete review gate UX fix: Changed review gate responses from { success: false, error: \"...\" } to { success: true, status: \"pending_review\" | \"needs_changes\", message: \"...\", next_steps: [...] }. This reframes the review gate as a workflow checkpoint, not an error state. Workers did nothing wrong - they just need to wait for coordinator review. The logic of when to check review status was already correct, only the response format needed fixing. Added 3 tests covering: (1) pending_review when no review attempted, (2) needs_changes when review rejected, (3) skip_review bypasses gate. Also added markReviewRejected() test helper to swarm-review.ts for simulating rejected reviews.","created_at":"2025-12-18T21:40:00.165Z","tags":"swarm,review-gate,ux-fix,workflow-state,testing"}
|
|
321
|
+
{"id":"83d94a7a-4d4a-4407-a244-ac04c0199e0d","information":"libSQL executeMultiple() required for migrations: The libSQL client's execute() method can only handle single SQL statements. For migrations with multiple CREATE TABLE, ALTER TABLE, or other DDL statements, use client.executeMultiple(sql) instead. This is critical for migration files where SQL strings contain multiple statements separated by semicolons. Example: await db.executeMultiple(migration.up) not await db.execute(migration.up). Symptom: Migration appears to succeed but tables/columns not created, \"no such table\" errors at runtime.","created_at":"1766643790103.0","metadata":"{\"severity\":\"high\",\"component\":\"swarm-mail\",\"subsystem\":\"memory\"}","tags":"libsql,migrations,executeMultiple,sql,ddl"}
|
|
254
322
|
{"id":"83fad083-f9d7-4b0b-9434-3750b67c0ac8","information":"swarm-mail adapter instance mismatch bug RESOLVED: The getInbox empty bug was caused by TWO separate adapter caches. `libsql.convenience.ts` had its own `instances` map caching SwarmMailAdapter wrappers, while `store.ts` had `adapterCache` map for DatabaseAdapter instances. When tests called `getSwarmMailLibSQL(testProjectPath)`, it created an adapter cached in `instances`. When `sendSwarmMessage` called `appendEvent()`, it created a DIFFERENT adapter cached in `adapterCache`. Messages were written to one database instance and read from another = empty inbox.\n\n**Fix**: Made all adapter creation go through the SAME cache by:\n1. Exporting `getOrCreateAdapter` from `store.ts` (the one with caching logic)\n2. Making `store-drizzle.ts` delegate to `store.ts` for adapter creation (not create its own)\n3. Making `getSwarmMailLibSQL` use the shared cache from `store.ts` instead of creating adapters directly\n\n**Critical Insight**: Parameter order mattered - `store.ts` uses `(dbOverride, projectPath)` while `store-drizzle.ts` uses `(projectPath, dbOverride)`. Had to swap them when delegating.\n\n**Test Pattern**: Integration tests that use `getSwarmMailLibSQL` now share adapters with `sendSwarmMessage`, `appendEvent`, and `getInbox` - all operations use the same database instance as intended.\n\nThis was NOT the URL_INVALID bug (already fixed in commit 7bf9385). This was a separate instance mismatch issue discovered after URL normalization was resolved.","created_at":"1766423294803.0","metadata":"{\"files\":[\"swarm-mail/src/streams/store.ts\",\"swarm-mail/src/streams/store-drizzle.ts\",\"swarm-mail/src/libsql.convenience.ts\",\"swarm-mail/src/streams/swarm-mail.ts\"],\"pattern\":\"adapter-caching\",\"tests_fixed\":3}","tags":"swarm-mail,adapter-cache,bug-fix,integration-tests,database-instance"}
|
|
255
323
|
{"id":"8470c067-528b-43b6-a491-a9a5190c4c08","information":"{\"id\":\"pattern-1766264411599-5hpzj8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:00:11.599Z\",\"updated_at\":\"2025-12-20T21:00:11.599Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766264411818.0","metadata":"{\"id\":\"pattern-1766264411599-5hpzj8\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
256
324
|
{"id":"8476d7c1-9768-44a6-a378-dcaca8447aae","information":"hive_sync git remote handling: Fixed bug where hive_sync would fail with \"No configured push destination\" error when no git remote is configured. Root cause: implementation unconditionally tried to push/pull even when no remote exists. Solution: Check if remote exists with `git remote` command before attempting pull/push operations. If no remote, return success message \"(no remote configured)\" instead of failing. This allows local-only git repos to use hive_sync without errors. Implementation detail: The commit of .hive changes happens BEFORE the pull check, ensuring .hive state is committed even if pull/push are skipped.","created_at":"2025-12-18T18:02:37.061Z"}
|
|
@@ -258,12 +326,14 @@
|
|
|
258
326
|
{"id":"84f2229d-1f63-44d2-84f3-ba5884a13b32","information":"{\"id\":\"pattern-1766260911605-ad5ur8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:01:51.605Z\",\"updated_at\":\"2025-12-20T20:01:51.605Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260911893.0","metadata":"{\"id\":\"pattern-1766260911605-ad5ur8\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
259
327
|
{"id":"85310b4f-fc98-4675-9b6d-ae6f1d593306","information":"Drizzle ORM PGlite Adapter Integration Pattern:\n\n**Problem:** Projection wrappers calling `toSwarmDb()` failed with \"DatabaseAdapter does not have getClient() method\" when passed PGlite instances. `toSwarmDb()` only worked with LibSQLAdapter (which has `getClient()` method).\n\n**Root Cause:** swarm-mail supports BOTH PGlite and LibSQL, but Drizzle wrappers assumed LibSQL-only. `getDatabase()` returns PGlite, not LibSQLAdapter.\n\n**Solution:** Universal `toDrizzleDb()` function that:\n1. Detects if database is LibSQLAdapter (has `getClient()` method) OR PGlite (has `query`/`exec` methods)\n2. For LibSQL: uses `drizzle-orm/libsql` with `getClient()`\n3. For PGlite: uses `drizzle-orm/pglite` adapter directly with PGlite instance\n\n**Implementation:**\n```typescript\nexport function toDrizzleDb(db: any): SwarmDb {\n // LibSQL path\n if (db && typeof db.getClient === 'function') {\n return createDrizzleClient(db.getClient());\n }\n \n // PGlite path \n if (db && typeof db.query === 'function' && typeof db.exec === 'function') {\n const { drizzle } = require('drizzle-orm/pglite');\n const { schema } = require('./db/schema/index.js');\n return drizzle(db, { schema });\n }\n \n throw new Error('Database must be LibSQLAdapter or PGlite');\n}\n```\n\n**Files Changed:**\n- `libsql.convenience.ts`: Added `toDrizzleDb()`, exported from index\n- `projections-drizzle.ts`: Changed `toSwarmDb()` to `toDrizzleDb()` in all wrappers\n- `store-drizzle.ts`: Changed `toSwarmDb()` to `toDrizzleDb()` in all wrappers\n\n**Testing:** All projection queries (getActiveReservations, appendEvent, etc.) now work with both PGlite AND LibSQL.","created_at":"1766331440420.0","tags":"drizzle,pglite,libsql,database-adapter,type-detection"}
|
|
260
328
|
{"id":"85d1e309-e76d-4617-86ec-bc6f556d9e87","information":"PGlite Schema Sync with Drizzle Schema:\n\n**Problem:** Drizzle INSERT queries failed with \"relation does not exist\" or \"no unique constraint\" errors when using PGlite. Tables like `swarm_contexts`, `cursors`, `eval_decompositions`, `eval_outcomes` were defined in Drizzle schema but missing from PGlite initialization.\n\n**Root Cause:** Drizzle doesn't auto-create tables - it's just a query builder. PGlite schema initialization (`initializeSchema()` in streams/index.ts) was incomplete. It only had core tables (events, agents, messages, reservations, locks).\n\n**Solution:** Added missing tables to PGlite schema to match Drizzle schema exactly:\n\n1. **swarm_contexts** - checkpoint/recovery tracking (needs PRIMARY KEY for ON CONFLICT)\n2. **cursors** - stream position tracking\n3. **eval_decompositions** - task decomposition tracking \n4. **eval_outcomes** - subtask outcome recording\n\n**Critical Detail:** PostgreSQL/PGlite `ON CONFLICT (column)` requires PRIMARY KEY or UNIQUE constraint on that column. Drizzle schema had `.primaryKey()` but PGlite SQL needed explicit `PRIMARY KEY` in CREATE TABLE.\n\n**Files Changed:**\n- `streams/index.ts`: Added 4 missing tables to `initializeSchema()`\n\n**Pattern:** When adding Drizzle tables, ALWAYS add equivalent CREATE TABLE to PGlite schema. Keep them in sync.","created_at":"1766331453627.0","tags":"pglite,schema-sync,drizzle,database-migration,on-conflict"}
|
|
329
|
+
{"id":"867cdd02-1c7f-4c54-bfd7-29662c5eefb0","information":"**Oh-My-OpenCode Background Agent System**\n\nParallel agent execution via OpenCode SDK's `client.session` API:\n\n**Launch Flow:**\n```typescript\nclass BackgroundManager {\n async launch(input: LaunchInput): Promise<BackgroundTask> {\n // 1. Create child session\n const session = await client.session.create({\n body: {\n parentID: input.parentSessionID,\n title: `Background: ${input.description}`,\n },\n });\n \n // 2. Start async prompt (non-blocking)\n client.session.promptAsync({\n path: { id: session.id },\n body: {\n agent: input.agent,\n tools: { task: false, background_task: false }, // Prevent recursion\n parts: [{ type: \"text\", text: input.prompt }],\n },\n });\n \n // 3. Poll for completion\n this.startPolling(); // Checks session status periodically\n }\n}\n```\n\n**Status Tracking:**\n- Polls message storage to detect task completion\n- Tracks tool call count as progress indicator\n- Detects TODO creation as implicit result signal\n- Status transitions: `running → completed | failed | cancelled`\n\n**Notification System:**\n- Accumulates notifications per parent session\n- Hook: `background-notification` injects notifications on main session response\n- Clears notifications after injection\n\n**Tools Provided:**\n- `background_task` - Launch background agent\n- `background_output` - Check task status\n- `background_cancel` - Cancel running task\n\n**Novel Pattern - TODO as Result:**\n- Background agents create TODOs instead of returning values\n- Main agent polls for TODO creation as completion signal\n- Async result passing via shared TODO list\n\n**Swarm Adoption:** Similar to our worker spawn pattern, but uses TODO list instead of Swarm Mail for coordination.","created_at":"1766673485402.0","tags":"oh-my-opencode,background-agents,async-execution,polling,todos"}
|
|
261
330
|
{"id":"872f41e4-f752-4ed0-aca9-2c2222f27768","information":"DurableDeferred Integration in Swarm: swarm_complete now resolves a DurableDeferred keyed by bead_id to enable cross-agent task completion signaling. This allows coordinators to await worker completion without polling. Implementation: After closing the cell, swarm_complete checks for a deferred with URL `deferred:${bead_id}` and resolves it with {completed: true, summary} payload. Non-fatal if deferred doesn't exist (backward compatibility). Coordinators can create the deferred BEFORE spawning workers, then await its resolution. Uses libSQL database via getSwarmMailLibSQL(). Returns deferred_resolved: boolean and deferred_error: string in response for debugging. Future improvement: Use Effect-TS DurableDeferred service instead of raw SQL for type safety and error handling.","created_at":"1766341155834.0","tags":"swarm,durabledeferred,effect-ts,cross-agent-signaling,task-completion"}
|
|
262
331
|
{"id":"888e5037-d33c-4182-8ff5-7c1466977f38","information":"Debug package integration for swarm-mail: Successfully implemented debug logging with namespace filtering (swarm:events, swarm:reservations, swarm:messages, swarm:checkpoints). Key learnings: (1) debug package checks DEBUG env var at import time, so tests need to use debug.enable()/disable() programmatically, NOT process.env.DEBUG directly. (2) Capturing stderr in tests requires proper typing: `process.stderr.write = ((chunk: Buffer | string) => {...}) as typeof process.stderr.write` to satisfy TypeScript. (3) Dynamic import in tests (`await import(\"./debug.ts\")`) ensures debug state is picked up after enable/disable calls. (4) debug package automatically adds timestamps and subsystem prefixes - no manual formatting needed. (5) For human debugging only - AI agents should use structured errors instead. Console output bloats AI context.","created_at":"1766433142777.0","tags":"debug,logging,testing,typescript,swarm-mail,environment-variables"}
|
|
263
332
|
{"id":"899fb8b8-d5fb-464d-a493-a8e5131e3f0e","information":"Svelte 5 runes pattern for reactive config objects: Use `$derived()` for objects that depend on props. WRONG: `const config = { width, height }` (captures initial value). CORRECT: `const config = $derived({ width, height })`. This ensures reactivity when props change. Also applies to computed values derived from props.","created_at":"1766343401494.0","tags":"svelte,svelte5,runes,reactivity,derived,props"}
|
|
264
333
|
{"id":"8a14fcbb-5546-4bdb-9a0b-91ac985a85fb","information":"DurableLock integration pattern for event-sourced file reservations:\n\n**Architecture:**\n- Keep existing event+projection architecture (reserveFiles/releaseFiles)\n- Add DurableLock underneath for actual mutex\n- Store lock holder IDs in both event (lock_holder_ids array) and projection (lock_holder_id column)\n- Release locks using stored holder IDs\n\n**Implementation steps:**\n1. Extend event schemas with lock_holder_ids optional field\n2. Add lock_holder_id column to projection table schema\n3. Update projection handler to store lock holder IDs\n4. In reserve function: call DurableLock.acquire() for each path, store holders\n5. In release function: read holders from projection, call DurableLock.release()\n\n**Key learnings:**\n- DurableLock requires holder ID for release - must be persisted\n- Locks auto-expire via TTL if release fails (graceful degradation)\n- Effect.runPromise() pattern for calling Effect-based DurableLock from async code\n- Schema changes require updating BOTH Drizzle schema (db/schema) AND libsql-schema.ts DDL\n\n**Gotchas:**\n- Database adapter must be passed explicitly to all store/projection functions (dbOverride parameter)\n- Schema initialization (createLibSQLStreamsSchema) must be called on first DB access\n- Bulk INSERT with lock_holder_ids requires careful parameter indexing ($baseParamCount+4+i pattern)\n\n**Test pattern:**\nQuery locks table directly after reserve/release to verify DurableLock was used","created_at":"1766341450388.0","metadata":"{\"date\":\"2025-12-21\",\"epic\":\"opencode-swarm-monorepo-lf2p4u-mjg1elo0g21\",\"task\":\"opencode-swarm-monorepo-lf2p4u-mjg1elo9uoa\",\"agent\":\"BoldStone\"}","tags":"durablelock,event-sourcing,file-reservations,libsql,effect-ts,swarm-mail"}
|
|
265
334
|
{"id":"8a396b22-7a39-489a-ae5d-b5332b8f350e","information":"Course Builder monorepo structure for shared database adapters:\n\n- packages/core - defines CourseBuilderAdapter interface with 100+ methods, domain schemas (Zod), business logic\n- packages/adapter-drizzle - implements adapter interface, exports schema factories (getCourseBuilderSchema(tableFn)), supports MySQL/PG/SQLite via type discrimination\n- apps/* - each app creates own db instance, own table prefix, calls schema factory, passes both to adapter\n\nKey files:\n- packages/core/src/adapters.ts - interface definition with generic TDatabaseInstance\n- packages/adapter-drizzle/src/lib/mysql/index.ts - mySqlDrizzleAdapter(client, tableFn) implementation\n- apps/*/src/db/mysql-table.ts - app-specific mysqlTableCreator with unique prefix\n- apps/*/src/db/schema.ts - calls getCourseBuilderSchema(mysqlTable) to get prefixed tables\n- apps/*/src/db/index.ts - creates db instance, exports courseBuilderAdapter = DrizzleAdapter(db, mysqlTable)\n\nPattern enables 15+ apps sharing same database with table isolation via prefixes like zER_, zEW_, EDAI_, AI_, etc.","created_at":"2025-12-14T23:56:13.303Z"}
|
|
266
335
|
{"id":"8a59059a-7374-49a6-ad4e-4dc5a4160a5c","information":"Docker test infrastructure approach for egghead migration:\n\n1. Use pg_dump for REAL schemas - don't manually recreate Rails table definitions. The schema has 50+ columns per table with specific defaults, constraints, and indexes.\n\n2. Export strategy (pragmatic path):\n - Option 2 (now): Export 2 POC courses with full schema via pg_dump --schema-only + COPY for data\n - Option 1 (next): Generalize to N random courses with --courses=N flag\n - Option 3 (goal): Full sanitized production dump\n\n3. Data anonymization: Replace emails with instructor{id}@test.egghead.io, null out authentication_token, encrypted_password, confirmation_token, reset_password_token\n\n4. Key tables in dependency order: users → instructors → series → lessons → tags → taggings → playlists → tracklists\n\n5. Shell script approach (export-poc-courses.sh) is cleaner than TypeScript for pg_dump operations - native psql/pg_dump tools handle schema complexity better than manual SQL generation.","created_at":"2025-12-13T17:35:48.194Z"}
|
|
336
|
+
{"id":"8a9fd42c-6c54-443b-9976-d9c0643d5aa2","information":"Compaction hook observability pattern: Added structured metrics collection WITHOUT breaking existing logger instrumentation. Key insight: The hook already had 14 Pino log points with structured data (phase timings, detection confidence, reasons) - the new observability module COMPLEMENTS this by providing programmatic metrics access and aggregation. Implementation: (1) Created CompactionMetrics type with phase timing Map and pattern tracking arrays. (2) Used functional API (recordPhaseStart, recordPhaseComplete, recordPatternExtracted) instead of OO methods for simplicity. (3) Integrated into hook by creating metrics collector at start, tracking phases throughout, adding summary to final log. (4) Kept lazy logger pattern intact - metrics is orthogonal concern. Result: Hook logs now include nested metrics object with phase breakdown, pattern counts, and success rates. Queryable via jq on Pino NDJSON logs. TDD approach: 11 unit tests + 4 integration tests, all passing. Debug mode captures verbose pattern details when enabled.","created_at":"1766640640332.0","tags":"compaction,observability,metrics,logging,pino,tdd"}
|
|
267
337
|
{"id":"8b23681b-7dc8-4501-882e-1ef66174881f","information":"{\"id\":\"pattern-1765751936368-siqk3d\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T22:38:56.368Z\",\"updated_at\":\"2025-12-14T22:38:56.368Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T22:38:56.603Z","metadata":"{\"id\":\"pattern-1765751936368-siqk3d\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
268
338
|
{"id":"8b93efb6-c350-4538-b258-99bc7acc1e63","information":"Tool-adapter integration test coverage completed for opencode-swarm-plugin. Added 4 new tests covering memory tools (semantic_memory_store, semantic_memory_find), swarm coordination tools (swarm_broadcast, swarm_checkpoint), and a comprehensive smoke test that exercises 9 tools in sequence (init → create → reserve → progress → memory → send → close → release). All 20 tests pass. Key learnings: (1) semantic_memory_store returns {id: string}, not {success, id, information}. (2) swarm_checkpoint requires epic_id, files_modified, progress_percent fields - it's not just a simple checkpoint. (3) Smoke test pattern is valuable for catching adapter lifecycle bugs that unit tests miss. (4) swarm_checkpoint failure with \"no such table: swarm_contexts\" is EXPECTED in test environments without full swarm coordination setup - the test verifies it does NOT fail with \"dbOverride required\" which was the original bug.","created_at":"1766364993661.0","tags":"testing,integration-tests,tool-adapter,swarm-plugin"}
|
|
269
339
|
{"id":"8c4f7a27-e641-4657-9bbe-857e77cdd200","information":"{\"id\":\"pattern-1765653391843-hizz8c\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:16:31.843Z\",\"updated_at\":\"2025-12-13T19:16:31.843Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:16:32.050Z","metadata":"{\"id\":\"pattern-1765653391843-hizz8c\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -273,6 +343,7 @@
|
|
|
273
343
|
{"id":"8f1ef3ea-7d99-4997-9dc2-805987cea648","information":"CRITICAL BUG: Coordinator loses identity after compaction\n\nRoot cause: The compaction hook injects generic \"you are a coordinator\" context but doesn't include:\n1. The SPECIFIC epic ID being coordinated\n2. Which subtasks are done/pending/in_progress \n3. The original task description\n4. Which workers were spawned\n\nThe agent wakes up knowing it's a coordinator but not WHAT it's coordinating. It then starts doing work directly instead of spawning workers.\n\nFix needed in compaction-hook.ts:\n- Query hive for in_progress epics\n- Include epic ID, title, and subtask status in injected context\n- Include last known worker activity from swarm-mail\n- Make the context actionable: \"Resume coordinating epic bd-xxx\"\n\nThis is P0 - breaks the entire swarm coordination model.","created_at":"1766595208571.0","tags":"swarm,compaction,coordinator,bug,p0,context-loss"}
|
|
274
344
|
{"id":"8f24dcef-12cd-464f-906f-d3847062abd5","information":"{\"id\":\"test-1766593302223-7tdtts4ohgp\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:21:42.223Z\",\"raw_value\":1}","created_at":"1766593302468.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:21:42.223Z\"}"}
|
|
275
345
|
{"id":"9126fdf3-7090-4dda-bc3b-d66e14362291","information":"{\"id\":\"pattern-1765664125767-wxih0g\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:15:25.767Z\",\"updated_at\":\"2025-12-13T22:15:25.767Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:15:25.968Z","metadata":"{\"id\":\"pattern-1765664125767-wxih0g\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
346
|
+
{"id":"9151565c-17f8-4f76-86f0-5878286e0652","information":"oh-my-opencode Claude Code Compatibility Layer Architecture:\n\n**Multi-Layer Loader Pattern:**\n- 4 independent loaders: commands, agents, skills, MCPs\n- Each scans ~/.claude and .claude/ (user + project scope)\n- Frontmatter parsing with Zod validation\n- Deep merge strategy: project config overrides user config\n- Commands wrap in <command-instruction>/$ARGUMENTS template\n- Skills inject base directory path for @path references\n- Agents parse tools field (comma-separated) into tools config\n\n**Hook Integration (5 Event Types):**\nMaps Claude Code hooks → OpenCode plugin events:\n- PreToolUse → tool.execute.before (can deny, modify input)\n- PostToolUse → tool.execute.after (warnings, additional context)\n- UserPromptSubmit → chat.message (inject messages, block prompts)\n- Stop → event:session.idle (continue prompt injection)\n- PreCompact → experimental.session.compacting (inject context)\n\nHook config sources (merged with precedence):\n1. ~/.claude/settings.json (user)\n2. ./.claude/settings.json (project)\n3. ./.claude/settings.local.json (local, git-ignored)\n\nPattern matching: supports wildcards, regex for tool names\n\n**MCP Loader:**\n- Reads .mcp.json from 3 scopes: ~/.claude, ./.claude, .claude/\n- Env var expansion: ${VAR} → process.env.VAR\n- Transforms Claude format → OpenCode SDK format\n- Disabled servers skipped via {disabled: true}\n\n**Session State Tracking:**\n- mainSessionID vs subagentSessions Set\n- Used for background agent notifications\n- Tracks first message (skip UserPromptSubmit for title gen)\n- Session error/interrupt state management\n\n**Novel Patterns for Swarm:**\n\n1. **Config Migration on Load:**\n - Auto-migrates deprecated agent names (OmO → Sisyphus)\n - Writes back to disk if migration occurs\n - Zod validation with error collection\n\n2. **Tool Input Caching:**\n - Caches args at tool.execute.before\n - Retrieves at tool.execute.after (PostToolUse needs input)\n - Prevents re-reading from session messages\n\n3. **Transcript Recording:**\n - JSONL format: {type, timestamp, tool_name, tool_input, tool_output}\n - ~/.local/share/opencode/storage/${sessionID}/transcript.jsonl\n - Used by hooks for context awareness\n\n4. **Selective Hook Disable:**\n - Config: {disabledHooks: {PreToolUse: [\"pattern1\", \"pattern2\"]}}\n - Pattern matching per event type\n - User + project config merge\n\n5. **Message Injection via Filesystem:**\n - injectHookMessage() writes to temp file\n - OpenCode picks up via filesystem watcher\n - Preserves agent/model/tools context\n\n**Data Storage Separation:**\n- Config: ~/.config/opencode/oh-my-opencode.json\n- Data: ~/.local/share/opencode/storage/ (XDG compliant)\n- Claude compat: ~/.claude/* (commands, agents, skills, settings.json)\n\n**Key Insight:** The compatibility layer is NOT a migration tool - it's a dual-path system. Users can keep Claude Code configs while using OpenCode. This allows gradual migration and cross-tool workflows.","created_at":"1766673470579.0","tags":"oh-my-opencode,claude-code,compatibility,loaders,hooks,config-migration,session-state"}
|
|
276
347
|
{"id":"91f6de54-cd46-46c4-a12b-7f80b2a887b9","information":"Test Isolation Pattern for semantic-memory: Use environment variable TEST_MEMORY_COLLECTIONS=true to suffix collection names with '-test'. Implemented via getCollectionNames() function that checks process.env.TEST_MEMORY_COLLECTIONS and conditionally appends '-test' to base collection names (swarm-feedback, swarm-patterns, swarm-maturity). Vitest integration config sets this env var automatically. Prevents test data from polluting production semantic-memory collections. Cleanup handled in vitest.integration.setup.ts teardown hook. Pattern enables running integration tests safely without affecting production learning data. Key insight: Dynamic collection naming at config resolution time (not runtime) ensures all storage instances in test mode automatically use test collections.","created_at":"2025-12-14T22:37:48.129Z","metadata":"{\"author\":\"WarmHawk\",\"pattern_type\":\"test_isolation\"}"}
|
|
277
348
|
{"id":"920ce3e0-5d5d-4cf4-be54-b5a450f6c18c","information":"pino-roll file rotation format: Uses NUMERIC rotation, not date-based. With frequency='daily' and extension='log', files are named {basename}.{number}log (e.g., swarm.1log, swarm.2log). The number increments with each rotation. The 'limit.count' option specifies how many OLD files to keep in addition to the current file. So limit: { count: 14 } means 14 rotated files + 1 current file = 15 total files max. Common misconception: thinking pino-roll will create date-based filenames like swarm-2024-12-24.log - it doesn't. That requires a custom transport or different package.","created_at":"1766592728219.0","tags":"pino,pino-roll,logging,rotation,file-naming,nodejs,bun"}
|
|
278
349
|
{"id":"921e7326-558a-4e7d-8f4d-c958541fdbf9","information":"{\"id\":\"pattern-1766262043345-lwfqkk\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:20:43.345Z\",\"updated_at\":\"2025-12-20T20:20:43.345Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262043583.0","metadata":"{\"id\":\"pattern-1766262043345-lwfqkk\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -284,9 +355,11 @@
|
|
|
284
355
|
{"id":"9776db4c-e14f-4495-b9fc-05954676abbb","information":"{\"id\":\"test-1766074436954-pj27gd4lso\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:13:56.954Z\",\"raw_value\":1}","created_at":"2025-12-18T16:13:57.169Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:13:56.954Z\"}"}
|
|
285
356
|
{"id":"97ab28c1-c249-4144-937e-88f2b0f4b398","information":"{\"id\":\"pattern-1766085029743-5mj578\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T19:10:29.743Z\",\"updated_at\":\"2025-12-18T19:10:29.743Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T19:10:29.969Z","metadata":"{\"id\":\"pattern-1766085029743-5mj578\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
286
357
|
{"id":"9891d1d6-0015-4983-83cd-bc27c1df0d43","information":"SQLite ALTER TABLE ADD COLUMN has strict limitations that Drizzle doesn't warn about:\n\n**The Problem:**\n- ALTER TABLE cannot use non-constant defaults like `datetime('now')`, `CURRENT_TIMESTAMP`\n- ALTER TABLE cannot add NOT NULL columns without a default\n- Drizzle schema allows these but they fail at runtime with ALTER TABLE\n\n**Root Cause:**\nSQLite's ALTER TABLE is more restrictive than CREATE TABLE. CREATE TABLE allows SQL function defaults, but ALTER TABLE only allows constant literals.\n\n**The Solution:**\nSeparate default handling for CREATE vs ALTER:\n- CREATE TABLE: use original defaults (functions OK)\n- ALTER TABLE: provide constant defaults based on type (TEXT='', INTEGER=0, REAL=0.0)\n\n**Code Pattern:**\n```typescript\nfunction getColumnDefaultForAlterTable(col: AnySQLiteColumn<any>): string {\n const config = (col as any).config;\n \n // Skip SQL functions - not allowed in ALTER TABLE\n if (defaultVal.includes(\"(\")) {\n // Fall through to constant default\n }\n \n // Provide type-appropriate constant defaults\n const sqlType = normalizeType(col.getSQLType());\n if (sqlType === \"TEXT\") return \"DEFAULT ''\";\n if (sqlType === \"INTEGER\") return \"DEFAULT 0\";\n if (sqlType === \"REAL\") return \"DEFAULT 0.0\";\n}\n```\n\n**When This Matters:**\n- Runtime schema migrations when columns are missing\n- ALTER TABLE operations on existing tables\n- Drizzle schema validation and auto-fixing\n\n**Prevention:**\nDocument in schema comments when a default is non-constant so migration code can handle it specially.","created_at":"1766294601043.0","tags":"sqlite,drizzle,alter-table,schema-migration,gotcha"}
|
|
358
|
+
{"id":"98ba0ccb-f51f-41ef-941f-fc0922a50fec","information":"{\"id\":\"test-1766635242400-7kuxjtz68jn\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-25T04:00:42.400Z\",\"raw_value\":1}","created_at":"1766635242687.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-25T04:00:42.400Z\"}"}
|
|
287
359
|
{"id":"99a8fa5a-2287-4665-bf88-972213bc754b","information":"{\"id\":\"test-1766080415739-14f1w45qthd9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:53:35.739Z\",\"raw_value\":1}","created_at":"2025-12-18T17:53:36.012Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:53:35.739Z\"}"}
|
|
288
360
|
{"id":"9a004fda-9142-4e55-9447-db005493487e","information":"{\"id\":\"pattern-1765771064070-9few2m\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:57:44.070Z\",\"updated_at\":\"2025-12-15T03:57:44.070Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:57:44.420Z","metadata":"{\"id\":\"pattern-1765771064070-9few2m\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
289
361
|
{"id":"9abd19da-6b27-40f1-a385-de69d0a0f55b","information":"Swarm coordination pattern for ADR writing (Dec 2024): When multiple ADRs need writing, spawn parallel workers with clear file ownership. Both workers may need to update shared index file (docs/adr/README.md) - coordinate via swarmmail to avoid conflicts. Pattern: first worker adds placeholder entries for both, second worker corrects titles. Workers should store learnings via semantic-memory_store after completing ADRs. Use swarm_complete (not hive_close) to auto-release reservations and record learning signals.","created_at":"2025-12-19T00:16:21.306Z","tags":"swarm,coordination,adr,parallel-work,file-conflicts,best-practice"}
|
|
362
|
+
{"id":"9b113ff5-7294-42e1-9978-08e861d4255f","information":"Evalite API Pattern for Fixture-Based Evals: The `task` function in evalite receives only `input` parameter, NOT `{ output }` context. When testing with pre-generated fixtures (where \"output\" already exists), structure data like: `{ input: fixture, expected: criteria }` and use identity task: `task: async (input) => JSON.stringify(input)`. Do NOT try to pass output via data and destructure in task - evalite doesn't work that way. See compaction-prompt.eval.ts for working pattern with 6 synthetic fixtures.","created_at":"1766636507216.0","tags":"evalite,testing,fixtures,api-pattern"}
|
|
290
363
|
{"id":"9b55a76c-d07d-4a7c-b9c9-ea49f13c140f","information":"@badass Router Design Decision (Dec 2024): Hybrid approach combining uploadthing and course-builder patterns.\n\n**From Uploadthing (COPY):**\n1. Type-state builder pattern with UnsetMarker for compile-time safety\n2. Immutable chain - each method returns new builder\n3. Effect-TS at handler layer ONLY, not in builder API (builder stays pure TypeScript for DX)\n4. Two-phase adapter transformation: extract framework context then normalize to Web Request\n5. Subpath exports for tree-shaking: @badass/next, @badass/astro, @badass/server\n\n**From Course-Builder (KEEP):**\n1. Framework-agnostic core with single entry function\n2. Provider plugin system for integrations (payment, transcription, etc.)\n3. Adapter interface separating DB from business logic\n4. Inngest for background jobs\n\n**Changes from Course-Builder:**\n1. Switch-based routing becomes procedure registry with type inference\n2. String actions become type-safe procedures: router.checkout.call(input)\n3. Manual request/response becomes middleware chain\n4. Massive adapter interface splits into ContentAdapter, CommerceAdapter, VideoAdapter\n5. Video processing extracts to @badass/video\n\n**Key Files:**\n- uploadthing builder: packages/uploadthing/src/_internal/upload-builder.ts\n- uploadthing adapters: packages/uploadthing/src/next.ts, express.ts\n- course-builder core: packages/core/src/lib/index.ts:24\n- course-builder next: packages/next/src/lib/index.ts:50\n- course-builder astro: packages/astro/server.ts:44","created_at":"2025-12-18T15:57:47.086Z"}
|
|
291
364
|
{"id":"9b7e2971-9b37-4783-8640-2c3504ae4450","information":"@badass CLI Architecture Decision (Dec 2024): Multi-site CLI pattern like PlanetScale/Stripe CLI. Sites are self-contained bounded contexts with own Mux/Inngest/Stripe accounts. CLI manages multiple sites via ~/.badass/config.json. Commands: badass auth login site, badass site use site, badass --site=site command. Each site provides its own API, CLI routes to appropriate site based on config.","created_at":"2025-12-18T15:30:12.361Z"}
|
|
292
365
|
{"id":"9b9c19de-bf95-4289-b9a2-7c8148069791","information":"{\"id\":\"pattern-1766261761595-um9s30\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:16:01.595Z\",\"updated_at\":\"2025-12-20T20:16:01.595Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261761860.0","metadata":"{\"id\":\"pattern-1766261761595-um9s30\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -296,14 +369,19 @@
|
|
|
296
369
|
{"id":"9d11a24b-119a-473d-b1d3-311602c6cbaa","information":"{\"id\":\"test-1766074742680-yt5vhmvkfzl\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:19:02.680Z\",\"raw_value\":1}","created_at":"2025-12-18T16:19:02.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:19:02.680Z\"}"}
|
|
297
370
|
{"id":"9d1875fc-6598-46fb-b297-b23656a8dbcb","information":"{\"id\":\"test-1766264315783-eqtqfr2j6y6\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:58:35.783Z\",\"raw_value\":1}","created_at":"1766264316016.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:58:35.783Z\"}"}
|
|
298
371
|
{"id":"9d459798-bc90-4947-9c70-0b9bb9526e42","information":"Memory schema migrations in swarm-mail: Created v9 migration that adds memories and memory_embeddings tables to shared PGLite database. Critical: Must add \"CREATE EXTENSION IF NOT EXISTS vector;\" at start of migration SQL before using vector type. Integrated by importing memoryMigrations into streams/migrations.ts and spreading into main migrations array. Pattern: Module migrations append to main array (hive=v7-8, memory=v9). Tests verify table structure, indexes (HNSW, GIN, B-tree), cascade deletes, and 1024-dim vector storage. Memory schema uses TEXT ids, TIMESTAMPTZ timestamps, JSONB metadata, vector(1024) embeddings.","created_at":"2025-12-18T18:59:18.304Z","tags":"swarm-mail,migrations,pgvector,schema,pglite,memory"}
|
|
372
|
+
{"id":"9e53ff1f-54bf-4693-b80f-f8c527183ac4","information":"{\"id\":\"test-1766634597948-ye8nvmu6cym\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-25T03:49:57.948Z\",\"raw_value\":1}","created_at":"1766634598216.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-25T03:49:57.948Z\"}"}
|
|
373
|
+
{"id":"9eed104b-b291-40fd-af62-d5b1fef48203","information":"{\"id\":\"pattern-1766598234358-4e1dkn\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T17:43:54.358Z\",\"updated_at\":\"2025-12-24T17:43:54.358Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766598234666.0","metadata":"{\"id\":\"pattern-1766598234358-4e1dkn\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
299
374
|
{"id":"9ef20adf-0850-48e2-83b9-9af8f0976182","information":"Swarm Wave-based coordination pattern observed: When task instructions explicitly say \"WAIT for Wave1-X and Wave1-Y\", this indicates sequential dependency gates. If file reservation conflicts occur with expected dependencies, agent should:\n\n1. Check if prerequisite files/dirs exist in old state (confirms prereqs not done)\n2. Send BLOCKED message to coordinator with blocker details\n3. Update bead status to blocked\n4. Be patient - conflict holder likely working on prerequisite\n5. Don't attempt workarounds - the sequential ordering exists for a reason\n\nIn this case: bd-lf2p4u-mja6npihvzm (AdapterRename) correctly blocked waiting for Wave1-DirRename and Wave1-TypeRename. File conflict with GoldHawk on beads-adapter.ts was expected since that file needs to be moved/renamed by prereqs first.\n\nAnti-pattern: Trying to work around prerequisites by renaming imports before files are renamed - breaks everything.","created_at":"2025-12-17T15:51:25.825Z"}
|
|
300
375
|
{"id":"9f18ab25-3898-4a71-866b-aad1627a6498","information":"Adapter factory pattern for event-sourced systems: createAdapter(db: DatabaseAdapter, projectKey: string) factory takes a DatabaseAdapter and returns interface with high-level operations. Delegates to store.ts for event operations (appendEvent, readEvents) and projections.ts for queries (getBead, queryBeads). This enables dependency injection and testing with different databases. Key: adapter methods create events with correct type, then call appendEvent(event, projectPath, db) to persist. Projections update automatically via event handlers. Example: createBead() generates bead_created event, appends it, then queries projection to return created bead.","created_at":"2025-12-16T22:08:24.450Z","metadata":"{\"context\":\"swarm-mail architecture\"}","tags":"adapter-pattern,event-sourcing,cqrs,dependency-injection"}
|
|
301
376
|
{"id":"a01b1e63-b02d-49fb-b0d4-48db482b6f22","information":"{\"id\":\"test-1766256912440-5hizpp3yl8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T18:55:12.440Z\",\"raw_value\":1}","created_at":"1766256912635.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T18:55:12.440Z\"}"}
|
|
302
377
|
{"id":"a02ef17d-6ac1-4575-8bd3-6d1854241f80","information":"checkSwarmHealth() and checkHealth() (agent-mail) were throwing \"has been removed\" errors instead of working. These were deprecated during PGlite → libSQL migration but never re-implemented.\n\nFix for checkSwarmHealth(): Use getSwarmMailLibSQL() adapter pattern, test connectivity with \"SELECT 1\", return { healthy: boolean, database: \"libsql\" }. Implemented in swarm-mail.ts.\n\nFix for checkHealth(): Delegate to checkSwarmHealth(). No need to duplicate logic. Implemented in agent-mail.ts.\n\nBoth functions are used by plugin tools (swarmmail_health) and internal health checks (tool-availability.ts, compaction-hook.ts). Leaving them broken would break plugin's health monitoring.\n\nPattern: When migrating infrastructure (PGlite → libSQL), don't just throw deprecation errors for public APIs. Either remove the API entirely or re-implement with new infrastructure. Half-deprecated functions break consumers.","created_at":"1766383554830.0","tags":"swarm-mail,health-check,deprecation,migration,libsql,pglite"}
|
|
303
378
|
{"id":"a0921dff-b9b1-4cd1-b555-9acc6fe23e2f","information":"{\"id\":\"test-1766074455925-j3xb65rzg2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:14:15.925Z\",\"raw_value\":1}","created_at":"2025-12-18T16:14:16.152Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:14:15.925Z\"}"}
|
|
304
379
|
{"id":"a0fc22e7-2d15-4993-9fe4-e7af40e93cab","information":"{\"id\":\"test-1766260843953-vnyht2xat4p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:00:43.953Z\",\"raw_value\":1}","created_at":"1766260844173.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:00:43.953Z\"}"}
|
|
380
|
+
{"id":"a122b09e-71a1-4907-9bc4-c9187c76e7b9","information":"{\"id\":\"pattern-1766634599107-p3myr7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-25T03:49:59.107Z\",\"updated_at\":\"2025-12-25T03:49:59.107Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766634599326.0","metadata":"{\"id\":\"pattern-1766634599107-p3myr7\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
305
381
|
{"id":"a1c9240a-b245-4109-a497-be818fa82127","information":"Effect.succeed() vs Effect.gen() in middleware: The @badass/core middleware implementation detects Effect objects by checking for \"_tag\" property. Effect.succeed() returns objects with \"_tag\", but Effect.gen() returns objects with \"_id\" and \"_op\" instead. Result: Effect.succeed() gets unwrapped properly via Effect.runPromise, but Effect.gen() returns the raw Effect object. Workaround: Use Effect.succeed() for simple context values in middleware, avoid Effect.gen() for middleware context functions.","created_at":"2025-12-18T16:32:14.305Z","tags":"effect-ts,middleware,badass-core,gotcha,effect-succeed,effect-gen"}
|
|
382
|
+
{"id":"a1cae9db-8304-47a2-88c4-cff41e45ed37","information":"Implemented `swarm eval` CLI commands with TDD approach. Three commands: 1) `eval status` shows current phase (bootstrap/stabilization/production), gate thresholds, and recent scores with sparklines. 2) `eval history` displays eval run history grouped by eval name with trends and color-coded scores (green >=0.8, yellow >=0.6, red <0.6). 3) `eval run` is a stub for future implementation. Key implementation details: Used existing eval-gates.ts and eval-history.ts modules. Sparkline generation uses chars ▁▂▃▄▅▆▇█ with normalization. Color coding: green (pass/high score), yellow (warning/medium), red (fail/low). Used @clack/prompts for consistent CLI formatting. Phase indicators: 🌱 bootstrap, ⚙️ stabilization, 🚀 production. All helpers have corresponding test coverage in bin/swarm.test.ts following TDD pattern (RED → GREEN → REFACTOR).","created_at":"1766636635126.0","tags":"cli,eval,tdd,sparklines,progressive-gates,formatting"}
|
|
306
383
|
{"id":"a2e73118-c51b-411f-9ee6-fa11bb37a733","information":"{\"id\":\"pattern-1766263854559-5dy1gz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:50:54.559Z\",\"updated_at\":\"2025-12-20T20:50:54.559Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263854807.0","metadata":"{\"id\":\"pattern-1766263854559-5dy1gz\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
384
|
+
{"id":"a3983e28-31b0-4ee4-95e5-57ce24988ccb","information":"Implemented 'swarm log sessions' CLI subcommand for viewing captured coordinator sessions.\n\nKEY IMPLEMENTATION PATTERNS:\n1. **TDD approach**: Wrote failing tests first in swarm.test.ts, then implemented helpers and CLI command to make them pass. Tests cover: session file parsing, listing with metadata, filtering by type/time, latest session retrieval.\n\n2. **Session file format**: JSONL with CoordinatorEvent objects from eval-capture.ts. Each line is a JSON event with discriminated union on event_type (DECISION/VIOLATION/OUTCOME/COMPACTION). Session files stored in ~/.config/swarm-tools/sessions/{session_id}.jsonl.\n\n3. **CLI structure**: Added logSessions() function called from logs() when first arg is 'sessions'. Follows existing pattern: parseArgs, filter, format, output (text or JSON).\n\n4. **Helper functions**: parseSessionFile (read JSONL, skip invalid lines), listSessionFiles (read all, extract metadata, sort by time), getLatestSession, filterEventsByType, filterEventsSince, formatEvent (colored output by event type).\n\n5. **Features implemented**:\n - `swarm log sessions` - list all sessions with metadata (start time, event count, duration)\n - `swarm log sessions <id>` - view specific session (supports partial ID match)\n - `swarm log sessions --latest` - view most recent session\n - `--type <TYPE>` - filter by event type (DECISION/VIOLATION/OUTCOME/COMPACTION)\n - `--since <duration>` - time filter (30s, 5m, 2h, 1d)\n - `--limit <n>` - limit event count\n - `--json` - JSON output for piping to jq\n\n6. **Testing gotcha**: Busy wait isn't reliable for ensuring different timestamps. Use explicit baseTimestamp parameter in test helpers instead.\n\n7. **Import pattern**: Must import CoordinatorEvent type from ../src/eval-capture.js for type safety.\n\nCOMPLETES OBSERVABILITY STORY: Session capture (eval-capture.ts) → View (swarm log sessions) → Score (coordinator evals)","created_at":"1766640585732.0","tags":"cli,swarm-log,sessions,observability,tdd,coordinator-events,jsonl"}
|
|
307
385
|
{"id":"a43fb38d-b67c-40df-9a28-02d5e5ca529b","information":"PR triage context efficiency pattern: ALWAYS fetch metadata first (id, path, line, author) using `gh api --jq` to keep responses compact (~100 bytes per comment vs ~5KB with body). Only fetch full comment bodies for actionable items (human comments, high severity). This prevents context exhaustion on PRs with 50+ CodeRabbit comments. Triage into buckets: fix-with-code (implement + reply), won't-fix (acknowledge + explain), tracked-in-cell (create hive cell + link). Use batch acknowledgment for low-priority bot comments. Key insight: 50 metadata entries = ~5KB, 50 full bodies = ~500KB. Strategy is metadata-first categorization, then selective body fetches. Created pr-triage skill with full gh API patterns at .opencode/skills/pr-triage/","created_at":"1766424320611.0","tags":"pr-triage,github,context-efficiency,coderabbit,gh-api,workflow"}
|
|
308
386
|
{"id":"a46dc0eb-beae-4e1a-8261-a378ada89125","information":"{\"id\":\"test-1766262988210-2t45j8b22aw\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:36:28.210Z\",\"raw_value\":1}","created_at":"1766262988691.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:36:28.210Z\"}"}
|
|
309
387
|
{"id":"a4dbe094-d77b-4763-887f-13aee7dab5b6","information":"Implemented observability tools for OpenCode Swarm Plugin. KEY LEARNINGS: (1) swarm-mail analytics queries come in two forms - functions that take filters (failedDecompositions, strategySuccessRates, etc.) and objects with buildQuery methods (scopeViolations, taskDuration, etc.). Check for .buildQuery property before calling. (2) SwarmMailAdapter has getDatabase() method that returns the underlying DatabaseAdapter - use this instead of creating new libSQL adapters. (3) In-memory test databases work with createInMemorySwarmMailLibSQL(), no need for complex event creation in tests. (4) All analytics query functions must be exported from swarm-mail/src/index.ts, not just from analytics/index.ts, for plugin imports to work. (5) Plugin tools should use getSwarmMailLibSQL(projectPath) then .getDatabase() for consistent database access across tools.","created_at":"1766434941736.0","tags":"observability,analytics,plugin-tools,swarm-mail,testing"}
|
|
@@ -319,18 +397,28 @@
|
|
|
319
397
|
{"id":"a82ae6de-3d24-4851-9e02-87f2c5fb6e86","information":"## Swarm Decomposition: Remove PGLite, Port Effect Primitives to libSQL\n\n### Epic\n**Title:** Remove PGLite, Port Effect Primitives to libSQL, Integrate into Swarm\n\n**Description:** Complete removal of PGLite infrastructure (except migration tools), port all Effect-TS durable primitives to use libSQL/DatabaseAdapter, and integrate DurableLock + DurableDeferred into swarm worker coordination for file locking and task completion signals.\n\n**Upstream source:** https://github.com/durable-streams/durable-streams\n\n### Subtasks (7 total, validated)\n\n**Task 0: Port DurableLock to libSQL** (complexity: 3, parallel)\n- Files: lock.ts, lock.test.ts\n- Dependencies: none\n- Convert getDatabase() calls to accept DatabaseAdapter parameter\n\n**Task 1: Port DurableDeferred to libSQL** (complexity: 3, parallel)\n- Files: deferred.ts, deferred.test.ts\n- Dependencies: none\n- Convert getDatabase() calls to accept DatabaseAdapter parameter\n\n**Task 2: Port DurableCursor to libSQL** (complexity: 3, parallel)\n- Files: cursor.ts, cursor.integration-test.ts\n- Dependencies: none\n- Cursors table schema already updated (stream, checkpoint columns)\n\n**Task 3: Port DurableMailbox and ask pattern to libSQL** (complexity: 4, sequential)\n- Files: mailbox.ts, mailbox.test.ts, ask.ts, ask.integration-test.ts, layers.ts, index.ts\n- Dependencies: [0, 1, 2]\n- Update layers.ts for proper Effect service composition\n\n**Task 4: Remove PGLite from streams/index.ts** (complexity: 4, sequential)\n- Files: streams/index.ts, pglite.ts, src/index.ts\n- Dependencies: [0, 1, 2, 3]\n- Keep migrate-pglite-to-libsql.ts for migration CLI\n\n**Task 5: Integrate DurableLock into swarm file reservations** (complexity: 4, sequential)\n- Files: agent-mail.ts, swarm-mail.ts\n- Dependencies: [0, 4]\n- Replace current reservation system with DurableLock\n\n**Task 6: Integrate DurableDeferred into swarm task completion** (complexity: 4, sequential)\n- Files: swarm.ts, swarm-orchestrate.ts (in opencode-swarm-plugin)\n- Dependencies: [1, 4]\n- Enable cross-agent RPC pattern\n\n### Execution Order\n1. Spawn tasks 0, 1, 2 in parallel (Lock, Deferred, Cursor)\n2. Wait for all three, then spawn task 3 (Mailbox+ask)\n3. Wait for task 3, then spawn task 4 (Remove PGLite)\n4. Wait for task 4, then spawn tasks 5, 6 in parallel (Integration)\n\n### Blocker\nHive tools are broken due to cursors table schema change. Need to fix before spawning workers.","created_at":"1766333755376.0","tags":"swarm-decomposition,pglite-removal,effect-primitives,epic-plan,blocker"}
|
|
320
398
|
{"id":"a849e675-58d3-4b5a-8c66-28e0dbbc297c","information":"{\"id\":\"test-1766001178291-jasc2x5op7s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-17T19:52:58.291Z\",\"raw_value\":1}","created_at":"2025-12-17T19:52:59.368Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-17T19:52:58.291Z\"}"}
|
|
321
399
|
{"id":"a8999f0a-57cb-450d-979a-ac8b122b7404","information":"{\"id\":\"pattern-1766260222122-wmr1cl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:22.118Z\",\"updated_at\":\"2025-12-20T19:50:22.118Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260222373.0","metadata":"{\"id\":\"pattern-1766260222122-wmr1cl\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
400
|
+
{"id":"a9024e74-3374-4fd2-8cd6-89a8352767ff","information":"**Oh-My-OpenCode Agent Injection via config Hook**\n\nAgents registered via `config` hook that mutates OpenCode config object:\n\n**Multi-Source Agent Loading:**\n```typescript\nconfig: async (config) => {\n // 1. Create builtin agents\n const builtinAgents = createBuiltinAgents(\n pluginConfig.disabled_agents,\n pluginConfig.agents, // Overrides\n ctx.directory,\n config.model,\n );\n \n // 2. Load Claude Code agents from filesystem\n const userAgents = loadUserAgents(); // ~/.claude/agents/*.md\n const projectAgents = loadProjectAgents(); // ./.claude/agents/*.md\n \n // 3. Merge with priority (last wins)\n config.agent = {\n ...builtinAgents,\n ...userAgents,\n ...projectAgents,\n ...config.agent, // OpenCode's own agents (highest priority)\n };\n}\n```\n\n**Agent Markdown Format:**\n```markdown\n---\nname: my-agent\ndescription: What the agent does\ntools: task,read,write,bash\n---\nAgent system prompt goes here.\n```\n\n**Agent Override System:**\n- Per-agent overrides in `agents: { \"agent-name\": { model, temperature, tools, ... } }`\n- `prompt_append` special field to extend (not replace) prompts\n- `disable: true` to disable specific agents\n- `mode: \"subagent\" | \"primary\" | \"all\"` to control agent visibility\n\n**Novel Pattern - Sisyphus Replaces Build:**\n- When `sisyphus_agent.replace_build: true`, demotes `build` agent to subagent mode\n- Sisyphus becomes PRIMARY agent (no `default_agent` config in OpenCode SDK yet)\n- Preserves OpenCode's build agent as fallback subagent\n- Clever workaround for SDK limitation\n\n**Agent Description Scope Tagging:**\n- Appends `(user)`, `(project)`, `(opencode)` to descriptions\n- Makes agent source visible in UI","created_at":"1766673455121.0","tags":"oh-my-opencode,agents,config-hook,markdown,multi-source"}
|
|
322
401
|
{"id":"a9034557-0634-45d0-b405-c0cdacd59c12","information":"{\"id\":\"test-1765386361375-9thynapgze\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:06:01.375Z\",\"raw_value\":1}","created_at":"2025-12-10T17:06:01.560Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:06:01.375Z\"}"}
|
|
323
402
|
{"id":"a921dc7d-4116-477d-98ee-dcf321eb1f75","information":"ACFS Contract Validation Pattern: Every swarm tool should call validateWorkerContract() FIRST before doing work. Check for: swarmmail_initialized, file reservations acquired, cell_id present, epic_id present. Fail fast with actionable error messages that explain HOW to fix, not just WHAT is missing. Example: \"Contract violation: swarmmail_init not called. Fix: Call swarmmail_init(project_path) before any file modifications.\" This prevents 80% of coordination bugs where workers call swarm_complete without proper setup. Source: Dicklesworthstone/agentic_coding_flywheel_setup contract.sh","created_at":"1766591003716.0","tags":"swarm,coordination,validation,contract,patterns,acfs"}
|
|
403
|
+
{"id":"aa011799-04d8-444e-848b-c6b1bec82320","information":"Arbitrary normalization thresholds in time-based scorers lack evidence:\n\ntimeToFirstSpawn: EXCELLENT_MS=60000 (60s), POOR_MS=300000 (5min)\nblockerResponseTime: EXCELLENT_MS=300000 (5min), POOR_MS=900000 (15min)\n\nQuestion: Are these evidence-based or arbitrary? No analysis of actual coordinator spawn/response times exists.\n\nRecommendation: Gather real coordinator session data (20+ sessions), plot distribution of times, adjust thresholds based on percentiles (e.g., p50 = 0.5 score, p95 = 0.0 score). This makes thresholds self-calibrating from real behavior.\n\nAlternative: Make thresholds configurable via expected values in eval cases for different contexts (research-heavy vs implementation-only tasks may have different \"good\" spawn times).\n\nFile: evals/scorers/coordinator-discipline.ts lines 269-335, 499-588","created_at":"1766674509628.0","tags":"evalite,scorers,normalization,thresholds,calibration,time-metrics"}
|
|
404
|
+
{"id":"ab6ae464-b18d-4374-877e-d536fe95e179","information":"Notion workspace exploration for vrain project - key data sources discovered:\n\n**DX Content Pipeline Databases (via dataSources API):**\n\n1. **Campaign Planning** (2bfe06b0-59c4-8006-91e1-000bdea543cb)\n - Tracks: Next.js 16.1, Cache Components, Turborepo 2.7, Sandbox GA, AI SDK v6, Queues\n - Schema: Status (Not started/Idea/Deprioritized/Blocked/Active/Done), Type (Launch/DX Initiative/LT Request), Product Area (Next.js/Turborepo/AI SDK/Vercel/Sandbox), DRI, Flying Dates\n - Key for: Understanding what DX is actively working on\n\n2. **Deliverables** (2b7e06b0-59c4-8041-bcad-000b4b0d5ab4)\n - Tracks: Docs, Guides, Blogs, Community Sessions, Error Messages\n - Schema: Status (Not started/Blocked/In progress/In Review/Ready to publish/Published), Content Type, Product Area, Campaign relation\n - Key for: Content production pipeline status\n\n3. **Launches** (602af05b-5ea8-4073-b6f4-9d0156a0ca6f)\n - Comprehensive launch tracking with 40+ properties\n - Tracks: Marketing Tier, Launch Phase, DX Support needs, Docs Process, Pricing, Telemetry\n - Key for: Understanding upcoming product launches\n\n4. **Academy Course Pipeline** (292e06b0-59c4-80b3-93ea-000bdeff1d9e)\n - Tracks: Courses in development (Slack Agents, Workflow Fundamentals, AI Agent Workflows)\n - Schema: Status, Product Area, Owner, Size, Support Value\n - Key for: Education content planning\n\n**Total accessible:** 92 data sources including OSS cohorts, community platforms, events tracking.\n\n**API Pattern:** Use `notion.dataSources.query()` not `notion.databases.query()` in SDK v5.x.","created_at":"1766679375267.0","tags":"notion,vrain,dx-content-pipeline,databases,campaigns,launches,academy"}
|
|
324
405
|
{"id":"ab7288ed-6ec8-4ff9-92ed-85c11445ddaf","information":"TDD pattern for structured error classes with context enrichment: Start with interface definition (ErrorContext), then write comprehensive tests covering construction, serialization, default values, and context population. Implement base class first with defaults (timestamp auto-populated, suggestions/recent_events default to empty arrays), then specialized error classes extend with just name override. Key insight: TypeScript's Partial<ErrorContext> allows flexible construction while maintaining type safety. Tests verify both minimal (message only) and maximal (all context fields) construction paths. The pattern scales well - 16 tests cover base + 4 specialized error classes comprehensively in under 200 lines.","created_at":"1766433215869.0","tags":"tdd,error-handling,typescript,observability,swarm-mail"}
|
|
406
|
+
{"id":"abeaf436-05d1-434b-a7f0-7adfa788c6d4","information":"Evalite data/task mismatch bug pattern: When task() returns input string unchanged but scorer expects JSON, the eval fails with 0% score. Root cause: data() provides string as input, but scorer expects parsed object. Fix: Change data() to provide the object as input, have task() stringify it with JSON.stringify(input). This aligns with Evalite's API design - task receives input parameter only, not {output} context. Pattern confirmed in example.eval.ts fix - changed from input=\"Test task\" to input={epic, subtasks} object, eval score went from 0% to 100%.","created_at":"1766677513461.0","metadata":"{\"file\":\"evals/example.eval.ts\",\"impact\":\"0% to 100%\",\"fix_type\":\"structural\"}","tags":"evalite,testing,debugging,data-task-mismatch"}
|
|
407
|
+
{"id":"ac0cfaf3-65d6-467a-85f7-9aa045c33524","information":"oh-my-opencode Hook Integration Implementation Patterns:\n\n**Hook Execution Flow:**\n1. Load configs from 3 sources (user, project, local)\n2. Merge configs (later sources append to earlier)\n3. Match tool name against hook matchers (supports wildcards)\n4. Execute hook command as subprocess\n5. Parse JSON output (stdout)\n6. Apply decision (allow/deny/block) + optional input modification\n\n**PreToolUse Hook Contract:**\nInput: {session_id, tool_name, tool_input, tool_use_id, cwd, transcript_path}\nOutput: {continue, decision: \"allow\"|\"deny\"|\"ask\", reason, hookSpecificOutput: {permissionDecision, updatedInput}}\nEffect: Can deny tool execution or modify tool args\n\n**PostToolUse Hook Contract:**\nInput: {session_id, tool_name, tool_input, tool_response, tool_use_id, cwd, transcript_path}\nOutput: {continue, decision: \"block\", systemMessage, hookSpecificOutput: {additionalContext}}\nEffect: Can inject warnings/context into tool output\n\n**UserPromptSubmit Hook Contract:**\nInput: {session_id, prompt, cwd, session: {id}}\nOutput: {continue, stopReason, systemMessage}\nEffect: Can block prompts or inject messages\nSpecial: Skipped on first message (title generation)\n\n**Stop Hook Contract:**\nInput: {session_id, cwd, transcript_path, stop_hook_active, todo_path}\nOutput: {decision: \"block\"|\"continue\", inject_prompt, reason}\nEffect: Can force prompt injection when agent stops\nBypass: Ignored if session ended with error or was interrupted\n\n**PreCompact Hook Contract:**\nInput: {session_id, cwd}\nOutput: {context: string[], hookSpecificOutput: {additionalContext}}\nEffect: Inject context into compaction prompt\n\n**Pattern Matcher Implementation:**\n- Glob-style wildcards: * matches any string\n- Exact match: tool name === matcher\n- Regex fallback: compile matcher as regex if not glob\n- Cache compiled regexes for performance\n\n**Transcript Format:**\n{type, timestamp, tool_name, tool_input, tool_output, content}\nSaved to ~/.local/share/opencode/storage/sessionID/transcript.jsonl\n\n**Hook Command Pattern:**\n{\"PreToolUse\": [{\"matcher\": \"edit|write|bash\", \"hooks\": [{\"type\": \"command\", \"command\": \"/path/to/hook.sh\"}]}]}\n\n**Error Handling:**\n- Hook execution errors logged but don't crash plugin\n- Invalid JSON output treated as no-op\n- Missing fields use defaults (continue: true, decision: allow)","created_at":"1766673485915.0","tags":"oh-my-opencode,hooks,claude-code,pretooluse,posttooluse,userprompttsubmit,stop,precompact"}
|
|
325
408
|
{"id":"ac29eb86-4647-4d59-81c9-07bcfa7093bf","information":"PGlite dynamic import pattern for bundled code: When using PGlite in code that gets bundled with Bun, static imports cause WASM files to load at module import time, which fails if dist/ doesn't include the .data files. Solution: (1) Remove static imports: `import { PGlite } from \"@electric-sql/pglite\"`, (2) Add dynamic imports inside functions: `const { PGlite } = await import(\"@electric-sql/pglite\")`, (3) Use `any` type for db variable to avoid TypeScript generic type errors after dynamic import, (4) Use type assertions on query results: `await db.query(...) as { rows: MyType[] }`. This defers WASM loading until the function is actually called, preventing build-time ENOENT errors.","created_at":"1766259023346.0","tags":"pglite,dynamic-import,wasm,bundler,typescript,bun"}
|
|
409
|
+
{"id":"ac2e80f4-ce4f-40dc-994d-eb8db036b9d2","information":"Session ID propagation in OpenCode plugin tools: Tools receive ctx.sessionID from OpenCode runtime, NOT process.env.OPENCODE_SESSION_ID (which is always empty). When calling captureCoordinatorEvent(), use _ctx.sessionID from tool's execute(args, _ctx) signature. In tool.execute.before/after hooks, use input.sessionID. Pattern: captureCoordinatorEvent({ session_id: _ctx.sessionID || \"unknown\", ... }). Without this, events are orphaned to unknown.jsonl instead of proper session files. Affected all swarm coordination tools: swarm_complete, swarm_review_feedback, swarm_delegate_planning, swarm_spawn_subtask, and detectCoordinatorViolation in index.ts hooks.","created_at":"1766635454168.0","metadata":"{\"solution\":\"use _ctx.sessionID or input.sessionID\",\"root_cause\":\"process.env.OPENCODE_SESSION_ID is not set by OpenCode\",\"fixed_files\":[\"swarm-orchestrate.ts\",\"swarm-review.ts\",\"swarm-decompose.ts\",\"swarm-prompts.ts\",\"index.ts\"]}","tags":"opencode,session-id,ctx,captureCoordinatorEvent,eval-capture,swarm"}
|
|
326
410
|
{"id":"acb950b8-656d-4488-a930-5176968d666f","information":"Integration testing auto-migration in createMemoryAdapter: Tests run against in-memory PGLite databases using createInMemorySwarmMail(). Key insight: If ~/.semantic-memory/memory exists on test machine, migration actually runs and imports real memories during tests. Tests must handle both scenarios (legacy DB exists vs doesn't exist) using toBeGreaterThanOrEqual(0) instead of toBe(0). This proved the migration works end-to-end in real conditions - 177 actual memories migrated successfully during test runs. Critical: Use resetMigrationCheck() in beforeEach() for test isolation (module-level flag persists across tests without reset). Access DatabaseAdapter via swarmMail.getDatabase(), not swarmMail.db (property doesn't exist).","created_at":"2025-12-18T21:26:02.233Z","metadata":"{\"cell_id\":\"mjbxj68dmtb\",\"epic_id\":\"mjbxj67vqil\",\"test_file\":\"memory.integration.test.ts\"}","tags":"testing,integration-tests,pglite,migration,memory,swarm-mail"}
|
|
327
411
|
{"id":"ad3f2d32-9a85-4298-986e-249a10f9a643","information":"Implemented `swarm log` CLI command with TDD approach. Key implementation details: 1) Log files are in ~/.config/swarm-tools/logs/ with .Nlog extension (e.g., swarm.1log, compaction.1log). 2) Log format is JSON lines with level (10=trace, 20=debug, 30=info, 40=warn, 50=error, 60=fatal), time (ISO), module (string), msg (string). 3) Filtering supports: module (positional arg), --level (warn/error/etc), --since (30s/5m/2h/1d format), --limit (default 50). 4) Output modes: colored formatted text (default) or --json for piping to jq. 5) Used parseArgs pattern from cli-builder skill - no dependencies, uses Node util module. 6) TDD pattern: wrote all test helpers first (parseLogLine, filterLogsByLevel, filterLogsByModule, etc) then implemented in swarm.ts. Tests verify parsing, filtering, formatting, and file reading logic.","created_at":"1766593177192.0","tags":"swarm,cli,logging,tdd,filtering,json"}
|
|
412
|
+
{"id":"ad57c407-1bb7-4856-9647-47469bdd425a","information":"{\"id\":\"pattern-1766598997102-ea555z\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T17:56:37.102Z\",\"updated_at\":\"2025-12-24T17:56:37.102Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766598997334.0","metadata":"{\"id\":\"pattern-1766598997102-ea555z\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
328
413
|
{"id":"ad85dcb1-ae91-4b2f-8857-16a5d8747969","information":"3 High-Value Improvements for opencode-swarm-plugin (Dec 2024):\n\n1. **Prompt Template Registry with Hot-Reload**\n - Problem: Prompts hardcoded in swarm-prompts.ts, require rebuild to change\n - Solution: External templates in ~/.config/opencode/swarm/prompts/*.md with variable interpolation\n - Enables: A/B testing, project-specific customization, hot-reload during dev\n - Inspired by: mdflow template variables, Release It! \"configuration as UI\"\n\n2. **Worker Handoff Protocol with Structured Context** (RECOMMENDED FIRST)\n - Problem: Workers ignore 400-line SUBTASK_PROMPT_V2, confused about scope\n - Solution: Structured WorkerHandoff envelope with machine-readable contract (files_owned, success_criteria) + minimal prose\n - Enables: Contract validation in swarm_complete, automatic scope creep detection, smaller prompts\n - Inspired by: \"Patterns for Building AI Agents\" subagent handoff, Bellemare event contracts\n\n3. **Adaptive Decomposition with Feedback Loops**\n - Problem: Decomposition quality varies, learning system doesn't feed back into strategy selection\n - Solution: Strategy registry with outcome-weighted selection (confidence * success_rate / log(completion_time))\n - Enables: Self-improving decomposition, auto-deprecation of failing strategies, transparent reasoning\n - Inspired by: Bellemare event replay, mdflow adapter registry, existing pattern-maturity system\n\nImplementation order: #2 then #1 then #3 (handoff protocol creates structured signals needed for adaptive decomposition)","created_at":"2025-12-18T17:20:56.752Z"}
|
|
329
414
|
{"id":"ae4ce932-255c-43bd-b4b0-64049d0afecf","information":"Database testing pattern for PGlite + pgvector in Effect-TS: Use isolated temp databases per test with makeTempDbPath() creating unique tmpdir paths. Critical: PGlite stores data in a DIRECTORY (not a file), so dbPath.replace(\".db\", \"\") gives the actual data dir. Cleanup with rmSync(dbDir, {recursive: true}). Effect services test via Effect.gen + Effect.provide(layer) + Effect.runPromise. Vector dimension errors (e.g., 1024 vs 3) throw from PGlite with \"expected N dimensions, not M\" - test with try/catch, not .rejects since Effect may wrap errors. Test decay by setting createdAt in past (Date.now() - 90*24*60*60*1000) and validating decayFactor < 0.6. Ordering tests need explicit timestamps, not Sleep delays.","created_at":"2025-12-18T17:16:46.245Z"}
|
|
330
415
|
{"id":"ae77ee44-0037-451b-8465-3dce4630e18a","information":"{\"id\":\"pattern-1766080417904-ucxl91\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:53:37.904Z\",\"updated_at\":\"2025-12-18T17:53:37.904Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:53:38.137Z","metadata":"{\"id\":\"pattern-1766080417904-ucxl91\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
331
416
|
{"id":"af97ab19-575c-4db2-9c60-3594d3698f5d","information":"{\"id\":\"test-1766259538220-8g5a5mcpk7e\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:38:58.220Z\",\"raw_value\":1}","created_at":"1766259538439.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:38:58.220Z\"}"}
|
|
417
|
+
{"id":"b0ef27d5-d431-4bc2-a46d-e0624b918ec6","information":"LLM-as-judge scorer pattern for decomposition quality evaluation:\n\n1. USE HAIKU FOR COST: anthropic/claude-haiku-4-5 is fast and cheap enough for eval scoring\n2. STRUCTURED OUTPUT: Ask for JSON with score (0-100), issues array, and optional strengths\n3. HANDLE MARKDOWN WRAPPING: LLMs sometimes wrap JSON in ```json blocks - strip them\n4. GRACEFUL DEGRADATION: Return 0.5 neutral score if LLM call fails, don't crash the eval\n5. BE HARSH IN PROMPT: Tell the LLM to be harsh - bad decompositions waste expensive parallel work\n6. FOUR CRITERIA: Independence (parallel execution), Scope (right-sized), Completeness (sum=whole), Clarity (actionable)\n\nKey insight: When LLM receives garbage input, it correctly scores it 0 - this is the RIGHT behavior, not an error. The LLM is judging the decomposition quality, and garbage decomposition = 0 score.\n\nLocation: evals/scorers/index.ts - decompositionCoherence scorer\nTest: evals/scorers/index.test.ts","created_at":"1766642888646.0","tags":"evalite,llm-as-judge,decomposition,scoring,haiku,testing"}
|
|
332
418
|
{"id":"b14efc93-45be-4ec2-9ca8-ee14f23a88b4","information":"{\"id\":\"pattern-1766349513132-nmk7j3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:38:33.132Z\",\"updated_at\":\"2025-12-21T20:38:33.132Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766349513379.0","metadata":"{\"id\":\"pattern-1766349513132-nmk7j3\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
419
|
+
{"id":"b1d09f0e-68e3-4f25-b97e-c9cc4feeb45c","information":"Planning guardrails violation detection pattern: Use discriminated union on event_type for coordinator violations (coordinator_edited_file, coordinator_ran_tests, coordinator_reserved_files, no_worker_spawned). Pattern matching approach: Check agentContext === \"coordinator\" FIRST to short-circuit worker checks, then pattern match tool names (edit/write for files, swarmmail_reserve/agentmail_reserve for reservations) and regex test bash commands for test execution patterns. Integration: Call captureCoordinatorEvent() immediately when violation detected - don't batch, don't defer. TypeScript gotcha: readonly array.includes() with string requires `as any` cast for dynamic strings. TDD approach: Write tests for each violation type independently, test non-violations, test event capture integration. Discovered: coordinators doing work is detectable in real-time via tool call inspection.","created_at":"1766610705795.0","tags":"planning-guardrails,coordinator,violations,event-capture,pattern-matching,tdd,zod,discriminated-union"}
|
|
333
420
|
{"id":"b37f55db-d1bf-4249-a757-39724bdf18f8","information":"AI SDK v6 Lesson 02-02 (Text Classification) verification: All steps pass cleanly on fresh clone. generateText + Output.array() pattern works as documented. Key progression: 1) Basic schema with z.enum for categories 2) Adding urgency field via schema extension 3) Multi-language with z.string() returns codes by default 4) Adding .describe() to language field produces full names. No compilation errors, outputs match lesson examples exactly. Students can follow this lesson without issues.","created_at":"1766455232378.0","tags":"ai-sdk,lesson-verification,text-classification,Output.array,zod,v6-patterns"}
|
|
421
|
+
{"id":"b385e8a5-4c71-44bc-a07e-2c1f5c2075af","information":"oh-my-opencode Loader Implementation: Commands, Agents, Skills, MCPs\n\n**Command Loader:**\nScans: ~/.claude/commands/*.md, ./.claude/commands/*.md\nFrontmatter: {description, agent, model, subtask, argument-hint}\nTemplate: <command-instruction>BODY</command-instruction><user-request>$ARGUMENTS</user-request>\nModel sanitization: Claude Code commands don't set model (undefined), OpenCode preserves it\n\n**Agent Loader:**\nScans: ~/.claude/agents/*.md, ./.claude/agents/*.md\nFrontmatter: {name, description, tools}\nTools parsing: \"edit,bash,webfetch\" → {edit: true, bash: true, webfetch: true}\nOutput: {description: \"(scope) desc\", mode: \"subagent\", prompt: body, tools}\n\n**Skill Loader:**\nScans: ~/.claude/skills/*/SKILL.md, ./.claude/skills/*/SKILL.md\nSymlinks: follows to actual directory\nFrontmatter: {name, description, model}\nTemplate: <skill-instruction>Base directory: PATH\\nBODY</skill-instruction><user-request>$ARGUMENTS</user-request>\nConverted to slash commands (/skill-name)\n\n**MCP Loader:**\nFiles: ~/.claude/.mcp.json, ./.claude/.mcp.json, .claude/.mcp.json\nFormat: {mcpServers: {name: {command, args, env, disabled}}}\nEnv expansion: ${VAR} → process.env.VAR\nTransform: Claude → OpenCode SDK format\nPrecedence: local > project > user\n\n**Shared Patterns:**\n1. Markdown + frontmatter parsing\n2. Scope tracking (user vs project)\n3. Error resilience (catch + continue)\n4. Description prefixing for disambiguation\n5. Deep merge (project overrides user)\n\n**Template Variables:**\n$ARGUMENTS - user slash command args\n@path - in skills, relative to skill dir\nBase directory - injected for file references","created_at":"1766673491623.0","tags":"oh-my-opencode,loaders,commands,agents,skills,mcps,claude-code,frontmatter"}
|
|
334
422
|
{"id":"b3c1b1c3-0c21-41a7-98cc-868df103875b","information":"When assigned a task to fix code that was already fixed: verify the current state first before making changes. In this case, projections.test.ts table names were already correct (bead_* not cell_*). The task description was outdated or the fix was already applied. Always read the file to confirm the problem exists before attempting fixes.","created_at":"2025-12-18T15:39:22.185Z"}
|
|
335
423
|
{"id":"b3cbbf0c-981a-4f4f-8fa3-45175796e338","information":"{\"id\":\"test-1765386438362-dn6i6pzsef\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:07:18.362Z\",\"raw_value\":1}","created_at":"2025-12-10T17:07:18.549Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:07:18.362Z\"}"}
|
|
336
424
|
{"id":"b465f06f-ce75-47c7-84b5-567aa10e12b0","information":"AI SDK v6 Lesson 02-03 (Automatic Summarization) verification: All steps pass cleanly. generateText + Output.object() pattern works perfectly for summarization. Key progression: 1) Basic schema with 4 string fields (headline, context, discussionPoints, takeaways) 2) Adding .describe() to each field with specific constraints (Max 5 words, Max 2 sentences, **Include names**) produces dramatically better output. Evidence: headline went from 13 words to 5 words, takeaways correctly included names (Liam Johnson, James Smith, Emma Thompson). Minor issue: lesson uses any[] type parameter which triggers linting warning - this is a lesson code quality issue, not a verification blocker. Students can follow this lesson without issues.","created_at":"1766455545834.0","tags":"ai-sdk,lesson-verification,automatic-summarization,Output.object,generateText,zod,v6-patterns,schema-refinement,describe"}
|
|
@@ -341,36 +429,48 @@
|
|
|
341
429
|
{"id":"b6a9b8dc-0da0-43eb-ba32-14d4bb2bd88b","information":"@badass UI Components Reference (Dec 2024): Key extractable components from ai-hero:\n\n**High Priority (Ready to Extract):**\n1. DateTimePicker - apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/date-time-picker/date-time-picker.tsx:40 - React Aria based, self-contained\n2. CRUD Dialog Pattern - apps/ai-hero/src/app/admin/tags/tag-crud-dialog.tsx:34 - Generic pattern, 90% identical across uses\n3. Sidebar Layout - apps/ai-hero/src/app/(content)/cohorts/[slug]/_components/cohort-sidebar.tsx:13 - Sticky with mobile floating CTA\n\n**Medium Priority (Needs Refactoring):**\n4. withResourceForm HOC - apps/ai-hero/src/components/resource-form/with-resource-form.tsx:219 - Needs dependency injection to remove app-specific imports\n5. ListResourcesEdit - apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84 - Needs search provider abstraction (currently Typesense-coupled)\n\n**Shared UI Package (Already Extracted):**\n- packages/ui/resources-crud/edit-resources-form.tsx:28 - Mobile/desktop responsive form\n- packages/ui/resources-crud/create-resource-form.tsx - Resource creation\n\n**Architecture Patterns:**\n- Config-driven forms: Zod schema + config object equals full CRUD UI\n- Tool panel system: Pluggable tools with icon + component\n- Batch operations: Drag-and-drop with debounced batch saves\n- Factory pattern: createWorkshopFormConfig() for type-safe config","created_at":"2025-12-18T15:50:07.107Z"}
|
|
342
430
|
{"id":"b6b71724-e02b-42c5-8c34-e4ae6109aa00","information":"pdf-library AutoTagger auto-accept pattern: (1) Use extractRAGContext() to find relevant concepts via content embedding (threshold 0.5, limit 5) and add to LLM prompt - helps LLM match existing instead of proposing duplicates. (2) After LLM enrichment, call autoAcceptProposals() which generates embeddings for each proposal, checks findSimilarConcepts(embedding, 0.85) for duplicates, and auto-inserts novel concepts with taxonomy.addConcept() + storeConceptEmbedding(). (3) AutoTagger.enrich() now requires TaxonomyService | Ollama dependencies (updated interface). (4) validateProposedConcepts exported for testing. (5) JSON file workflow completely removed - no more manual proposal review, all automatic via embedding similarity.","created_at":"1766257443255.0","tags":"pdf-library,autotagger,taxonomy,embeddings,rag,auto-accept,deduplication"}
|
|
343
431
|
{"id":"b6e2cc14-5344-49a3-8ebc-3bad012f1d38","information":"FTS5 MATCH queries in libSQL/SQLite require quoting search terms to avoid operator parsing issues. Without quotes, hyphens are parsed as MINUS operators. Example: \"unique-keyword-12345\" → \"unique\" MINUS \"keyword\" → \"no such column: keyword\" error. Solution: Wrap query in double quotes, escaping existing quotes: `const quotedQuery = `\"${searchQuery.replace(/\"/g, '\"\"')}\"`;`. Affects all FTS5 full-text search implementations.","created_at":"1766260792853.0","metadata":"{\"file\":\"packages/swarm-mail/src/memory/store.ts\",\"function\":\"ftsSearch\",\"error_pattern\":\"no such column: keyword\"}","tags":"fts5,libsql,sqlite,full-text-search,query-syntax,gotcha"}
|
|
432
|
+
{"id":"b8024bee-f39f-4b5b-bb87-949b6313ae88","information":"Swarm coordinator template rewrite (mjl0n8rylpp): Shifted from score-threshold-driven to outcome-driven structure. Key changes:\n\n1. **\"What Good Looks Like\" section** - Added prominent ✅/❌ behavioral examples at top showing ideal vs anti-pattern coordinator behavior. Examples: spawning researcher, loading skills, checking inbox, delegating planning, never reserving files, reviewing worker output.\n\n2. **Event tracking integration** - Added \"Event Tracking Reference\" table mapping coordinator actions to tracked events (session_initialized, skill_loaded, researcher_spawned, inbox_checked, blocker_resolved, scope_change_approved/rejected, review_completed). Events drive eval scoring.\n\n3. **Mandatory inbox monitoring** - Elevated from optional to MANDATORY in step 7 with explicit frequency guidance (every 5-10 minutes). Added intervention triggers table with event tracking.\n\n4. **Skill loading prominence** - Made skills_use() MANDATORY in step 2 with task-type triggers. Added ✅/❌ examples showing consequences of skipping skill loading.\n\n5. **Researcher spawning emphasis** - Kept step 2.5 researcher section, added explicit \"SPAWN RESEARCHER IF NEEDED - MANDATORY CHECK\" header with event tracking. Added ✅/❌ examples showing context pollution from direct context7 calls.\n\n6. **Worker review enforcement** - Added step 8 \"Review Worker Output (MANDATORY)\" with swarm_review/swarm_review_feedback workflow, 3-strike rule, and event tracking.\n\n7. **Quick checklist updates** - Added event tracking annotations to each checklist item, added new items for inbox monitoring, blocker resolution, scope change handling, worker review.\n\nPattern: Show SPIRIT of good coordination through concrete examples, not \"score ≥0.8\" thresholds. Coordinators should understand WHY each action matters (context preservation, conflict prevention, learning capture) through behavioral outcomes.\n\nFile: packages/opencode-swarm-plugin/examples/commands/swarm.md (524 lines → 600 lines exactly at limit).","created_at":"1766642314853.0","tags":"swarm,coordinator,template,eval-driven,behavioral-examples,event-tracking"}
|
|
344
433
|
{"id":"b833bde6-8f1e-4d3a-948e-f0eef242cab3","information":"{\"id\":\"test-1766261005894-cuvagqzbes5\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:03:25.894Z\",\"raw_value\":1}","created_at":"1766261006168.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:03:25.894Z\"}"}
|
|
345
434
|
{"id":"b89c6800-cc8a-477b-8bce-81ad325b1e87","information":"Enhanced doctor command in pdf-library with comprehensive health checks and --fix flag.\n\n**Implementation (TDD - all tests green):**\n\n1. **New Health Checks (5 total)**:\n - WAL files: existing assessWALHealth() (50 files/50MB thresholds)\n - Corrupted directories: checkCorruptedDirs() detects \" 2\" suffix pattern (\"base 2\", \"pg_multixact 2\")\n - Daemon status: async isDaemonRunning(daemonConfig) via Effect.promise\n - Ollama connectivity: library.checkReady() with try/catch\n - Orphaned data: library.repair() returns chunks/embeddings counts\n\n2. **New Functions**:\n - `checkCorruptedDirs(libraryPath, dirs)`: Returns CorruptedDirsResult with issues array\n - `assessDoctorHealth(data)`: Combines all checks into DoctorHealthResult with HealthCheck[] array\n\n3. **Auto-Repair with --fix flag**:\n - Parses opts.fix from args via parseArgs()\n - Removes corrupted directories with rmSync(path, { recursive: true, force: true })\n - Orphaned data auto-cleaned via existing repair() call\n - Shows recommendations when --fix not used\n\n4. **Key Patterns**:\n - Used Effect.gen for async flow (yield* Effect.promise for isDaemonRunning)\n - DaemonConfig requires: socketPath, pidPath, dbPath (all derived from config.libraryPath)\n - WAL health check handles non-existent pg_wal gracefully (assumes healthy)\n - All checks graceful-fail: database not existing doesn't crash, returns healthy defaults\n\n5. **Test Coverage**: 11 new tests covering checkCorruptedDirs edge cases and assessDoctorHealth combinations\n\n**Bug Prevention**: Always await isDaemonRunning with Effect.promise, never call synchronously (returns Promise<boolean>).","created_at":"2025-12-19T17:29:44.709Z","tags":"pdf-library,doctor-command,health-checks,tdd,effect-ts,cli,auto-repair"}
|
|
346
435
|
{"id":"b8f28a17-d8a2-44e1-8b72-f74e2ae3a98a","information":"{\"id\":\"test-1765653517058-z98hhewgo3r\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:18:37.058Z\",\"raw_value\":1}","created_at":"2025-12-13T19:18:37.257Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:18:37.058Z\"}"}
|
|
436
|
+
{"id":"b93d1da2-5c6f-4133-a55b-10403c072724","information":"{\"id\":\"pattern-1766636009528-k57n3z\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-25T04:13:29.528Z\",\"updated_at\":\"2025-12-25T04:13:29.528Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766636009754.0","metadata":"{\"id\":\"pattern-1766636009528-k57n3z\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
347
437
|
{"id":"b9f53e2c-8086-4bab-95b1-0529595cb2f1","information":"## Hive Database Schema Bug - Root Cause and Fix\n\n**Error:** `SQLITE_ERROR: no such column: project_key` when running hive tools\n\n**Root Cause:** The libSQL database had tables with OLD schemas that were missing the `project_key` column. Specifically:\n- `messages` table was missing `project_key` column\n- `events` table had wrong schema (aggregate_id/aggregate_type/payload instead of project_key/timestamp/data)\n\n**Why it happened:**\n1. Tables were created by an older version of the code with different schema\n2. `CREATE TABLE IF NOT EXISTS` doesn't update existing tables\n3. `CREATE INDEX IF NOT EXISTS idx_messages_project ON messages(project_key)` failed because the column didn't exist\n4. The `schema_version` table was either missing or had incorrect entries\n\n**Debug approach that worked:**\n1. Added `SWARM_DEBUG=1` environment variable check\n2. Added console.error logging at each step of schema initialization\n3. Traced the exact SQL statement that failed\n4. Used `PRAGMA table_info(tablename)` to check actual column structure\n\n**Fix:**\n1. Drop and recreate tables with correct schema (safe if empty)\n2. Or use ALTER TABLE to add missing columns\n3. Ensure schema_version table accurately reflects applied migrations\n4. Delete fake schema_version entries and let migrations run properly\n\n**Prevention:**\n- Always check schema_version table matches actual database state\n- Use `swarm db` command to verify database health\n- Consider adding schema validation on startup that compares expected vs actual columns","created_at":"1766294004408.0","tags":"debugging,libsql,schema,migrations,hive,database,project_key"}
|
|
348
438
|
{"id":"ba639de8-848f-4ced-92f5-9401dc270417","information":"{\"id\":\"test-1765664182311-clxw0y6xk4b\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:16:22.311Z\",\"raw_value\":1}","created_at":"2025-12-13T22:16:22.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:16:22.311Z\"}"}
|
|
439
|
+
{"id":"ba964b81-c9cf-44dd-8893-5a1de327d3c0","information":"Compaction prompt quality scorers created with TDD approach. Eval test infrastructure uses `.evalite-test.ts` suffix and is run via `bunx evalite`, NOT `bun test`. Regular `bun test` in evals/ directory fails with \"Export named 'inject' not found\" error due to evalite.config.ts interference. Solution: Either use `.evalite-test.ts` for minimal export checks (like outcome-scorers pattern), OR move tests to src/ directory for full unit testing. Epic ID pattern is mjkw + 7 base36 chars = 11 chars total, not 12. Regex must be `/mjkw[a-z0-9]{7,}/` not `{12}`.","created_at":"1766634990383.0","tags":"evalite,testing,tdd,compaction,scorers,bun-test"}
|
|
349
440
|
{"id":"baaacd02-244f-4098-86b9-cd5c779c2e35","information":"{\"id\":\"pattern-1766263664511-zduc3o\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:47:44.511Z\",\"updated_at\":\"2025-12-20T20:47:44.511Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263664729.0","metadata":"{\"id\":\"pattern-1766263664511-zduc3o\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
350
441
|
{"id":"bab8d96e-4698-48b2-a1ba-aa3252938028","information":"pdf-brain enrichment bug: concepts extracted but not stored in document_concepts join table.\n\nROOT CAUSE: AutoTagger.enrich() returns concepts array but never calls taxonomy.assignToDocument(). The concepts end up only in the tags array (as leaf names without category prefix, e.g., \"instructional-design\" instead of \"education/instructional-design\").\n\nDATA STATE:\n- documents.tags: [\"instructional-design\", \"cognitive-load\", ...] (leaf names only)\n- concepts.id: \"education/instructional-design\" (full path with category)\n- document_concepts: EMPTY (join table never populated)\n- concept_embeddings: 1641 rows (all concepts have embeddings)\n\nFIX REQUIRED:\n1. Backfill: Match tags to concepts by normalizing and comparing leaf portions\n2. Fix enrichment: After LLM returns concepts array, call taxonomy.assignToDocument() for each\n\nBACKFILL SCRIPT: scripts/migration/backfill-document-concepts.ts\n- Builds tag -> concept_id mapping (leaf + pref_label + alt_labels)\n- For each doc, matches tags to concepts\n- Inserts into document_concepts with confidence=0.8, source=\"backfill\"\n\nWHY THIS MATTERS: Without document_concepts populated, concept embeddings are useless for search expansion. The whole point is: query -> find similar concepts -> expand to all docs tagged with those concepts.","created_at":"1766331389808.0","tags":"pdf-brain,enrichment,bug,taxonomy,concepts,document_concepts,backfill"}
|
|
351
442
|
{"id":"bad6e714-cf98-4609-a8f0-44c2e636901e","information":"Added legacy semantic-memory migration prompt to swarm setup CLI. Pattern follows existing .beads migration flow: 1) Check legacyDatabaseExists() after dependency checks, before model selection. 2) Call getMigrationStatus() to show counts (total, withEmbeddings). 3) Prompt user with p.confirm. 4) Create target DB with getSwarmMail(cwd). 5) Run migrateLegacyMemories({ targetDb, onProgress }) with spinner. 6) Show detailed results (migrated, skipped, failed). Key insight: Migration functions are exported from swarm-mail/src/memory/migrate-legacy.ts and re-exported from swarm-mail/src/index.ts. Needed to rebuild swarm-mail package after adding exports. Placement: lines 1672-1735 in bin/swarm.ts, right after .beads migration, before model selection.","created_at":"2025-12-18T21:09:36.891Z","tags":"cli,migration,semantic-memory,swarm-mail,legacy-migration,setup"}
|
|
352
443
|
{"id":"bbad7825-bc3b-4cc1-ad63-698d8a81889e","information":"TDD pattern for Pino logger instrumentation in existing code: Use lazy initialization (getLog() function instead of module-level const) to enable test mocking. Pattern: `let _logger: any | undefined; function getLog() { if (!_logger) { _logger = createChildLogger(\"module\"); } return _logger; }`. Mock in tests with: `mock.module(\"./logger\", () => ({ createChildLogger: () => mockLogger }))` BEFORE importing the module. This allows tests to capture log calls without hitting the actual file system. Applied successfully in compaction-hook.ts with 14 log points across START, GATHER (swarm-mail, hive), DETECT, INJECT, COMPLETE phases. All tests pass (18/18).","created_at":"1766593404339.0","tags":"tdd,testing,pino,logging,mocking,instrumentation,lazy-initialization"}
|
|
353
444
|
{"id":"bc1b197e-9d63-4466-8c7d-d453e0949840","information":"BeadsAdapter interface pattern for swarm-mail: Interface split into 6 sub-adapters (BeadAdapter, DependencyAdapter, LabelAdapter, CommentAdapter, EpicAdapter, QueryAdapter, BeadsSchemaAdapter) combined into single BeadsAdapter, matching SwarmMailAdapter pattern. Migration v6 adds beads tables to shared PGLite database (shares schema_version with swarm-mail migrations v1-v5). Projections use updateProjections() dispatcher pattern to route events to handlers. Blocked cache uses recursive CTE for transitive blocker lookup with depth limit (10). Dirty tracking marks beads for incremental JSONL export. Key insight: Share same PGLite instance and migration system with swarm-mail - don't create separate database. Test pattern: wrapPGlite() creates DatabaseAdapter from PGlite instance for dependency injection in tests.","created_at":"2025-12-16T21:51:14.238Z"}
|
|
354
445
|
{"id":"bc574f69-e850-4327-b939-a8e2e96c08eb","information":"Workflow logging constraint VERIFIED: Files with \"use workflow\" or \"use step\" directives CANNOT import from ~/lib/logger (pino-based). They MUST use wlog from ~/lib/workflow-logger. The workflow bundler runs in a restricted environment without Node.js modules like pino or node:crypto. Initialize clients (LinearClient, Redis, Index, Search) inline in steps with explicit env var checks - do not import singletons from lib modules. Pattern: const apiKey = process.env.LINEAR_API_KEY; if (!apiKey) throw new Error(...); const linear = new LinearClient({ apiKey });","created_at":"1766517141969.0","tags":"workflow,vercel-workflow,logging,wlog,pino,linear-sdk"}
|
|
446
|
+
{"id":"bc7098e4-698f-49af-bf96-8b39511c2a9a","information":"Wave 1-2 Integration Pattern: Replacing stubs with real services in adapter.ts\n\n**Context:** Wiring up memory intelligence services (auto-tagging, memory-linking, entity-extraction, memory-operations) to replace stub implementations in the adapter.\n\n**Dynamic Import Pattern (critical for circular dependencies):**\n```typescript\nconst { generateTags } = await import(\"./auto-tagger.js\");\nconst result = await generateTags(content, existingTags, config);\n```\n\n**Type Import Aliasing (avoid conflicts):**\n```typescript\nimport type { AutoTagResult as AutoTagServiceResult } from \"./auto-tagger.js\";\nexport type AutoTagResult = AutoTagServiceResult; // Re-export for backward compat\n```\n\n**Graceful Degradation (mandatory for LLM services):**\n```typescript\ntry {\n const { analyzeMemoryOperation } = await import(\"./memory-operations.js\");\n const result = await analyzeMemoryOperation(info, memories, {\n model: \"anthropic/claude-haiku-4-5\",\n apiKey: process.env.AI_GATEWAY_API_KEY || \"\",\n });\n return mapResult(result);\n} catch (error) {\n console.warn(\"Service failed, using fallback:\", error);\n return fallbackHeuristics();\n}\n```\n\n**Type Mapping (service types → adapter types):**\n- MemoryOperation.type → SmartOpResult.operation\n- MemoryOperation.memoryId → SmartOpResult.targetId\n- MemoryLink properties are camelCase (targetId, linkType, not snake_case)\n\n**Drizzle ORM for Inserts (not raw SQL):**\n```typescript\nawait db.insert(memoryLinks).values({\n id: linkId,\n source_id: id,\n target_id: link.targetId,\n link_type: link.linkType,\n}).onConflictDoNothing();\n```\n\n**Testing Strategy:** Write integration tests that verify graceful degradation, not LLM behavior. Tests should pass without AI_GATEWAY_API_KEY by checking that services attempt real calls and fall back gracefully.\n\n**Common Pitfall:** Don't modify tests written for stubs to expect real LLM behavior - those tests should be left to fail (expected) until someone rewrites them for graceful degradation.","created_at":"1766675641586.0","metadata":"{\"task\":\"integration\",\"wave\":\"1-2\",\"project\":\"swarm-mail\"}","tags":"integration,wave-1-2,adapter,dynamic-imports,graceful-degradation,type-mapping"}
|
|
355
447
|
{"id":"bce57c41-f979-4cad-aae1-8def03a13bc2","information":"{\"id\":\"pattern-1766349592445-bafgdz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:39:52.445Z\",\"updated_at\":\"2025-12-21T20:39:52.445Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766349592662.0","metadata":"{\"id\":\"pattern-1766349592445-bafgdz\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
356
448
|
{"id":"bcf542b5-34a8-4de4-8cc8-dc414784d0f5","information":"LibSQL vector search requires explicit vector index creation. Without the index, vector_top_k() fails with \"failed to parse vector index parameters\". \n\nThe required pattern for libSQL memory schema:\n1. Create table with F32_BLOB(1024) embedding column\n2. Create FTS5 virtual table for fallback search\n3. Create triggers (INSERT, UPDATE, DELETE) to sync FTS\n4. **CRITICAL**: CREATE INDEX idx_memories_embedding ON memories(libsql_vector_idx(embedding))\n\nThis pattern is now centralized in createTestMemoryDb() utility in swarm-mail/src/memory/test-utils.ts. Reference: adapter.test.ts createTestDb() function.\n\nCommon failure mode: Manual schema setup in tests often misses step 4, causing vector search to fail silently or with cryptic errors.","created_at":"1766257338726.0","metadata":"{\"source\":\"swarm-task\",\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjenx80qhqn\",\"epic_id\":\"opencode-swarm-monorepo-lf2p4u-mjenx80mqiv\"}","tags":"libsql,vector-search,testing,memory,schema-setup"}
|
|
449
|
+
{"id":"bd30bcf2-df23-449c-a604-80a9fa1d4fb6","information":"Drizzle self-referencing foreign keys in SQLite require plain text column declaration: When creating a self-referencing foreign key in Drizzle (e.g., memories.superseded_by → memories.id), you cannot use .references(() => memories.id) because the table is not fully defined yet. Instead, declare the column as text(\"superseded_by\") without the .references() call, and add the REFERENCES constraint in the raw SQL CREATE TABLE statement: superseded_by TEXT REFERENCES memories(id). The foreign key will work correctly at runtime, Drizzle just can't express self-references in schema definitions. This applies to all self-referencing tables in libSQL/SQLite.","created_at":"1766643802756.0","metadata":"{\"pattern\":\"schema-definition\",\"severity\":\"medium\",\"component\":\"swarm-mail\"}","tags":"drizzle,sqlite,libsql,foreign-keys,self-reference,schema"}
|
|
357
450
|
{"id":"bd5bcd8d-ac35-48ba-aee0-8efb657ab236","information":"{\"id\":\"pattern-1766264316797-qy4n51\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:58:36.797Z\",\"updated_at\":\"2025-12-20T20:58:36.797Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766264317047.0","metadata":"{\"id\":\"pattern-1766264316797-qy4n51\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
358
451
|
{"id":"bd7187c4-23be-4081-a315-c2c897fef72f","information":"## Session Context Capture (Dec 19, 2025)\n\n### Current Bug: \"Invalid Date\" error on hive_query\n\n**Symptom:** `hive_query` returns `{\"success\":false,\"error\":{\"code\":\"HiveError\",\"message\":\"Failed to query cells: Invalid Date\"}}`\n\n**Root Cause Investigation:**\n- JSONL file parses fine with jq\n- 17 lines in .hive/issues.jsonl, all status \"open\"\n- Date fields (created_at, updated_at) look valid: \"2025-12-19T17:14:05.371Z\"\n- Error comes from JavaScript Date constructor somewhere in swarm-mail/src/hive/\n\n**Likely culprits (from grep):**\n- `jsonl.ts:207-210` - `new Date(bead.created_at as number)` - casting string to number?\n- `jsonl.ts:347-348` - `new Date(cellExport.closed_at)` - closed_at might be undefined\n- `jsonl.ts:465-468` - same pattern\n- `merge.ts:135` - `new Date(cell.closed_at)` on potentially undefined\n\n**Hypothesis:** Code expects timestamps as numbers but JSONL has ISO strings, OR closed_at is undefined and being passed to Date constructor.\n\n### Open P1 Bugs (from earlier query)\n1. `mjd4pdh5651` - Make hive_sync bidirectional (import from JSONL after git pull)\n2. `mjd4pjujc7e` - Fix overly strict task_id regex requiring 3+ segments\n\n### Recent Completed Work\n- Smart ID resolution (resolvePartialId) - committed\n- Auto-sync at hive_create_epic, swarm_complete, process exit - committed \n- Removed max_subtasks limit of 10 - committed\n- Changeset pushed, waiting for CI to create version PR\n\n### Hive Viewer Epic Created\n- Epic ID: `mjd4yu2aguv` - 16 subtasks across 4 phases\n- Phase 1 (spike): OpenTUI hello world, JSONL parser, cell list component\n- Not yet started - was about to spawn workers\n\n### Files Modified This Session\n- packages/opencode-swarm-plugin/src/hive.ts (auto-sync)\n- packages/opencode-swarm-plugin/src/swarm-orchestrate.ts (auto-sync in swarm_complete)\n- packages/opencode-swarm-plugin/src/swarm-decompose.ts (removed max limit)\n- packages/opencode-swarm-plugin/src/swarm-prompts.ts (removed max limit)\n- .changeset/hive-smart-id-resolution.md (updated with all changes)","created_at":"2025-12-19T17:30:18.475Z","tags":"session-context,bug,invalid-date,hive-query,swarm-mail,jsonl,december-2025"}
|
|
452
|
+
{"id":"be53e957-7b29-45d0-9ace-8fa1c6b14ebd","information":"Evalite scorer pattern for coordinator evaluation: Use createScorer() from evalite with scorer function that returns { score: 0-1, message: string }. When no baseline exists for scoring (e.g., no decomposition event but workers were spawned), prefer realistic fallback (1.0 if delegation happened) over arbitrary middle ground (0.5). This prevents penalizing coordinators who skip formal decomposition but still delegate work. Example: spawnEfficiency returns 1.0 when workers spawned without decomp event, not 0.5.","created_at":"1766641051152.0","tags":"evalite,scoring,coordinator,fallback-strategy,swarm-mail"}
|
|
359
453
|
{"id":"be8c1c00-1128-4c4e-8984-6dc93db50610","information":"Auto-sync pattern in swarm_complete: When calling hive_sync from within a tool that operates on a specific project_key, you MUST temporarily set the hive working directory using setHiveWorkingDirectory(project_key) before calling hive_sync.execute(), then restore it in a finally block. Why: hive_sync uses getHiveWorkingDirectory() which defaults to process.cwd(), not the project_key argument. Without this, sync writes to wrong directory. Pattern: const prev = getHiveWorkingDirectory(); setHiveWorkingDirectory(projectKey); try { await hive_sync.execute({}, ctx); } finally { setHiveWorkingDirectory(prev); }","created_at":"2025-12-19T17:02:17.235Z","metadata":"{\"type\":\"gotcha\",\"pattern\":\"working-directory-context\",\"component\":\"swarm-orchestrate\"}","tags":"hive,sync,swarm,working-directory,context-management"}
|
|
360
454
|
{"id":"bf3948ec-720b-474f-a06b-463e142ca769","information":"{\"id\":\"pattern-1766296937208-dulm1q\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:02:17.208Z\",\"updated_at\":\"2025-12-21T06:02:17.208Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766296937442.0","metadata":"{\"id\":\"pattern-1766296937208-dulm1q\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
455
|
+
{"id":"bfae539e-3718-4438-a33a-dbbb2b7b3c13","information":"hive_start, hive_close, and hive_update all correctly use resolvePartialId from swarm-mail. Pattern: `const cellId = await resolvePartialId(adapter, projectKey, input.id) || input.id;`. The `|| input.id` fallback is correct - if resolution returns null (no match), try original ID and let adapter throw proper error. All error handling includes helpful messages for \"Cell not found\" and \"Ambiguous hash\" cases. Integration tests in hive.integration.test.ts verify short hash resolution works (lines 628-834).","created_at":"1766617721573.0","tags":"hive,resolvePartialId,short-id,partial-hash,error-handling"}
|
|
361
456
|
{"id":"bffb1fa4-68f1-4c73-b4fb-909a1c5ee4d7","information":"{\"id\":\"pattern-1766260932025-539g3y\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:02:12.025Z\",\"updated_at\":\"2025-12-20T20:02:12.025Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260932299.0","metadata":"{\"id\":\"pattern-1766260932025-539g3y\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
362
457
|
{"id":"c0144f56-dcd6-4aba-a19e-5f10b7f7c68b","information":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:50.318Z\",\"updated_at\":\"2025-12-15T03:58:50.318Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:50.643Z","metadata":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
458
|
+
{"id":"c03312bc-0e9b-4adb-a303-4d782b9c9190","information":"oh-my-opencode Multi-Agent Architecture: 6 specialized agents. explore (Grok, read-only): fast codebase search, 3+ parallel tools, structured output, absolute paths required. librarian (Sonnet 4.5, read-only): external docs/code, 4-phase classification (CONCEPTUAL/IMPLEMENTATION/CONTEXT/COMPREHENSIVE), GitHub permalinks mandatory, 3-6+ parallel tools, year-aware filtering. oracle (GPT 5.2/o3, read-only): strategic advisor, deep reasoning, pragmatic minimalism, effort tagging (Quick/Short/Medium/Large). Agents scoped via tools: {write: false, edit: false, ...}. Novel: read-only prevents conflicts, structured outputs for parseability, type-specific tools (explore→grep_app, librarian→context7).","created_at":"1766673457705.0","tags":"oh-my-opencode,multi-agent,swarm,explore,librarian,oracle"}
|
|
363
459
|
{"id":"c0c2dad4-c952-4a49-91d1-119fb33b477b","information":"SwarmMail database path migration to global location: Changed getDatabasePath() from project-local .opencode/streams.db to always return global ~/.opencode/swarm-mail.db. Added getOldProjectDbPaths() helper that returns both old libSQL path ({projectPath}/.opencode/streams.db) and old PGlite directory path ({projectPath}/.opencode/streams/) for migration detection. The getDatabasePath() signature remains backward-compatible - still accepts projectPath parameter but ignores it. This consolidates all SwarmMail data into a single global database for simpler management.","created_at":"1766343594886.0","tags":"swarm-mail,database-path,migration,global-db,libsql"}
|
|
364
460
|
{"id":"c17bc88f-4015-4ab8-b0c6-cff0c7955eb5","information":"--information","created_at":"2025-12-14T22:42:53.190Z","tags":"documentation,semantic-memory,cli-syntax,gotcha,agent-reference"}
|
|
365
461
|
{"id":"c1e3d77d-0183-4f45-80ba-a6d6318f0868","information":"Cell ID generation now uses project name from package.json as prefix instead of generic 'bd-'. Format is {slugified-name}-{hash}-{timestamp}{random}, e.g., swarm-mail-lf2p4u-mjbneh7mqah. Fallback is 'cell' prefix when package.json not found or has no name field. Implementation uses fs.readFileSync + fs.existsSync at ID generation time (lazy load), not adapter initialization. Slugification replaces @/spaces/special chars with dashes, removes leading/trailing dashes. Hash can be negative (use [-a-z0-9]+ regex pattern). Backward compatible - no changes to validation, existing bd-* IDs work fine. TDD approach: wrote failing tests first, implemented to pass, refactored to use ES module imports.","created_at":"2025-12-18T16:29:37.218Z"}
|
|
366
462
|
{"id":"c26bff59-2549-44e8-abf0-f8d7fe952889","information":"{\"id\":\"test-1766297015294-ot5uubgret\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T06:03:35.294Z\",\"raw_value\":1}","created_at":"1766297015498.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T06:03:35.294Z\"}"}
|
|
367
463
|
{"id":"c27724f6-a65c-4641-830d-83a535f95c6b","information":"JSONL file format bug: `wc -l` showed 0 lines despite having content because records were concatenated with `lines.join(\"\\n\")` which doesn't add a trailing newline. The fix: (1) `serializeToJSONL()` now returns `JSON.stringify(cell) + \"\\n\"` and (2) `exportToJSONL()` uses `lines.join(\"\")` since each line already has `\\n`. Root cause: JSONL spec requires each line to end with newline, including the last line. Without trailing newline, `wc -l` returns 0 because it counts newline characters, not lines. Tests: verify `jsonl.endsWith(\"\\n\")` and `(jsonl.match(/\\n/g) || []).length === recordCount`.","created_at":"2025-12-19T16:18:17.706Z","tags":"jsonl,newlines,file-format,wc,unix-tools,bugs"}
|
|
464
|
+
{"id":"c2bb8f11-76c6-4cb0-89f1-5cc6b5fa787b","information":"Evalite createScorer() API for composite scorers: createScorer returns an async FUNCTION, not an object with .scorer property. \n\nWRONG usage in composite scorers:\n```typescript\nconst scores = {\n child: childScorer.scorer({ output, expected }),\n};\n```\n\nCORRECT usage:\n```typescript\nconst scores = {\n child: await childScorer({ output, expected, input }),\n};\n```\n\nKey points:\n- createScorer returns the scorer function directly\n- Scorer functions are async and return MaybePromise<Score>\n- Must await the call\n- Must pass { output, expected, input } - all three parameters\n- Return type has nullable score: use ?? 0 when computing weighted averages\n- Return object has .score (number | null), not .message (message goes somewhere else in evalite framework)\n\nThis bit TWO files in opencode-swarm-plugin: coordinator-discipline.ts and compaction-scorers.ts, both had overallDiscipline/compactionQuality composite scorers calling .scorer() which doesn't exist.\n\nFixed by making composite scorer async and awaiting child scorer calls directly.","created_at":"1766633827874.0","tags":"evalite,scorers,composite-scorers,async,api-usage"}
|
|
368
465
|
{"id":"c373ecc8-5a84-44d7-8b5e-ba4f65f92a15","information":"createMemoryAdapter signature change in opencode-swarm-plugin: Changed from accepting `SwarmDb` (Drizzle client) to `DatabaseAdapter` for consistency with swarm-mail's getDatabase() return type. Internally converts using `toSwarmDb()` helper. This aligns with the pattern used throughout swarm-mail where DatabaseAdapter is the abstraction layer and Drizzle is an implementation detail. Callers now pass `swarmMail.getDatabase()` directly without needing to call `toSwarmDb()` themselves.\n\nCritical discovery: swarm-mail's `createLibSQLMemorySchema` in memory/libsql-schema.ts is outdated - missing columns: `tags TEXT DEFAULT '[]'`, `updated_at TEXT DEFAULT (datetime('now'))`, `decay_factor REAL DEFAULT 1.0`. The Drizzle schema in db/schema/memory.ts has these columns but the raw SQL schema doesn't. swarm-mail's own tests (store.drizzle.test.ts) work around this by creating the schema manually. This causes test failures when using `createLibSQLMemorySchema` - tests must create schema manually until swarm-mail is fixed.","created_at":"1766256829374.0","metadata":"{\"project\":\"opencode-swarm-plugin\",\"affected_files\":[\"packages/opencode-swarm-plugin/src/memory.ts\",\"packages/opencode-swarm-plugin/src/memory-tools.ts\"]}","tags":"typescript,swarm-mail,memory,database-adapter,drizzle,schema"}
|
|
369
466
|
{"id":"c39ece10-3ed4-4b70-9998-7da626aa96ec","information":"{\"id\":\"pattern-1766262544063-se5leq\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:29:04.063Z\",\"updated_at\":\"2025-12-20T20:29:04.063Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262544311.0","metadata":"{\"id\":\"pattern-1766262544063-se5leq\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
467
|
+
{"id":"c3f6bea0-60d9-412d-ba5a-dad108f089d2","information":"JSONL Migration Script Pattern for Session Re-attribution:\n\n**Problem:** Session events logged to unknown.jsonl need re-attribution to proper session files based on correlation keys (epic_id).\n\n**Solution Components:**\n\n1. **Atomic File Writes** (crash-safe):\n - Write to temp file in SAME directory (atomic rename requires same filesystem)\n - Use rename (not move) for POSIX atomicity guarantee\n - Sync directory after rename to flush metadata\n\n2. **Idempotency via Fingerprinting**:\n - Create lightweight fingerprints: {epic_id, timestamp, event_type}\n - Build Set of existing fingerprints before appending\n - Filter new events to exclude those already present\n - Running script twice = no duplicates\n\n3. **Index Building Strategy**:\n - Read all existing session files once at start\n - Build Map<epic_id, session_id> for O(1) lookups\n - For unmatched epic_ids, generate new session IDs\n\n4. **Session ID Generation**:\n - Format: ses_<22-char-base58>\n - Use base58 (avoid 0, O, I, l for readability)\n - Random generation, not derived from epic_id (cleaner)\n\n5. **Dry-run Mode** (CRITICAL for migration scripts):\n - Add --dry-run flag that prints actions without executing\n - Test with dry-run first, verify counts\n - Only then run actual migration\n\n6. **User Experience**:\n - Progress indicators (🔍, 📊, 🆕, ➕, ✅)\n - Detailed summary table at end\n - Help flag with examples\n - Package.json script alias\n\n**Code Pattern:**\n```typescript\nfunction atomicWriteFile(path: string, content: string): void {\n const dir = join(path, \"..\");\n const tempFile = `${dir}/.${Date.now()}.tmp`;\n writeFileSync(tempFile, content, \"utf-8\");\n renameSync(tempFile, path); // POSIX atomic\n execSync(`sync \"${dir}\"`); // Flush metadata\n}\n```\n\n**Testing:**\n- Verify idempotency: run dry-run twice, same counts\n- Test with real data but --dry-run first\n- After actual run, check for duplicates with grep/sort/uniq\n\n**Files:**\n- scripts/migrate-unknown-sessions.ts (implementation)\n- Added to package.json scripts as \"migrate:sessions\"\n\n**When to Use:**\n- Re-attributing orphaned events to sessions\n- Consolidating split session files\n- Migrating session formats\n- Any JSONL log file re-attribution by correlation keys","created_at":"1766635258043.0","tags":"jsonl,migration,session-files,idempotency,atomic-writes"}
|
|
370
468
|
{"id":"c46fb9d3-f659-4059-abac-181442f1502b","information":"Semantic zoom implementation pattern for canvas visualization: Create progressive content levels (minimal/standard/detailed/full) based on weighted formula (zoom * 0.7 + importance * 0.3). Extract different metadata fields at each level to avoid visual clutter at low zoom. Key insight: Text truncation needs both character-based (for non-canvas) and measure-based (using ctx.measureText) approaches. The measure-based approach accounts for actual rendered width. Render multi-line content with fontSize and lineHeight parameters for flexibility. Uses Catppuccin colors (cat.text, cat.subtext0, cat.teal, cat.subtext1) for semantic differentiation.","created_at":"1766343287635.0","tags":"canvas,semantic-zoom,visualization,progressive-disclosure,tufte"}
|
|
371
469
|
{"id":"c48ccddf-2e1a-4f73-8e49-f89de6bd0877","information":"Bun monorepo publishing with changesets - COMPLETE SOLUTION (Dec 2024):\n\nPROBLEM: workspace:* protocol not resolved by npm publish or changeset publish\n\nROOT CAUSE: bun pm pack resolves workspace:* from LOCKFILE, not package.json. Stale lockfile = old versions.\n\nSOLUTION (from https://ianm.com/posts/2025-08-18-setting-up-changesets-with-bun-workspaces):\n1. ci:version script: `changeset version && bun update` - the bun update syncs lockfile after version bump\n2. ci:publish script: custom scripts/publish.ts using `bun pm pack` + `npm publish <tarball>`\n3. Setup .npmrc in CI: `echo \"//registry.npmjs.org/:_authToken=$NPM_TOKEN\" > .npmrc`\n\nWHY NOT:\n- `bunx changeset publish` - uses npm publish, doesn't resolve workspace:*\n- `bun publish` - no npm token support yet (track: github.com/oven-sh/bun/issues/15601)\n- OIDC trusted publishers - works but requires repository field in package.json for provenance\n\nWORKFLOW (.github/workflows/publish.yml):\n- Setup npmrc with NPM_TOKEN secret\n- version: bun run ci:version\n- publish: bun run ci:publish\n- changesets/action handles PR creation and tagging\n\nGOTCHAS:\n- CLI bin scripts need deps in dependencies, not devDependencies\n- Each package needs repository field for npm provenance\n- files field in package.json to include dist/\n\nFILES: scripts/publish.ts, .github/workflows/publish.yml, package.json (ci:version, ci:publish scripts)","created_at":"2025-12-15T05:07:27.735Z"}
|
|
372
470
|
{"id":"c48d5b39-7afc-4b3f-887c-e6e1ba5e6ed0","information":"{\"id\":\"pattern-1766262135955-k8e6k5\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:22:15.955Z\",\"updated_at\":\"2025-12-20T20:22:15.955Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262136180.0","metadata":"{\"id\":\"pattern-1766262135955-k8e6k5\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
471
|
+
{"id":"c51dd6d0-8783-4b9a-bb3e-000073d62ee9","information":"Evalite eval scripts pattern: Added three npm scripts for running Evalite evals in opencode-swarm-plugin:\n- eval:run - runs all evals in evals/ directory\n- eval:decomposition - runs specific decomposition eval suite\n- eval:coordinator - runs specific coordinator discipline eval suite\n\nAll use `bunx evalite run <path>` pattern. Evalite is in devDependencies, no need for global install. Scripts execute correctly even if evals themselves have issues (missing DB tables, API keys) - that's expected during development.\n\nDocumentation pattern: Added \"Evaluation Pipeline\" section to main README showing what gets evaluated, data sources, and custom scorers. Added \"Data Capture\" section to evals/README.md explaining what data is captured and where it's stored (.opencode/eval-data.jsonl, sessions/, swarm-mail database).","created_at":"1766619642361.0","tags":"evalite,npm-scripts,documentation,testing,opencode-swarm-plugin"}
|
|
373
472
|
{"id":"c6121593-3fb2-4af2-b68a-ecfb5fe82a3d","information":"Nitro API route pattern for cron jobs with Vercel Workflow integration: Import `{ start } from \"workflow/api\"` (NOT \"workflow\") to trigger workflows from API routes. Use `defineEventHandler` wrapper, extract query params with `getQuery(event)`, and start workflow with `await start(workflowFn, [argsObject])`. The workflow args must be in an array even for a single object parameter. Return workflow run ID for tracking. Cron config in vercel.json: add to \"crons\" array with \"path\" and \"schedule\" (cron expression). Typecheck may fail on API routes outside build context - always verify with `pnpm build` instead. Logger from ~/lib/logger works in API routes (NOT workflow files which need wlog).","created_at":"1766517348451.0","tags":"nitro,vercel-workflow,cron,api-routes,pattern"}
|
|
473
|
+
{"id":"c7041747-8c86-48cc-888c-415f1b41f05f","information":"Eval capture pipeline wiring pattern for swarm tools: When wiring eval-capture functions (captureDecomposition, captureSubtaskOutcome, finalizeEvalRecord) into swarm tools, follow this pattern:\n\n1. Add optional params to tool schema (project_path, epic_id, etc.)\n2. Use dynamic import to avoid circular deps: const { finalizeEvalRecord } = await import(\"./eval-capture.js\")\n3. Wrap call in try-catch with console.warn (non-fatal) - eval capture should never block tool execution\n4. Include result in response object for visibility\n5. Write tests using spyOn() from bun:test to verify wiring\n\nThis pattern was used for:\n- swarm_validate_decomposition → captureDecomposition\n- swarm_complete → captureSubtaskOutcome \n- swarm_record_outcome → finalizeEvalRecord\n\nAll eval capture is non-fatal - if capture fails, tool execution continues.","created_at":"1766620280343.0","tags":"swarm,eval-capture,testing-patterns,integration-patterns"}
|
|
374
474
|
{"id":"c76fd51e-f15f-4f2c-9ca5-f3853806deef","information":"@badass/core TDD patterns successfully applied: Wrote characterization tests FIRST to document actual behavior (what IS) before behavior tests (what SHOULD). Key learnings: 1) z.coerce.date() creates new Date instances, so use .getTime() for equality checks not reference equality. 2) Zod .omit() strips fields silently, doesn't throw - test with .not.toHaveProperty(). 3) composeMiddleware in @badass/core runs middlewares sequentially (await first, then second), NOT in parallel - order matters. 4) Effect detection checks for \"_tag\" property, works for Effect.succeed() but NOT Effect.gen() which uses \"_id\". 5) Characterization tests caught 6 wrong assumptions about behavior before writing implementation-dependent tests. This validates the TDD pattern: write failing test, observe actual behavior, update test to match reality.","created_at":"2025-12-18T16:32:11.709Z","tags":"tdd,characterization-tests,badass-core,zod,effect-ts,middleware,testing-patterns"}
|
|
375
475
|
{"id":"c8a60a99-af35-450c-94c9-2e664b91ec71","information":"{\"id\":\"test-1766263568973-e0ugyob6fjf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:46:08.973Z\",\"raw_value\":1}","created_at":"1766263569199.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:46:08.973Z\"}"}
|
|
376
476
|
{"id":"c8c9d415-4351-40c4-8297-12d41043abcc","information":"{\"id\":\"test-1766261665641-byvzo7wnf4o\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:14:25.641Z\",\"raw_value\":1}","created_at":"1766261665906.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:14:25.641Z\"}"}
|
|
@@ -381,6 +481,7 @@
|
|
|
381
481
|
{"id":"c9d0eaaf-afb7-4c54-87f0-8ecb79bfb8eb","information":"Git-synced memories implementation pattern: Export memories to JSONL without embeddings (too large, ~4KB per memory). Store id, information, metadata, tags, confidence, created_at. Import skips duplicates by ID. Bidirectional sync: import from file first, then export all to file. Integration with hive_sync: after flushing cells to issues.jsonl, also sync memories.jsonl. Memory sync is optional - wrapped in try/catch so it doesn't fail the main sync. Key insight: PGlite returns JSONB as object not string, need to handle both cases when parsing metadata.","created_at":"2025-12-19T03:01:14.081Z","metadata":"{\"files\":[\"packages/swarm-mail/src/memory/sync.ts\"],\"pattern\":\"git-synced-memories\"}","tags":"memory-sync,jsonl,git-sync,hive,swarm-mail"}
|
|
382
482
|
{"id":"ca35365f-9fd9-4889-a1bd-1a44c1bae7ab","information":"## PGLite Removal Investigation - Effect Primitives Status\n\n### Finding: Effect-TS Durable Primitives Are NOT Used\n\nSearched for usage of DurableCursor, DurableMailbox, DurableLock, DurableDeferred across the codebase:\n\n1. **opencode-swarm-plugin/src/** - ZERO imports or usage\n2. **Only references found** - in swarm-mail's own dist/*.d.ts files (self-referential)\n\n### Effect Primitives Location\n- `packages/swarm-mail/src/streams/effect/cursor.ts`\n- `packages/swarm-mail/src/streams/effect/mailbox.ts`\n- `packages/swarm-mail/src/streams/effect/lock.ts`\n- `packages/swarm-mail/src/streams/effect/deferred.ts`\n- `packages/swarm-mail/src/streams/effect/ask.ts`\n- `packages/swarm-mail/src/streams/effect/layers.ts`\n\n### Current Dependency Chain\nEffect primitives → `getDatabase()` from `streams/index.ts` → PGLite\n\n### Decision Context\nTask: Remove PGLite except for migration paths\n\nOptions considered:\na) Remove Effect primitives entirely - simplifies, not used\nb) Port Effect primitives to libSQL - keeps patterns, changes backend\nc) Keep behind migration flag\n\n### Recommendation\nOption (a) Remove entirely is safest since:\n- Zero actual usage in production code\n- Can re-add later if needed\n- Removes PGLite dependency cleanly\n\nBUT user asked \"how COULD we use them\" - suggesting interest in keeping the patterns for future use.","created_at":"1766333479399.0","tags":"pglite-removal,effect-primitives,investigation,swarm-mail,architecture-decision"}
|
|
383
483
|
{"id":"cab59350-8135-4df0-97d8-6bae5596585c","information":"{\"id\":\"pattern-1766265308860-qdb2d3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:15:08.860Z\",\"updated_at\":\"2025-12-20T21:15:08.860Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766265309080.0","metadata":"{\"id\":\"pattern-1766265308860-qdb2d3\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
484
|
+
{"id":"cc628c55-e0a8-4396-973a-3c95756de807","information":"Pattern for module-level \"warn once per session\" deprecation warnings in TypeScript/Bun:\n\n1. Module-level flag: `let _deprecationWarned = false`\n2. Public helper: `warnPGliteDeprecation()` checks flag, warns if false, sets to true\n3. Test helper: `_resetDeprecationFlag()` (exported) to reset between tests\n4. Call from deprecated function: first line calls the helper\n\nKey insight: Using a module-level variable (not class instance) ensures warnings are session-scoped, not per-instance. Multiple calls to deprecated functions only warn once across the entire module.\n\nTesting pattern:\n- Mock console.warn\n- Reset flag in beforeEach\n- Verify first call warns, subsequent calls silent\n- Use `_resetDeprecationFlag()` exported function for test isolation\n\nFiles: packages/swarm-mail/src/pglite.ts, pglite.test.ts","created_at":"1766612402253.0","tags":"typescript,deprecation,testing,patterns"}
|
|
384
485
|
{"id":"cc84f337-516e-40cc-9429-d557e4484d23","information":"@badass Implementation Decomposition Ready (Dec 2024) - Next steps after architecture questions resolved: Create epic with subtasks for (1) @badass/core - Effect-TS services, builder pattern from uploadthing, (2) @badass/db - Drizzle schemas, adapter interface supporting shared/isolated DB, (3) @badass/auth - BetterAuth with hive/spoke SSO, device flow for CLI/local apps, (4) @badass/next - createRouteHandler, site config, (5) @badass/cli - badass command with multi-site support, device flow auth, TUI for video uploads. Namespace is @badass/*, CLI binary is 'badass'. Reference repos: pingdotgg/uploadthing for Effect-TS router pattern, vercel/academy-content for CLI+Mux, badass-courses/course-builder for device flow and multi-site patterns.","created_at":"2025-12-18T15:42:12.574Z"}
|
|
385
486
|
{"id":"cd179af2-3f9d-45ee-a349-8b7663f2078e","information":"JSONL sync architecture in swarm-mail hive module investigation (Dec 2024):\n\n**NO BUG FOUND** - System working as designed. 271/271 tests passing.\n\n**Architecture (Lazy Write Pattern)**:\n1. Operations (createCell, updateCell, closeCell) mark cells dirty via updateProjections() → markBeadDirty()\n2. Dirty tracking stored in dirty_beads table (cell_id, project_key, marked_at)\n3. User explicitly calls hive_sync tool to flush dirty cells to .hive/issues.jsonl\n4. FlushManager exports dirty cells via exportDirtyBeads() and writes to file\n\n**Key Implementation Details**:\n- updateProjections() in projections.ts line 118 marks ALL cells dirty after EVERY event\n- exportDirtyBeads() queries dirty_beads table, exports to JSONL\n- FlushManager.flush() writes JSONL to file, clears dirty flags\n- Table naming: \"beads\" is real table, \"cells\" is a view (migration v8) for compatibility\n- Both \"SELECT FROM beads\" and \"SELECT FROM cells\" work correctly\n\n**Why Tests All Pass**:\nFull integration test verifies: createCell → markDirty → exportDirtyBeads → FlushManager.flush() → file written correctly\n\n**Design Rationale**:\nLazy writes prevent excessive disk I/O. Operations mark dirty (cheap), user flushes when ready (expensive). Similar to git add/commit pattern.\n\n**If Asked \"Why Don't Cells Appear in JSONL?\"**:\nAnswer: Did you call hive_sync? Operations don't auto-flush. This is intentional.","created_at":"2025-12-19T16:28:00.031Z","tags":"hive,jsonl,sync,flush,dirty-tracking,swarm-mail,architecture"}
|
|
386
487
|
{"id":"cd77b842-2aff-47c0-baba-97096aaf9322","information":"pdf-brain research session on memory systems for AI agents yielded 13 actionable patterns from cognitive science literature:\n\n1. **Testing Effect** (Range, 9853): Retrieval strengthens memory more than passive review. Query count should affect decay rate.\n\n2. **Interleaving** (Range): Mixed/varied practice leads to better transfer than blocked practice. Tag memories for cross-domain retrieval.\n\n3. **Self-Explanation** (e-Learning and Science of Instruction): Prompting \"WHY does this work?\" produces deeper learning than just storing facts.\n\n4. **Negative Examples** (Training Complex Cognitive Skills): Contrast correct with incorrect. Store anti-patterns alongside patterns.\n\n5. **Worked Examples** (Multimediabook): Before/after code snippets more valuable than abstract rules for novices.\n\n6. **Connection Strength** (Smart Notes, Zettelkasten): Well-connected notes decay slower. Cross-references surface unexpected insights.\n\n7. **Tacit Knowledge** (Nonaka/Takeuchi): Some knowledge is hard to articulate. Capture intuitions with examples, not just rules.\n\n8. **Chunking** (Kirschner): One transferable insight per memory. Too granular = noise, too broad = not actionable.\n\n9. **Metacognitive Prompts** (9853): \"Would you be able to apply this in a different context?\" encourages reflection on transferability.\n\n10. **Hierarchical Tags** (How Learning Works): Knowledge organization affects retrieval. Use domain/subdomain/topic structure.\n\n11. **Spaced Retrieval** (CodingCareer, Anki): Active scheduling beats passive decay. Surface due-for-review memories proactively.\n\n12. **Prior Knowledge Activation** (978-3-031-74661-1): New info connected to existing knowledge sticks longer. Link new memories to existing ones.\n\n13. **Schema Acquisition** (Training Complex Cognitive Skills): Store transferable patterns, not specific fixes. Schemas enable far transfer.\n\nKey sources: Training_Complex_Cognitive_Skills (360 pages), e-Learning and the Science of Instruction (759 pages), Range (366 pages), How Learning Works (274 pages), ten-steps-to-complex-learning (416 pages), Smart Notes (146 pages).","created_at":"2025-12-19T03:13:03.888Z","tags":"memory-systems,cognitive-science,pdf-brain,learning,spaced-repetition,schemas,research"}
|
|
@@ -391,6 +492,7 @@
|
|
|
391
492
|
{"id":"cfc258d1-d420-444c-9532-ae46e8bcd619","information":"{\"id\":\"test-1766262799864-8dtsmvp6i13\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:33:19.864Z\",\"raw_value\":1}","created_at":"1766262800072.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:33:19.864Z\"}"}
|
|
392
493
|
{"id":"cffea773-b97b-4582-b5d4-b0154bd12f83","information":"Lesson rating rubric application for AI SDK course: Setup lessons (00-) often score low on Hook & Motivation because they're functional rather than problem-focused. Fix: Add \"Why This Matters\" explaining infrastructure value (AI Gateway = unified multi-provider access, no vendor lock-in). Also, setup lessons need Fast Track even though they're procedural—format consistency matters for learner expectations. Real output examples critical (e.g., \"vc --version # Output: Vercel CLI 39.2.4\") because learners verify setup success by matching exact output. Changed \"Done\" to \"Done-When\" with unchecked boxes—learners check them off as they progress, improving engagement.","created_at":"2025-12-16T21:43:30.828Z"}
|
|
393
494
|
{"id":"d0534c28-593b-40a1-998a-05cd7c82a32f","information":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:56:06.090Z\",\"updated_at\":\"2025-12-15T03:56:06.090Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:56:06.457Z","metadata":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
495
|
+
{"id":"d230aae7-fac4-4620-aeb9-7b9bae17b96c","information":"Compaction prompt restructuring for eval scores: Achieved 100% score (up from 53%) by following eval fixture patterns. Key insight: Evals test the COMPLETE prompt (dynamic state + static template), not just the static template. The scorer flags ANY placeholder (angle brackets like `<epic-id>`, `<path>`) as failures. Solution: (1) Dynamic state builders inject real IDs in \"IMMEDIATE ACTIONS\" section at TOP of prompt - this satisfies postCompactionDiscipline (first tool must be swarm_status). (2) Static template uses descriptive names (EPIC_ID, PROJECT_PATH) instead of angle brackets in examples. (3) Clear visual hierarchy with emoji headers and numbered sections. (4) Explicit forbidden tools list (Edit, Write, swarmmail_reserve, git commit) by name - generic language doesn't score. (5) Strong coordinator identity with ASCII box header + NEVER/ALWAYS/NON-NEGOTIABLE language. The \"perfect\" fixture pattern: epic context → immediate actions with real IDs → forbidden tools → role reminder. Reference sections go AFTER core guidance.","created_at":"1766641425490.0","metadata":"{\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjl04znlxzw\",\"final_score\":\"100%\",\"score_improvement\":\"+47pp\"}","tags":"compaction,eval-scoring,prompt-structure,coordinator-identity,tdd"}
|
|
394
496
|
{"id":"d26902c4-6cb2-4b10-9cb9-63cba428436d","information":"{\"id\":\"test-1766296855773-l0w0n6pv18d\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T06:00:55.773Z\",\"raw_value\":1}","created_at":"1766296855983.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T06:00:55.773Z\"}"}
|
|
395
497
|
{"id":"d2ad29ee-76d6-4eaf-a9c7-674e6990cd19","information":"## SQL CHECK Constraint Violation: Status='closed' Requires closed_at\n\n**Problem:** `changeCellStatus()` in hive adapter was changing status to 'closed' without setting `closed_at`, violating CHECK constraint:\n```sql\nCHECK ((status = 'closed') = (closed_at IS NOT NULL))\n```\n\n**Error:**\n```\nSQLITE_CONSTRAINT_CHECK: CHECK constraint failed: (status = 'closed') = (closed_at IS NOT NULL)\n```\n\n**Root Cause:** Event projection handler `handleCellStatusChangedDrizzle()` only updated `status` and `updated_at`, ignoring the bidirectional constraint between `status` and `closed_at`.\n\n**The CHECK Constraint Means:**\n- When `status='closed'`, `closed_at` MUST be non-NULL\n- When `status!='closed'`, `closed_at` MUST be NULL\n- It's a bidirectional equality constraint\n\n**Fix Pattern:**\n```typescript\nasync function handleCellStatusChangedDrizzle(db: SwarmDb, event: CellEvent) {\n const toStatus = event.to_status as string;\n const updates: Partial<typeof beads.$inferInsert> = {\n status: toStatus,\n updated_at: event.timestamp,\n };\n\n // Set closed_at when transitioning to 'closed'\n if (toStatus === \"closed\") {\n updates.closed_at = event.timestamp;\n updates.closed_reason = event.reason ?? null;\n } else {\n // Clear closed_at when transitioning away from 'closed'\n updates.closed_at = null;\n updates.closed_reason = null;\n }\n\n await db.update(beads).set(updates).where(eq(beads.id, event.cell_id));\n}\n```\n\n**Key Insight:** When an event handler changes one side of a CHECK constraint, it MUST update the other side. The constraint isn't just validation - it's a data integrity rule that requires coordinated updates.\n\n**TDD Test That Caught It:**\n```typescript\ntest(\"changeCellStatus to 'closed' sets closed_at\", async () => {\n const cell = await adapter.createCell(projectKey, {...});\n const updated = await adapter.changeCellStatus(projectKey, cell.id, \"closed\");\n expect(updated.closed_at).toBeGreaterThan(0); // FAILED before fix\n});\n```\n\n**Related Pattern:** `closeCell()` event handler was ALREADY doing this correctly - it set `status`, `closed_at`, and `closed_reason` together. The bug was that `changeCellStatus()` bypassed this coordination.\n\n**Files:**\n- packages/swarm-mail/src/hive/projections-drizzle.ts (fix location)\n- packages/swarm-mail/src/hive/migrations.ts (CHECK constraint definition)\n- packages/swarm-mail/src/hive/adapter.test.ts (TDD test)","created_at":"1766338304428.0","tags":"sql,check-constraint,event-sourcing,projections,data-integrity,sqlite,hive"}
|
|
396
498
|
{"id":"d330686c-fa2d-40f3-a231-9c9ed3c463f9","information":"{\"id\":\"pattern-1766260203398-aeogl6\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:03.398Z\",\"updated_at\":\"2025-12-20T19:50:03.398Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260203622.0","metadata":"{\"id\":\"pattern-1766260203398-aeogl6\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
@@ -403,22 +505,33 @@
|
|
|
403
505
|
{"id":"d6614b59-70bb-4b86-9d20-14774faa9f5a","information":"Config file pattern for Effect Schema classes: When creating config with Schema.Class, define a static Default property for the default config instance, and implement loadConfig/saveConfig helpers outside the class. Use Schema.decodeSync for validation when loading from JSON. For simple serialization, JSON.stringify works directly on Schema instances without needing Schema.encode. File structure: imports at top (fs, path), Schema class definition with static Default, then standalone load/save functions that use the Default instance. This keeps the Schema class clean and separates IO concerns.","created_at":"1766260781808.0","tags":"effect,schema,config,patterns,typescript"}
|
|
404
506
|
{"id":"d66c650b-d7d5-4c06-89fe-1b5fc0d1dbee","information":"{\"id\":\"pattern-1766296858244-xlcat5\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:00:58.244Z\",\"updated_at\":\"2025-12-21T06:00:58.244Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766296858493.0","metadata":"{\"id\":\"pattern-1766296858244-xlcat5\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
405
507
|
{"id":"d6759351-07a1-40f2-9c3e-c49022039786","information":"Testing Zod schemas pattern: For date coercion tests, z.coerce.date() always creates NEW Date instances even when input is already a Date. This means reference equality (toBe) fails. Solution: use .toBeInstanceOf(Date) + .getTime() comparison for date values. Also, Zod .omit() doesn't reject extra fields, it silently strips them during parsing. Test with expect(result).not.toHaveProperty('omittedField') not expect().toThrow().","created_at":"2025-12-18T16:32:12.902Z","tags":"zod,testing,dates,schemas,validation,gotcha"}
|
|
508
|
+
{"id":"d6962e45-b517-431a-ad86-ca67c048f509","information":"{\"id\":\"test-1766636008484-wxqdd5yfv0p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-25T04:13:28.484Z\",\"raw_value\":1}","created_at":"1766636008711.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-25T04:13:28.484Z\"}"}
|
|
406
509
|
{"id":"d70554ab-1551-4f5d-982e-425b65e191dc","information":"{\"id\":\"pattern-1766261425493-154cx7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:10:25.493Z\",\"updated_at\":\"2025-12-20T20:10:25.493Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261425745.0","metadata":"{\"id\":\"pattern-1766261425493-154cx7\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
407
510
|
{"id":"d72166d4-f000-4748-bbd7-26196e7205d7","information":"Evalite Framework for Compaction Hook Testing\n\nCreated comprehensive eval suite for testing coordinator resumption after compaction. Key patterns:\n\n**Fixture Structure:**\n- Test cases include hive cells (simulated state) and swarm-mail state (agents, reservations, messages)\n- Expected includes confidence level, context type, mustContain/mustNotContain patterns\n- 5 test cases covering: active epic, multiple epics, no swarm, empty hive, blocked epic\n\n**Custom Scorers:**\n- confidenceAccuracy - validates detection confidence (high/medium/low/none)\n- contextInjectionCorrectness - validates context type (full/fallback/none) \n- requiredPatternsPresent - checks for required patterns (swarm_status, COORDINATOR, etc)\n- forbiddenPatternsAbsent - ensures no placeholders (bd-xxx, <epic>, <path>)\n- compactionQuality - weighted composite (25% confidence, 25% injection, 30% required, 20% forbidden)\n\n**Import Issue Workaround:**\n- Importing from src/compaction-hook.ts triggers OpenCode plugin chain with module resolution errors\n- Solution: Copy context constants directly into eval file to avoid deep imports\n- This keeps evals independent and runnable without full build\n\n**Results:**\n- 77% overall score detects the bug correctly\n- Test \"Epic ID must be specific\" scores 50% - shows placeholders in context (the actual bug)\n- Run with: bunx evalite run evals/compaction-resumption.eval.ts\n\nFile locations:\n- evals/fixtures/compaction-cases.ts\n- evals/scorers/compaction-scorers.ts \n- evals/compaction-resumption.eval.ts","created_at":"1766596294978.0","tags":"evalite,testing,compaction-hook,coordinator,swarm,eval-framework"}
|
|
408
511
|
{"id":"d7e4cdc5-87d6-49d6-a2d7-f0b3291152da","information":"Analyzed Dicklesworthstone/agentic_coding_flywheel_setup for swarm coordination patterns. Key findings:\n\n**1. Manifest-Driven Generation Pattern (acfs.manifest.yaml):**\n- YAML manifest defines modules with metadata: id, phase, dependencies, install commands, verify commands, installed_check\n- TypeScript generator (packages/manifest/src/generate.ts) compiles YAML → shell scripts (scripts/generated/)\n- Each module becomes an idempotent bash function with skip logic via installed_check\n- `installed_check: { run_as: target_user, command: \"test -x ~/.bun/bin/bun\" }` → skips if already installed\n- verified_installer pattern: delegates to checksummed upstream install scripts, no inline commands needed\n\n**2. State Persistence with Stable IDs (scripts/lib/state.sh):**\n- state.json v2 uses stable phase IDs ([\"user_setup\", \"filesystem\", \"shell_setup\"...]) NOT numbers\n- Why: if phases reorder, resume logic doesn't skip wrong phases\n- Atomic writes: temp file → sync → rename (prevents corruption on crash/disconnect)\n- Tracks completed_phases, current_phase, current_step, phase_durations, failed_phase + error\n- JSON schema versioning for migrations (v2 → v3 added ubuntu_upgrade section)\n\n**3. Checksum-Verified Installers (scripts/lib/security.sh):**\n- checksums.yaml: maps tool names to upstream URL + SHA256\n- fetch_and_run_with_recovery(): fetches, verifies checksum, pipes to runner if match, skips or aborts if mismatch\n- HTTPS enforcement: curl --proto '=https' --proto-redir '=https' (prevents downgrade attacks)\n- Sentinel-based fetching preserves trailing newlines (appends __ACFS_EOF_SENTINEL__, strips after hash)\n- Retry logic with exponential backoff for transient network errors (exit codes 6,7,28,35,52,56)\n\n**4. Contract Validation (scripts/lib/contract.sh):**\n- acfs_require_contract() validates required env vars (TARGET_USER, TARGET_HOME, MODE) and helper functions before generated modules run\n- Prevents runtime errors from missing context\n- Explicit dependencies over implicit coupling\n\n**5. Doctor Checks with Caching + Timeouts (scripts/lib/doctor.sh):**\n- Three-tier checks: binary existence, shallow verification, deep functional tests (--deep flag)\n- Cache successful deep checks for 5min to avoid slow re-runs\n- Per-check timeout (15s default) prevents indefinite hangs, returns special \"timeout\" status\n- JSON output mode for parsing, gum UI for humans\n- Skipped tools tracking from state.json to differentiate \"not installed\" vs \"skipped by user\"\n\n**6. AGENTS.md Destructive Command Controls:**\n- RULE 1: NEVER delete files without explicit approval in same session\n- Forbidden: git reset --hard, git clean -fd, rm -rf without user providing exact command\n- Audit trail required: user text, command run, timestamp\n- Bun-only mandate: no npm/yarn/pnpm, only bun.lock\n\n**7. Generated File Convention:**\n- scripts/generated/ NEVER edited manually (stamped with generator metadata)\n- Modify generator (packages/manifest/src/generate.ts) → regenerate → shellcheck\n- Clear separation: hand-written libs in scripts/lib/, generated modules in scripts/generated/\n\n**Implementation for swarm:**\n- Adopt manifest-driven plugin tool generation (YAML → TypeScript compiler → MCP tools)\n- Use stable IDs for swarm phases/subtasks (not array indices) in decomposition\n- Add checksum verification to skill downloads and external script execution\n- Contract validation for swarm workers (require swarmmail_init, file reservations before work)\n- Doctor-style health checks for swarm coordination (detect stale reservations, blocked agents)\n- AGENTS.md-style mandate for destructive operations (NEVER close cells without completion criteria met)","created_at":"1766590813558.0","tags":"agentic_coding_flywheel_setup,manifest-generation,state-persistence,idempotency"}
|
|
409
512
|
{"id":"d7efe68a-3a5d-42c6-b203-d77ea9c61961","information":"Successfully completed Bead→Cell event schema rename with backward compatibility. Key pattern: Export new names as primary exports, then add deprecated type aliases and const aliases for all old names (schemas, types, and helper functions). For imports, use only the new names and don't try to create aliases in the import statement - create them as separate exports after. This allows existing code to continue using BeadEvent types while new code uses CellEvent types. Total renames: 20 schemas, 20 types, 3 helper functions - all with backward compat aliases marked with @deprecated JSDoc tags.","created_at":"2025-12-17T16:40:48.872Z"}
|
|
410
513
|
{"id":"d8320ad2-425b-4c27-a854-ef5ce49a2e55","information":"{\"id\":\"pattern-1765771080299-rxkeql\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:00.299Z\",\"updated_at\":\"2025-12-15T03:58:00.299Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:01.723Z","metadata":"{\"id\":\"pattern-1765771080299-rxkeql\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
514
|
+
{"id":"d91e91b5-c0bb-4983-8681-f5282a971e5e","information":"Progressive eval gates architecture (opencode-swarm-plugin): Gates adapt based on data maturity in 3 phases - Bootstrap (<10 runs): always pass, collect baseline. Stabilization (10-50 runs): warn on >10% regression but pass. Production (>50 runs + variance <0.1): FAIL on >5% regression. Variance threshold (0.1) prevents premature production phase when scores unstable. Baseline = mean of historical scores. Regression = (baseline - current) / baseline. Issue: baseline calculation too naive - simple mean means early bad runs drag down baseline forever, no time-based decay. Solution: Use exponential moving average (EMA) where recent scores weighted higher, or trimmed mean to remove outliers. Current coordinator-session eval has high variance (only 3/100 sessions pass quality filters), keeping it in stabilization despite >50 runs.","created_at":"1766674606259.0","metadata":"{\"file\":\"src/eval-gates.ts\",\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlk7jsilk9\",\"phase_thresholds\":{\"production\":0.05,\"stabilization\":0.1}}","tags":"progressive-gates,eval-phases,baseline-calculation,variance-threshold,quality-control"}
|
|
515
|
+
{"id":"d977020b-acf3-4150-8b81-8aa3628e3927","information":"oh-my-opencode Context Preservation: System-wide anti-context-explosion. Hard limits: LSP (100 refs, 50 symbols, 50 diagnostics), ast_grep (200 matches, 500KB, 30s timeout). Truncation reporting with counts. tool-output-truncator hook trims verbose tools. Background agents as summarization barriers (full search in subagent, only summary to main). Parallel execution reduces round-trips. Structured output requirements (explore→results blocks, librarian→permalinks not full code). Novel: context preservation as first-class concern, every tool has limits + reporting, background agents prevent context dumps.","created_at":"1766673465569.0","tags":"oh-my-opencode,context-preservation,token-optimization"}
|
|
411
516
|
{"id":"da3010a8-76fb-4eb2-ba5e-743b0d63baec","information":"## 🧠 Brain Chat Feature Decomposition\n\n**Project:** pdf-brain-viewer (SvelteKit)\n**Epic:** Full RAG Chat with Knowledge Graph Memory\n\n### Architecture\n```\n┌─────────────────────────────────────────────────────────────────────┐\n│ pdf-brain-viewer (SvelteKit) │\n├─────────────────────────────────────────────────────────────────────┤\n│ ┌──────────────┐ ┌──────────────────────────┐ ┌───────────────┐ │\n│ │ Chat Panel │ │ Force Graph │ │ Info Panel │ │\n│ │ (left) │ │ (center) │ │ (right) │ │\n│ └──────────────┘ └──────────────────────────┘ └───────────────┘ │\n└─────────────────────────────────────────────────────────────────────┘\n```\n\n### Data Model\n```\nthreads: id, title, created_at, updated_at, selected_node_id\nmessages: id, thread_id, role, content, created_at, embedding F32_BLOB(1024)\nmemories: id, content, type (fact|preference|insight|question), embedding F32_BLOB(1024)\nmemory_sources: memory_id → message_id\nmemory_concepts: memory_id → concept_id + confidence\nmemory_documents: memory_id → doc_id + confidence\nmemory_links: source_memory_id → target_memory_id + relation_type\n```\n\n### Tech Stack\n- AI SDK 6 beta + Vercel AI Gateway (anthropic/claude-opus-4-5)\n- ai-elements Svelte for chat UI\n- Vercel Workflow for durable memory extraction\n- LibSQL with F32_BLOB vectors + libsql_vector_idx\n- Ollama mxbai-embed-large (1024 dims)\n- Catppuccin Mocha theme\n\n### Subtasks (8 total, validated)\n\n**Wave 1 - Parallel (no deps):**\n1. Schema & Types [src/lib/db.ts, src/lib/types.ts] - complexity 4\n2. Ollama Embedding Service [src/lib/services/embedding.ts] - complexity 2\n\n**Wave 2 - Depends on Wave 1:**\n3. RAG Service: Hybrid Reranking [src/lib/services/rag.ts] - complexity 4 (deps: 0,1)\n4. Chat API: Streaming [src/routes/api/chat/+server.ts, src/lib/services/chat.ts] - complexity 4 (deps: 0,2)\n5. Vercel Workflow: Memory Extraction [src/lib/workflows/extract-memories.ts, vite.config.ts, API route] - complexity 5 (deps: 0,1)\n\n**Wave 3 - Depends on Wave 2:**\n6. Chat Panel Component [ChatPanel.svelte, MessageBubble.svelte, ThreadList.svelte] - complexity 4 (deps: 3)\n\n**Wave 4 - Depends on Wave 3:**\n7. IDE Layout: Three-Panel [+page.svelte, selection store, ResizeHandle] - complexity 3 (deps: 5)\n\n**Wave 5 - Final polish:**\n8. Global Catppuccin Theme [app.css, +layout.svelte, theme.ts] - complexity 2 (deps: 6)\n\n### RAG Strategy (Hybrid Reranking)\n1. Embed query with Ollama\n2. Parallel search: selected node context + embeddings + concept_embeddings + memories\n3. Combine, deduplicate, rerank by cosine similarity\n4. Return top-k with source attribution\n\n### Memory Extraction (Vercel Workflow)\n- Explicit: User says \"remember X\" → immediate extraction\n- Automatic: Background workflow after assistant responses\n- Extract: facts, preferences, insights, questions\n- Auto-link to concepts and similar memories\n\n### Key Decisions from Socratic Planning\n- Full knowledge graph (option C) - conversations as first-class citizens\n- Thread → Messages → Memories architecture (option A)\n- Hybrid memory extraction (option D) - explicit + background\n- Persisted chat with embeddings from day one (option B)","created_at":"1766336899420.0","metadata":"{\"epic\":\"brain-chat\",\"project\":\"pdf-brain-viewer\",\"strategy\":\"feature-based\",\"subtask_count\":8,\"total_complexity\":28}","tags":"pdf-brain-viewer,chat,rag,knowledge-graph,memory,decomposition,swarm,sveltekit,ai-sdk,vercel-workflow,catppuccin"}
|
|
412
517
|
{"id":"da4dbfc8-fbd1-4a12-b0ed-8b262529953c","information":"@badass Effect Router Decision (Dec 2024): Build a router/builder pattern using Effect-TS, similar to uploadthing's approach. Reference implementation: pingdotgg/uploadthing/packages/uploadthing/src/effect-platform.ts and _internal/upload-builder.ts. This provides type-safe, composable route definitions with Effect's error handling and dependency injection. The router pattern will be used across @badass packages for consistent API design.","created_at":"2025-12-18T15:51:55.079Z"}
|
|
413
518
|
{"id":"da756adb-a188-41fa-a8cc-67a961a73bf2","information":"swarm_review_feedback retry_context pattern: When review status is needs_changes, return retry_context in the response for coordinators to use with swarm_spawn_retry. Workers are fire-and-forget Task subagents - once they complete, they're dead and can't receive messages. The retry_context includes: (1) task_id, (2) attempt number, (3) max_attempts (3), (4) structured issues array (file, line, issue, suggestion), (5) next_action hint (\"Use swarm_spawn_retry to spawn new worker\"). CRITICAL: DO NOT send sendSwarmMessage for needs_changes status - worker is dead. KEEP sendSwarmMessage for approved status (audit trail). After 3 failed attempts, task is marked blocked and no retry_context is returned. TDD pattern: wrote 6 failing tests FIRST covering retry_context structure, next_action hint, max_attempts, no message to dead worker, message kept for approved, no retry_context after failure. All tests passed after removing sendSwarmMessage calls and adding retry_context to response.","created_at":"1766595048679.0","tags":"swarm,review,retry,coordinator,worker,fire-and-forget,tdd"}
|
|
414
519
|
{"id":"db9ed7ab-6599-4b62-b12d-276836a633cc","information":"Shared PGlite test server pattern for swarm-mail dramatically speeds up test suite execution. \n\n**ROOT CAUSE:** Each test creating new PGlite instance requires ~500ms WASM initialization. With 50+ tests, this adds 25+ seconds of pure overhead.\n\n**SOLUTION:** Share ONE PGlite instance across entire test suite via test-server.ts module-level state:\n\n```typescript\n// test-server.ts\nlet db: PGlite | null = null;\n\nexport async function startTestServer() {\n if (db) return { db }; // Reuse existing\n db = await PGlite.create({ extensions: { vector } });\n await runMigrations(db);\n return { db };\n}\n\nexport async function resetTestDatabase() {\n if (!db) throw new Error(\"Test server not started\");\n await db.exec(\"TRUNCATE agents, messages, beads, ... CASCADE\");\n}\n\nexport function getTestDb() {\n if (!db) throw new Error(\"Test server not started\");\n return db;\n}\n```\n\n**Test Pattern:**\n```typescript\nbeforeAll(async () => {\n await startTestServer(); // ONE init\n});\n\nbeforeEach(async () => {\n await resetTestDatabase(); // TRUNCATE (~10ms) instead of recreate (~500ms)\n});\n\nafterAll(async () => {\n await stopTestServer();\n});\n```\n\n**MEASURED RESULTS (hive/adapter.test.ts, 25 tests):**\n- Before: 8.63s (345ms per test)\n- After: 0.96s (38ms per test)\n- **~9x speedup, 90% reduction in test time**\n\n**KEY DECISIONS:**\n1. Abandoned PGLiteSocketServer approach - socket overhead added complexity without benefit\n2. Direct shared PGlite instance is simpler and faster\n3. TRUNCATE CASCADE between tests provides clean isolation\n4. Module-level state works perfectly for process-scoped test suites\n\n**GOTCHAS:**\n- Must TRUNCATE in correct order due to foreign keys (use CASCADE)\n- Must run migrations once at startup, not per test\n- Close cleanup is critical: `db.exec(\"CHECKPOINT\")` before `db.close()`\n\n**APPLICABILITY:** This pattern works for any test suite using PGlite where WASM init dominates test time. Expected 10-20x speedup for larger test suites (100+ tests).","created_at":"2025-12-19T15:12:21.422Z","tags":"testing,pglite,performance,test-patterns,swarm-mail,speedup"}
|
|
520
|
+
{"id":"dbb4d660-4907-4b27-a86c-45da9e7455e0","information":"Compaction hook SDK client integration pattern: createCompactionHook now accepts optional OpencodeClient parameter. When provided, calls scanSessionMessages(client, sessionID) to extract ground truth swarm state from actual tool calls (hive_create_epic, swarmmail_init, swarm_spawn_subtask). Key merge strategy: (1) Prefer scanned epicId/epicTitle/projectPath over hive-detected (tool calls are ground truth). (2) Include agentName from scanned state in dynamic context. (3) Show detailed subtask info (title, worker, files) from scannedState.subtasks Map instead of just counts. (4) buildDynamicSwarmState accepts both SwarmState and optional ScannedSwarmState, merges with preference for scanned. This fixes critical bug where coordinators lost identity after compaction - now they wake up with SPECIFIC epic ID, subtask details, and worker assignments from actual tool history, not heuristic detection.","created_at":"1766599163003.0","tags":"compaction,sdk-client,swarm-coordination,ground-truth,state-merging"}
|
|
415
521
|
{"id":"dbba7b08-3fc3-4ccd-b51f-827770d11717","information":"Script-to-workflow integration pattern for Vercel Workflow in Nitro apps: Add --workflow flag to existing scripts to trigger workflow via cron API endpoint instead of inline processing. Pattern: (1) Parse --workflow flag, (2) Build URL with query params (full, team, etc), (3) Fetch http://localhost:3000/api/cron/sync-<name> endpoint, (4) Handle JSON response with runId, (5) Exit early before local processing. Keep existing --dry-run mode for local testing. Update script header docs to show both modes. Replace TODO ingestion comments with notes that workflow handles production ingestion. This allows scripts to serve dual purpose: local debugging AND workflow trigger without code duplication.","created_at":"1766517572228.0","tags":"vercel-workflow,script-patterns,api-integration,nitro"}
|
|
416
522
|
{"id":"dc749a41-96ec-4ab2-a163-f1639857f9bd","information":"{\"id\":\"pattern-1766074743915-fstlv8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:19:03.915Z\",\"updated_at\":\"2025-12-18T16:19:03.915Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:19:04.142Z","metadata":"{\"id\":\"pattern-1766074743915-fstlv8\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
417
523
|
{"id":"dcbf2f31-eab0-4e0b-8884-41c288908d9d","information":"**agentmail_release test \"failures\" were already fixed in commit eb2ff6d**: Task opencode-swarm-monorepo-lf2p4u-mjg00go0fga reported 3 failing agentmail_release integration tests. Investigation found all 3 tests passing (100% success). The tests were fixed in prior commit \"fix(swarm-mail): fix 32 failing tests - schema alignment and test infrastructure\" (eb2ff6d). The three tests verify: (1) releasing all reservations, (2) releasing specific paths only, and (3) releasing by reservation IDs. All verify the `released` count correctly matches expectations. **Key learning:** When a task describes failing tests, ALWAYS run them first to verify current state before investigating. Task descriptions can be outdated if based on pre-fix snapshots. Don't waste time fixing what's already fixed.","created_at":"1766338566116.0","tags":"testing,agentmail_release,drizzle-migration,swarm-coordination,already-fixed"}
|
|
418
524
|
{"id":"dda2aaf9-9eb3-4a54-8eb8-9894743448af","information":"Kent C. Dodds unified accounts feature request (Dec 2024): Kent wants to unify accounts across EpicAI.pro, EpicWeb.dev, and EpicReact.dev. Use case: User buys Epic React, starts Epic Workshop App tutorial, shouldn't have to create a separate EpicWeb.dev account. Current pain: suboptimal experience forcing account creation on different domain. Alternative considered: local tracking (also suboptimal). This validates the need for creator-scoped unified identity in @badass architecture.","created_at":"2025-12-18T15:32:32.673Z"}
|
|
525
|
+
{"id":"de1ba0c2-cc7e-4bd7-9a1d-2b5f813d3e3f","information":"Coordinator identity reinforcement pattern for compaction hooks:\n\n**Problem:** Coordinators lose identity after compaction and start doing implementation work directly instead of spawning workers. They also fetch external data directly (repo-crawl_*, context7_*, pdf-brain_*) instead of delegating to researcher agents.\n\n**Solution:** Multi-layered identity reinforcement:\n\n1. **ASCII header** - Unmistakable visual reminder using box-drawing characters\n2. **Repeated statements** - \"YOU ARE THE COORDINATOR\" appears 3+ times\n3. **Strong language** - NEVER, ALWAYS, NON-NEGOTIABLE (not \"should\" or \"consider\")\n4. **Explicit forbidden tools list** - Every tool that requires delegation listed by name\n5. **Positive alternative** - Always include WHAT to do, not just what NOT to do\n\n**Implementation in compaction-hook.ts:**\n- SWARM_COMPACTION_CONTEXT constant starts with ASCII box header\n- Section \"🚫 FORBIDDEN TOOLS\" lists repo-crawl_*, repo-autopsy_*, webfetch, fetch_fetch, context7_*, pdf-brain_* by name\n- Instructs to use swarm_spawn_researcher for external data\n- Multiple \"YOU ARE THE COORDINATOR\" statements throughout\n\n**Implementation in plugin-wrapper-template.ts:**\n- LLM prompt generation (line ~1225) includes same ASCII header\n- Template instructs LLM to include ALL coordinator mandates in continuation prompt\n- Post-compaction agent wakes up with ZERO doubt about role\n\n**Why it works:**\n- Visual: ASCII header is unmissable\n- Semantic: Repeated strong language creates certainty\n- Procedural: Explicit tool lists leave no ambiguity\n- Actionable: Always paired with \"use X instead\"\n\n**Testing:**\n- Tests verify ALL forbidden tools present by name\n- Tests verify ASCII header exists\n- Tests verify multiple identity statements\n- Tests verify strong language (NEVER/ALWAYS/NON-NEGOTIABLE)\n\nFile locations:\n- packages/opencode-swarm-plugin/src/compaction-hook.ts (lines 71-137)\n- packages/opencode-swarm-plugin/examples/plugin-wrapper-template.ts (lines 1225-1320)\n- packages/opencode-swarm-plugin/src/compaction-hook.test.ts (tests starting line 148)","created_at":"1766620020719.0","metadata":"{\"files\":[\"compaction-hook.ts\",\"plugin-wrapper-template.ts\"],\"pattern\":\"multi-layered-identity-reinforcement\"}","tags":"compaction,coordinator-identity,forbidden-tools,swarm-coordination,anti-patterns,researcher-spawn"}
|
|
526
|
+
{"id":"de49ff77-422f-47f7-b007-e82fba173111","information":"**Oh-My-OpenCode Tool Registration Pattern**\n\nTools registered via flat object merge in plugin return value:\n\n**Static Tools:**\n```typescript\nexport const builtinTools = {\n lsp_hover, lsp_goto_definition, lsp_find_references,\n ast_grep_search, ast_grep_replace,\n grep, glob, slashcommand,\n session_list, session_read, session_search,\n};\n```\n\n**Dynamic Tools (Context-Dependent):**\n```typescript\nreturn {\n tool: {\n ...builtinTools,\n ...backgroundTools, // Created from BackgroundManager instance\n call_omo_agent: createCallOmoAgent(ctx, backgroundManager),\n look_at: createLookAt(ctx),\n ...(tmuxAvailable ? { interactive_bash } : {}), // Conditional\n }\n};\n```\n\n**Tool Definition Pattern (using @opencode-ai/plugin):**\n```typescript\nimport { tool } from \"@opencode-ai/plugin\";\n\nexport const myTool = tool({\n description: \"What the tool does\",\n args: {\n param1: tool.schema.string().describe(\"What param1 is\"),\n param2: tool.schema.number().optional().describe(\"Optional param\"),\n },\n async execute(args) {\n // Implementation\n return \"result string or object\";\n },\n});\n```\n\n**Slash Commands as Tools:**\n- `slashcommand` tool dynamically discovers markdown files from:\n - `.opencode/command/` (project - highest priority)\n - `.claude/commands/` (project)\n - `~/.config/opencode/command/` (global)\n - `~/.claude/commands/` (user - lowest priority)\n- Markdown frontmatter defines metadata (description, agent, model, subtask)\n- Body is the prompt template\n- `$ARGUMENTS` placeholder for user input\n- File references: `@path/to/file` (relative to command file)\n- Shell injection: `` `!command` `` (executes and injects output)","created_at":"1766673433270.0","tags":"oh-my-opencode,tools,registration,slashcommand,dynamic-tools"}
|
|
527
|
+
{"id":"de92fd4f-36b8-49f3-bbe7-d4cb3de60aa7","information":"Output Guardrails - Smart Truncation Preserving Structure: Default limit 32000 chars (~8000 tokens at 4 chars/token). Per-tool overrides: code/doc tools 64000 (repo-autopsy_file, context7_get-library-docs, cass_view), stats tools lower (cass_stats 8000). Skips internal coordination tools entirely (hive_*, agentmail_*, swarmmail_*, structured_*, swarm_*, mandate_*). Truncation logic: 1) Find last unclosed brace/bracket, try to include matching close within 120% of limit, 2) Detect code blocks (odd number of ``` markers), try to close or truncate before opening, 3) Prefer markdown header boundaries (## boundaries at 80%+ of limit), 4) Avoid mid-word splits (walk back to whitespace). Adds \"[TRUNCATED - N chars removed]\" suffix with formatted count. Returns GuardrailResult with metadata: truncated boolean, originalLength, truncatedLength, output. Used for MCP tool outputs to prevent context exhaustion. createMetrics() generates analytics for learning what tools produce large outputs.","created_at":"1766672901717.0","tags":"guardrails,truncation,context-management,output-limits"}
|
|
528
|
+
{"id":"df26f9ae-54f1-4d36-b603-517ddabce38e","information":"AI SDK v6 Auto-Tagging Implementation: Use generateText with Output.object() for structured LLM responses. Don't manually pass Authorization header - AI SDK uses AI_GATEWAY_API_KEY env var automatically when model string starts with provider prefix (e.g., \"anthropic/claude-haiku-4-5\"). The .env file needs to be in the package directory for bun test to pick it up. Schema: z.object() with .min()/.max() constraints for array lengths. Graceful degradation pattern: try/catch with console.error, return empty result structure on LLM errors - NEVER throw, storage must succeed even if tagging fails.","created_at":"1766643520975.0","metadata":"{\"epic\":\"mjl1ksc3peh\",\"context\":\"memory-system-overhaul\",\"priority\":\"high\"}","tags":"ai-sdk,vercel,llm,auto-tagging,graceful-degradation"}
|
|
419
529
|
{"id":"df2fcb8c-ccbd-401b-8d19-9fc00927eece","information":"{\"id\":\"test-1766260866255-c66a1una25\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:06.255Z\",\"raw_value\":1}","created_at":"1766260866491.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:06.255Z\"}"}
|
|
530
|
+
{"id":"df4b90a4-786c-4fd5-bd5d-58fd29bc7a12","information":"**Oh-My-OpenCode Think Mode Hook - Model Switching**\n\n`think-mode` hook auto-switches to high-context models when user says \"think\":\n\n**Keyword Detection:**\n- Scans message parts for keywords: `think hard`, `think harder`, `deep think`, `ultrathink`\n- Case-insensitive matching\n- Activates on `chat.params` hook\n\n**Model Switching Logic:**\n```typescript\nconst HIGH_VARIANTS = {\n \"claude-3-5-sonnet-20241022\": \"claude-3-5-sonnet-v2@20241022\",\n \"claude-sonnet-4\": \"claude-sonnet-4-20250514\",\n \"gemini-2.0-flash-thinking-exp\": \"gemini-2.0-flash-thinking-exp-01-21\",\n // ... provider-specific mappings\n};\n\n// On keyword match:\noutput.message.model = {\n providerID: currentModel.providerID,\n modelID: HIGH_VARIANTS[currentModel.modelID] || currentModel.modelID,\n};\n```\n\n**Extended Thinking Injection (Gemini):**\n```typescript\nconst THINKING_CONFIG = {\n anthropic: { thinking: { type: \"enabled\", budget_tokens: 10000 } },\n google: { thinkingConfig: { thinkingBudget: 16384 } },\n};\n// Merges thinking config into message params\n```\n\n**State Tracking:**\n- Per-session state: `{ requested, modelSwitched, thinkingConfigInjected }`\n- Cleaned up on `session.deleted` event\n\n**Novel Pattern:** Non-invasive model switching via hook mutation. User sees seamless upgrade to thinking models without explicit model selection.\n\n**Swarm Adoption Idea:** Could auto-enable extended thinking for complex decomposition tasks or when LLM detects subtask complexity.","created_at":"1766673475531.0","tags":"oh-my-opencode,think-mode,model-switching,extended-thinking,hooks"}
|
|
420
531
|
{"id":"e03ede8f-8cc6-467f-bc19-60a54cb07e2e","information":"WorkerHandoff integration: task_id must have 3+ segments (project-slug-hash). Tests with bd-123 format fail. Use test-swarm-plugin-lf2p4u-name123 instead.","created_at":"2025-12-18T17:36:15.053Z"}
|
|
532
|
+
{"id":"e0437639-8194-46b3-bfb3-951aa8e097a0","information":"**Oh-My-OpenCode Rules Injection Hook Pattern**\n\n`rules-injector` hook auto-injects context from `AGENTS.md` files on file access:\n\n**Discovery Strategy:**\n```typescript\n// On read/write/edit of a file, search upward for AGENTS.md:\n1. Same directory as accessed file\n2. Parent directories (recursively to project root)\n3. ~/.claude/AGENTS.md (user-global)\n```\n\n**Injection Mechanism:**\n- Hooks `tool.execute.after` for read/write/edit/batch tools\n- Appends matched AGENTS.md content to tool output:\n ```\n [Rule: path/to/AGENTS.md]\n [Match: directory-match]\n <rule content>\n ```\n\n**Deduplication:**\n- Session-scoped cache of content hashes + real paths\n- Prevents re-injecting same rule multiple times\n- Cache cleared on `session.compacted` / `session.deleted` events\n- Storage via `~/.local/share/opencode/rules-injector/<sessionID>.json`\n\n**Frontmatter Matching (Optional):**\n```markdown\n---\ninclude: [\"src/auth/**\", \"tests/**\"]\nexclude: [\"**/*.test.ts\"]\n---\nRule content here\n```\n\n**Novel Pattern:** Uses filesystem realpath + content hash for deduplication, not just path. Handles symlinks correctly.\n\n**Extension Point for Swarm:** Could adapt this for:\n- Auto-injecting skill content on file access\n- Loading swarm coordination rules per directory\n- Injecting decomposition strategies based on file patterns","created_at":"1766673465230.0","tags":"oh-my-opencode,rules-injector,context-injection,deduplication,AGENTS.md"}
|
|
421
533
|
{"id":"e04dfef8-c513-4557-8b6e-cee18253e17d","information":"## Session Context: PGLite to libSQL Migration (Dec 21, 2025)\n\n### Epic: Drizzle Migration + Plugin Integration Tests\n**Branch:** feat/drizzle-migration-and-tests\n**Cell ID:** opencode-swarm-monorepo-lf2p4u-mjf9zd9kgo7\n\n### Completed Work\n1. **Streams subsystem** - ✅ Fully converted to Drizzle with wrappers\n2. **Memory subsystem** - ✅ Already uses Drizzle (raw SQL only for vector/FTS5)\n3. **32 failing tests fixed** - Schema alignment and test infrastructure\n4. **PGLite → libSQL migration tool** - Created migrate-pglite-to-libsql.ts\n\n### In Progress\n1. **Hive subsystem conversion** - Still uses DatabaseAdapter with raw SQL\n2. **Remove PGLite from streams/index.ts exports** - Cleanup task\n\n### Key Technical Decisions\n- Use toSwarmDb() to convert DatabaseAdapter → SwarmDb (Drizzle client)\n- Keep complex CTEs as raw SQL via sql.raw() if Drizzle cannot express them\n- Schema source of truth: packages/swarm-mail/src/db/schema/*.ts\n- FTS5 and vector operations MUST stay as raw SQL (Drizzle does not support)\n\n### Test Status (Last Known)\n- swarm-mail: 595 pass, 15 skip, 0 fail\n- opencode-swarm-plugin: 423 pass, 0 fail\n- Integration tests: 440 pass, 18 skip, 6 fail (agentmail_release, swarm_checkpoint)\n\n### Files Modified (Key)\n- hive/store.ts - Event store operations\n- hive/projections.ts, projections-drizzle.ts - Query projections\n- hive/queries.ts, queries-drizzle.ts - Complex queries\n- streams/index.ts - Export cleanup needed\n- db/migrate.ts - Migration runner","created_at":"1766337614267.0","tags":"drizzle,migration,pglite,libsql,swarm-mail,hive,session-context"}
|
|
534
|
+
{"id":"e0a37e53-ecb1-48ec-a0c0-80e11c39659b","information":"{\"id\":\"pattern-1766641846497-4fhjdm\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-25T05:50:46.497Z\",\"updated_at\":\"2025-12-25T05:50:46.497Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766641846717.0","metadata":"{\"id\":\"pattern-1766641846497-4fhjdm\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
422
535
|
{"id":"e0a89793-1dd3-4061-9621-524a5ae92841","information":"Documentation audit for BeadsAdapter migration completed 2025-01-16. Searched all docs in packages/opencode-swarm-plugin/docs/ for stale references to: bd CLI commands, Go implementation, SQLite, old architecture. Found 1 stale reference: swarm-mail-architecture.md line 519 incorrectly compared Agent Mail's \"SQLite file\" to Swarm Mail's PGLite. Fixed to \"PGLite (embedded Postgres)\" for accuracy. All other docs (ADR-001, ADR-002, ADR-003, ROADMAP, subagent-coordination-patterns.md, swarm-mail-architecture.md) correctly reference: PGLite event sourcing, BeadsAdapter from swarm-mail package, .beads/issues.jsonl sync. No references to deprecated bd CLI or Go implementation found.","created_at":"2025-12-17T01:00:46.822Z","tags":"documentation,audit,BeadsAdapter,migration,PGLite,swarm-mail"}
|
|
423
536
|
{"id":"e0e9227d-51b0-4943-8ba3-e5de88cda39c","information":"{\"id\":\"pattern-1766262232471-56tbqa\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:23:52.471Z\",\"updated_at\":\"2025-12-20T20:23:52.471Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262232691.0","metadata":"{\"id\":\"pattern-1766262232471-56tbqa\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
424
537
|
{"id":"e122ede7-8a62-4489-9742-3234b89a8fb2","information":"SWARM-MAIL ADAPTER PATTERN DECISION (Dec 2025): Extracting swarm-mail as standalone package using adapter pattern from coursebuilder. Key design: 1) DatabaseAdapter interface abstracts SQL operations (query, exec, transaction), 2) SwarmMailAdapter interface defines all swarm-mail operations, 3) createSwarmMailAdapter(db) factory accepts injected database, 4) PGLite convenience layer provides getSwarmMail() singleton for simple usage. Benefits: portable (works with PGLite, Postgres, Turso), testable (inject in-memory), shareable (one db across consumers), decoupled (swarm-mail doesn't own db lifecycle). Pattern learned from github.com/badass-courses/course-builder/tree/main/packages/adapter-drizzle which uses table function injection for multi-tenant prefixing.","created_at":"2025-12-15T00:02:39.759Z"}
|
|
@@ -428,15 +541,18 @@
|
|
|
428
541
|
{"id":"e1eb1c68-a71a-4c00-beb6-7310deffc166","information":"Documentation file rename with terminology update pattern: Renamed beads.mdx → hive.mdx in docs, updated all tool names (beads_* → hive_*), changed terminology (bead/beads → cell/cells), updated directory references (.beads/ → .hive/), and added backward compatibility note mentioning beads_* aliases still work but are deprecated. Key insight: When renaming documentation for deprecated APIs, ALWAYS include a migration note at the top explaining the old names still work but show warnings. This helps users transition smoothly without breaking existing code. File path was apps/web/content/docs/packages/opencode-plugin/","created_at":"2025-12-18T18:37:20.197Z","metadata":"{\"context\":\"v0.31 beads→hive rename\"}"}
|
|
429
542
|
{"id":"e23e3f30-6e9f-4eb4-858f-2ac50f6e17ad","information":"@badass Multi-Database Testing Pattern (Dec 2024): Adopted from course-builder. Key pattern is PARAMETERIZED TEST SUITES.\n\n**Core Pattern:**\n```typescript\n// Write once in packages/db/test/adapter-tests.ts\nexport function runAdapterTests(options: {\n adapter: Adapter\n db: { connect, disconnect, user, session, ... }\n fixtures: TestFixtures\n}) {\n beforeAll(() => options.db.connect())\n afterAll(() => options.db.disconnect())\n \n test('creates user', async () => {\n const user = await options.adapter.createUser(options.fixtures.user)\n const dbUser = await options.db.user(user.id)\n expect(dbUser).toEqual(user)\n })\n}\n\n// Run against Postgres\nrunAdapterTests({ adapter: postgresAdapter, db: postgresHelpers, fixtures })\n\n// Run against SQLite\nrunAdapterTests({ adapter: sqliteAdapter, db: sqliteHelpers, fixtures })\n```\n\n**Key Files from course-builder:**\n- packages/utils/adapter.ts:84 - runBasicTests() (766 lines)\n- packages/adapter-drizzle/test/fixtures.ts - Shared test data\n- packages/adapter-drizzle/test/mysql/test.sh - Shell script for DB lifecycle\n\n**DRY Patterns:**\n1. Parameterized test suites (write once, run against multiple DBs)\n2. Shared fixtures file (single source of truth for test data)\n3. Shell scripts for database lifecycle (Docker container management)\n4. Shared vitest config via tooling package\n5. Optional test methods pattern (core required, extended optional)\n\n**Gotchas:**\n- Drizzle truncates milliseconds - zero them out in fixtures\n- Cleanup order matters - delete children before parents (FK constraints)\n- Test suite functions use vitest globals (side effects, not pure)","created_at":"2025-12-18T16:36:29.114Z"}
|
|
430
543
|
{"id":"e333f398-4fee-41d8-8edb-c0fc30376305","information":"AI SDK v6 Section 1 Fundamentals validation complete. Found 3 model naming bugs, all other v6 patterns CORRECT.\n\n**CORRECT v6 Patterns:**\n- Import: `import { generateText, Output } from 'ai'` ✅\n- Structured output: `Output.object({ schema })` with destructuring `{ output }` ✅\n- Basic text generation: `generateText({ model, prompt })` with destructuring `{ text }` ✅\n- No deprecated `generateObject` or `experimental_generateObject` references ✅\n\n**Bugs Filed:**\n1. cell-is13o5-mji2yj856tl: Lesson 04 line 132 - 'openai/gpt-5' should be 'openai/gpt-5.1'\n2. cell-is13o5-mji2ym6ttkx: Lesson 05 line 182 - 'openai/gpt-5' should be 'openai/gpt-5.1'\n3. cell-is13o5-mji2zh5ndeq: Lesson 04 Model Selection Guide - 'gpt-5' → 'gpt-5.1' and 'gpt-5-nano' → 'gpt-5-mini'\n\n**Model Names v6:**\n- Fast models: `gpt-4.1`, `gpt-4.1-mini`, `gpt-4o`, `gpt-4o-mini`\n- Reasoning models: `gpt-5.1`, `gpt-5-mini`, `o3`, `o1-mini`\n\n**Lessons Validated:**\n- 01-introduction-to-llms.mdx: PASS (conceptual example uses correct v6 Output.object pattern)\n- 02-prompting-fundamentals.mdx: PASS (basic generateText examples, no structured output)\n- 03-ai-sdk-dev-setup.mdx: PASS (setup instructions, no code validation issues)\n- 04-data-extraction.mdx: 3 bugs (model naming in code example + Model Selection Guide)\n- 05-model-types-and-performance.mdx: 1 bug (model naming in code example)\n\nAll imports, API calls, and destructuring patterns match official v6 docs exactly.","created_at":"1766463910105.0","tags":"ai-sdk-v6,section-1,fundamentals,validation,model-naming,Output.object,generateText"}
|
|
544
|
+
{"id":"e3feea1e-c7b9-44e7-a6bc-bba6ce640500","information":"scanSessionMessages implementation pattern: Extract swarm state from SDK session messages by scanning tool calls. Key design: (1) Define minimal OpencodeClient interface for dependency injection - only needs session.messages method. (2) Return early with empty state if client undefined - graceful degradation. (3) Use type guards (part.type === \"tool\" && state.status === \"completed\") before accessing input/output. (4) Parse JSON output defensively with try/catch - tools may return non-JSON or fail. (5) Use Map for subtasks - efficient lookup when marking complete. (6) Track lastAction with timestamp for temporal ordering. (7) Multiple tools can populate same state field (epicId from hive_create_epic OR swarm_spawn_subtask OR swarm_status) - first one wins. (8) swarm_complete updates existing subtask status, doesn't create new entry. Pattern validates well with 8 TDD tests covering all tool types.","created_at":"1766598729683.0","tags":"opencode-sdk,session-scanning,swarm-state,tdd,compaction-hook"}
|
|
431
545
|
{"id":"e45a9f1d-12fa-4dbf-b6ff-f5d2b15abd27","information":"Drizzle Migration Pattern for Subsystem-Specific Queries:\n\n**Problem:** When migrating queries to Drizzle, using the full schema (via `toDrizzleDb()` or `createDrizzleClient()`) breaks tests when test databases only contain tables from one subsystem (e.g., hive tables but not streams tables).\n\n**Root Cause:** `createDrizzleClient()` loads ALL schemas from `db/schema/index.js` (streams, memory, hive). Drizzle validates schema on instantiation, causing \"table X has no column Y\" errors when tables don't exist.\n\n**Solution:** Create subsystem-specific Drizzle client factories that only load relevant schemas:\n\n```typescript\nfunction getHiveDrizzle(db: DatabaseAdapter) {\n // Import only hive schema tables\n const hiveSchema = { beads };\n \n // For LibSQL Client, get the client and wrap with Drizzle\n if (typeof (db as any).getClient === 'function') {\n const client = (db as any).getClient();\n return drizzle(client, { schema: hiveSchema });\n }\n \n // For PGlite or raw client, wrap directly\n return drizzle(db as any, { schema: hiveSchema });\n}\n```\n\n**Benefits:**\n- Tests work with minimal schema setup (only tables needed for subsystem)\n- Faster Drizzle instantiation (fewer tables to validate)\n- Clear separation of concerns (hive code only sees hive schema)\n\n**Pattern:** When migrating subsystems to Drizzle, create `get{Subsystem}Drizzle()` helpers in subsystem-specific files (e.g., `hive/queries-drizzle.ts`, `streams/store-drizzle.ts`).\n\n**Applies to:** swarm-mail hive subsystem, but pattern is universal for any Drizzle migration with multiple schemas.\n","created_at":"1766331998014.0","tags":"drizzle,testing,schema-isolation,subsystem-migration,hive"}
|
|
432
546
|
{"id":"e488f52a-f43c-49b2-be81-57f3e9c57d50","information":"{\"id\":\"pattern-1766260240802-kxdynu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:40.802Z\",\"updated_at\":\"2025-12-20T19:50:40.802Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260241042.0","metadata":"{\"id\":\"pattern-1766260240802-kxdynu\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
433
547
|
{"id":"e490aaba-d992-4f89-9fca-9855979a86e5","information":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:16:25.895Z\",\"updated_at\":\"2025-12-14T02:16:25.895Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:16:26.095Z","metadata":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
434
548
|
{"id":"e5cb0bfa-a3b7-451e-a7f1-3bc13caa1b2f","information":"{\"id\":\"pattern-1766262989524-goyxtd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:36:29.524Z\",\"updated_at\":\"2025-12-20T20:36:29.524Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262989741.0","metadata":"{\"id\":\"pattern-1766262989524-goyxtd\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
435
549
|
{"id":"e6cd509d-b29d-4a32-9e45-1fb9587017a0","information":"Canvas rendering in Svelte 5: Always check getContext(\"2d\") returns non-null before using. Pattern: `const ctx = canvas.getContext(\"2d\"); if (!ctx) return;` instead of `const ctx = canvas.getContext(\"2d\")!;`. Also check node.x and node.y together in one condition: `if (node.x == null || node.y == null) continue;` not `if (node.x == null) continue; ... node.y!`.","created_at":"1766343409432.0","tags":"svelte,canvas,null-safety,typescript,rendering"}
|
|
436
550
|
{"id":"e6ed4f0d-5c0a-46c0-871f-dbfd9167e0ba","information":"ADR-008 App Template Architecture decision: CLI scaffolding via bunx create-badass-app using Bun's native file I/O (no degit/giget dependencies). Database-backed email list via adapter pattern (default: SQLite, swappable to ConvertKit/etc). ContentResource pattern from ADR-003 for blog posts + collections. Next.js 16 + Tailwind + @badass/ui default stack. Template lives in packages/create-badass-app/templates/. Coordinated with ADR009Writer on shared docs/adr/README.md - both agents added their entries to avoid conflicts.","created_at":"2025-12-18T23:56:41.130Z","tags":"adr,app-template,scaffolding,cli,email-capture,adapter-pattern,swarm-coordination"}
|
|
551
|
+
{"id":"e7633465-7922-42ab-b618-1f1e9a9ad36f","information":"GitHub Actions eval gate integration pattern: Added CI workflow step that runs evals with gate checking and posts results as PR comments. Key implementation details: (1) eval:ci script in root package.json that cd's to package and runs 'swarm eval run --ci', (2) CI step uses continue-on-error: true so stabilization warnings don't fail CI, (3) evalRun() function checks --ci flag and writes results to .hive/eval-results.json for PR comment consumption, (4) github-script action reads JSON and formats as markdown table with emoji indicators (🌱 bootstrap, ⚡ stabilization, 🏆 production), (5) Exit code logic: only fail (exit 1) if production-phase evals fail, bootstrap/stabilization always pass. recordEvalRun() requires full EvalRunRecord object with timestamp, eval_name, score, run_count. AI_GATEWAY_API_KEY must be in repository secrets.","created_at":"1766637006511.0","tags":"ci,github-actions,eval-gates,progressive-testing,pr-comments"}
|
|
437
552
|
{"id":"e77b7ee9-ceea-4f0f-8314-30e64330d6c3","information":"DRIZZLE ORM + PGLITE FEASIBILITY ANALYSIS (Dec 2025):\n\nVERDICT: FEASIBLE via hybrid/coexistence approach.\n\nKEY FINDINGS:\n1. Drizzle has first-class PGLite support (drizzle-orm/pglite driver since v0.30.6)\n2. Can wrap existing PGLite instance: drizzle({ client: existingPGlite })\n3. Same API works on full PostgreSQL - future migration is trivial\n4. All PostgreSQL features work: JSONB, SERIAL, indexes, foreign keys, transactions\n\nRECOMMENDED APPROACH:\n- Keep existing migrations.ts for current tables\n- Use Drizzle for new features going forward\n- Implement DrizzleDatabaseAdapter wrapper to satisfy existing DatabaseAdapter interface\n- Gradual migration of high-churn tables over time\n\nEFFORT ESTIMATE: ~87 hours (2-3 weeks) for full migration\n\nWRAPPER PATTERN:\nclass DrizzleDatabaseAdapter implements DatabaseAdapter {\n constructor(private db: PgliteDatabase) {}\n async query<T>(sql, params) { return { rows: (await this.db.execute(sql.raw(sql, ...params))).rows }; }\n async transaction<T>(fn) { return this.db.transaction(tx => fn(new DrizzleDatabaseAdapter(tx))); }\n}\n\nREFERENCE: Course Builder has working adapter-drizzle package at badass-courses/course-builder\n\nGOTCHAS:\n- Drizzle doesn't auto-generate down migrations (rollback support is partial)\n- Drizzle uses template literals not $1,$2 params - wrapper must translate\n- Bundle size adds ~50kb (negligible for Node.js)","created_at":"2025-12-16T20:23:38.983Z"}
|
|
438
553
|
{"id":"e7e92b71-82db-4a4f-a9b0-b4b4549c5a0e","information":"Beads validation and operations implementation completed for opencode-swarm-plugin-it2ke.19. Ported validation rules from steveyegge/beads internal/types/types.go: title max 500 chars, priority 0-4, status transition state machine (open->in_progress/blocked/closed, closed->open reopen, tombstone permanent). Operations layer provides high-level CRUD (createBead, getBead, updateBead, closeBead, reopenBead, deleteBead, searchBeads) wrapping BeadsAdapter with validation. All 41 validation tests pass. Operations tests reveal priority=0 handling issue - event stores priority correctly but projection defaults to 2, likely due to event.priority OR 2 treating 0 as falsy. Fix: use nullish coalescing instead for proper undefined handling.","created_at":"2025-12-16T22:19:50.241Z","tags":"beads,validation,operations,event-sourcing,priority-handling,steveyegge-port"}
|
|
439
554
|
{"id":"e84c9135-1eb8-417b-a753-6ff71b0becda","information":"Stable IDs for Subtasks: Use generated string identifiers (e.g., \"auth-setup-f3a2\") instead of array indices for subtask dependencies. Problem: If subtasks are reordered or new ones inserted, numeric indices break resume logic and dependency tracking. ACFS uses stable phase IDs in state.json v2 schema: completed_phases: [\"user_setup\", \"filesystem\"] NOT [1, 2]. Apply to hive epic subtasks - generate stable IDs at creation time, reference by ID not position. Source: Dicklesworthstone/agentic_coding_flywheel_setup state.sh","created_at":"1766591006754.0","tags":"swarm,hive,subtasks,ids,dependencies,patterns,acfs"}
|
|
555
|
+
{"id":"e8b202fa-465c-4f37-a082-3a926e1c0215","information":"{\"id\":\"pattern-1766633966969-p7dkzd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-25T03:39:26.969Z\",\"updated_at\":\"2025-12-25T03:39:26.969Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766633967185.0","metadata":"{\"id\":\"pattern-1766633966969-p7dkzd\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
440
556
|
{"id":"e8d7af1e-8896-4937-ac4d-08c8decc67fa","information":"{\"id\":\"pattern-1766263570127-xkxp9j\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:46:10.127Z\",\"updated_at\":\"2025-12-20T20:46:10.127Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263570370.0","metadata":"{\"id\":\"pattern-1766263570127-xkxp9j\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
441
557
|
{"id":"e9133cb2-0d3a-4ab6-8528-3b1f4a2ad306","information":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:48:36.548Z\",\"updated_at\":\"2025-12-13T22:48:36.548Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:48:36.768Z","metadata":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
442
558
|
{"id":"e97c791c-93a0-447c-9dbc-a46bd503f183","information":"Schema consolidation for libsql-schema.ts files: DO NOT use migrateDatabase() for initial schema creation. The migration system is designed for schema evolution (ALTER TABLE), not initial CREATE TABLE. libsql-schema.ts files serve as convenience helpers for tests/migrations and should keep explicit CREATE TABLE statements for clarity.\n\n**Why duplication is acceptable:**\n- libsql-schema.ts = convenience for tests (fast in-memory setup)\n- db/schema/*.ts = Drizzle schema (source of truth for structure)\n- FTS5/vector DDL MUST be in libsql-schema.ts (Drizzle can't create these)\n\n**Approach taken:**\n1. Keep CREATE TABLE in libsql-schema.ts for convenience\n2. Add prominent comments: \"MUST match db/schema/*.ts (source of truth)\"\n3. Remove duplicate logic, keep only FTS5/vector/index DDL that Drizzle can't handle\n4. Tests verify sync between schemas\n\n**Anti-pattern:** Trying to auto-generate CREATE TABLE from Drizzle schema via migrateDatabase() - causes quote escaping issues with defaults like \"'{}'\", fails for SQL function defaults like \"(datetime('now'))\".\n\nApplies to: swarm-mail package, memory/streams subsystems","created_at":"1766339063434.0","tags":"schema,consolidation,drizzle,libsql,fts5,vector,migration,source-of-truth"}
|
|
@@ -452,32 +568,52 @@
|
|
|
452
568
|
{"id":"ee586e5a-5aa2-4b71-904a-a4aee468076d","information":"{\"id\":\"pattern-1766074457007-guqdx7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:14:17.007Z\",\"updated_at\":\"2025-12-18T16:14:17.007Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:14:17.299Z","metadata":"{\"id\":\"pattern-1766074457007-guqdx7\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
453
569
|
{"id":"ef0007e8-632e-41b9-bca5-4f22547500b1","information":"SQL injection prevention in libSQL/SQLite requires using `db.query()` with parameterized queries instead of `db.exec()` with string interpolation.\n\n**Vulnerable pattern:**\n```typescript\nawait db.exec(`\n INSERT INTO table (col1, col2)\n VALUES ('${userInput}', ${numericInput})\n`);\n```\n\n**Secure pattern:**\n```typescript\nawait db.query(\n `INSERT INTO table (col1, col2) VALUES (?, ?)`,\n [userInput, numericInput]\n);\n```\n\n**Why it matters:**\n- String interpolation allows SQL injection: malicious input like `\"'; DROP TABLE users; --\"` gets executed\n- Parameterized queries bind values safely - database treats them as data, not SQL code\n- Works for all parameter types (string, number, boolean)\n\n**Testing strategy:**\n- Test with malicious SQL in string parameters\n- Test with special characters (quotes, backslashes)\n- Verify malicious strings are stored literally, not executed\n- Check tables/data weren't modified by injection attempts\n\n**Affected locations in swarm-mail:**\n- `packages/swarm-mail/src/streams/effect/cursor.ts` lines 134-138 (loadCursorPosition)\n- `packages/swarm-mail/src/streams/effect/cursor.ts` lines 154-159 (saveCursorPosition)\n\nFixed by replacing `db.exec()` with string interpolation with `db.query()` using `?` placeholders and parameter arrays.","created_at":"1766375809350.0","tags":"security,sql-injection,libsql,sqlite,parameterized-queries,cursor,swarm-mail"}
|
|
454
570
|
{"id":"ef25dc27-ef8f-41c9-8f44-4ef31ababa22","information":"Course Builder Drizzle Adapter Pattern for \"bring your own database\" sharing:\n\n1. **Table Function Injection**: Adapter accepts BOTH db instance AND table creator function. `DrizzleAdapter(db, tableFn)` - db is shared, tableFn is consumer-specific for namespacing.\n\n2. **Schema Factory Pattern**: Export `getSchema(tableFn)` factory, NOT concrete tables. Consumer calls factory with their prefixed table creator. Adapter never owns concrete table definitions.\n\n3. **Database Instance Injection**: Adapter stores reference to consumer's db instance, uses it for all queries. Adapter doesn't create db - consumer creates and passes it in.\n\n4. **Multi-Project Schema via Drizzle's tableCreator**: `mysqlTableCreator((name) => 'prefix_${name}')` enables table prefixing. Multiple apps share same database with isolated namespaces (e.g., `zER_users`, `zEW_users` in same db).\n\n5. **Consumer Usage Pattern**: Consumer creates pgTable with prefix, calls schema factory, creates db with merged schemas, passes db+tableFn to adapter.\n\nThis enables extracting packages like swarm-mail as pure libraries that integrate into consumer's database rather than owning their own instance. Key insight: the library is a \"guest\" in the consumer's database, not a \"host\".","created_at":"2025-12-14T23:56:11.298Z"}
|
|
571
|
+
{"id":"ef4cdd0c-1ec6-48f7-9646-76f96939918a","information":"Wired captureDecomposition() into swarm_validate_decomposition for eval data capture. Pattern: Add optional params (project_path, task, context, strategy, epic_id) to tool args, call captureDecomposition() after successful validation but before returning result. Use dynamic import to avoid circular deps. Capture is non-fatal (wrapped in try-catch with console.warn). Tests use spyOn() from bun:test to verify capture calls. Key learning: CellTreeSchema has .optional().default(\"\") for epic description, so it returns empty string not undefined.","created_at":"1766619085214.0","tags":"eval-capture,swarm-decompose,tdd,testing-patterns"}
|
|
455
572
|
{"id":"ef97f001-87ae-47c1-bfcb-f513cf991a23","information":"Researcher prompt template pattern for swarm documentation phase: Created RESEARCHER_PROMPT template following SUBTASK_PROMPT_V2 structure with [IDENTITY], [MISSION], [WORKFLOW], and [CRITICAL REQUIREMENTS] sections. Key design: coordinator provides EXPLICIT tech list (researcher doesn't discover what to research), researcher dynamically discovers TOOLS available (nextjs_docs, context7, fetch, pdf-brain). Two-output pattern: detailed findings to semantic-memory (searchable by future agents), condensed summary to coordinator via swarmmail_send for shared_context. Supports --check-upgrades flag for comparing installed vs latest versions. Tool signature: swarm_spawn_researcher(research_id, epic_id, tech_stack[], project_path, check_upgrades?). Returns JSON with prompt, subagent_type=\"swarm/researcher\", and expected_output schema. Exported via promptTools in swarmTools.","created_at":"1766515129291.0","tags":"swarm,researcher,documentation,prompt-template,epic-opencode-swarm-monorepo-lf2p4u-mjix9j5ssyz"}
|
|
573
|
+
{"id":"f0c94424-208d-475e-83fe-2d2fae472d68","information":"{\"id\":\"test-1766598995939-2f3fgqzpft9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T17:56:35.939Z\",\"raw_value\":1}","created_at":"1766598996156.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T17:56:35.939Z\"}"}
|
|
456
574
|
{"id":"f13bc295-104a-4c65-af90-7cfa7eab1539","information":"swarm-mail getDatabase() migration: The old PGLite-style `getDatabase(projectPath)` standalone export was removed. Now use `getSwarmMailLibSQL(projectPath)` to get a SwarmMailAdapter, then call `adapter.getDatabase()` to get the DatabaseAdapter for raw queries. Example: `const swarmMail = await getSwarmMailLibSQL(projectPath); const db = await swarmMail.getDatabase(); await db.query(...)`","created_at":"1766345263816.0","tags":"swarm-mail,migration,getDatabase,libsql,api-change"}
|
|
575
|
+
{"id":"f1b9618c-7400-41c6-aa3b-da5ed86eeeb9","information":"{\"id\":\"pattern-1766610771986-2nbju9\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T21:12:51.986Z\",\"updated_at\":\"2025-12-24T21:12:51.986Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766610772201.0","metadata":"{\"id\":\"pattern-1766610771986-2nbju9\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
457
576
|
{"id":"f1e4ec49-2123-46c4-9dfb-3bc334734e25","information":"{\"id\":\"test-1766593217747-1ure5lmoryr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:20:17.747Z\",\"raw_value\":1}","created_at":"1766593218085.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:20:17.747Z\"}"}
|
|
458
577
|
{"id":"f24305d6-ce9c-4b32-84f1-cc6fbafa5899","information":"Effect-TS Layer routing pattern for daemon-aware connection fallback in pdf-library project.\n\n**Problem**: Database service needs to support both daemon mode (Unix socket via DatabaseClient) and single-process mode (direct PGlite) transparently.\n\n**Solution**: Use Layer.unwrapEffect to check daemon status at Layer creation time and route to appropriate implementation:\n\n```typescript\nexport const DatabaseLive = Layer.unwrapEffect(\n Effect.gen(function* () {\n const config = LibraryConfig.fromEnv();\n \n const daemonConfig = {\n socketPath: config.libraryPath,\n pidPath: `${config.libraryPath}/daemon.pid`,\n dbPath: config.dbPath,\n };\n\n const running = yield* Effect.promise(() => isDaemonRunning(daemonConfig));\n\n if (running) {\n // Route to DatabaseClient (Unix socket connection)\n return Layer.effect(\n Database,\n DatabaseClient.make(config.libraryPath).pipe(\n Layer.build,\n Effect.flatMap((context) => Effect.succeed(Context.get(context, DatabaseClient)))\n )\n );\n } else {\n // Route to direct PGlite implementation\n return DirectDatabaseLive;\n }\n })\n);\n```\n\n**Key insights**:\n- Layer.unwrapEffect allows decision at runtime (daemon check)\n- Layer.build + Context.get extracts DatabaseClient implementation\n- Compatible interfaces (Database and DatabaseClient) allow transparent routing\n- Tests verify fallback works when daemon not running\n\n**Why Layer.effect + Layer.build**:\nNeed to \"convert\" DatabaseClient layer to provide Database service. Pattern:\n1. Build DatabaseClient layer to get context\n2. Extract DatabaseClient implementation from context via Context.get\n3. Wrap in Layer.effect(Database, ...) to provide Database tag\n\nThis provides multi-process safety via daemon while maintaining single-process simplicity as fallback.","created_at":"2025-12-19T15:15:49.858Z","tags":"effect-ts,layer,routing,daemon,fallback,unix-socket,pglite"}
|
|
578
|
+
{"id":"f2708b7b-c81c-4cf9-8b7a-e45fd48e2d91","information":"Learning Systems architecture in opencode-swarm-plugin: Four interconnected modules (learning.ts, pattern-maturity.ts, anti-patterns.ts, eval-learning.ts) implement confidence decay (90-day half-life), implicit feedback scoring (weighted formula: 40% success + 20% duration + 20% errors + 20% retries), pattern maturity state machine (candidate→established→proven→deprecated), and anti-pattern auto-inversion (60% failure threshold). Inspired by Dicklesworthstone's cass_memory_system (scoring.ts, outcome.ts, curate.ts), spaced repetition research (Anki, Michael Nielsen), and \"Patterns for Building AI Agents\" p.40 error accumulator pattern. Novel contributions: 3-strike architecture review forcing function, eval-to-learning closed-loop feedback (15% drop threshold triggers semantic memory storage), and maturity multipliers (proven=1.5x, deprecated=0x) for prompt weighting.","created_at":"1766672839549.0","metadata":"{\"files\":[\"learning.ts\",\"pattern-maturity.ts\",\"anti-patterns.ts\",\"eval-learning.ts\"],\"worker\":\"CoolFire\",\"research_task\":\"ADR-009\"}","tags":"learning-systems,confidence-decay,pattern-maturity,anti-patterns,swarm,research,opencode-swarm-plugin"}
|
|
459
579
|
{"id":"f2b63c56-11dd-4e37-aa59-57d15987bf69","information":"LibSQL AsyncGenerator pattern: When implementing async generators in Effect-based services, the generator function must be called WITHIN the Effect scope to prevent CLIENT_CLOSED errors. The client is scoped to the Effect layer and closes when the scope ends.\n\n**WRONG**:\n```typescript\nconst db = await Effect.runPromise(Effect.provide(program, layer));\nconst batches = await collectGenerator(db.streamEmbeddings(10)); // CLIENT_CLOSED!\n```\n\n**CORRECT**:\n```typescript\nconst batches = await Effect.runPromise(\n Effect.gen(function* () {\n const db = yield* Database;\n // setup data...\n return yield* Effect.promise(() => collectGenerator(db.streamEmbeddings(10)));\n }).pipe(Effect.provide(layer))\n);\n```\n\nThe async generator holds a reference to the client, so it must be consumed before the Effect scope closes. Use Effect.promise() to wrap the async generator consumption inside the Effect scope.","created_at":"1766423830017.0","tags":"effect-ts,libsql,async-generators,scoping,client-lifecycle"}
|
|
460
580
|
{"id":"f2c0bac0-6db1-4453-acc1-4b2c56b2df32","information":"{\"id\":\"test-1766262543105-r7bm19lkujf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:29:03.105Z\",\"raw_value\":1}","created_at":"1766262543323.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:29:03.105Z\"}"}
|
|
461
581
|
{"id":"f3514329-eb61-4447-b242-1f3e05d9bdcd","information":"AI SDK v6 Section 2 Validation Complete: API patterns are correct (generateText + Output.object/array, correct destructuring), but found systematic model naming bugs. All instances of `openai/gpt-4.1` should be `openai/gpt-4o-mini` and `openai/gpt-5` should be `openai/o1-mini`. Found across lessons 1-4. Lesson 5 (v0 UI) has no AI SDK code (just v0 integration tutorial). The core teaching is correct - only model identifiers need updating.","created_at":"1766464081509.0","tags":"ai-sdk-v6,validation,invisible-ai,model-names,bugs"}
|
|
462
582
|
{"id":"f3b50100-0bb4-4ff0-a9f4-440447b8aa94","information":"ADR writing pattern for opencode-swarm-plugin: Follow git-sync-distributed-coordination.md format with these sections: Context (problem statement with ASCII diagrams), Decision (architecture with detailed flow diagrams), Consequences (Positive/Negative/Risks), Implementation (files, functions, pseudocode), Alternatives Considered (rejected options with reasoning), Future Work (next steps), References. Use ASCII box diagrams for processes, state machines, and architecture. Include TypeScript pseudocode for key workflows. Reference specific OpenCode constraints and issues. Match existing ADR tone: technical, detailed, opinionated (\"this is the right architecture\").","created_at":"1766595569344.0","tags":"adr,documentation,architecture,opencode-swarm-plugin,writing-patterns"}
|
|
583
|
+
{"id":"f42faca9-7e97-4ce4-a7f8-cb5e71b4f1c0","information":"{\"id\":\"test-1766633965828-qvh9g5fotis\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-25T03:39:25.828Z\",\"raw_value\":1}","created_at":"1766633966044.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-25T03:39:25.828Z\"}"}
|
|
463
584
|
{"id":"f4e32f4b-6b15-4458-b904-e8cdf5d310cb","information":"{\"id\":\"test-1766263760686-zzafifmiqr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:49:20.686Z\",\"raw_value\":1}","created_at":"1766263760949.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:49:20.686Z\"}"}
|
|
464
585
|
{"id":"f519b624-497d-4115-a62a-fc3d637238ef","information":"{\"id\":\"test-1766261101180-gd9l9iem91g\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:05:01.180Z\",\"raw_value\":1}","created_at":"1766261101433.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:05:01.180Z\"}"}
|
|
465
586
|
{"id":"f51c6faf-225c-4a28-96d4-df2fe8849549","information":"{\"id\":\"test-1766516101566-59iepjl7xqy\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-23T18:55:01.566Z\",\"raw_value\":1}","created_at":"1766516101846.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-23T18:55:01.566Z\"}"}
|
|
466
587
|
{"id":"f5a5d45a-a679-4edd-8628-310cc639b109","information":"{\"id\":\"pattern-1766263310054-b2w7ig\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:41:50.054Z\",\"updated_at\":\"2025-12-20T20:41:50.054Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263310316.0","metadata":"{\"id\":\"pattern-1766263310054-b2w7ig\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
588
|
+
{"id":"f6379658-3450-4894-85cc-bcbf6e933612","information":"TDD pattern for JSONL loaders: Start with failing tests that create fixture files in beforeAll(), use afterAll() for cleanup. For tests that create files mid-test (like \"skips invalid JSONL lines\"), clean up immediately after assertion to avoid polluting subsequent tests. \n\nBun test ordering isn't guaranteed, so files created in one test can interfere with others. Pattern: Create temp directory with Date.now() suffix, clean up in afterAll(), and immediately unlink files created in individual tests.\n\nStreaming strategy for JSONL: For small limits (<100), streaming with readline is overkill - just read entire file. For large files or no limit, streaming saves memory. Pattern: check limit, use fs.readFileSync for small, createReadStream + readline for large.\n\nType-safe filtering with discriminated unions: Use Extract<Union, { discriminator: \"value\" }> to get subset. Example: Extract<CoordinatorEvent, { event_type: \"COMPACTION\" }> gives only COMPACTION events with proper narrowed types. Zod validates at runtime, Extract validates at compile time.","created_at":"1766635910127.0","tags":"tdd,jsonl,streaming,testing-patterns,typescript,discriminated-unions"}
|
|
467
589
|
{"id":"f7a49f2c-9f9d-4e25-b910-973a703ebc99","information":"Plugin runtime migration from standalone getDatabase() to adapter pattern: The old PGLite-style `getDatabase(projectPath)` standalone export was removed from swarm-mail. Tests that called `const { getDatabase } = await import(\"swarm-mail\"); const db = await getDatabase(projectPath)` must migrate to adapter pattern: `const { getSwarmMailLibSQL } = await import(\"swarm-mail\"); const swarmMail = await getSwarmMailLibSQL(projectPath); const db = await swarmMail.getDatabase()`.\n\n**Why the change:** The standalone function was tightly coupled to PGLite. The adapter pattern (SwarmMailAdapter) provides database-agnostic interface.\n\n**Migration steps:**\n1. Replace `getDatabase, closeDatabase` imports with `getSwarmMailLibSQL, closeSwarmMailLibSQL`\n2. Replace `const db = await getDatabase(path)` with `const swarmMail = await getSwarmMailLibSQL(path); const db = await swarmMail.getDatabase()`\n3. Replace `await closeDatabase(path)` with `await closeSwarmMailLibSQL(path)`\n\n**Key insight:** Plugin code in swarm-orchestrate.ts, hive.ts, memory-tools.ts was ALREADY correctly using `swarmMail.getDatabase()`. They didn't need fixes - they were never broken. The issue Worker 1 fixed was in swarm-mail's store functions (appendEvent, readEvents) requiring explicit dbOverride parameter. Those now auto-create adapters via getOrCreateAdapter().","created_at":"1766349125655.0","tags":"swarm-mail,migration,adapter-pattern,database,getDatabase"}
|
|
468
590
|
{"id":"f7f941bd-2467-49a2-b948-bba33ee263b1","information":"@badass Inngest Decision (Dec 2024): Site-isolated Inngest. Each site has its own Inngest app despite database sharing. Simpler blast radius, no cross-site event coordination complexity. Video processing, email jobs, etc. are site-scoped.","created_at":"2025-12-18T15:54:00.825Z"}
|
|
469
591
|
{"id":"f85ae083-b6c3-40d8-9599-c7a9c591069f","information":"HDBSCAN concepts yoinkable for pdf-library without full algorithm implementation: (1) Core distance via HNSW k-NN - compute core_k(x) = distance to k-th neighbor using existing vector_top_k(), provides noise robustness O(n log n) instead of O(n²). (2) Hierarchical clustering on HNSW graph - extract neighbor connections as sparse graph, run agglomerative with average linkage, single dendrogram contains all hierarchy levels (eliminates BIC k-selection). (3) Noise point filtering - minimum cluster size threshold (e.g., 5 chunks) + late merge detection (height > threshold × 1.5), filters OCR errors and outliers without forcing into clusters. (4) Height-based dendrogram cutting - cut at fixed distance thresholds (0.3, 0.5, 0.7 for cosine) for RAPTOR levels, simpler than stability optimization. SKIP: (1) Full MST construction via Prim's/Boruvka - even O(n log n) too expensive, HNSW graph IS the sparse MST approximation. (2) Stability-based cluster extraction - overkill for \"good enough\" clusters, height-based cutting sufficient. Implementation gains: 35% faster (17min → 11min), better cluster quality (noise filtering), single clustering run vs 3 independent k-means per level.","created_at":"1766426011488.0","tags":"hdbscan,clustering,raptor,hierarchical,noise-filtering,dendrogram,hnsw,agglomerative-clustering,k-selection,pdf-library"}
|
|
592
|
+
{"id":"f968121a-9f60-4bcb-bd5a-98a81008cc33","information":"{\"id\":\"pattern-1766599111495-akam4b\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T17:58:31.495Z\",\"updated_at\":\"2025-12-24T17:58:31.495Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766599111756.0","metadata":"{\"id\":\"pattern-1766599111495-akam4b\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
470
593
|
{"id":"f96dbbf1-bdea-4200-b6be-2ef8b64f80c7","information":"Fixed swarm-mail store.ts auto-adapter resolution: Removed requireDbOverride() error by implementing getOrCreateAdapter() function that auto-creates DatabaseAdapter instances when dbOverride is not provided. \n\n**Problem:** All store functions (appendEvent, readEvents, etc.) threw \"dbOverride parameter is required\" error when called without explicit DatabaseAdapter. This broke the API - callers shouldn't need to manually create adapters.\n\n**Root Cause:** requireDbOverride() function threw error if dbOverride was undefined. Legacy from PGlite removal.\n\n**Solution:**\n1. Added adapter cache (Map<string, DatabaseAdapter>) to avoid creating multiple instances\n2. Replaced requireDbOverride() with async getOrCreateAdapter(dbOverride?, projectPath?)\n3. Auto-creates adapter using getDatabasePath() + createLibSQLAdapter() when not provided\n4. Calls createLibSQLStreamsSchema() to initialize schema on new adapters\n5. Exported clearAdapterCache() for test isolation\n\n**Files Changed:**\n- store.ts: Added getOrCreateAdapter(), clearAdapterCache(), schema init\n- store.integration-test.ts: Added clearAdapterCache() + deleteGlobalDatabase() in afterEach\n- store-auto-adapter.test.ts: New test file proving fix works (2/2 pass)\n\n**Test Results:**\n- Integration tests: 21/24 pass (3 failures are pre-existing bugs unrelated to fix)\n- New focused tests: 2/2 pass\n- Original \"dbOverride required\" error completely eliminated\n\n**Key Insight:** getDatabasePath() ignores projectPath parameter and always returns global ~/.config/swarm-tools/swarm.db. Tests need to clear adapter cache + delete global DB for isolation.","created_at":"1766348469011.0","tags":"swarm-mail,store,database-adapter,auto-resolution,caching,libsql"}
|
|
471
594
|
{"id":"f9c44e94-1fc1-49e3-b0a9-28d09a1fa976","information":"Tool discovery pattern for researchers in swarm coordination: Created runtime detection of available documentation tools (MCP servers, CLI tools) using `discoverDocTools()`. Returns structured `DiscoveredTool[]` with name, type (mcp/cli/skill), capabilities array, and availability boolean.\n\nKey insight: Researchers discover HOW to fetch docs (available tools), not WHAT to research (coordinator provides tech list). This separation of concerns allows researchers to adapt to different environments.\n\nImplementation pattern:\n1. Define TOOL_DEFINITIONS with capabilities\n2. Check availability via isToolAvailable() for CLI, assume true for MCP (runtime detection)\n3. Return structured list with availability status\n4. Export as plugin tool with summary stats\n\nTDD approach worked well: 9 tests written first, all passing. Tests verify structure, availability detection, capability mapping, and graceful degradation.\n\nIntegration: Exported from swarm-research.ts → swarm.ts → index.ts (public API). Tool registered as `swarm_discover_tools` in plugin.\n\nFuture enhancement: OpenCode doesn't yet expose MCP server list, so we assume availability. When that's available, add actual MCP detection.","created_at":"1766515823304.0","tags":"swarm,research,tool-discovery,mcp,runtime-detection,tdd"}
|
|
472
595
|
{"id":"fa0ede27-8993-4b8f-af9e-a1496684107e","information":"{\"id\":\"test-1765664066304-cw34qmxbxjm\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:14:26.304Z\",\"raw_value\":1}","created_at":"2025-12-13T22:14:26.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:14:26.304Z\"}"}
|
|
596
|
+
{"id":"fa5cd7d6-3293-4bca-a596-5e99ab53b187","information":"Wired decomposition_complete event capture into hive_create_epic (hive.ts line ~769-801). Pattern: After successful DecompositionGeneratedEvent emission, capture coordinator event with epic_id, subtask_count, strategy_used, and files_per_subtask (indexed map). Use dynamic import for eval-capture.js to avoid circular dependencies. Capture is non-fatal (wrapped in try-catch with console.warn). Key insight: files_per_subtask must be a Record<number, string[]> mapping subtask index to file list for eval scorer consumption. The spawnEfficiency scorer relies on this event to avoid 0.5 fallback when decomposition_complete is logged. TDD approach validated: wrote failing test first (checking captureCoordinatorEvent directly vs full hive integration due to plugin infrastructure requirements).","created_at":"1766641052510.0","tags":"eval-capture,decomposition_complete,coordinator-events,hive-integration,tdd"}
|
|
597
|
+
{"id":"fab5562c-ae29-4662-a222-12bbf2d29c22","information":"Implemented `swarm cells` CLI command in packages/opencode-swarm-plugin/bin/swarm.ts. Key design decisions: 1) Used getSwarmMailLibSQL + createHiveAdapter directly (no bd CLI dependency). 2) Partial ID resolution via resolvePartialId() from swarm-mail. 3) Table output by default with formatCellsTable() helper. 4) --json flag outputs raw array (no wrapper). 5) Filters: --status, --type, --ready. 6) Positional arg for single cell lookup (e.g., `swarm cells mjkmdyoqhn4`). Pattern: Parse args manually (no dependency), call adapter methods, format output (table or JSON). Added Cell Management section to help text. Test coverage: formatCellsTable() helper tested in swarm.test.ts with TDD.","created_at":"1766618217887.0","tags":"cli,cells,swarm,hive,database,partial-id,tdd"}
|
|
473
598
|
{"id":"facf6e03-d4c3-42da-8cf0-758434d4748f","information":"pino-roll async file creation timing: Files created via pino.transport() with pino-roll are written asynchronously. In tests, need to wait 500ms+ after logger.info() before checking if files exist with fs.readdir(). 100ms is too short and causes flaky tests. The transport spawns a worker thread that handles file writes, so the write operation doesn't complete synchronously.","created_at":"1766592745715.0","tags":"pino,pino-roll,testing,async,timing,flaky-tests"}
|
|
474
599
|
{"id":"fb2f3480-9e10-443c-b9e9-755e83f648d8","information":"@badass Architecture Session Checkpoint (Dec 2024) - Ready to decompose into implementation. LOCKED DECISIONS: (1) CLI: Multi-site PlanetScale/Stripe pattern, ~/.badass/config.json, (2) DB: Creator-level sharing enabled, (3) Auth: Hive+Spoke model - creator designates one site as auth hive, spokes redirect there, (4) Cross-domain SSO: Hive acts as IdP since BetterAuth crossSubDomainCookies only works for subdomains not different TLDs, (5) Local app auth: RFC 8628 device flow (reference impl in course-builder ai-hero), (6) All core framework features in @badass/* packages. OPEN QUESTIONS for next session: (1) Content Model - posts vs courses/modules/lessons schema, (2) Video Pipeline - Mux integration (academy-content reference), (3) Payments - Stripe integration, cross-site purchases, (4) Event System - Inngest patterns. KEY REFERENCES: course-builder apps/ai-hero/src/app/oauth/device/ for device flow, vercel/academy-content for CLI+video pipeline, Kent's unified accounts request as driving use case.","created_at":"2025-12-18T15:42:07.722Z"}
|
|
475
600
|
{"id":"fb3b0250-8f3b-4b2d-804f-120254c70b0c","information":"LibSQL concept embeddings implementation for pdf-library: (1) Use F32_BLOB(768) for nomic-embed-text vectors - MUST match document embeddings dimension. (2) Store with vector32(JSON.stringify(embedding)), query with vector_top_k('concept_embeddings_idx', vector32(?), limit) joined to concepts table. (3) Distance to similarity: score = 1 - distance/2, threshold filter: distance <= 2*(1-threshold). (4) Index with compress_neighbors=float8 for 4x space savings, minimal recall loss. (5) TaxonomyService needs Layer.scoped (not Layer.effect) because addFinalizer requires Scope for cleanup. (6) Migration pattern: create table IF NOT EXISTS, create index IF NOT EXISTS, query for missing rows, batch process with progress reporting. (7) Concept embedding text format: \"prefLabel: definition\" or just \"prefLabel\" to match document chunk semantics.","created_at":"1766257019013.0","tags":"libsql,vector-search,embeddings,nomic-embed-text,taxonomy,effect-ts,migration"}
|
|
476
601
|
{"id":"fb7adce8-e2f1-493c-beb6-8d3736a00b17","information":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:18:30.523Z\",\"updated_at\":\"2025-12-14T02:18:30.523Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:18:30.785Z","metadata":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
477
602
|
{"id":"fbdec046-f92e-47dc-a80a-26e1a6c5fe8f","information":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:47:52.119Z\",\"updated_at\":\"2025-12-18T17:47:52.119Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:47:52.415Z","metadata":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
603
|
+
{"id":"fc47f50f-c717-4be4-a59c-3279a0fe6caf","information":"Drizzle unique constraints with composite keys use column-based syntax: For multi-column unique constraints in Drizzle, don't use uniqueIndex(). Instead, use the table configuration callback with the columns array pattern: sqliteTable(\"table\", { col1, col2 }, (table) => ({ uniqueName: { columns: [table.col1, table.col2], name: \"unique_constraint_name\" } })). This generates: UNIQUE(col1, col2) in CREATE TABLE DDL. Example from memory_links: { uniqueLink: { columns: [table.source_id, table.target_id, table.link_type], name: \"unique_link\" } } prevents duplicate links. The name property is optional but recommended for debugging constraint violations.","created_at":"1766643811859.0","metadata":"{\"pattern\":\"schema-constraints\",\"severity\":\"medium\",\"component\":\"swarm-mail\"}","tags":"drizzle,sqlite,unique-constraints,composite-keys,schema"}
|
|
478
604
|
{"id":"fc9c8976-85c3-48d1-a5cf-88d05be9c5ca","information":"Pino logger singleton pattern for tests: When writing tests that create loggers with different directories, use a Map-based cache instead of a single module-level variable. Pattern: const loggerCache = new Map<string, Logger>() with cache keys like `${module}:${logDir}`. This allows tests to create isolated logger instances per test directory without interference. Also: clear require.cache[require.resolve(\"./logger\")] in beforeEach to force module reimport and reset singletons between tests.","created_at":"1766592738314.0","tags":"pino,testing,singleton,bun,typescript,cache-management"}
|
|
605
|
+
{"id":"fdb1d2f7-3e8a-4ad6-9782-50e96cd9ee31","information":"PGlite deprecation warnings already implemented in swarm-mail package. All three required integration points already have warnPGliteDeprecation() calls: 1) wrapPGlite() in pglite.ts (line 80), 2) toDrizzleDb() PGlite branch in libsql.convenience.ts (line 293), 3) migratePGliteToLibSQL() in migrate-pglite-to-libsql.ts (line 72). Implementation uses module-level _pgliteDeprecationWarned flag for warn-once behavior. Tests exist and pass (pglite.test.ts lines 24-39). Pattern follows warnedTools Set pattern from hive.ts but uses simpler boolean since only one thing is deprecated. Always check if work is already done before implementing - saved 30+ minutes of redundant work.","created_at":"1766618100252.0","metadata":"{\"files\":[\"packages/swarm-mail/src/pglite.ts\",\"packages/swarm-mail/src/libsql.convenience.ts\",\"packages/swarm-mail/src/migrate-pglite-to-libsql.ts\"],\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjggwznl7gx\",\"project\":\"opencode-swarm-plugin\"}","tags":"pglite,deprecation,warnings,swarm-mail,libsql,migration,work-already-done"}
|
|
479
606
|
{"id":"fdf514c6-3fba-4361-b5f4-fd7b5d023985","information":"{\"id\":\"test-1765771077694-7w6dasddwz8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:57.694Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:58.059Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:57.694Z\"}"}
|
|
607
|
+
{"id":"fdff4de8-0a50-4149-a6f0-6a1c925798fe","information":"oh-my-opencode Hook System: Comprehensive workflow automation. keyword-detector: auto-activates modes (ultrawork/ulw, search/find, analyze). todo-continuation-enforcer: prevents quitting mid-work, 2s countdown, auto-continue. comment-checker: prevents excessive AI comments. tool-output-truncator: truncates Glob/Grep/LSP/ast_grep for context preservation. agent-usage-reminder: suggests spawning specialized agents. Hook points: PreToolUse, PostToolUse, UserPromptSubmit, Stop. Background task system with task_id tracking, output retrieval, cancel support. Novel: keyword-based mode switching, enforcement vs suggestion hooks, cross-platform notifications.","created_at":"1766673462463.0","tags":"oh-my-opencode,hooks,workflow-automation,keyword-detector"}
|
|
608
|
+
{"id":"ff4a0e7a-1863-4aaa-b7d4-99f141a47d1d","information":"Integration of eval gates and learning into eval-runner.ts: After recording eval runs to history, runEvals() now calls checkGate() for each suite and triggers learnFromEvalFailure() when gates fail (regression detected). \n\nKey implementation details:\n- getMemoryAdapter() needed to be exported from memory-tools.ts (was previously internal-only)\n- Gate checking happens AFTER recordEvalRun() loop (line 275-292 in eval-runner.ts)\n- Learning is best-effort: wrapped in try/catch, failures logged as warnings, don't fail the eval run\n- gateResults added to RunEvalsResult interface as optional array with suite name + gate details\n- TDD approach worked perfectly: 4 failing tests → implementation → 11 passing tests\n\nError handling pattern:\n```typescript\nif (!gate.passed) {\n try {\n const memoryAdapter = await getMemoryAdapter();\n await learnFromEvalFailure(suite.name, suite.averageScore, history, memoryAdapter);\n } catch (e) {\n console.warn(`Failed to store learning for ${suite.name}:`, e);\n }\n}\n```\n\nThis completes the eval-to-learning closed-loop: evals run → gates check → regressions trigger memory storage → future prompts query memories for context.","created_at":"1766680767592.0","metadata":"{\"file\":\"src/eval-runner.ts\",\"lines\":\"275-292\",\"worker\":\"GoldCloud\",\"cell_id\":\"opencode-swarm-plugin--ys7z8-mjlnn93ux01\"}","tags":"eval-runner,eval-gates,eval-learning,TDD,integration,semantic-memory"}
|
|
480
609
|
{"id":"ffb8e28a-303d-4941-afe7-bf21f69656fb","information":"{\"id\":\"test-1765666114922-71ihlfel1gc\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:48:34.922Z\",\"raw_value\":1}","created_at":"2025-12-13T22:48:35.124Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:48:34.922Z\"}"}
|
|
610
|
+
{"id":"mem-206696ebf1f14d6b","information":"Test memory for plugin tool","created_at":"2025-12-25T19:27:59.579Z","tags":"test,plugin"}
|
|
611
|
+
{"id":"mem-344359d13bbb998a","information":"React hooks enable functional components to use state","created_at":"2025-12-25T19:28:02.980Z"}
|
|
612
|
+
{"id":"mem-597c1cce325f3a7d","information":"TypeScript is a typed superset of JavaScript","created_at":"2025-12-25T19:28:00.798Z"}
|
|
613
|
+
{"id":"mem-72afc1d7ce910281","information":"Test memory for adapter wiring verification","created_at":"2025-12-25T19:28:08.094Z","tags":"test,memory"}
|
|
614
|
+
{"id":"mem-b01df0c7513c556b","information":"Next.js 15 was released by Vercel in October 2024","created_at":"2025-12-25T19:28:04.048Z"}
|
|
615
|
+
{"id":"mem-b9fd74e518f0fc1d","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T19:27:57.752Z"}
|
|
616
|
+
{"id":"mem-ca6f8c7cd4e5a420","information":"Test memory for tools integration","created_at":"2025-12-25T19:27:57.654Z","tags":"test"}
|
|
481
617
|
{"id":"mem_mjbteazb_g1swqjm","information":"Test memory for tools integration","created_at":"2025-12-18T19:09:38.711Z","tags":"test"}
|
|
482
618
|
{"id":"mem_mjbteb35_o8xwaxn","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-18T19:09:38.849Z"}
|
|
483
619
|
{"id":"mem_mjbteo3a_mnd325l","information":"Test memory for tools integration","created_at":"2025-12-18T19:09:55.702Z","tags":"test"}
|
|
@@ -568,4 +704,26 @@
|
|
|
568
704
|
{"id":"mem_mjkxglqg_5ojok3n","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T04:13:20.008Z"}
|
|
569
705
|
{"id":"mem_mjkxgogk_48pml1f","information":"Test memory for adapter wiring verification","created_at":"2025-12-25T04:13:23.540Z","tags":"test,memory"}
|
|
570
706
|
{"id":"mem_mjkxgomk_mm0hvqg","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-25T04:13:23.756Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
|
|
571
|
-
{"id":"mem_mjkxgopz_mqvrw0z","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-25T04:13:23.879Z","tags":"test,verification"}
|
|
707
|
+
{"id":"mem_mjkxgopz_mqvrw0z","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-25T04:13:23.879Z","tags":"test,verification"}
|
|
708
|
+
{"id":"mem_mjl0xjba_1mg3q72","information":"Test memory for tools integration","created_at":"2025-12-25T05:50:28.870Z","tags":"test"}
|
|
709
|
+
{"id":"mem_mjl0xjsd_4x2gw3k","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T05:50:29.485Z"}
|
|
710
|
+
{"id":"mem_mjl0xmby_jghp3tn","information":"Test memory for adapter wiring verification","created_at":"2025-12-25T05:50:32.782Z","tags":"test,memory"}
|
|
711
|
+
{"id":"mem_mjl0xmfb_qfilp0o","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-25T05:50:32.903Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
|
|
712
|
+
{"id":"mem_mjljidpt_543g4ha","information":"Test memory for tools integration","created_at":"2025-12-25T14:30:34.481Z","tags":"test"}
|
|
713
|
+
{"id":"mem_mjljidwu_7dyeb1j","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T14:30:34.734Z"}
|
|
714
|
+
{"id":"mem_mjljie01_zhla7rt","information":"Test memory for plugin tool","created_at":"2025-12-25T14:30:34.849Z","tags":"test,plugin"}
|
|
715
|
+
{"id":"mem_mjljie18_n7qrt6d","information":"TypeScript is a typed superset of JavaScript","created_at":"2025-12-25T14:30:34.892Z"}
|
|
716
|
+
{"id":"mem_mjljie2y_1872ws5","information":"React hooks enable functional components to use state","created_at":"2025-12-25T14:30:34.954Z"}
|
|
717
|
+
{"id":"mem_mjljie3q_ap819rs","information":"Next.js 15 was released by Vercel in October 2024","created_at":"2025-12-25T14:30:34.982Z"}
|
|
718
|
+
{"id":"mem_mjljihi9_2qgv1li","information":"Test memory for tools integration","created_at":"2025-12-25T14:30:39.393Z","tags":"test"}
|
|
719
|
+
{"id":"mem_mjljihkd_pp4uigi","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T14:30:39.469Z"}
|
|
720
|
+
{"id":"mem_mjljihn4_r0s5949","information":"Test memory for plugin tool","created_at":"2025-12-25T14:30:39.568Z","tags":"test,plugin"}
|
|
721
|
+
{"id":"mem_mjljiho2_fdyc36z","information":"TypeScript is a typed superset of JavaScript","created_at":"2025-12-25T14:30:39.602Z"}
|
|
722
|
+
{"id":"mem_mjljihp9_8cp2u66","information":"React hooks enable functional components to use state","created_at":"2025-12-25T14:30:39.645Z"}
|
|
723
|
+
{"id":"mem_mjljihq8_h9umc12","information":"Next.js 15 was released by Vercel in October 2024","created_at":"2025-12-25T14:30:39.680Z"}
|
|
724
|
+
{"id":"mem_mjljq4i7_yj40zww","information":"Test memory for tools integration","created_at":"2025-12-25T14:36:35.791Z","tags":"test"}
|
|
725
|
+
{"id":"mem_mjljq8cv_p3vprah","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-25T14:36:40.783Z"}
|
|
726
|
+
{"id":"mem_mjljqalt_g58x3bi","information":"Test memory for plugin tool","created_at":"2025-12-25T14:36:43.697Z","tags":"test,plugin"}
|
|
727
|
+
{"id":"mem_mjljqanu_2qvmq7o","information":"TypeScript is a typed superset of JavaScript","created_at":"2025-12-25T14:36:43.770Z"}
|
|
728
|
+
{"id":"mem_mjljqape_ifvk8wt","information":"React hooks enable functional components to use state","created_at":"2025-12-25T14:36:43.826Z"}
|
|
729
|
+
{"id":"mem_mjljqaqn_ej3t9iu","information":"Next.js 15 was released by Vercel in October 2024","created_at":"2025-12-25T14:36:43.871Z"}
|