opencode-swarm-plugin 0.35.0 → 0.36.1

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
Files changed (52)
  1. package/.hive/issues.jsonl +4 -4
  2. package/.hive/memories.jsonl +274 -1
  3. package/.turbo/turbo-build.log +4 -4
  4. package/.turbo/turbo-test.log +307 -307
  5. package/CHANGELOG.md +133 -0
  6. package/bin/swarm.ts +234 -179
  7. package/dist/compaction-hook.d.ts +54 -4
  8. package/dist/compaction-hook.d.ts.map +1 -1
  9. package/dist/eval-capture.d.ts +122 -17
  10. package/dist/eval-capture.d.ts.map +1 -1
  11. package/dist/index.d.ts +1 -7
  12. package/dist/index.d.ts.map +1 -1
  13. package/dist/index.js +1278 -619
  14. package/dist/planning-guardrails.d.ts +121 -0
  15. package/dist/planning-guardrails.d.ts.map +1 -1
  16. package/dist/plugin.d.ts +9 -9
  17. package/dist/plugin.d.ts.map +1 -1
  18. package/dist/plugin.js +1283 -329
  19. package/dist/schemas/task.d.ts +0 -1
  20. package/dist/schemas/task.d.ts.map +1 -1
  21. package/dist/swarm-decompose.d.ts +0 -8
  22. package/dist/swarm-decompose.d.ts.map +1 -1
  23. package/dist/swarm-orchestrate.d.ts.map +1 -1
  24. package/dist/swarm-prompts.d.ts +0 -4
  25. package/dist/swarm-prompts.d.ts.map +1 -1
  26. package/dist/swarm-review.d.ts.map +1 -1
  27. package/dist/swarm.d.ts +0 -6
  28. package/dist/swarm.d.ts.map +1 -1
  29. package/evals/README.md +38 -0
  30. package/evals/coordinator-session.eval.ts +154 -0
  31. package/evals/fixtures/coordinator-sessions.ts +328 -0
  32. package/evals/lib/data-loader.ts +69 -0
  33. package/evals/scorers/coordinator-discipline.evalite-test.ts +536 -0
  34. package/evals/scorers/coordinator-discipline.ts +315 -0
  35. package/evals/scorers/index.ts +12 -0
  36. package/examples/plugin-wrapper-template.ts +747 -34
  37. package/package.json +2 -2
  38. package/src/compaction-hook.test.ts +234 -281
  39. package/src/compaction-hook.ts +221 -63
  40. package/src/eval-capture.test.ts +390 -0
  41. package/src/eval-capture.ts +168 -10
  42. package/src/index.ts +89 -2
  43. package/src/learning.integration.test.ts +0 -2
  44. package/src/planning-guardrails.test.ts +387 -2
  45. package/src/planning-guardrails.ts +289 -0
  46. package/src/plugin.ts +10 -10
  47. package/src/schemas/task.ts +0 -1
  48. package/src/swarm-decompose.ts +21 -8
  49. package/src/swarm-orchestrate.ts +44 -0
  50. package/src/swarm-prompts.ts +20 -0
  51. package/src/swarm-review.ts +41 -0
  52. package/src/swarm.integration.test.ts +0 -40
@@ -1,19 +1,36 @@
+ {"id":"002624b7-fbdd-4720-ad28-5a9fd25c0c3e","information":"Label propagation clustering implementation for graph visualization: Algorithm chosen over alternatives (Louvain, spectral clustering) for O(m×k) performance where k is typically 5-20 iterations. Key implementation details: (1) Build adjacency list from d3.SimulationLinkDatum where source/target can be string OR object - must use String(link.source) not direct casting to avoid type errors. (2) Nodes get unique initial labels (their IDs), then iteratively adopt most common neighbor label until convergence. (3) Ties broken deterministically by lowest label value to ensure reproducible results. (4) Final labels compacted to 0-indexed cluster IDs. (5) Centroids computed as simple averages, updated on force simulation ticks. Works well for 10-10k node graphs with 5-20 natural clusters. Catppuccin color cycling provides visual distinction.","created_at":"1766343300618.0","tags":"graph-clustering,label-propagation,d3-force,community-detection,typescript"}
+ {"id":"0099fc4f-ff1d-4771-a6a1-bb61e436638a","information":"LibSQLDatabase multi-scale retrieval option added: includeClusterSummaries in SearchOptions enables querying cluster_summaries table (when it exists) for RAPTOR-style hierarchical search. Implementation is currently a no-op (just destructures the option) because cluster_summaries table doesn't exist yet. When the table is created by another agent, the implementation can query both chunks and cluster summaries, merging results by score. This is part of the RAPTOR-lite architecture where documents can be searched at multiple scales: leaf chunks (fine-grained) and cluster summaries (coarse-grained themes).","created_at":"1766421046482.0","tags":"pdf-brain,raptor,multi-scale-retrieval,cluster-summaries,vector-search,libsql"}
+ {"id":"00c08d88-8825-4a44-b0a7-944ae1aec88d","information":"d3.polygonHull and d3.polygonCentroid implementation for cluster visualization: Use d3.polygonHull to compute convex hulls around node clusters. Add padding by placing multiple points around each node at 90-degree intervals (0, π/2, π, 3π/2) offset by padding distance. d3.polygonHull returns [number, number][] | null, so check for null and min length. d3.polygonCentroid takes hull points and returns [x, y] tuple for centroid. Render to canvas with semi-transparent fill (0.08 alpha) and stroke (0.3 alpha). When iterating Map in TypeScript, use Map.forEach() instead of for...of to avoid downlevelIteration issues. Pattern used in pdf-brain-viewer cluster hulls implementation.","created_at":"1766343757791.0","tags":"d3,visualization,canvas,clustering,convex-hull,typescript"}
+ {"id":"013e5fd6-20fc-49f0-b913-8815a66746d7","information":"Integration testing pattern for GitHub API tools: Use well-known public repos (e.g., vercel/next.js) as test targets. Handle rate limiting gracefully by checking for rate limit errors in responses and skipping tests with console.warn(). GitHub Code Search API often requires authentication - tests should skip gracefully when errors occur. Unauthenticated: 60 req/hr, Authenticated (GITHUB_TOKEN): 5000 req/hr. Error handling tests should accept either the expected error OR rate limit error as valid (e.g., result.error.includes(\"not found\") || result.error.includes(\"rate limit\")).","created_at":"1766294917308.0","tags":"testing,github-api,integration-tests,rate-limiting,error-handling"}
  {"id":"03864e7d-2f09-4779-8619-eaba5e98cb46","information":"PGlite WAL management solution for pdf-library project: Added checkpoint() method to Database service (Database.ts). PGlite supports standard PostgreSQL CHECKPOINT command - no special configuration needed. Implementation: checkpoint() => Effect.tryPromise({ try: async () => { await db.exec(\"CHECKPOINT\"); }, catch: ... }). This prevents WAL accumulation that caused 930 WAL files (930MB) and WASM OOM crash. CHECKPOINT forces WAL to be written to data files, allowing WAL recycling. Transaction safety for addChunks/addEmbeddings already existed (BEGIN/COMMIT/ROLLBACK pattern). Tests verify checkpoint can be called and transactions roll back on failure. Pattern applies to any PGlite project with batch operations.","created_at":"2025-12-19T03:41:35.101Z","metadata":"{\"file\":\"src/services/Database.ts\",\"project\":\"pdf-library\",\"test_file\":\"src/services/Database.test.ts\",\"tests_passing\":10}","tags":"pglite,wal,checkpoint,database,pdf-library,transaction,wasm,oom"}
  {"id":"03fb1085-e349-47d3-9e2e-084e129a7fdb","information":"@badass Content Model Decision (Dec 2024): Use ContentResource + ContentResourceResource pattern from course-builder. Key files:\n\n**Database Schema:**\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource.ts:19` - Core ContentResource table with flexible JSON `fields` column\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource-resource.ts:14` - Join table for parent-child relationships with `position` (double for fractional ordering)\n\n**Collection Management:**\n- `apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84` - Main collection editor with drag-and-drop, search, tier selection\n- `apps/ai-hero/src/components/list-editor/lesson-list/tree.tsx:103` - Nested tree using Atlassian Pragmatic DnD\n- `apps/ai-hero/src/lib/lists-query.ts:268` - addPostToList for resource association\n\n**Resource Form Pattern:**\n- `apps/ai-hero/src/components/resource-form/with-resource-form.tsx:78` - HOC for config-driven resource editing\n- `apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/cohort-form-config.tsx:8` - Example config\n\n**Key Gotchas:**\n- Position is `double` not `int` - allows fractional positions for insertion without reordering\n- Nested loading hardcoded to 3 levels in adapter (line 2689-2723)\n- Slug format: `{slugified-title}~{guid}` for uniqueness\n- JSON fields validated by Zod at app layer, not DB level\n\n**Patterns to Extract to @badass:**\n1. ContentResource base model to @badass/core\n2. ResourceFormConfig pattern to @badass/core\n3. CollectionEditor component to @badass/ui\n4. Position management utilities to @badass/core/utils","created_at":"2025-12-18T15:50:04.300Z"}
+ {"id":"04024144-e865-45b6-a6c2-b4d6ed735d8d","information":"Skills integration tests learned pattern: writeFileSync with mode parameter doesn't actually set executable permissions on created files. Need explicit chmodSync(path, 0o755) after writing for scripts to be executable via Bun.spawn. This is cross-platform filesystem behavior. Also: skills_init creates skills with TODO placeholder descriptions that fail validation, so duplicate detection requires valid descriptions in tests.","created_at":"1766295448269.0","tags":"testing,skills,filesystem,executable,integration-tests"}
+ {"id":"0496158b-3a9b-476e-9b13-982cfdd6abee","information":"{\"id\":\"test-1766263663559-ok1qs8pysja\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:47:43.559Z\",\"raw_value\":1}","created_at":"1766263663796.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:47:43.559Z\"}"}
  {"id":"05ab4b37-7772-4e98-9c5d-34dfdee9da95","information":"{\"id\":\"pattern-1765653517980-ywilgz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:18:37.980Z\",\"updated_at\":\"2025-12-13T19:18:37.980Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:18:38.186Z","metadata":"{\"id\":\"pattern-1765653517980-ywilgz\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"05b865e3-4546-4ba5-a9e7-91ac62247efc","information":"## Durable Streams - Upstream Source\n\nThe Effect-TS durable primitives in swarm-mail originated from https://github.com/durable-streams/durable-streams\n\n### What Durable Streams Provides\n- HTTP-based protocol for resumable, offset-based streaming\n- Works with web browsers, mobile apps, native clients\n- Refresh-safe, multi-device, multi-tab support\n- CDN-friendly for massive fan-out\n\n### Packages in Upstream\n- @durable-streams/client - TypeScript client\n- @durable-streams/server - Node.js server\n- @durable-streams/cli - Command-line tool\n- @durable-streams/state - State management\n\n### Our Local Adaptation (swarm-mail/src/streams/effect/)\n- DurableCursor - Positioned event consumption with checkpointing\n- DurableLock - Distributed mutex with TTL\n- DurableDeferred - Distributed promises\n- DurableMailbox - Actor message passing\n- ask.ts - RPC pattern combining mailbox + deferred\n\n### Key Insight\nOur primitives are a LOCAL adaptation for multi-agent coordination, not the full HTTP protocol. They use PGLite as the durable store. Task is to port them to libSQL.","created_at":"1766333614743.0","tags":"durable-streams,effect-primitives,architecture,upstream-source"}
+ {"id":"05be623c-dfdc-44f8-9abc-0d0dfa475685","information":"Worker prompt ON-DEMAND research pattern: Workers can now spawn researchers when they hit unknowns during implementation. Added new section to SUBTASK_PROMPT_V2 (after Step 9, before SWARM MAIL) with 3-step workflow: (1) Check semantic-memory_find first for existing research, (2) If not found, spawn researcher with swarm_spawn_researcher + Task tool, (3) Wait for results then continue. Includes clear triggers for WHEN to research (unknown API behavior, version-specific issues, outdated docs) vs WHEN NOT to (standard patterns, well-documented APIs, obvious implementations). This is OPTIONAL research driven by workers during implementation, distinct from PRE-DECOMPOSITION research driven by coordinators. TDD pattern: 6 new tests covering section placement, semantic-memory check, researcher spawn tool usage, research triggers, and anti-triggers. All placeholder substitutions use {bead_id}, {epic_id}, {project_path} for dynamic values.","created_at":"1766516151168.0","tags":"swarm,worker-prompt,research,on-demand,tdd,semantic-memory,swarm_spawn_researcher"}
+ {"id":"05cd3774-b8ef-444c-92e5-f4419da7a022","information":"pdf-library clustering schema evolution: Initially implemented soft clustering (GMM-style with probability field) but RAPTOR-lite implementation uses hard clustering (k-means with distance field). Schema changed from:\n- chunk_clusters: probability → distance\n- Separate clusters + cluster_summaries tables → unified cluster_summaries with embedded centroid, concept mapping, and chunk_count\nHard clustering simpler for RAPTOR tree construction where each chunk belongs to exactly one cluster per level.","created_at":"1766421660239.0","tags":"pdf-library,clustering,RAPTOR,schema-migration,libSQL"}
  {"id":"05e32452-500a-4365-bf06-2cddac413184","information":"@badass Cross-Domain SSO Decision (Dec 2024): Use BetterAuth crossSite plugin as core framework feature. Enables unified identity across different TLDs (like Kent's EpicAI.pro, EpicWeb.dev, EpicReact.dev). Configuration: trustedOrigins array lists sibling sites. This is a CORE feature built into @badass/auth, not per-creator config. All sites in a creator's ecosystem automatically trust each other when sharing a database. Solves Kent's Workshop App tutorial flow - user with Epic React purchase doesn't need separate EpicWeb.dev account.","created_at":"2025-12-18T15:34:52.718Z"}
  {"id":"06e8b34d-6400-4b4c-85bd-e74102c29a12","information":"SQL alias typo in getBlockedCells: JOIN clause defined alias bbc for blocked_beads_cache but ON clause incorrectly referenced bcc.cell_id. Root cause: typo during initial implementation. Prevention: verify alias consistency between JOIN and ON clauses.","created_at":"2025-12-18T15:42:50.822Z"}
  {"id":"0712fa64-54a7-4b3c-9b53-93f6f626f38b","information":"ADR-009 Local Dev Database decision (Dec 2024): Docker Compose + MySQL 8.0 for local development. Matches PlanetScale production (MySQL-compatible). Scripts: bun db:up/down/reset/migrate/seed/studio. Drizzle Kit for migrations. Hybrid seed data approach: SQL bootstrap files for static data + TypeScript factories for dynamic test data. Port 3309 to avoid conflicts with local MySQL. Rejected alternatives: manual MySQL install (version fragmentation), PostgreSQL (PlanetScale is MySQL-only), SQLite local (dialect mismatch causes prod bugs), PlanetScale branches (network latency, cost), shared dev database (conflicts).","created_at":"2025-12-19T00:16:16.546Z","tags":"adr,database,docker,mysql,drizzle,local-dev,planetscale"}
  {"id":"07b07817-d654-4f13-880f-1c43592c6bc5","information":"Updated swarm-coordination skill with 4 critical new patterns: Worker Survival Checklist (mandatory 9-step pattern), Socratic Planning Flow (interactive modes), Coordinator File Ownership Rule (coordinators never reserve files), Context Survival Patterns (checkpoint before risky ops, store learnings immediately, auto-checkpoints, delegate to subagents). These prevent common failures: silent workers, context exhaustion, ownership confusion, lost learnings.","created_at":"2025-12-16T16:26:19.718Z","tags":"swarm,coordination,patterns,documentation,skills"}
+ {"id":"0952bf32-db7d-4378-8f1b-9dd04ca56f16","information":"DurableDeferred libSQL migration was already complete when task assigned. The implementation already used DatabaseAdapter parameter pattern correctly (config.db: DatabaseAdapter), had parameterized queries throughout (no string interpolation), and tests used createInMemorySwarmMailLibSQL(). All 11 tests passing. Key verification: check imports for PGLite (none found), verify DatabaseAdapter usage (line 73), confirm test patterns (line 34). This suggests the epic decomposition didn't check current state before creating subtasks.","created_at":"1766339219958.0","tags":"swarm,libsql,deferred,already-complete,epic-planning"}
+ {"id":"096354f7-241c-426e-a53e-d1ba08d00baf","information":"{\"id\":\"test-1766263308863-1sfc71v5ibx\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:41:48.863Z\",\"raw_value\":1}","created_at":"1766263309108.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:41:48.863Z\"}"}
  {"id":"0973178b-96f0-4fe2-bc39-6fbf5d5361c7","information":"Vitest workspace auto-discovery gotcha in monorepos. Even with vitest.workspace.ts configured with explicit project paths vitest still auto-discovers and tries to run ALL test files in the repository by default. This causes failures when legacy or archived code has missing dependencies. Solution add --dir scope flag to package.json test scripts to limit vitest search scope. Example test vitest --dir packages ensures only packages directory is scanned. Why workspace config alone is not enough the workspace file defines separate test projects but does not prevent auto-discovery. Vitest will still find and attempt to load test files outside the workspace unless you explicitly limit the search directory. Affects Bun Turborepo monorepos with archived legacy code.","created_at":"2025-12-18T16:48:31.583Z"}
  {"id":"0b9184ca-cd44-42f1-ae5b-28c6aad6d368","information":"{\"id\":\"test-1766080068974-jpovvl8fce\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:47:48.974Z\",\"raw_value\":1}","created_at":"2025-12-18T17:47:49.178Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:47:48.974Z\"}"}
+ {"id":"0bbf5fe0-f8e4-47cc-8f6b-de0aece27650","information":"AI SDK v6 starter repo migration: When updating starter repos from v5 to v6, check ALL files with generateObject imports, not just the ones explicitly listed in the task. Found 2 additional files (invisible-ai-demo.ts, test-structured.ts) beyond the 3 assigned files. Key updates: 1) package.json dependencies (ai ^6.0.0, @ai-sdk/openai ^3.0.0, @ai-sdk/react ^3.0.0), 2) imports change from `generateObject` to `generateText, Output`, 3) TODO comments must reflect new pattern: `generateText({ output: Output.object({ schema, mode: 'array' }) })` instead of `generateObject({ schema, output: 'array' })`. Files may already be partially updated from formatter/prettier changes - always verify actual state before editing.","created_at":"1766434086678.0","tags":"ai-sdk,v6,migration,starter-repo,generateObject,generateText,Output"}
  {"id":"0c44c18e-b76e-4d3c-a6a7-6bfe9836c795","information":"bd daemon creates git worktrees that block branch switching. The beads daemon (bd daemon) runs in background and creates worktrees at .git/beads-worktrees/main for syncing. When switching branches, git fails with \"fatal: 'main' is already used by worktree\". Solution: 1) Stop daemon with `bd daemon --stop`, 2) Remove .git/beads-worktrees and .git/worktrees directories, 3) Run `git worktree prune`, 4) Then checkout works. The daemon auto-starts and recreates worktrees, so stop it before branch operations. Config shows sync.branch = main which is the branch it tracks.","created_at":"2025-12-16T19:52:14.153Z"}
+ {"id":"0ccf86ea-8234-49da-b7c5-c4798b1089ac","information":"swarm_checkpoint integration tests fix: The DatabaseAdapter getClient() method issue was caused by wrapLibSQL helper not implementing getClient() for Drizzle detection. Fix: Added getClient() method to wrapLibSQL in session.integration.test.ts and flush-manager.test.ts. This enables toDrizzleDb() to properly detect LibSQLAdapter vs PGlite instances. Pattern: When wrapping DatabaseAdapter for tests, always implement getClient() to maintain Drizzle compatibility. Commit eb2ff6d fixed this along with cursors table schema alignment (stream_id → stream/checkpoint columns).","created_at":"1766338541804.0","metadata":"{\"files\":[\"session.integration.test.ts\",\"flush-manager.test.ts\"],\"pattern\":\"wrapLibSQL getClient() implementation\",\"fixed_in\":\"eb2ff6d\"}","tags":"swarm-mail,testing,DatabaseAdapter,Drizzle,libSQL,checkpoint"}
  {"id":"0d062d9b-68a4-47f6-899d-a08d899d48c5","information":"swarm-mail daemon mode is now the default. Implementation change: `const useSocket = process.env.SWARM_MAIL_SOCKET !== 'false'` (was `=== 'true'`). This prevents multi-process PGLite corruption by defaulting to single-daemon architecture.\n\nLog messages are critical for user guidance:\n- Daemon mode: \"Using daemon mode (set SWARM_MAIL_SOCKET=false for embedded)\"\n- Embedded mode: \"Using embedded mode (unset SWARM_MAIL_SOCKET to use daemon)\"\n\nTesting default behavior: Test unsets env var with `delete process.env.SWARM_MAIL_SOCKET`, then verifies getSwarmMail() attempts daemon mode (which falls back to embedded if no daemon running). This proves the default without requiring actual daemon.\n\nTests that call getSwarmMail() directly MUST set `SWARM_MAIL_SOCKET=false` in setup to avoid daemon startup attempts during tests.","created_at":"2025-12-19T15:17:10.442Z","tags":"swarm-mail,daemon,socket,pglite,default-behavior,testing"}
  {"id":"0d34c323-6962-40ed-87fd-3d954e8e8524","information":"{\"id\":\"test-1766074649441-2bahri75eeq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:29.441Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:29.716Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:29.441Z\"}"}
  {"id":"0d5c110a-f9b9-457c-b4f3-e877d5051baa","information":"Zod schema pattern for structured contracts: WorkerHandoff replaces 400-line prose with machine-readable contracts. Key design decisions: (1) task_id regex requires minimum 3 segments (project-slug-hash) to prevent \"invalid-format\" matching - use /^[a-z0-9]+(-[a-z0-9]+){2,}(\\.[\\w-]+)?$/ not /^[a-z0-9]+(-[a-z0-9]+)+(\\.[\\w-]+)?$/. (2) Empty arrays valid for files_owned (read-only tasks) and files_readonly, but success_criteria must have at least one item (.min(1)) to prevent ambiguous completion. (3) Nested schemas (Contract, Context, Escalation) compose cleanly - validate each independently then combine. (4) Export all schemas AND types from index.ts for proper TypeScript inference. Pattern proven in cell.ts, task.ts, evaluation.ts schemas.","created_at":"2025-12-18T17:27:12.651Z"}
+ {"id":"0e20b98e-fde4-4da8-8597-4a2e4ee7015e","information":"{\"id\":\"pattern-1766256913411-nxumnu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T18:55:13.411Z\",\"updated_at\":\"2025-12-20T18:55:13.411Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766256913636.0","metadata":"{\"id\":\"pattern-1766256913411-nxumnu\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"0e25979a-ff67-4d4c-b9ef-94c1a85d183b","information":"{\"id\":\"pattern-1766350571145-34xtlu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:56:11.145Z\",\"updated_at\":\"2025-12-21T20:56:11.145Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766350571373.0","metadata":"{\"id\":\"pattern-1766350571145-34xtlu\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"0e2654bb-47e5-4a0e-9738-427712dee767","information":"{\"id\":\"test-1766085028669-e33njleg6ak\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T19:10:28.669Z\",\"raw_value\":1}","created_at":"2025-12-18T19:10:28.913Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T19:10:28.669Z\"}"}
  {"id":"0e7acef9-5500-4342-9c12-ef50c5997dee","information":"{\"id\":\"pattern-1765664067335-e68cvl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:14:27.335Z\",\"updated_at\":\"2025-12-13T22:14:27.335Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:14:27.567Z","metadata":"{\"id\":\"pattern-1765664067335-e68cvl\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"0f3d03bf-9a59-41db-9569-fd639661aeab","information":"{\"id\":\"test-1766350569888-z8uv1atsc5q\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:56:09.888Z\",\"raw_value\":1}","created_at":"1766350570179.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:56:09.888Z\"}"}
  {"id":"1005d5c0-ac5e-4658-a555-3089c642fac5","information":"SWARM COORDINATION BUG: Coordinators must NEVER call swarmmail_reserve(). File reservation is exclusively for worker agents who are actually modifying files. When coordinator reserves files before spawning workers, it blocks the workers from accessing their assigned files. Correct flow: coordinator creates beads + spawns workers → workers call swarmmail_init() → workers call swarmmail_reserve() for their assigned files → workers do work → workers call swarm_complete() which auto-releases. The coordinator only monitors via swarmmail_inbox() and swarm_status().","created_at":"2025-12-14T23:18:17.346Z"}
+ {"id":"104f560e-6b0e-46e3-9835-9b19a8a6c6f2","information":"{\"id\":\"pattern-1766260049255-4xpnhx\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:47:29.255Z\",\"updated_at\":\"2025-12-20T19:47:29.255Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260049487.0","metadata":"{\"id\":\"pattern-1766260049255-4xpnhx\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"11c9e111-bf66-44e9-84d0-6c9a338bf290","information":"OpenCode command flags use simple prefix parsing (--flag-name). The /swarm command now supports planning modes: --fast (skip brainstorming), --auto (minimal Q&A), --confirm-only (show plan + yes/no), and default (full Socratic). These map to swarm_plan_interactive modes: 'fast', 'auto', 'confirm-only', 'socratic'. Key pattern: parse flags from command string, pass mode to swarm_plan_interactive, handle multi-turn conversation until ready_to_decompose=true, then delegate to swarm/planner subagent. The command documentation includes clear behavior table showing Questions/User Input/Confirmation for each mode.","created_at":"2025-12-16T16:25:10.423Z"}
  {"id":"128aed42-765e-4958-9645-5031d57c60d2","information":"Context hygiene pattern for RAG systems: Implement reranking pipeline with rerankDocuments(), selectTopN(), and rerankAndSelect(). Start with keyword-based scoring for lessons (title 3x, content 2x, keywords 1x, term frequency 0.5x), then show production alternatives (Cohere, Together AI). Log token reduction metrics to demonstrate impact (~80% reduction typical). This teaches the concept while being runnable without external API keys.","created_at":"2025-12-16T21:29:47.790Z","metadata":"{\"type\":\"pattern\",\"domain\":\"rag-systems\"}","tags":"context-hygiene,reranking,ai-sdk,education"}
  {"id":"131c9006-eb18-4fce-a248-359c9571032c","information":"Lesson authoring pattern for production-ready technical courses: Start with working implementation, then polish lesson content to match. For AI SDK courses, use @ts-expect-error for Vercel-only packages (like 'workflow') to avoid local TypeScript errors while maintaining educational value. Include Fast Track (3 quick steps), Project Prompt (requirements + hints), Try It (real output), and Solution (complete working code). Always create git tags for checkpoints (lesson-X.Y-solution) and push them.","created_at":"2025-12-16T21:29:34.961Z","metadata":"{\"type\":\"pattern\",\"domain\":\"lesson-authoring\"}","tags":"education,vercel,ai-sdk,workflows"}
@@ -21,73 +38,154 @@
  {"id":"13557e2b-154a-45ae-bad9-291357d15536","information":"Durable Streams Protocol (Electric SQL) - The open protocol for real-time sync to client applications. Key concepts:\n\n1. **Offset format**: `<read-seq>_<byte-offset>` - 16-char zero-padded hex for each part, lexicographically sortable\n2. **Operations**: PUT (create), POST (append), GET (read with offset), DELETE, HEAD (metadata)\n3. **Read modes**: catch-up (from offset), long-poll (wait for new data), SSE (streaming)\n4. **Headers**: Stream-Next-Offset, Stream-Up-To-Date, Stream-Seq (writer coordination), Stream-TTL/Expires-At\n5. **Storage pattern**: LMDB for metadata + append-only log files for data\n6. **Recovery**: Scan files to compute true offset, reconcile with metadata on startup\n7. **File handle pooling**: SIEVE cache eviction for LRU file handles\n\nImplementation repo: github.com/durable-streams/durable-streams\n- @durable-streams/client - TypeScript client\n- @durable-streams/server - Reference implementation\n- @durable-streams/conformance-tests - Protocol compliance tests\n\nCritical for Agent Mail: Provides crash recovery, offset-based resumability, and long-poll for live tailing. Better than custom event sourcing because battle-tested at Electric SQL for 1.5 years.","created_at":"2025-12-13T16:52:31.021Z"}
  {"id":"135aa45e-e41f-4864-b075-a8ff658ae9ae","information":"{\"id\":\"pattern-1766074438727-1olr11\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:13:58.727Z\",\"updated_at\":\"2025-12-18T16:13:58.727Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:13:58.949Z","metadata":"{\"id\":\"pattern-1766074438727-1olr11\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"140dbeef-29c1-4abd-8bd3-cadc264f3169","information":"ADR-009 Local Dev Database Decision (Dec 2024):\n\nVERDICT: Docker Compose + MySQL 8.0 for local development\n\nRATIONALE:\n- PlanetScale production target is MySQL-compatible (Vitess-backed)\n- Local-to-production parity prevents \"works on my machine\" dialect issues\n- Docker Compose provides declarative, version-controlled database setup\n- Zero MySQL administration knowledge required for developers\n\nKEY DECISIONS:\n1. MySQL 8.0 (not Postgres, not SQLite) - matches PlanetScale production dialect\n2. Docker Compose (not manual install, not PlanetScale branches) - version consistency + easy onboarding\n3. Port 3309 (not 3306) - avoids conflict with local MySQL installations\n4. Hybrid seed strategy: SQL files for bootstrap + TypeScript factories for test data\n5. Drizzle Kit integration: drizzle-kit push for migrations, drizzle-kit studio for GUI\n\nREJECTED ALTERNATIVES:\n- SQLite local + MySQL prod: Dialect mismatch causes production bugs (AUTOINCREMENT vs AUTO_INCREMENT, date handling, foreign keys)\n- Postgres: PlanetScale is MySQL-only, migration later would be painful\n- PlanetScale branches: Network latency, internet dependency, cost, no offline work\n- Manual MySQL install: Version fragmentation, config drift, M1/M2 issues, onboarding friction\n\nSCRIPTS INTERFACE:\n- bun db:up - Start container\n- bun db:down - Stop container\n- bun db:reset - Wipe + recreate + seed\n- bun db:migrate - Drizzle Kit push\n- bun db:seed - Run TypeScript seed script\n- bun db:studio - Drizzle Kit GUI\n\nCOURSE-BUILDER PRECEDENT:\nLegacy apps use identical pattern: MySQL 8.0 + Docker Compose + Drizzle Kit + seed_data volume mount\n\nGOTCHA: SQLite local testing is tempting for speed but creates false confidence - queries that work in SQLite fail in production MySQL due to dialect differences. Always match production database locally.","created_at":"2025-12-18T23:57:41.853Z","tags":"adr,database,docker,mysql,drizzle,planetscale,local-dev"}
+ {"id":"14ce13ac-bdc9-4972-a39f-054cd3d01cd8","information":"pdf-library document_concepts backfill successful: Script populated 2335 links from 803/907 documents (88.5% coverage). Tag normalization matched documents to 580/1641 concepts (35.3% usage). Most linked concept: \"Instructional Design\" with 104 documents. Confidence set to 0.8, source tagged as \"backfill\". JOIN queries work: can expand from concept -> documents and vice versa. Database path: ~/Documents/.pdf-library/library.db","created_at":"1766419666846.0","tags":"pdf-library,libsql,taxonomy,document_concepts,backfill,migration"}
  {"id":"14e46924-baf7-4d30-8361-532404832c3f","information":"README showcase structure for developer tools: Lead with the unique innovation (learning system), not features. Use ASCII art liberally for visual impact on GitHub. Structure: Hero (what/why different) → Quick start → Deep dive by category → Scale metrics → Credits. For multi-agent systems, emphasize cost optimization (coordinator-worker split) and learning mechanisms (confidence decay, anti-pattern inversion). Include architecture diagrams showing information flow, not just component boxes.","created_at":"2025-12-18T15:34:49.143Z","tags":"documentation,readme,showcase,portfolio,ascii-art,developer-tools,architecture"}
  {"id":"154c5c23-f0e1-47b1-8d17-d27ee198f943","information":"Enhanced swarm setup command with comprehensive verbose logging using @clack/prompts p.log.* methods. Pattern: Use p.log.step() to announce major operations (e.g., \"Checking existing configuration...\", \"Writing agent configuration...\"), p.log.success() for successful completions, p.log.message(dim()) for detailed status info, and p.log.warn() for non-critical issues. This pattern leverages existing writeFileWithStatus(), mkdirWithStatus(), and rmWithStatus() helpers which already output their own status. The key is to add context-setting log.step() calls BEFORE sections that contain multiple file operations. Example: p.log.step(\"Writing configuration files...\") followed by multiple writeFileWithStatus() calls that each log their own status (created/updated/unchanged). Users see the overall flow while helper functions show granular file-level details. This creates a clear hierarchy: step announcements → operation details → success summaries.","created_at":"2025-12-18T21:36:18.393Z","tags":"cli,verbose-output,ux,clack-prompts,swarm-setup"}
  {"id":"16323a37-5d59-4c0b-a27e-5ffdea930cf1","information":"{\"id\":\"pattern-1765771111190-acdzga\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:31.190Z\",\"updated_at\":\"2025-12-15T03:58:31.190Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:31.512Z","metadata":"{\"id\":\"pattern-1765771111190-acdzga\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"167d7034-c725-4eda-96f9-7efd8f050c6b","information":"{\"id\":\"test-1765771108697-kiz3s5fu2v\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:28.697Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:29.165Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:28.697Z\"}"}
  {"id":"16e62f42-bd4a-464a-aad5-31b4ac04797a","information":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:42.155Z\",\"updated_at\":\"2025-12-18T16:17:42.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:42.421Z","metadata":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"178856d5-dce3-4ee4-a47a-84bf9eb1b16b","information":"{\"id\":\"pattern-1766262800839-5p64ec\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:33:20.839Z\",\"updated_at\":\"2025-12-20T20:33:20.839Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262801045.0","metadata":"{\"id\":\"pattern-1766262800839-5p64ec\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"17c19a32-0f52-4cb3-bcf2-c8ea7d390c3e","information":"Linear SDK pagination pattern for @linear/sdk in workflow steps: Use pageInfo.hasNextPage and pageInfo.endCursor for cursor-based pagination. The SDK returns PaginatedConnection with nodes array and pageInfo object. Pattern: (1) Initialize cursor as undefined (not null), (2) Pass after: cursor in query options, (3) Check response.pageInfo.hasNextPage for continuation, (4) Update cursor with response.pageInfo.endCursor ?? undefined. Works for team.issues() and team.projects(). Cursor is string | undefined, NOT string | null. For incremental sync, use filter: { updatedAt: { gte: new Date(lastSyncTimestamp) } } and store the latest updated_at from results as the next sync cursor in Redis.","created_at":"1766517140690.0","tags":"linear-sdk,pagination,workflow,cursor,incremental-sync"}
+ {"id":"17e5c6fd-d9b7-4cc5-bc61-b9b40cdd1b2a","information":"{\"id\":\"test-1766262449195-qwoaqt61xu\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:27:29.195Z\",\"raw_value\":1}","created_at":"1766262449437.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:27:29.195Z\"}"}
  {"id":"18e1fd32-ef6a-4332-88e2-b19dfff2e230","information":"JSONL export/import implementation for swarm-mail beads package: Export works well with hash-based deduplication and dirty tracking. Import has issues when creating beads via direct SQL INSERT to preserve IDs - subsequent adapter calls for dependencies/labels/comments may fail silently. 13/29 tests passing. Working: serialize/parse JSONL, content hashing, full export, dirty export, new bead import. Failing: dependency/label/comment import for new beads created via direct INSERT.","created_at":"2025-12-16T23:05:17.663Z","tags":"typescript,beads,jsonl,event-sourcing"}
+ {"id":"19bb5eb1-027e-4bce-9091-7a7f3f6b5e31","information":"{\"id\":\"test-1766349000983-gd3hkil1hrr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:30:00.983Z\",\"raw_value\":1}","created_at":"1766349001298.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:30:00.983Z\"}"}
  {"id":"19c70339-3281-4311-9e7f-591b264624ea","information":"Bead Event Store Integration completed 75%. Implemented beads/store.ts (336 lines) with appendBeadEvent readBeadEvents replayBeadEvents following streams/store.ts pattern. Created beads/events.ts (215 lines) with 20 bead event type definitions to avoid TypeScript cross-package import issues. Key learnings: Cross-package TS imports fail with not under rootDir error - duplicate type definitions in consuming package. PGLite schema initialization happens in initializeSchema not migrations - tests must call getDatabase or manually init schema. Projection update functions expect loose event types with index signatures - need cast to any. Remaining work: Fix test setup initialize core schema, implement beads/adapter.ts factory update beads/index.ts exports.","created_at":"2025-12-16T22:00:19.988Z"}
+ {"id":"19daaead-8317-42d3-8abf-5a69c9f5191d","information":"{\"id\":\"test-1766341863421-b8vnf8ftqw\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T18:31:03.421Z\",\"raw_value\":1}","created_at":"1766341863639.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T18:31:03.421Z\"}"}
  {"id":"1b0b1b73-196c-499b-9db7-530645d6749f","information":"GOTCHA: bun publish doesn't support npm OIDC trusted publishers (requires npm login). \n\nSOLUTION: Use bun pack + npm publish combo:\n1. `bun pm pack` - creates tarball WITH workspace:* resolved to actual versions\n2. `npm publish <tarball>` - publishes tarball with OIDC support\n\nThis is implemented in scripts/publish.ts for opencode-swarm-plugin monorepo.\n\nAlso: bin scripts that import external packages need those packages in dependencies, not just devDependencies. The bin/swarm.ts was missing @clack/prompts.","created_at":"2025-12-15T04:46:30.825Z"}
  {"id":"1b236fab-235c-426d-b2cf-d9c54d051724","information":"MarkdownExtractor testing patterns for Effect-based services: Use Effect.runPromise() in test helpers to properly execute Effects. For file-based tests, use temp directories (mkdtempSync) with beforeAll/afterAll cleanup. When testing Effect error types (like MarkdownNotFoundError), catch the FiberFailure wrapper and check error string contains the error name - don't use instanceof on the wrapped error. Gray-matter parses YAML dates as Date objects, not strings. Code blocks in chunking get replaced with placeholders then restored, so test for content presence not exact backtick syntax.","created_at":"2025-12-16T21:41:26.968Z"}
+ {"id":"1ca58d9d-f34c-4cb8-8766-f6131b36d374","information":"swarm-review.integration.test.ts BLOCKER: sendSwarmMessage in swarm_review_feedback.execute() attempts to create its own LibSQLAdapter via appendEvent → createLibSQLAdapter, which fails with \"URL_INVALID\" for non-file:// URLs like '/Users/joel/.config/swarm-tools/swarm.db'. This breaks integration tests that use createInMemorySwarmMailLibSQL.\n\nRoot cause: sendSwarmMessage doesn't accept a database adapter parameter - it auto-creates one. For integration tests to work, either:\n1. swarm_review_feedback needs dbAdapter parameter (breaking change)\n2. sendSwarmMessage needs to use adapter cache (requires global state)\n3. Tests need to use file-based libSQL (not in-memory)\n\nWorkaround: Use file-based temp database instead of in-memory for integration tests that call swarm_review tools.\n\nAlternative: Mock sendSwarmMessage in tests - but defeats purpose of integration test.","created_at":"1766380581123.0","tags":"swarm-review,integration-test,sendSwarmMessage,libSQL,URL_INVALID,blocker"}
  {"id":"1d034b17-20ee-4442-927a-3943288153d0","information":"Test learning about swarm patterns","created_at":"2025-12-16T16:21:07.411Z","tags":"swarm,test"}
+ {"id":"1d5c0410-845d-4a7e-b916-096dba823675","information":"Three-Tier Health Checks Pattern: Tier 1 (fast): Binary exists - command -v tool. Tier 2 (medium): Shallow verify - tool --version. Tier 3 (slow, --deep only): Functional test - actually calls API. Features: 5-minute cache TTL, 15-second timeout per check, JSON output for automation. Coordinator should run fast checks every 60s, deep checks before spawning workers. Detects: stale reservations, orphaned agents, database corruption. Source: Dicklesworthstone/agentic_coding_flywheel_setup doctor.sh","created_at":"1766591009508.0","tags":"swarm,health,monitoring,observability,patterns,acfs"}
+ {"id":"1dada1b7-5e76-46e7-9147-7355300f4f67","information":"{\"id\":\"test-1766261949130-leqx0ivxeo\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:19:09.130Z\",\"raw_value\":1}","created_at":"1766261949427.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:19:09.130Z\"}"}
+ {"id":"1e728072-c251-4ebc-9c3c-8753221d63a0","information":"{\"id\":\"pattern-1766261950204-daquzu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:19:10.204Z\",\"updated_at\":\"2025-12-20T20:19:10.204Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261950447.0","metadata":"{\"id\":\"pattern-1766261950204-daquzu\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"1ea71c6e-a703-4c27-8a81-8d750c61de59","information":"Implemented advanced label rendering strategies for graph visualization following Tufte's data-ink ratio principle:\n\n1. **Inside Labels** - Place labels centered inside large nodes (screenRadius >= 30px) instead of external annotations. Uses word-wrapping (max 2 lines), intelligent truncation with ellipsis, and dark text (cat.crust) on light nodes for contrast.\n\n2. **Curved Labels** - Render edge labels along quadratic bezier curves following edge paths. Text automatically flips to avoid upside-down rendering (angle check: > π/2 or < -π/2). Uses semi-transparent background (cat.base + \"cc\") for readability.\n\nKey implementation details:\n- Quadratic bezier control points calculated perpendicular to edge midpoint\n- Font sizes adaptive: inside labels capped at 16px, curved labels default 10px\n- Text measurement with ctx.measureText() for precise wrapping\n- Transform.save()/restore() for rotated text rendering\n- Integration with existing Catppuccin Mocha color palette\n\nTesting: Bun test framework (not Vitest). Import from \"bun:test\" for describe/it/expect.\n\nFiles created:\n- src/lib/graph/betterLabels.ts (implementation)\n- src/lib/graph/betterLabels.test.ts (7 passing tests)\n- src/lib/graph/betterLabels.md (comprehensive usage docs)\n- Exports added to src/lib/graph/index.ts\n\nPerformance: ~0.5ms per 100 inside labels, ~1.0ms per 100 curved labels.","created_at":"1766343433373.0","tags":"canvas,rendering,labels,graph-visualization,tufte,data-ink-ratio,bezier,typography"}
  {"id":"1f1a19f9-485b-4344-8efa-390f0d0cc42b","information":"BeadsAdapter migration from bd CLI to event sourcing complete. All 9 beads_* tools migrated to direct BeadsAdapter calls. Key patterns: (1) getBeadsAdapter() singleton with lazy init via getSwarmMail()->createBeadsAdapter(), (2) formatBeadForOutput() maps adapter fields to schema (type->issue_type, timestamps->ISO strings), (3) markDirty() after every mutation for incremental export, (4) FlushManager for beads_sync instead of bd sync --flush-only, (5) deleteBead() for rollback in beads_create_epic instead of bd close. Critical: export beads from swarm-mail/src/index.ts via 'export * from ./beads' then rebuild.","created_at":"2025-12-16T23:40:04.564Z"}
  {"id":"1fcf004e-9ffd-4949-b83c-8e043dc80536","information":"PGLite WAL Health Monitoring Implementation: Added proactive WAL size monitoring to prevent WASM OOM crashes.\n\nRoot cause from pdf-brain: 930 WAL files accumulated to 930MB, causing WASM crash. Solution: monitor BEFORE it reaches critical size.\n\nImplementation (TDD approach - all tests green):\n1. Added to DatabaseAdapter interface:\n - `getWalStats(): Promise<{ walSize: number, walFileCount: number }>` - scans pg_wal directory\n - `checkWalHealth(thresholdMb = 100): Promise<{ healthy: boolean, message: string }>` - warns when exceeds threshold\n\n2. Implemented in wrapPGlite():\n - getWalDirectoryStats() helper scans pg_wal directory recursively\n - Returns { walSize: 0, walFileCount: 0 } for in-memory databases\n - Default 100MB threshold (10x safety margin before 930MB crisis point)\n - Message includes actual size, file count, and threshold\n\n3. Integrated with SwarmMailAdapter:\n - Enhanced healthCheck() to return `{ connected: boolean, walHealth?: { healthy, message } }`\n - Enhanced getDatabaseStats() to include `wal?: { size, fileCount }`\n - Graceful fallback when WAL stats not available (other database types)\n\nTesting: 15 tests covering getWalStats, checkWalHealth, adapter integration, in-memory fallback, custom thresholds.\n\nKey insight: Filesystem-based monitoring works better than pg_stat_wal queries for PGLite since pg_stat_wal may not be fully supported in embedded mode.\n\nUsage pattern:\n```typescript\nconst health = await adapter.healthCheck({ walThresholdMb: 100 });\nif (!health.walHealth?.healthy) {\n console.warn(health.walHealth?.message);\n await adapter.checkpoint?.(); // Trigger WAL flush\n}\n```","created_at":"2025-12-19T03:41:05.238Z","metadata":"{\"files\":[\"pglite.ts\",\"adapter.ts\",\"types/database.ts\",\"types/adapter.ts\"],\"package\":\"swarm-mail\",\"test_count\":15}","tags":"pglite,wal,health-monitoring,prevention-pattern,tdd,wasm-oom"}
  {"id":"1ffca519-0ca2-4df7-b6bf-603c2001327f","information":"Beads query implementation: Blocked cache must be invalidated in event handlers. handleBeadClosed must call invalidateBlockedCache for dependents - closing a blocker unblocks dependent beads. Without this the blocked cache returns stale data. Cache enables 25x faster ready work queries by avoiding recursive CTEs.","created_at":"2025-12-16T22:51:44.210Z"}
  {"id":"2061a77a-3eb7-4d52-a3d5-2a2314622ede","information":"Successfully completed index.ts rename from beads to hive. Pattern: 1) Import both hiveTools and beadsTools (plus directory setters) from \"./hive\", 2) Use setHiveWorkingDirectory() in plugin init, 3) Spread hiveTools in tool registration (includes beads aliases), 4) Update hook to check both \"hive_close\" and \"beads_close\", 5) Update all JSDoc to mention hive as primary and beads as deprecated. Build and typecheck pass. Backward compatibility maintained through aliases exported from hive module.","created_at":"2025-12-17T16:48:43.284Z"}
+ {"id":"207a8ea0-3f7c-484e-ad07-23ffc24e49f7","information":"{\"id\":\"pattern-1766593219150-e70lsr\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:20:19.150Z\",\"updated_at\":\"2025-12-24T16:20:19.150Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766593219453.0","metadata":"{\"id\":\"pattern-1766593219150-e70lsr\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"20c5ee43-3389-42bd-b125-7da87c55445c","information":"{\"id\":\"test-1765670643103-ac1htt8yv4s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T00:04:03.103Z\",\"raw_value\":1}","created_at":"2025-12-14T00:04:03.299Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T00:04:03.103Z\"}"}
  {"id":"20fb300c-80b9-400c-8125-258e1ddbba9b","information":"Session compaction hook implementation: Plugin.trigger(\"session.compacting\", { sessionID }, { context: [] }) allows plugins to inject additional context into the compaction prompt. The hook returns { context: string[] } which gets spread into the prompt text array and joined with \\n\\n. Hook is called BEFORE processor.process() to ensure context is available during compaction. Located in packages/opencode/src/session/compaction.ts process() function.","created_at":"2025-12-17T18:01:32.282Z"}
+ {"id":"2190aecb-b20f-4a27-8b32-ff9fd0810216","information":"{\"id\":\"pattern-1766262704550-6h9hi9\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:31:44.550Z\",\"updated_at\":\"2025-12-20T20:31:44.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262704795.0","metadata":"{\"id\":\"pattern-1766262704550-6h9hi9\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"22174fd3-71ad-4e49-ac02-67bd38e89db6","information":"opencode-swarm-plugin CI/CD status (Dec 2024):\n\nPACKAGES:\n- swarm-mail@0.1.2 - published, has dist/, repository field, ASCII art README\n- opencode-swarm-plugin@0.23.4 - published but has swarm-mail@0.1.0 dep (stale lockfile issue)\n\nPENDING FIX: \n- Updated scripts/publish.ts to use bun pm pack + npm publish\n- Updated package.json with ci:version and ci:publish scripts \n- Updated publish.yml to setup .npmrc and use new scripts\n- Need to push and merge release PR to get swarm-mail@0.1.2 as dependency\n\nOPEN BEADS:\n- opencode-swarm-plugin-whh1n (P1 bug): swarm_complete fails silently - NOT ADDRESSED\n- opencode-swarm-plugin-gde33 (P2): Swarm Mail Generalization Analysis - NOT ADDRESSED\n\nNEXT SESSION:\n1. Commit and push the publish workflow fixes\n2. Merge release PR when it appears\n3. Verify npm install works with correct swarm-mail version\n4. Then tackle the swarm_complete bug or the skill creation swarm task","created_at":"2025-12-15T05:07:35.356Z"}
  {"id":"2300685b-e672-461f-9846-5ba2b78c4ac0","information":"Daemon process lifecycle management pattern for Node.js: Use child_process.spawn with detached true and stdio ignore for background daemons. Unref child process to allow parent exit. Store PID in file system. Use process.kill(pid, 0) to check if process is alive without sending signal - ESRCH error means dead. Wait for daemon ready by polling health check. SIGTERM for graceful shutdown, SIGKILL as fallback. Clean up PID file after process exit. Dynamic import of optional dependencies like postgres to avoid bundling in library consumers.","created_at":"2025-12-17T17:54:13.019Z"}
+ {"id":"235a989c-b607-42f8-a8dc-6f199ae8424f","information":"Lockfile parsing implementation for swarm research phase. Added getInstalledVersions() to detect package versions from lockfiles (npm package-lock.json, pnpm pnpm-lock.yaml, yarn yarn.lock) with fallback to package.json. Binary bun.lock falls back to package.json.\n\nKey design decisions:\n1. Lockfile preferred over package.json - returns what's ACTUALLY installed, not constraints\n2. Semver constraint stripping for package.json fallback - regex extracts X.Y.Z from \"^X.Y.Z\"\n3. Graceful degradation - returns empty array if no package info found\n4. TDD approach - 20 tests covering all formats, edge cases (missing packages, multiple packages, preference order)\n\nPlugin tool: swarm_get_versions - takes projectPath and packages array, returns VersionInfo[] with source tracking (\"lockfile\" vs \"package.json\").\n\nResearchers use this to fetch docs for the CORRECT version (not latest). Critical for accurate documentation lookups in swarm coordination.","created_at":"1766516621466.0","tags":"lockfile,version-detection,swarm-research,npm,pnpm,yarn,bun,tdd"}
  {"id":"258e9231-4bf7-4dbd-809f-3a16de6908f7","information":"When renaming tools in tool-availability.ts, must update 4 places: 1) ToolName type union, 2) toolCheckers object with async checker function, 3) fallbackBehaviors Record with description, 4) tools array in checkAllTools(). Keep deprecated tools for backward compatibility by adding both old and new names to all 4 locations. Mark deprecated with comments.","created_at":"2025-12-17T16:41:27.639Z"}
  {"id":"265444da-937e-4fa7-9f5a-0d551b5fcc32","information":"Auto-migration implementation in createMemoryAdapter: Added module-level flag `migrationChecked` to track if legacy memory migration has been checked. First call to createMemoryAdapter() checks: (1) legacyDatabaseExists() from swarm-mail, (2) target DB is empty (COUNT(*) FROM memories = 0), (3) if both true, runs migrateLegacyMemories() with console logging. Subsequent calls skip check (performance optimization). Critical: Export resetMigrationCheck() for test isolation - without it, module-level flag persists across tests causing false failures. Test pattern: beforeEach(() => resetMigrationCheck()) ensures each test starts with fresh state. Graceful degradation: migration failures log warnings but don't throw - adapter continues working. Migrated 176 real memories successfully in production test. Migration functions were added to swarm-mail/src/index.ts exports (legacyDatabaseExists, migrateLegacyMemories, getMigrationStatus, getDefaultLegacyPath).","created_at":"2025-12-18T21:12:31.305Z","metadata":"{\"file\":\"src/memory.ts\",\"pattern\":\"auto-migration-on-first-use\",\"project\":\"opencode-swarm-plugin\"}","tags":"auto-migration,memory,pglite,testing,module-state,swarm-mail"}
  {"id":"27928bec-546f-4a77-a32f-53415771c127","information":"PGlite WAL accumulation root cause: \"different vector dimensions 1024 and 0\" error from failed embedding operations. Solution: Validate embeddings BEFORE database insert in Ollama service. Added validateEmbedding() function that checks: 1) dimension not 0 (empty), 2) dimension matches expected (1024 for nomic-embed-text), 3) no NaN/Infinity values. Integrated into embedSingle() which is used by both embed() and embedBatch(). This prevents pgvector corruption that causes WAL buildup since PGlite never checkpoints. Test coverage: 6 tests covering all validation cases in Ollama.test.ts.","created_at":"2025-12-19T03:30:20.283Z","tags":"pglite,pgvector,embeddings,validation,ollama,wal,database-corruption,pdf-library"}
76
+ {"id":"27f7e1e7-f314-45b6-a916-b28431053392","information":"{\"id\":\"test-1766262042366-41ozxqqdxx3\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:20:42.366Z\",\"raw_value\":1}","created_at":"1766262042619.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:20:42.366Z\"}"}
77
+ {"id":"28d55a17-96b9-4b3c-a10e-1045925ced18","information":"PGlite Database Path Isolation Bug:\n\n**Problem:** Integration tests were failing intermittently because all tests shared the SAME global database (`~/.opencode/streams`) instead of getting isolated per-test databases. This caused schema conflicts - old schema from previous tests was reused.\n\n**Root Cause:** `getDatabasePath()` logic was:\n```typescript\nif (projectPath) {\n const localDir = join(projectPath, \".opencode\");\n if (existsSync(localDir) || existsSync(projectPath)) {\n // create local DB\n }\n}\n// fallback to global\n```\n\nWhen `projectPath` didn't exist (e.g., `/tmp/test-swarm-12345` not created yet), the `existsSync(projectPath)` check failed, so it fell back to global DB. Tests never created the projectPath directory, assuming getDatabasePath would handle it.\n\n**Solution:** Create `projectPath` directory in `getDatabasePath()` before checking:\n```typescript\nif (projectPath) {\n const localDir = join(projectPath, \".opencode\");\n // Create project directory if it doesn't exist\n if (!existsSync(projectPath)) {\n mkdirSync(projectPath, { recursive: true });\n }\n if (!existsSync(localDir)) {\n mkdirSync(localDir, { recursive: true });\n }\n return join(localDir, \"streams\");\n}\n```\n\n**Impact:** Now each test gets an isolated database at `projectPath/.opencode/streams`, preventing schema pollution between tests.\n\n**Files Changed:**\n- `streams/index.ts`: Fixed `getDatabasePath()` to create directories\n\n**Lesson:** When database path depends on a directory, create it unconditionally. Don't assume caller will create it.","created_at":"1766331466890.0","tags":"pglite,test-isolation,database-path,integration-tests,mkdir"}
45
78
  {"id":"291f3101-82dc-41f8-b077-fbce25dfd767","information":"@badass Video Pipeline Decision (Dec 2024): Videos are ALWAYS separate ContentResource types, never embedded fields. Video resources link to posts/lessons via ContentResourceResource join table. This enables video reuse across multiple collections. \n\nCourse-builder has a full web-based, Inngest-backed video pipeline currently in @coursebuilder/core - but core is bloated and this needs extraction. Video processing should be its own package (@badass/video or @badass/mux).\n\nKey reference files for video pipeline:\n- course-builder core video processing (needs extraction, location TBD)\n- academy-content Mux integration: vercel/academy-content/plans/video-upload-processing-plan.md\n\nArchitecture: Upload triggers Inngest job, Mux processes video, webhook updates VideoResource with asset ID and playback info.","created_at":"2025-12-18T15:51:59.366Z"}
46
79
  {"id":"2add0e53-1dba-4191-bea0-0451e681f898","information":"{\"id\":\"test-1765751935012-epiln8ycyte\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:38:55.012Z\",\"raw_value\":1}","created_at":"2025-12-14T22:38:55.304Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:38:55.012Z\"}"}
80
+ {"id":"2b39efc2-f484-4f02-81ac-182da5de8048","information":"{\"id\":\"pattern-1766256884732-h98jpn\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T18:54:44.731Z\",\"updated_at\":\"2025-12-20T18:54:44.731Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766256884971.0","metadata":"{\"id\":\"pattern-1766256884732-h98jpn\",\"kind\":\"pattern\",\"is_negative\":false}"}
81
+ {"id":"2cce0c80-5d6f-4ec5-af66-f5f3fc6f949a","information":"DurableLock Effect primitive successfully ported to libSQL/DatabaseAdapter pattern (Dec 21, 2025).\n\n**Implementation Pattern:**\n- LockConfig requires `db: DatabaseAdapter` parameter (not optional)\n- Uses `await db.exec()` for DDL (CREATE TABLE, CREATE INDEX, INSERT, UPDATE, DELETE)\n- Uses `await db.query<T>()` for reads with `?` placeholders\n- Schema matches db/schema/streams.ts locksTable definition\n- Tests use `createInMemorySwarmMailLibSQL(testId)` for in-memory databases\n\n**Schema (locks table):**\n```sql\nCREATE TABLE IF NOT EXISTS locks (\n resource TEXT PRIMARY KEY,\n holder TEXT NOT NULL,\n seq INTEGER NOT NULL DEFAULT 0,\n acquired_at INTEGER NOT NULL,\n expires_at INTEGER NOT NULL\n);\nCREATE INDEX IF NOT EXISTS idx_locks_expires ON locks(expires_at);\n```\n\n**Test Pattern:**\n```typescript\nbeforeEach(async () => {\n const swarmMail = await createInMemorySwarmMailLibSQL(testId);\n db = await swarmMail.getDatabase(); // Returns DatabaseAdapter\n closeDb = () => swarmMail.close();\n await db.exec(\"DELETE FROM locks\"); // Reset state\n});\n```\n\n**Files:** lock.ts, lock.test.ts (16 tests, all passing)\n\n**Related primitives:** Same pattern used in deferred.ts, cursor.ts, mailbox.ts","created_at":"1766339236609.0","metadata":"{\"files\":[\"lock.ts\",\"lock.test.ts\"],\"status\":\"complete\",\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjg00god17i\",\"epic_id\":\"opencode-swarm-monorepo-lf2p4u-mjg00gnmwui\",\"test_count\":16}","tags":"effect-ts,durable-primitives,libsql,database-adapter,locks,swarm-mail,migration"}
82
+ {"id":"2d1dcfa7-9f37-40fb-a099-67c71bd25276","information":"Mandatory Coordinator Review Loop Pattern: Coordinators MUST review worker output before spawning the next worker. The COORDINATOR_POST_WORKER_CHECKLIST in swarm-prompts.ts enforces a 5-step quality gate: (1) Check swarm mail for messages, (2) Run swarm_review to get diff+context, (3) Evaluate against epic goals, (4) Send swarm_review_feedback (approved or needs_changes), (5) ONLY THEN spawn next worker. This is returned in post_completion_instructions field from swarm_spawn_subtask. Without this, coordinators skip quality gates and ship broken code. Updated bin/swarm.ts Phase 7 to make review MANDATORY with stronger language. 3-strike rule: after 3 review failures, task marked blocked (architectural problem, not \"try harder\").","created_at":"1766350942163.0","tags":"swarm,coordination,quality-gate,review-loop,coordinator-pattern"}
83
+ {"id":"2d3496f2-44ce-4dda-915f-7afa7d3c041b","information":"Backfill script security pattern: When internal API endpoints require authentication middleware, both the primary script (backfill-channel.ts) AND orchestrator scripts (backfill-all.ts) need INTERNAL_API_KEY validation. The orchestrator spawns child processes that inherit env vars, so validation at orchestrator level prevents cascading failures. Authorization header pattern: `Authorization: Bearer ${process.env.INTERNAL_API_KEY}` in fetch headers. Validation pattern: Check env var exists BEFORE starting work to fail fast.","created_at":"1766436010973.0","tags":"auth,internal-api,backfill,security,env-vars,scripts"}
84
+ {"id":"2d76d4de-ea8f-4440-ad53-98feeb10a980","information":"{\"id\":\"test-1766263087372-lohl8lq2l8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:38:07.372Z\",\"raw_value\":1}","created_at":"1766263087596.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:38:07.372Z\"}"}
47
85
  {"id":"2d87e08a-4fed-450a-9aa5-ee09cc8848d7","information":"{\"id\":\"pattern-1765733413093-1ct6rt\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T17:30:13.093Z\",\"updated_at\":\"2025-12-14T17:30:13.093Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T17:30:13.345Z","metadata":"{\"id\":\"pattern-1765733413093-1ct6rt\",\"kind\":\"pattern\",\"is_negative\":false}"}
48
86
  {"id":"2fcbad1b-56f1-471b-bc04-72a8765fe6c3","information":"Partial ID resolution in hive plugin tools: resolvePartialId from swarm-mail uses SQL LIKE pattern `%-{partialHash}%-%` to match the hash segment (middle portion of cell IDs). Cell ID format is `{prefix}-{hash}-{timestamp}{random}` where hash is 6 chars (can include negative sign creating consecutive hyphens like `cell--gcel4-mjd...`). In tests with many cells, short hashes (3-5 chars) often collide, causing ambiguous matches - use full hash or full ID for reliable resolution. The function returns null for no match, full ID for unique match, throws error for ambiguous. Integration: import resolvePartialId from swarm-mail, call before adapter operations with `const cellId = await resolvePartialId(adapter, projectKey, inputId) || inputId`. Add helpful error handling for \"Ambiguous hash\" and \"Cell not found\" messages.","created_at":"2025-12-19T16:30:14.215Z","tags":"hive,partial-id,resolution,swarm-mail,testing"}
49
87
  {"id":"2febbadd-de6d-43e2-9e0a-ac3856755792","information":"Auto-sync pattern for hive_create_epic: After successfully creating epic + subtasks, immediately flush to JSONL using FlushManager so spawned workers can see cells without waiting for manual hive_sync. Implementation: ensureHiveDirectory() → new FlushManager({adapter, projectKey, outputPath}) → flush(). Wrapped in try/catch as non-fatal (log warning if fails). This mirrors the pattern in hive_sync but happens automatically after epic creation. Critical for swarm coordination - workers spawned after epic creation need to query cells from JSONL, not wait for coordinator to manually sync.","created_at":"2025-12-19T16:58:31.668Z","tags":"hive,swarm,auto-sync,epic-creation,flush-manager,coordination"}
88
+ {"id":"30f717f1-490a-47b7-8df2-e4cbc7ac91ba","information":"{\"id\":\"test-1766263404797-gbdtm796si\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:43:24.797Z\",\"raw_value\":1}","created_at":"1766263405009.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:43:24.797Z\"}"}
89
+ {"id":"31095e82-3398-4272-b417-eeee2c4a175c","information":"pdf-brain AutoTagger config integration: Updated AutoTagger.ts to use loadConfig() from types.ts for enrichment and judge provider/model configuration. Key changes: (1) Added llmJudgeDuplicate() function supporting both \"gateway\" (AI Gateway with API key) and \"ollama\" (local via fetch to /api/generate) providers. (2) Updated enrich() and generateTags() to read enrichment.{provider,model} from config instead of hardcoded defaults. (3) Map \"gateway\" provider to \"anthropic\" for LLMProvider type compatibility. (4) Updated autoAcceptProposals() to use llmJudgeDuplicate for better duplicate detection (lowered threshold to 0.75 for candidates, then LLM judges). Gateway provider uses generateText with model string, Ollama uses direct fetch to ollama.host/api/generate. Config provider determines which path is taken.","created_at":"1766261154984.0","tags":"pdf-brain,autotagger,config,multi-provider,ollama,gateway,llm-judge"}
50
90
  {"id":"310ca5c8-13b1-483d-a28f-1140c9aa5d05","information":"{\"id\":\"pattern-1765386508923-acld7i\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:08:28.923Z\",\"updated_at\":\"2025-12-10T17:08:28.923Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:08:29.108Z","metadata":"{\"id\":\"pattern-1765386508923-acld7i\",\"kind\":\"pattern\",\"is_negative\":false}"}
91
+ {"id":"315a268a-a6db-480a-bae3-6838a4e3d824","information":"Created swarm/researcher agent template for OpenCode Swarm Plugin. Key patterns learned:\n\n1. **Agent Template Structure**: Agent templates follow a consistent pattern:\n - YAML frontmatter (name, description, model)\n - Role definition and constraints\n - Step-by-step workflow (numbered steps)\n - Tool usage examples\n - Anti-patterns and when to use/not use\n\n2. **READ-ONLY Agent Design**: The researcher agent is intentionally read-only:\n - No file reservations (doesn't edit, so no conflicts)\n - No swarm_complete (doesn't modify code)\n - Focuses on tool discovery, doc fetching, and knowledge storage\n - Uses semantic-memory for persistence, swarm mail for communication\n\n3. **Tool Discovery Pattern**: Dynamic tool discovery is critical:\n - Use skills_list() to see available skills\n - Use bash(\"which <tool>\") to check CLI availability\n - No direct MCP listing - infer from task context\n - Never assume user has specific tools installed\n\n4. **Context Efficiency**: Researchers must condense findings:\n - Store full details in semantic-memory (persistent)\n - Send 3-5 bullet points via swarm mail (ephemeral)\n - Return structured JSON summary (shared_context)\n - Never dump raw docs into main context\n\n5. **Setup Flow Integration**: Added researcher to setup:\n - Variable: researcherAgentPath = join(swarmAgentDir, \"researcher.md\")\n - Write during setup: writeFileWithStatus(researcherAgentPath, getResearcherAgent(workerModel))\n - Uses workerModel (user's mid-tier choice) for cost efficiency\n - Added to existingFiles array, config() display, and help() text\n\nLocation: packages/opencode-swarm-plugin/bin/swarm.ts\nLines: ~1324-1550 (getResearcherAgent function)\nModel: Uses workerModel parameter (typically claude-haiku-4-5 for cost efficiency)","created_at":"1766515146648.0","tags":"swarm,agent-templates,researcher,read-only,tool-discovery,context-efficiency"}
51
92
  {"id":"317c723e-6240-4a5f-b810-da9c274b3ece","information":"BUN MONOREPO DEPENDENCY INSTALLATION - COMPLETE GUIDE (Dec 2025)\n\nPROBLEM: `bun add --filter <workspace>` is BROKEN - installs to ROOT package.json, not the target workspace. Support is in beta as of Aug 2025.\n\nSOLUTION: Use `--cwd` flag instead:\n```bash\nbun add <package> --cwd <workspace-path>\nbun add -d <package> --cwd <workspace-path> # dev dependency\n```\n\nEXAMPLES:\n```bash\n# Install to specific workspace\nbun add express --cwd apps/server\nbun add -d @types/node --cwd apps/server\nbun add express cors helmet --cwd apps/server\n\n# Install to shared package\nbun add lodash --cwd packages/shared\n```\n\nWHY --cwd WORKS:\n- Tells Bun to pretend it's inside that folder\n- Dependencies go to correct package.json\n- Lockfile (bun.lockb) stays centralized at root\n- No local node_modules pollution\n\nANTI-PATTERN (don't do this):\n```bash\ncd apps/server && bun add express && cd ../..\n# Creates local node_modules, breaks monorepo hoisting\n```\n\nPRO TIP - Add helper scripts to root package.json:\n```json\n{\n \"scripts\": {\n \"add:web\": \"bun add --cwd apps/wizardshit-ai\",\n \"add:server\": \"bun add --cwd apps/server\"\n }\n}\n```\n\nTURBOREPO COMPATIBILITY:\n- `turbo build --filter=server` works fine\n- `bun add --filter` is the broken one, not turbo's --filter\n\nSource: fgbyte.com blog post, verified in wizardshit.ai monorepo setup Dec 2025","created_at":"2025-12-16T19:59:16.995Z"}
52
93
  {"id":"32577e43-8ceb-481c-a8ee-874cfd49dd00","information":"{\"id\":\"pattern-1765749526038-65vu4n\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T21:58:46.038Z\",\"updated_at\":\"2025-12-14T21:58:46.038Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T21:58:46.288Z","metadata":"{\"id\":\"pattern-1765749526038-65vu4n\",\"kind\":\"pattern\",\"is_negative\":false}"}
53
94
  {"id":"3287144c-e3f1-46fd-b6e1-ce4b82b35448","information":"PGLite BIGINT to Date conversion fix: PGLite can return BIGINT columns as JavaScript `bigint` type (version/environment-dependent). The Date constructor throws TypeError on bigint: `new Date(1234n)` fails with \"Cannot convert a BigInt value to a number\". \n\nSOLUTION: Wrap database timestamps in Number() before passing to Date constructor. Number() handles both number and bigint safely:\n- Number(1234) → 1234\n- Number(1234n) → 1234\n\nAPPLIED TO: packages/opencode-swarm-plugin/src/hive.ts, formatCellForOutput() function:\n- Line 590: created_at → new Date(Number(adapterCell.created_at))\n- Line 591: updated_at → new Date(Number(adapterCell.updated_at))\n- Line 593: closed_at → new Date(Number(adapterCell.closed_at))\n\nNOTE: Only affects READ path (database → output). WRITE path (JSONL → database) uses new Date(isoString).getTime() which is fine because input is string, not bigint.\n\nTESTING: Added integration tests in hive.integration.test.ts to verify dates parse correctly. All 66 hive tests pass with fix.","created_at":"2025-12-19T17:50:46.475Z","tags":"pglite,bigint,date,hive,database,type-safety"}
95
+ {"id":"3288d53a-62de-4057-ad61-2de2df847651","information":"{\"id\":\"pattern-1766349002356-m3rg0w\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:30:02.356Z\",\"updated_at\":\"2025-12-21T20:30:02.356Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766349002632.0","metadata":"{\"id\":\"pattern-1766349002356-m3rg0w\",\"kind\":\"pattern\",\"is_negative\":false}"}
54
96
  {"id":"330b1dcc-e675-4b49-8a0d-83c79ff9445e","information":"UTF-8 null byte sanitization for PostgreSQL: PostgreSQL TEXT columns crash with \"invalid byte sequence for encoding UTF8: 0x00\" when null bytes (\\x00) are present. Solution: sanitizeText(text: string) function using text.replace(/\\x00/g, \"\") to strip null bytes. Applied early in processing pipeline (before chunking) in both PDFExtractor and MarkdownExtractor. Critical to sanitize BEFORE other text processing to prevent null bytes from propagating through chunks into database. Used biome-ignore comment for noControlCharactersInRegex lint rule since we intentionally use \\x00 pattern.","created_at":"2025-12-19T17:16:53.764Z","tags":"postgresql,utf-8,null-bytes,sanitization,pdf-extraction,markdown-extraction,database-errors"}
55
97
  {"id":"332273cc-89e1-477d-be2b-c0aee9fb08cd","information":"opencode-swarm-plugin v0.22.0 release - Major improvements to semantic memory and swarm mail coordination:\n\n1. MANDATORY semantic memory usage - agents now auto-capture learnings after every swarm_complete, with MANDATORY triggers documented in AGENTS.md for when to store memories (after bugs, architectural decisions, patterns discovered, debugging sessions)\n\n2. MANDATORY swarm mail coordination - comprehensive error handling in swarm_complete pushes failures to swarm mail for coordinator visibility, preventing silent failures\n\n3. Test isolation - TEST_MEMORY_COLLECTIONS env var prevents integration tests from polluting production semantic-memory (identified 32 test artifacts, 86% pollution rate)\n\n4. Swarm Mail architecture documentation - complete 3-tier stack (primitives, patterns, coordination) inlined into README with diagrams, clarified Agent Mail is inspiration vs Swarm Mail implementation\n\n5. Learning improvements - debug logging, session stats tracking, low usage alerts if <1 store operation in 10 minutes\n\nKey files changed: src/storage.ts (test isolation + logging), src/swarm-orchestrate.ts (auto-capture + error handling), AGENTS.md (+358 lines of MANDATORY usage), docs/swarm-mail-architecture.md (1,147 lines), README.md (architecture diagrams)\n\nThis release makes semantic-memory and swarm mail usage non-optional, forcing agents to coordinate and learn proactively.","created_at":"2025-12-14T22:50:23.470Z"}
56
98
  {"id":"332822ce-746f-4f79-9283-3cfebc98dea7","information":"## Publishing Workflow Fix - In Progress (Dec 15, 2025)\n\n### Problem\nCI builds failing because @swarmtools/web (fumadocs docs site) has type errors. The .source/ directory with generated types doesn't exist in CI.\n\n### Root Cause\nFumadocs-mdx generates .source/ directory with TypeScript types at dev/build time. In CI, this directory doesn't exist when TypeScript runs.\n\n### What We Tried\n1. Committing .source/ - Reverted. Not best practice.\n2. postinstall script - Added postinstall fumadocs-mdx to package.json. Still failing.\n\n### Current State\n- Changeset exists for swarm-mail patch (fix-pglite-external.md)\n- swarm-mail and opencode-swarm-plugin build successfully\n- @swarmtools/web build fails on TypeScript\n- GitHub Actions Release workflow failing\n\n### Quick Fix Option\nEdit .github/workflows/publish.yml to exclude web from build:\n run: bun turbo build --filter=!@swarmtools/web\n\nThis is valid because @swarmtools/web is private and not published to npm.","created_at":"2025-12-15T16:16:33.265Z"}
99
+ {"id":"33585120-f851-4a7a-b658-6dbd970bbbf3","information":"{\"id\":\"test-1766260048287-av4r1nm3l\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:47:28.287Z\",\"raw_value\":1}","created_at":"1766260048541.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:47:28.287Z\"}"}
100
+ {"id":"3474a327-042c-4895-8d3a-14297ae3a467","information":"{\"id\":\"test-1766263946884-krpy25uikh\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:52:26.884Z\",\"raw_value\":1}","created_at":"1766263947127.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:52:26.884Z\"}"}
101
+ {"id":"347769a7-04dc-4fae-afee-122440501550","information":"{\"id\":\"pattern-1766262450183-3wl0s8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:27:30.183Z\",\"updated_at\":\"2025-12-20T20:27:30.183Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262450481.0","metadata":"{\"id\":\"pattern-1766262450183-3wl0s8\",\"kind\":\"pattern\",\"is_negative\":false}"}
57
102
  {"id":"3597596f-e755-4e7b-963b-92995aec0ccc","information":"Refactored swarm-mail daemon from spawning external `pglite-server` binary to in-process PGLiteSocketServer. \n\n**Key Pattern:**\n```typescript\nimport { PGlite } from \"@electric-sql/pglite\"\nimport { vector } from \"@electric-sql/pglite/vector\"\nimport { PGLiteSocketServer } from \"@electric-sql/pglite-socket\"\n\n// Module-level state (one server per process)\nlet activeServer: PGLiteSocketServer | null = null\nlet activeDb: PGlite | null = null\n\n// Start in-process\nconst db = await PGlite.create({ dataDir, extensions: { vector } })\nconst server = new PGLiteSocketServer({ db, port, host })\nawait server.start()\n\n// Graceful shutdown (CRITICAL ORDER)\nawait db.exec(\"CHECKPOINT\") // Flush WAL first\nawait server.stop()\nawait db.close()\n```\n\n**Benefits:**\n- No external binary dependency (pglite-server)\n- Same process = simpler lifecycle management\n- PID file tracks current process.pid\n- Server reuse: check activeServer before creating new one\n\n**TDD Approach Worked:**\n- 4 new tests written first (RED)\n- Implementation made them pass (GREEN)\n- Refactored with JSDoc (REFACTOR)\n- All 12 tests passing in 519ms\n\n**Gotcha:** Constructor is `{ db, port, host }` for TCP or `{ db, path }` for Unix socket, not separate args.","created_at":"2025-12-19T15:00:35.753Z","tags":"pglite,daemon,in-process,tdd,swarm-mail,refactoring"}
103
+ {"id":"35cfa05f-d6b1-4e12-8108-3d16eae4e40d","information":"{\"id\":\"test-1766260202382-4vlthuiq5\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:50:02.382Z\",\"raw_value\":1}","created_at":"1766260202662.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:50:02.382Z\"}"}
104
+ {"id":"35f50728-ca85-4134-805c-1fcab79cc0a5","information":"{\"id\":\"pattern-1766262895002-mwj654\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:34:55.002Z\",\"updated_at\":\"2025-12-20T20:34:55.002Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262895283.0","metadata":"{\"id\":\"pattern-1766262895002-mwj654\",\"kind\":\"pattern\",\"is_negative\":false}"}
105
+ {"id":"36389b73-d737-4441-bb4a-8ad284988f00","information":"{\"id\":\"test-1766262231497-fsjs0em7lu4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:23:51.497Z\",\"raw_value\":1}","created_at":"1766262231729.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:23:51.497Z\"}"}
58
106
  {"id":"3667bbf3-77fa-4beb-868e-61164dd85081","information":"npm Trusted Publishers setup for opencode-swarm-plugin monorepo:\n\nPROBLEM SOLVED: npm token management is a mess. Trusted Publishers use OIDC - no tokens needed.\n\nSETUP:\n1. Workflow needs `permissions: id-token: write` \n2. Each npm package configured at npmjs.com/package/PKG/access with Trusted Publisher:\n - Organization: joelhooks\n - Repository: opencode-swarm-plugin \n - Workflow: publish.yml\n3. Use `bunx changeset publish` NOT `npm publish` directly - changeset publish is smarter, only publishes packages with new versions not yet on npm\n\nKEY GOTCHA: Using `bun turbo publish:pkg` with individual `npm publish --provenance` scripts FAILED because:\n- turbo tried to publish ALL packages including ones already at same version on npm\n- OIDC token detection didn't work through bun→npm chain properly\n\nSOLUTION: `bunx changeset publish` handles everything:\n- Checks npm registry for each package version\n- Only publishes packages where local version > npm version\n- Creates git tags automatically\n- Works with OIDC out of the box\n\nWORKFLOW FILE: .github/workflows/publish.yml\n- Triggers on push to main\n- Uses changesets/action@v1\n- publish command: `bun run release` which runs `bunx changeset publish`\n\nDOCS: https://docs.npmjs.com/trusted-publishers","created_at":"2025-12-15T04:34:51.427Z"}
107
+ {"id":"36a16df5-4bf9-4c2a-9b27-96613e25201b","information":"{\"id\":\"test-1766260910579-wcmez499yqe\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:50.579Z\",\"raw_value\":1}","created_at":"1766260910801.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:50.579Z\"}"}
59
108
  {"id":"36e760a9-9737-4f1e-8159-a739f679af77","information":"Monorepo publishing with workspace:* protocol and npm OIDC trusted publishers:\n\nPROBLEM: workspace:* doesn't get resolved by npm publish or changeset publish, causing \"Unsupported URL Type workspace:*\" errors on install.\n\nSOLUTION (scripts/publish.ts):\n1. bun pm pack - creates tarball with workspace:* resolved to actual versions\n2. npm publish <tarball> - publishes with OIDC support\n\nWHY NOT bun publish? It resolves workspace:* but doesn't support npm OIDC trusted publishers (requires npm login).\n\nWHY NOT npm publish directly? It doesn't resolve workspace:* protocol.\n\nWHY NOT changeset publish? Uses npm under the hood, same problem.\n\nADDITIONAL GOTCHA: CLI bin scripts (like bin/swarm.ts) need external imports in dependencies, not devDependencies. Users installing globally won't have devDeps, causing \"Cannot find module\" errors.\n\nFILES:\n- scripts/publish.ts - custom publish script\n- .github/workflows/publish.yml - calls bun run release which runs scripts/publish.ts","created_at":"2025-12-15T04:47:41.617Z"}
60
109
  {"id":"370b4da1-176a-4975-a58d-9cd46d515918","information":"TDD workflow for JSONL merge function: Write tests FIRST that verify behavior (empty files, overlaps, missing files), then implement minimal code to pass. For JSONL deduplication, use Set to track existing IDs, filter base records, append new ones, write back. Testing pattern: mkdirSync temp project, writeFileSync JSONL fixtures, run function, readFileSync + parse to verify. All 6 test cases passed on first implementation - TDD prevented edge case bugs.","created_at":"2025-12-18T00:56:09.189Z"}
110
+ {"id":"388c94e7-6227-4af2-a13f-e5c54af4cf5f","information":"pdf-brain enrichment fix: AutoTagger.enrich() returns concepts array but never called taxonomy.assignToDocument(). Fixed in 3 locations in cli.ts:\n\n1. `add` command (line ~683): After library.add(), extract concepts from enrichedMetadata, loop and call taxonomy.assignToDocument(doc.id, conceptId, 0.9, \"llm\")\n\n2. `ingest` TUI mode (line ~1664): Moved enrichedMetadata declaration outside if-block for scope, added same concept assignment loop after library.add()\n\n3. `ingest` CLI mode (line ~1887): Added concept assignment loop using fileMetadata.concepts after library.add()\n\nPattern: \n```typescript\nconst concepts = metadata.concepts as string[] | undefined;\nif (concepts && Array.isArray(concepts) && concepts.length > 0) {\n const taxonomy = yield* TaxonomyService;\n for (const conceptId of concepts) {\n yield* taxonomy.assignToDocument(doc.id, conceptId, 0.9, \"llm\");\n }\n}\n```\n\nVerified with manual test: added documents now show \"Assigned N concept(s)\" and document_concepts table is populated. All 181 tests pass.","created_at":"1766420197417.0","tags":"pdf-brain,autotagger,enrichment,taxonomy,document_concepts,bug-fix,tdd"}
111
+ {"id":"38fbf3f0-eea1-4d7d-b888-7cb68f73ae91","information":"{\"id\":\"test-1766262703408-0tujzt32od4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:31:43.408Z\",\"raw_value\":1}","created_at":"1766262703656.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:31:43.408Z\"}"}
112
+ {"id":"39879763-33d1-4881-b7f7-8bb5bbca62f2","information":"CLI integration for pdf-brain multi-scale retrieval: Added --include-clusters flag to search command, wired to SearchOptions.includeClusterSummaries. Updated HELP text to document the flag. Exported parseArgs() for testability. Pattern: CLI flag → parseArgs → SearchOptions → LibSQLDatabase.vectorSearch(). The cluster command implementation (using streamEmbeddings, mini-batch k-means, soft clustering) is in LibSQLDatabase and ClusteringService but not yet exposed via CLI - that's a separate integration task. TDD approach: wrote tests for flag parsing first, then implemented minimal wiring.","created_at":"1766424483922.0","metadata":"{\"file\":\"src/cli.ts\",\"pattern\":\"flag-to-service-wiring\",\"test_file\":\"src/cli.test.ts\"}","tags":"pdf-brain,cli,tdd,multi-scale-retrieval,clustering,flags"}
113
+ {"id":"3a1f3810-5ab9-419c-8c27-48b5b28ea1c1","information":"{\"id\":\"test-1766349510928-8i8zfpvwfw2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:38:30.928Z\",\"raw_value\":1}","created_at":"1766349511174.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:38:30.928Z\"}"}
61
114
  {"id":"3a7be2a6-36d7-40b5-a14f-fedefadb4608","information":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:20:42.550Z\",\"updated_at\":\"2025-12-13T19:20:42.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:20:42.749Z","metadata":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"kind\":\"pattern\",\"is_negative\":false}"}
115
+ {"id":"3a8ffc86-7de2-4a69-aff2-1de413c0dca7","information":"AI SDK 6 with Vercel AI Gateway - SIMPLEST PATTERN: Just use the model string directly with generateText/generateObject. No provider setup needed.\n\n```typescript\nimport { generateText } from \"ai\";\n\nconst { text } = await generateText({\n model: \"anthropic/claude-haiku-4-5\",\n prompt: \"...\",\n});\n```\n\nThe AI SDK automatically uses the AI_GATEWAY_API_KEY env var and routes through Vercel AI Gateway. No need for createOpenAICompatible or any provider configuration. This is the canonical pattern for all AI SDK usage in Joel's projects.","created_at":"1766338514071.0","tags":"ai-sdk,vercel-ai-gateway,pattern,anthropic,generateText"}
116
+ {"id":"3a949a0b-337c-4cd8-919a-bdfb4040dd07","information":"{\"id\":\"test-1766263206530-evd2s8oy0nt\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:40:06.530Z\",\"raw_value\":1}","created_at":"1766263206770.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:40:06.530Z\"}"}
117
+ {"id":"3b086612-be4e-4bf3-83bb-2d30eaacf873","information":"Drizzle ORM has specific limitations with libSQL vector operations and FTS5 full-text search that require raw SQL:\n\n**MUST use raw SQL for:**\n1. Vector function calls: `embedding: sql\\`vector(${JSON.stringify(array)})\\`` - Drizzle's custom vector type handles reads but not writes with vector() function\n2. Vector similarity search: `vector_top_k()`, `vector_distance_cos()` - libSQL-specific ANN search not in Drizzle\n3. FTS5 virtual tables: `CREATE VIRTUAL TABLE ... USING fts5(...)` - Drizzle doesn't support virtual tables\n4. FTS5 MATCH queries: `WHERE content MATCH $query` - Drizzle doesn't support FTS5 syntax\n5. FTS5 triggers: Auto-sync triggers for FTS5 tables - Drizzle doesn't support triggers\n6. Vector indexes: `CREATE INDEX ... ON table(libsql_vector_idx(column))` - libSQL-specific function syntax\n\n**Pattern for acceptable raw SQL:**\n- Use Drizzle for all standard CRUD operations\n- Use `sql\\`\\`` template for libSQL-specific features\n- Use DatabaseAdapter abstraction for portable queries\n- Document WHY raw SQL is required (feature not in Drizzle)\n\n**When auditing for Drizzle conversion:** Check if raw SQL is for vector ops, FTS5, or triggers FIRST before attempting conversion. These features aren't in Drizzle's scope.\n\nApplies to: swarm-mail memory subsystem (store.ts, libsql-schema.ts)","created_at":"1766296170854.0","metadata":"{\"file\":\"packages/swarm-mail/src/memory/store.ts\",\"context\":\"memory subsystem audit\"}","tags":"drizzle,orm,libsql,vector,fts5,sql,migrations"}
118
+ {"id":"3b8562be-cf8d-451b-9a1c-1a4c1ea63360","information":"DurableStreamAdapter implementation for Hive Visualizer (Dec 24, 2025):\n\n**Pattern:** Adapter layer that wraps SwarmMailAdapter for Durable Streams protocol compatibility\n\n**Implementation:**\n- `read(offset, limit)` - Uses `swarmMail.readEvents({ afterSequence, limit })` for offset-based pagination\n- `head()` - Uses `swarmMail.getLatestSequence(projectKey)` to return latest sequence number\n- `subscribe(callback)` - Polls every 100ms, initializes lastSequence to current head to avoid replaying history\n\n**Testing gotcha:** Tests must use `swarmMail.appendEvent()` adapter method, not raw `appendEvent()` with `swarmMail.db` (which doesn't exist on interface). SwarmMailAdapter doesn't expose `.db` property - it has `getDatabase()` method instead.\n\n**Key design decision:** Subscribe polls every 100ms instead of using database triggers. Simple, works everywhere, acceptable latency for human-facing dashboard.\n\n**TDD wins:** Tests existed before implementation. Fixed tests to use proper adapter interface, increased polling timeout from 50ms to 150ms to account for 100ms poll interval.\n\nFiles: durable-adapter.ts (140 lines), durable-adapter.test.ts (12 tests, all passing)","created_at":"1766595734950.0","tags":"durable-streams,adapter-pattern,tdd,polling"}
119
+ {"id":"3bbfd751-13b8-4fda-b6b5-9bbee52aa179","information":"Mini-batch k-means implementation for pdf-library clustering: Algorithm uses incremental centroid updates with learning rate η = 1/count to handle 500k+ embeddings in O(batch_size) memory instead of O(n). Key implementation details: (1) k-means++ initialization for better convergence, (2) Random batch sampling without replacement per iteration, (3) Convergence detection via Frobenius norm check every 10 iterations (threshold 1e-4) for early stopping, (4) Final full assignment pass after convergence. Default batch_size=100 works well for 1000-500k points. Complexity: O(batch_size * k * iterations) vs full k-means O(n * k * iterations). Tested accuracy within 30% of full k-means with faster convergence on large datasets. Used for RAPTOR-style clustering when dataset exceeds 100k chunks.","created_at":"1766423215971.0","tags":"clustering,mini-batch-k-means,pdf-library,scalability,memory-optimization,raptor"}
120
+ {"id":"3bd2ffbe-2a13-42a3-b2e2-b990df18dbe6","information":"Analytics Queries 6-10 Implementation (Dec 22, 2024)\n\n**Implemented 5 pre-built analytics queries using TDD (RED → GREEN → REFACTOR):**\n\n1. **scope-violations**: Files touched outside owned scope. Extracts `files_touched` from `task_completed` events. Useful for detecting agents modifying files they weren't assigned.\n\n2. **task-duration**: p50/p95/p99 task durations. Uses window functions (ROW_NUMBER, COUNT OVER) to approximate percentiles since libSQL lacks `percentile_cont`. Joins `task_started` and `task_completed` events to calculate duration.\n\n3. **checkpoint-frequency**: Checkpoint creation frequency per agent. Counts `checkpoint_created` events, calculates avg interval between checkpoints using `(MAX - MIN) / NULLIF(COUNT - 1, 0)` pattern.\n\n4. **recovery-success**: Deferred task resolution success rate. Uses `COUNT(CASE WHEN ...)` pattern to count resolved vs rejected, calculates percentage with `CAST AS REAL` for floating-point division.\n\n5. **human-feedback**: Approval/rejection breakdown. Groups `review_feedback` events by status field, calculates percentage of total.\n\n**Key Patterns:**\n\n- **AnalyticsQuery interface**: `{ name, description, sql, parameters? }`\n- **Optional buildQuery()**: Returns filtered query with project_key parameter\n- **JSON extraction in libSQL**: `json_extract(data, '$.field_name')`\n- **Percentile approximation**: Use window functions + row counting (no native percentile functions)\n- **Percentage calculation**: `CAST(numerator AS REAL) / NULLIF(denominator, 0) * 100`\n- **Integration tests**: Use `createInMemorySwarmMailLibSQL`, seed with `db.query(INSERT ...)` not `db.exec()`\n\n**libSQL Gotchas:**\n\n1. `exec()` doesn't take parameters - use `query()` for parameterized inserts\n2. JSON stored as TEXT, use `json_extract()` not `->` operator\n3. No `percentile_cont` - approximate with `ROW_NUMBER() OVER (ORDER BY value)`\n4. Division truncates to INTEGER unless you `CAST AS REAL`\n\n**Test Coverage:** 16 unit tests + 8 integration tests = 24 new tests, all passing.","created_at":"1766434055306.0","tags":"swarm-mail,analytics,TDD,libSQL,SQL,percentiles,window-functions"}
121
+ {"id":"3c165471-c5f5-4d7c-9879-051b88e9d097","information":"AI SDK UI hooks like useChat() use a transport layer pattern. DefaultChatTransport handles the /api/chat endpoint by default, managing streaming responses, message formatting, and error handling. This abstraction allows customization via the `api` option for different endpoints or custom transport implementations for advanced scenarios (auth, request transformation, non-standard protocols). Important for understanding the connection between UI hooks and backend routes in AI SDK applications.","created_at":"1766466221037.0","tags":"ai-sdk,transport,useChat,architecture,patterns"}
122
+ {"id":"3d053700-465c-4599-96bd-a9f3af4b27a3","information":"ClusterSummarizer LLM abstractive implementation: Replaced extractive summarization with AI SDK generateObject pattern using anthropic/claude-haiku-4-5. Schema defines { summary: string, keyTopics: string[], representativeQuote?: string }. Implementation uses Effect.tryPromise to wrap async LLM call, with automatic fallback to extractive summarization on LLM failure (caught in try-catch, returns generateExtractiveSummary). Key learnings: (1) Mock AI SDK with mock.module() in tests, (2) ClusterSummary interface gets optional keyTopics and representativeQuote fields for backward compatibility, (3) Extractive fallback ensures reliability even when LLM unavailable/fails, (4) Truncate content to 6000 chars before sending to LLM to avoid context limits, (5) Effect pattern uses Effect.tryPromise for async operations.","created_at":"1766423263633.0","tags":"ai-sdk,effect-ts,tdd,summarization,abstractive,claude-haiku,fallback-pattern,pdf-brain"}
123
+ {"id":"3dfe5a42-bb1e-4881-960a-2bdb3023bee2","information":"{\"id\":\"pattern-1766265064341-zstfx3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:11:04.341Z\",\"updated_at\":\"2025-12-20T21:11:04.341Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766265064582.0","metadata":"{\"id\":\"pattern-1766265064341-zstfx3\",\"kind\":\"pattern\",\"is_negative\":false}"}
62
124
  {"id":"3e88ec34-2b29-406f-8352-cd434ac23b68","information":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-19T00:30:11.784Z\",\"updated_at\":\"2025-12-19T00:30:11.784Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-19T00:30:11.993Z","metadata":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"kind\":\"pattern\",\"is_negative\":false}"}
63
125
  {"id":"3eabd321-1ad6-4fa9-bf11-8fad2a57ea83","information":"{\"id\":\"test-1765733411282-pzqyaldzdya\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T17:30:11.282Z\",\"raw_value\":1}","created_at":"2025-12-14T17:30:11.541Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T17:30:11.282Z\"}"}
64
126
  {"id":"3ec7f612-4075-48f0-b63e-ba46f646f577","information":"POC Migration Learnings (December 2025):\n\n1. SCHEMA PATTERNS:\n- Coursebuilder uses type='post' + fields.postType='course', but migration can use type='course' directly\n- Query files must support BOTH patterns with OR clause\n- Use .passthrough() on Zod schemas to allow extra migration fields (migratedAt, collaborators, legacyRailsId)\n- Remove 'use server' from files that export types/schemas (Next.js constraint)\n\n2. DATABASE CONSTRAINTS:\n- createdById is NOT NULL - must provide system user ID for migrations\n- Use Joel's ID: c903e890-0970-4d13-bdee-ea535aaaf69b for migration scripts\n\n3. VIDEO INTEGRATION:\n- Rails current_video_hls_url contains Mux playback IDs (extract with regex)\n- 97.5% of lessons have Mux coverage (193 missing = mark as retired)\n- VideoResource links to Lesson via ContentResourceResource table\n\n4. MIGRATION SCRIPTS:\n- investigation/poc-migrate-modern-course.ts - Sanity source\n- investigation/poc-migrate-legacy-course.ts - Rails source\n- investigation/src/lib/migration-utils.ts - Shared utilities\n\n5. TDD APPROACH NEEDED:\n- Unit tests for schema validation and field mapping\n- Docker containers for integration tests (postgres + mysql)\n- E2E verification with browser automation","created_at":"2025-12-13T17:07:15.655Z"}
127
+ {"id":"3f49e8fe-db29-4859-8c30-6f17f8964a10","information":"{\"id\":\"pattern-1766259560283-bbfhnp\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:39:20.283Z\",\"updated_at\":\"2025-12-20T19:39:20.283Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766259560525.0","metadata":"{\"id\":\"pattern-1766259560283-bbfhnp\",\"kind\":\"pattern\",\"is_negative\":false}"}
65
128
  {"id":"3faa59da-150b-4c02-a257-515df507fdbe","information":"{\"id\":\"test-1765664124701-aa17ylzydnq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:15:24.701Z\",\"raw_value\":1}","created_at":"2025-12-13T22:15:24.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:15:24.701Z\"}"}
66
129
  {"id":"40e45c96-514c-4f5e-a010-96215895a455","information":"{\"id\":\"test-1766076692243-0mib94hstes\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:51:32.243Z\",\"raw_value\":1}","created_at":"2025-12-18T16:51:32.478Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:51:32.243Z\"}"}
67
130
  {"id":"41308199-3761-485f-a7a6-567f97417f95","information":"{\"id\":\"pattern-1765664183401-tex4za\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:16:23.401Z\",\"updated_at\":\"2025-12-13T22:16:23.401Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:16:23.600Z","metadata":"{\"id\":\"pattern-1765664183401-tex4za\",\"kind\":\"pattern\",\"is_negative\":false}"}
68
131
  {"id":"41453b78-e33f-41c2-aedd-3d521af2a2c4","information":"SUBTASK_PROMPT_V2 survival checklist pattern: Workers need 9-step mandatory workflow: 1) swarmmail_init (coordination), 2) semantic-memory_find (query past learnings BEFORE starting), 3) skills_list/skills_use (load domain knowledge), 4) swarmmail_reserve (worker reserves own files, NOT coordinator), 5) do work, 6) swarm_progress at 25/50/75% milestones (triggers auto-checkpoint), 7) swarm_checkpoint before risky ops (refactors, deletions), 8) semantic-memory_store (capture learnings), 9) swarm_complete (closes, releases, scans). KEY INSIGHT: Workers reserve their own files (step 4) - coordinator no longer does this. Past mistake: coordinators reserving caused confusion about who owns what. Worker self-reservation makes ownership explicit. Applies to all swarm worker agents.","created_at":"2025-12-16T16:21:16.745Z","metadata":"{\"context\":\"opencode-swarm-plugin\"}","tags":"swarm,coordination,worker-patterns,file-reservation,semantic-memory,skills,checkpointing,learning-loops"}
132
+ {"id":"41f91144-7ecf-4887-ab65-ba045c9c3dae","information":"{\"id\":\"test-1766260221147-2gbfn5x7qj\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:50:21.147Z\",\"raw_value\":1}","created_at":"1766260221379.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:50:21.147Z\"}"}
133
+ {"id":"42465dd4-8323-416b-8b7a-740cb77a1701","information":"HDBSCAN vs GMM/K-means for RAPTOR clustering (credit: @georg_dev):\n\nHDBSCAN advantages for document clustering:\n1. **Builds hierarchy natively** - no need for recursive summarization, the dendrogram IS the tree\n2. **No k selection needed** - automatically finds cluster structure\n3. **Handles noise** - outlier documents don't force bad clusters\n4. **Density-based** - finds clusters of varying shapes/sizes\n\nJS implementation: https://github.com/rivulet-zhang/vis-utils (euclidean distance works for embeddings)\n\nCurrent implementation uses GMM-like soft clustering + mini-batch k-means. HDBSCAN would simplify:\n- Remove BIC k-selection logic\n- Remove recursive summarization\n- Get hierarchical structure for free\n- Better handling of edge cases\n\nTrade-off: HDBSCAN is O(n²) for distance matrix, but can use approximate methods for scale.","created_at":"1766424519710.0","tags":"clustering,HDBSCAN,RAPTOR,embeddings,architecture,georg_dev"}
69
134
  {"id":"429da23f-c274-4d2c-93ed-88eee75c4b20","information":"{\"id\":\"test-1765678709593-34lfj5t3x44\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T02:18:29.593Z\",\"raw_value\":1}","created_at":"2025-12-14T02:18:29.809Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T02:18:29.593Z\"}"}
135
+ {"id":"42ae102b-b7fc-4860-af19-356eff1a9d98","information":"{\"id\":\"test-1766264410605-bqqpzc3thoo\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T21:00:10.605Z\",\"raw_value\":1}","created_at":"1766264410845.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T21:00:10.605Z\"}"}
136
+ {"id":"42e40d93-d19a-4fc2-838e-c312e13eeb88","information":"{\"id\":\"pattern-1766263947907-v3bo81\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:52:27.907Z\",\"updated_at\":\"2025-12-20T20:52:27.907Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263948144.0","metadata":"{\"id\":\"pattern-1766263947907-v3bo81\",\"kind\":\"pattern\",\"is_negative\":false}"}
137
+ {"id":"4330c2b2-8536-4143-82c1-cbf24e0d8e22","information":"{\"id\":\"pattern-1766593256208-6yyuub\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:20:56.208Z\",\"updated_at\":\"2025-12-24T16:20:56.208Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766593256494.0","metadata":"{\"id\":\"pattern-1766593256208-6yyuub\",\"kind\":\"pattern\",\"is_negative\":false}"}
138
+ {"id":"44fbda0a-ae47-4180-a5a0-f2969e7044f4","information":"TypeScript discriminated union pattern for unified search results in pdf-library: Used Effect Schema with literal entityType field ('document' | 'concept') as discriminator. Key learnings: 1) Keep backward compatibility by preserving original SearchResult class without entityType, mark as @deprecated. 2) New DocumentSearchResult extends all SearchResult fields + entityType: Schema.Literal(\"document\"). 3) ConceptSearchResult has different structure + entityType: Schema.Literal(\"concept\"). 4) UnifiedSearchResult = DocumentSearchResult | ConceptSearchResult enables type-safe narrowing via entityType check. 5) SearchOptions gets optional entityTypes: Schema.Array(Schema.Literal(\"document\", \"concept\")) for filtering. Pattern allows TypeScript to narrow types automatically: if (result.entityType === 'document') { result.docId } else { result.conceptId }. All existing SearchResult usage continues to work unchanged.","created_at":"1766256672788.0","tags":"typescript,discriminated-union,effect-schema,backward-compatibility,pdf-library"}
139
+ {"id":"475d7add-4a4f-4289-a794-ecd1b6c64d45","information":"RAPTOR vs SKOS research conclusion (Dec 2025): They're COMPLEMENTARY, not competing approaches. RAPTOR (UMAP+GMM soft clustering + recursive summarization) enables automatic bottom-up theme discovery with multi-scale retrieval - documents can belong to multiple clusters, and queries match both leaf chunks and cluster summaries. SKOS provides stable top-down semantic organization with persistent concept URIs for consistent navigation. Hybrid approach: use RAPTOR-style clustering for discovery, then map clusters to SKOS concepts for stable semantics. Key papers in pdf-brain: RAPTOR, GraphRAG, LightRAG. Implementation priority: (1) backfill document_concepts, (2) improve hybrid search, (3) RAPTOR-lite with cluster summaries, (4) storage optimization via smaller embeddings or larger chunks.","created_at":"1766415682693.0","tags":"pdf-brain,raptor,skos,clustering,taxonomy,architecture,research"}
70
140
  {"id":"47e272e2-37c4-4ea1-b724-ec68de3c3bf1","information":"TDD pattern for database query functions: Write tests that use the actual database adapter (not mocks) to verify query behavior. For swarm-mail hive queries, tests use in-memory PGlite with full migrations. This catches SQL syntax errors, constraint violations, and index issues that mocks would miss. Pattern: beforeEach creates fresh PGlite instance, afterEach closes it. Each test creates necessary cells via adapter, then queries them. Fast enough (12s for 36 tests) because PGlite is in-memory.","created_at":"2025-12-19T16:17:46.254Z","tags":"tdd,testing,database,pglite,swarm-mail"}
71
141
  {"id":"48610ac6-d52f-4505-8b06-9df2fad353aa","information":"CRITICAL BUG: PGLite database corruption when multiple swarm agents access shared database concurrently.\n\nROOT CAUSE: PGLite is single-connection only. When multiple parallel swarm worker agents each create their own PGLite instance pointing to the same database file, they corrupt each other's writes. This manifests as:\n- 'PGlite is closed' errors\n- Missing data after writes\n- Inconsistent query results\n- Database file corruption requiring deletion\n\nSOLUTION: Implement PGLite leader election pattern from multi-tab-worker docs (https://pglite.dev/docs/multi-tab-worker).\n\nThe pattern works by:\n1. Each worker/agent creates a PGliteWorker instead of PGlite directly\n2. Workers run an election to nominate ONE as the leader\n3. ONLY the leader starts the actual PGlite instance\n4. All other workers proxy their queries through the leader\n5. When leader dies, new election runs and new leader takes over\n\nKey APIs:\n- PGliteWorker - client that proxies to leader\n- worker({ init: () => PGlite }) - wrapper that handles election\n- onLeaderChange(callback) - subscribe to leader changes\n- isLeader: boolean - check if this instance is leader\n\nFor swarm-mail specifically:\n- The singleton pattern in pglite.ts is NOT sufficient for parallel agents\n- Each Task subagent runs in a separate process, not just separate async contexts\n- Need to implement a coordinator pattern where ONE agent owns the DB connection\n- Other agents communicate via IPC/file locks/Agent Mail instead of direct DB access\n\nWORKAROUND (current): Tests use isolated in-memory PGLite instances per test to avoid singleton conflicts.","created_at":"2025-12-17T17:18:27.494Z","tags":"pglite,database,corruption,swarm,parallel-agents,leader-election,critical-bug,P0"}
142
+ {"id":"4924f104-cdeb-46f3-91e4-56460e269884","information":"pdf-brain database size investigation (Dec 2025): 52GB database for 907 documents, 486k chunks, 484k embeddings. Database has 13.5M pages × 4096 bytes = ~55GB total. The HNSW neighbor graph (embeddings_idx_shadow table) has 484k rows (one per embedding) and is the primary storage consumer. With compress_neighbors=float8 already enabled (4x compression from default), each shadow row still averages ~100KB due to HNSW neighbor graph structure. Without compression it would be ~400KB/row = 200GB just for the index. CRITICAL: The embeddings themselves are only ~1.9GB (484k × 1024 dims × 4 bytes), the shadow index is ~48GB (92% of total). Alternative optimizations: (1) smaller embedding model (384 dims = 62% reduction), (2) reduce chunk count via better chunking, (3) partial indexing (only recent/important docs), (4) accept slower search without index. Hierarchical clustering would NOT directly reduce storage - it might reduce chunk count if used for document deduplication, but wouldn't compress the HNSW index itself.","created_at":"1766415330225.0","metadata":"{\"docs\":907,\"chunks\":486407,\"db_size_gb\":52,\"embeddings\":483733,\"compression\":\"float8\",\"investigation_date\":\"2025-12-22\"}","tags":"pdf-brain,libsql,hnsw,vector-index,storage-optimization,embeddings,compress_neighbors"}
72
143
  {"id":"4945b847-6fd0-42fe-aebd-6ee0d415b1cb","information":"CRITICAL SCHEMA FIX (Dec 2025): egghead-rails `series` table is DEPRECATED. Official courses are in `playlists` with `visibility_state='indexed'` (437 courses). Lessons link via `tracklists` polymorphic join table (tracklistable_type='Lesson', tracklistable_id=lesson.id), NOT via lessons.series_id. Standalone lessons (~1,650) are published lessons NOT in any indexed playlist. Use DISTINCT ON (l.id) when querying lessons to handle 36 lessons that appear in multiple courses.","created_at":"2025-12-13T23:17:05.679Z"}
144
+ {"id":"49a14aed-a8f0-4e43-b7d7-f5a40d1871a2","information":"AI SDK v6 Breaking Changes Audit Pattern: When auditing course content for SDK migrations, prioritize finding actual usage over theoretical possibilities. Used grep to search for deprecated patterns (generateObject, convertToCoreMessages, textEmbedding, Experimental_Agent) and found generateObject in 3 lessons but zero usage of other deprecated APIs. Key insight: Don't assume all breaking changes apply - verify with targeted searches. The most effective audit workflow: 1) Read migration guide for breaking changes list, 2) Grep for each pattern across codebase, 3) Read only files with matches, 4) Document specific line numbers and code snippets for replacements. For AI SDK specifically, generateObject→generateText+Output.object() is the most common v6 migration, affecting structured output lessons heavily.","created_at":"1766431951475.0","tags":"ai-sdk,migration,audit,v6,course-content,breaking-changes"}
145
+ {"id":"4a109810-3bbb-43f9-af7d-4034d132302b","information":"{\"id\":\"test-1766260890139-zq75zhy9nia\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:30.139Z\",\"raw_value\":1}","created_at":"1766260890651.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:30.139Z\"}"}
  {"id":"4a9929ba-3860-4ebe-8ea9-89688d79d348","information":"{\"id\":\"test-1765653389932-an49coy8vg4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:16:29.932Z\",\"raw_value\":1}","created_at":"2025-12-13T19:16:30.132Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:16:29.932Z\"}"}
+ {"id":"4b488af5-d26b-4c82-a0d0-1b89bf742df8","information":"{\"id\":\"test-1766594998844-1rffuzu8dnx\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:49:58.844Z\",\"raw_value\":1}","created_at":"1766594999055.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:49:58.844Z\"}"}
+ {"id":"4b88730e-03ab-442f-9a33-701b789a8709","information":"Drizzle ORM migration pattern for hive/projections.ts successful. Created projections-drizzle.ts with all event handler write operations (INSERT/UPDATE/DELETE) using Drizzle query builder. Main projections.ts now delegates to Drizzle implementation via toDrizzleDb() adapter pattern. Key decisions: (1) Only migrated write operations to Drizzle - read operations (queries) still use raw SQL via DatabaseAdapter (avoid premature optimization), (2) Created dependencies-drizzle.ts for blocked cache management using Drizzle, (3) Used dynamic imports to avoid circular dependencies, (4) Followed streams/projections-drizzle.ts pattern for consistency. Tests: projections.test.ts - 21 pass, 0 fail. Verified conversion maintains same public API and behavior.","created_at":"1766331857011.0","tags":"drizzle,orm,migration,hive,projections,event-sourcing"}
+ {"id":"4b8f146e-bfd9-41d9-954d-fd27622f2bc4","information":"Bun.serve SSE (Server-Sent Events) implementation pattern: Use ReadableStream with controller.enqueue() to send events. Format: `data: ${JSON.stringify(event)}\\n\\n`. Headers MUST include: Content-Type: text/event-stream, Cache-Control: no-cache, Connection: keep-alive. Track active subscriptions in a Map with cleanup on req.signal abort event. Close streams via controller.close() on server stop. Common gotcha: Bun serves with generic Server<WebSocketData> type - use Server<undefined> for non-WebSocket HTTP servers.","created_at":"1766595958646.0","tags":"bun,sse,server-sent-events,http,streaming"}
  {"id":"4c47409c-83a4-4e85-87ed-1ee7445a3b09","information":"swarm-mail socket adapter hybrid pattern: getSwarmMail() now checks SWARM_MAIL_SOCKET=true env var to enable socket mode with graceful PGLite fallback on any failure. Close methods need conditional logic for pglite vs socket adapters. Env vars: SWARM_MAIL_SOCKET_PATH (unix socket), SWARM_MAIL_SOCKET_PORT (TCP, default 5433), SWARM_MAIL_SOCKET_HOST (TCP, default 127.0.0.1).","created_at":"2025-12-17T18:03:01.543Z"}
+ {"id":"4ca9a4ef-db39-48e7-aa7c-bd573fe6213d","information":"{\"id\":\"pattern-1766261007175-idjnhn\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:03:27.175Z\",\"updated_at\":\"2025-12-20T20:03:27.175Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261007439.0","metadata":"{\"id\":\"pattern-1766261007175-idjnhn\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"4ce9127c-6f36-4335-b9ac-584c282dafca","information":"WORKFLOW LOGGING CONSTRAINT: Vercel Workflow files (those with \"use workflow\" or \"use step\" directives) CANNOT import pino logger or use node:crypto. The workflow bundler runs code in a restricted environment that doesn't support Node.js built-in modules. \n\nSOLUTION: Workflow files MUST use console.log/console.error/console.warn directly. The workflow runtime captures these. Only non-workflow files (API routes, listeners, middleware, lib modules NOT imported by workflows) can use the structured pino logger.\n\nFILES AFFECTED: server/workflows/*.ts - all must use console.* not logger\nFILES SAFE: server/api/*.ts, server/listeners/*.ts, server/middleware/*.ts, server/lib/*.ts (if not imported by workflows)\n\nRoot cause: Importing ~/lib/logger into workflow files pulls in pino (Node.js module) and randomUUID (node:crypto), both forbidden in workflow runtime.","created_at":"1766458212230.0","tags":"workflow,logging,pino,vercel-workflow,bundler,constraint,gotcha"}
  {"id":"4d167832-70e4-46b0-85ba-170e5826b9c8","information":"PGLite WAL Safety Pattern: Add checkpoint() to DatabaseAdapter interface and call after batch operations to prevent WAL bloat.\n\nRoot cause from pdf-brain: PGLite accumulated 930 WAL files (930MB) without explicit CHECKPOINT, causing WASM OOM crash. PostgreSQL CHECKPOINT command forces WAL to be written to data files, allowing WAL to be recycled.\n\nImplementation:\n1. Add `checkpoint?(): Promise<void>` to DatabaseAdapter interface (optional method)\n2. Implement in wrapPGlite: `async checkpoint() { await pglite.query(\"CHECKPOINT\"); }`\n3. Call after batch operations:\n - After runMigrations() in adapter.runMigrations()\n - After bulk event appends (if batching)\n - After large projection updates\n\nTDD approach confirmed effectiveness:\n- Write failing test expecting checkpoint() method\n- Implement checkpoint in interface + wrapper\n- Call from adapters after migrations\n- All tests green (29 tests passing)\n\nKey insight: CHECKPOINT is a PostgreSQL command, not PGLite-specific. Works for any PostgreSQL-compatible database but critical for embedded databases without automatic checkpointing.\n\nPattern applies to any PGLite usage with batch operations: migrations, bulk writes, large transactions.","created_at":"2025-12-19T03:34:00.966Z","tags":"pglite,wal,checkpoint,database-adapter,batch-operations,memory-management,wasm"}
  {"id":"4df79169-bae1-4942-bfc3-8a0c5ba038de","information":"MemoryAdapter implementation pattern for Effect-TS + PGlite semantic memory: High-level adapter wraps low-level services (Ollama + MemoryStore) with graceful degradation. Key insights: (1) Use Effect.runPromise with Effect.either for optional Ollama - returns Left on failure, enabling FTS fallback. (2) Store decay calculation (90-day half-life) in adapter layer, not DB - keeps store generic. (3) validate() resets timestamp via direct SQL UPDATE, not store.store() which preserves original timestamps on conflict. (4) Tags parsed from comma-separated string and merged into metadata.tags array for searchability. (5) TDD with 22 tests first caught 3 design issues: metadata structure, embedding similarity mocking, timestamp update semantics. Integration test verifies full lifecycle: store→find→get→validate→remove with FTS fallback.","created_at":"2025-12-18T19:09:34.653Z","metadata":"{\"pattern\":\"high-level-adapter\",\"testing\":\"tdd-integration\",\"component\":\"swarm-mail/memory\"}","tags":"effect-ts,pglite,semantic-memory,adapter-pattern,graceful-degradation,tdd"}
  {"id":"4f6a7e08-fa47-4f23-bca2-6e7edb72a702","information":"PGLite DatabaseAdapter wrapper pattern: PGLite's exec() method returns Promise<Results[]> but DatabaseAdapter interface expects Promise<void>. Solution: wrap with async function that awaits exec() but doesn't return the value. Example: exec: async (sql: string) => { await pglite.exec(sql); }. This matches the adapter contract without leaking PGLite-specific types. Used in swarm-mail package for database abstraction layer.","created_at":"2025-12-15T00:18:10.156Z","tags":"pglite,adapter-pattern,database,typescript,type-compatibility,swarm-mail"}
  {"id":"4fca4eb1-e967-4992-8c48-502ea5596cde","information":"{\"id\":\"pattern-1766076693301-vgiike\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:51:33.301Z\",\"updated_at\":\"2025-12-18T16:51:33.301Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:51:33.529Z","metadata":"{\"id\":\"pattern-1766076693301-vgiike\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"509ddf29-54c9-4d65-8610-dfc76321aadc","information":"--information","created_at":"2025-12-14T22:41:51.321Z","tags":"swarm,edge-case,workaround"}
+ {"id":"510c2a51-2f72-4bdd-8d3b-4dfd8ea3a0b7","information":"Ollama service config migration pattern: When updating Effect services to use new config structure, replace LibraryConfig.fromEnv() with loadConfig() import from types.ts. Update property access from flat config (config.ollamaHost) to nested structure (config.ollama.host, config.embedding.model). For auto-install functionality, use Effect.tryPromise to wrap spawn() calls - don't use Effect.gen wrapper around tryPromise as it adds unnecessary effect nesting. Place console.log outside the Effect for immediate logging. The pattern: Effect.tryPromise({ try: () => new Promise(...), catch: (e) => new CustomError(...) })","created_at":"1766261006627.0","tags":"effect,config-migration,ollama,spawn,child-process"}
+ {"id":"5166a145-4de1-4870-b75d-36670a00d76b","information":"## Database Migration: PGLite → libSQL Complete\n\n### Current State (Dec 2024)\n- **Primary database:** libSQL (SQLite-compatible)\n- **PGLite:** Only for migration from legacy databases\n- **AGENTS.md:** Updated to reflect libSQL as primary\n\n### Key APIs\n- `createInMemorySwarmMail()` - In-memory libSQL for tests\n- `getSwarmMailLibSQL()` - File-based libSQL for production\n- `createLibSQLAdapter()` - Low-level adapter\n\n### Migration Path\n- Legacy PGLite databases can be migrated via `migrate-pglite-to-libsql.ts`\n- Effect-TS durable primitives still need porting from PGLite to libSQL\n\n### Hive Tools Issue\nThe hive_* MCP tools are failing with \"no such column: stream\" error. This is NOT from the cursors table (that has correct schema). Need to trace the actual error source in the tool implementation.","created_at":"1766333931600.0","tags":"database-migration,libsql,pglite-deprecated,hive-tools,architecture"}
  {"id":"516a8144-80fc-4fdf-beb1-ab9a2a95ba36","information":"Swarm coordinator enforcement rules added to swarm.md: (1) CRITICAL section \"Coordinator Role Boundaries\" with explicit list of what coordinators DO (clarify, decompose, spawn, monitor, verify) and DO NOT (edit code, run tests, make quick fixes). (2) Sequential task pattern: spawn workers in order, await each before next - still get checkpointing, recovery, learning benefits. (3) Anti-patterns section with three examples: Mega-Coordinator (doing work inline), Sequential Work Without Workers, and \"Just This One Small Thing\". (4) Updated checklist with \"Coordinator did NOT edit any files\" and \"ALL subtasks spawned as workers\". Key insight from Event-Driven Microservices: \"orchestrator is responsible ONLY for orchestrating the business logic\".","created_at":"2025-12-18T00:31:38.099Z"}
+ {"id":"51a8fc37-f8e2-4626-b6fc-6fe3710d985a","information":"libSQL auto-migration module created for swarm-mail package. Key learnings:\n\n**Generated columns cannot be inserted:** libSQL's GENERATED columns (like `sequence INTEGER GENERATED ALWAYS AS (id) STORED`) throw SQLITE_ERROR if you try to INSERT into them. Solution: exclude generated columns from INSERT column list.\n\n**Dynamic schema detection required for graceful migration:** Old databases may have different schemas (missing columns, different types). Instead of hardcoding column lists, query source schema with `PRAGMA table_info(table_name)` and intersect with target columns. This allows migration to work even when source has subset of columns.\n\n**INSERT OR IGNORE rowsAffected check:** INSERT OR IGNORE silently succeeds even when row already exists (constraint violation). Check `result.rowsAffected > 0` to know if row was actually inserted vs skipped.\n\n**Global DB schema must exist before migration:** migrateProjectToGlobal() must create global DB schema with createLibSQLStreamsSchema() before calling migrateLibSQLToGlobal(), otherwise INSERT fails with \"no such table\".\n\n**Tables migrated (16 total):**\nStreams: events, agents, messages, message_recipients, reservations, cursors, locks\nHive: beads, bead_dependencies, bead_labels, bead_comments, blocked_beads_cache, dirty_beads\nLearning: eval_records, swarm_contexts, deferred\n\nModule location: packages/swarm-mail/src/streams/auto-migrate.ts\nTests: 13 passing, 624 LOC implementation, 270 LOC tests","created_at":"1766343789270.0","tags":"libsql,migration,schema-evolution,database,swarm-mail"}
+ {"id":"52a1adac-1c09-4028-97e4-61b5c71f16e1","information":"{\"id\":\"test-1766261760587-gumzslpkkb8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:16:00.587Z\",\"raw_value\":1}","created_at":"1766261760823.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:16:00.587Z\"}"}
  {"id":"5458bfe9-fc9d-4a1d-9373-18615a01cf86","information":"PGlite daemon crashes under heavy embedding load due to WASM memory constraints (~2GB limit). Root cause: unbounded WAL growth when processing many embeddings without checkpoints.\n\nSOLUTION: Gated batch processing with periodic checkpoints.\n\nImplementation in pdf-library:\n1. Created EmbeddingQueue service (src/services/EmbeddingQueue.ts) with:\n - processInBatches() - core primitive for gated processing\n - createEmbeddingProcessor() - high-level API with checkpoint callback\n - getAdaptiveBatchSize() - reduces batch size under memory pressure\n - DEFAULT_QUEUE_CONFIG: batchSize=50, concurrency=5, batchDelayMs=10\n\n2. Modified PDFLibrary.add() to process embeddings in batches:\n - Generate 50 embeddings at a time (not all at once)\n - Write batch to DB\n - CHECKPOINT after each batch (flushes WAL)\n - Small delay between batches for GC\n\nKey insight: The problem wasn't Ollama concurrency, it was WAL accumulation. Each embedding write adds to WAL, and without CHECKPOINT, WAL grows unbounded until WASM OOM.\n\nMemory math:\n- 1024-dim embedding = 4KB\n- 5000 embeddings = 20MB vectors\n- Plus WAL overhead = can exceed WASM limits\n- With batching: 50 embeddings = 200KB + checkpoint = bounded\n\nConfig options:\n- batchSize: 50 (lower = more checkpoints, less memory)\n- concurrency: 5 (Ollama parallelism within batch)\n- batchDelayMs: 10 (backpressure for GC)\n- checkpointAfterBatch: true (essential)\n- adaptiveBatchSize: true (reduces batch under memory pressure)","created_at":"2025-12-19T17:50:57.157Z","tags":"pglite,wasm,embedding,oom,checkpoint,backpressure,queue,daemon,memory"}
+ {"id":"5462691d-5632-4b78-8200-83e4bd68f94f","information":"swarm-mail convenience wrappers (getInbox, getMessage, appendEvent, etc.) had critical \"no such table\" bug. Root cause: wrappers auto-created adapters with createLibSQLAdapter() but never called createLibSQLStreamsSchema(). When users passed dbOverride (raw adapter) or when auto-creating, queries would fail with SQLITE_ERROR.\n\nFix: Created getOrCreateAdapter() utility in both projections-drizzle.ts and store-drizzle.ts that ALWAYS calls createLibSQLStreamsSchema(db) before returning adapter. This is idempotent (safe to call multiple times). All 9 convenience wrappers now use this utility.\n\nPattern: Any function that accepts dbOverride or auto-creates adapters MUST initialize schema. Never assume schema exists. The correct pattern used by getSwarmMailLibSQL() is:\n1. Create adapter\n2. Initialize schema (createLibSQLStreamsSchema)\n3. Return adapter\n\nAffects: getInbox, getMessage, getThreadMessages, getAgents, getAgent, getActiveReservations, checkConflicts, getEvalRecords, getEvalStats, appendEvent, readEvents, getLatestSequence.","created_at":"1766383542604.0","tags":"swarm-mail,libsql,schema-initialization,bug-fix,convenience-wrappers,drizzle"}
+ {"id":"5487a709-c38c-4d73-b3e6-a36c861c33f3","information":"## Drizzle Migration Pattern: Handling Column Renames (Not Just Missing/Wrong Type)\n\n**Problem:** `migrateDatabase()` in swarm-mail only handled missing columns and wrong types, not column renames. When cursors table changed from `stream_id TEXT PRIMARY KEY` (PGLite) to `stream TEXT + checkpoint TEXT` (libSQL), the migration added new columns but left the old `stream_id` column in place.\n\n**Root Cause:** Drizzle's `validateSchema()` checks for missing columns and type mismatches, but doesn't detect \"extra\" columns that should have been removed. When old schema has `stream_id` and new schema has `stream`, validation says \"`stream` is missing\" but doesn't say \"`stream_id` is extra\".\n\n**Solution Pattern:**\n1. Add special-case detection BEFORE standard validation\n2. Check for old column names that indicate legacy schema\n3. If old schema detected, DROP TABLE and recreate (only safe if data is ephemeral)\n4. For non-ephemeral data, would need ALTER TABLE RENAME COLUMN or data migration\n\n**Implementation:**\n```typescript\n// In migrateDatabase(), before standard validation:\nif (tableName === \"cursors\") {\n const needsCursorsMigration = await detectOldCursorsSchema(client);\n if (needsCursorsMigration) {\n await client.execute({ sql: `DROP TABLE cursors`, args: [] });\n await createTableFromSchema(client, tableName, tables);\n continue;\n }\n}\n\nasync function detectOldCursorsSchema(client: Client): Promise<boolean> {\n const columns = await client.execute(`PRAGMA table_xinfo(cursors)`);\n const columnNames = columns.rows.map(r => r.name as string);\n \n // Old schema has stream_id, new has stream + checkpoint\n const hasOldColumn = columnNames.includes(\"stream_id\");\n const hasNewColumns = columnNames.includes(\"stream\") && columnNames.includes(\"checkpoint\");\n \n return hasOldColumn && !hasNewColumns;\n}\n```\n\n**When to Use:**\n- Column renames across database migrations\n- Schema changes that add/remove/rename columns simultaneously\n- Ephemeral tables where DROP + CREATE is acceptable\n\n**When NOT to Use:**\n- Tables with important data (need data migration instead)\n- Production databases (use proper migration scripts)\n\n**Files:**\n- packages/swarm-mail/src/db/migrate.ts (migration logic)\n- packages/swarm-mail/src/db/migrate.test.ts (tests)\n- packages/swarm-mail/src/db/schema/streams.ts (schema definition)","created_at":"1766338291244.0","tags":"drizzle,migration,schema-changes,column-rename,libsql,sqlite"}
+ {"id":"550d8616-7064-4e45-ae86-63387526435a","information":"Drizzle ORM migration pattern for swarm-mail streams subsystem: When migrating from raw SQL to Drizzle ORM, create convenience wrapper functions that match old signatures. Pattern: (1) Drizzle functions take db SwarmDb as FIRST parameter, (2) Wrapper functions match old signature with dbOverride as LAST parameter, (3) Use dynamic import (await import) in wrappers to avoid circular dependencies, (4) Convert DatabaseAdapter to SwarmDb using toSwarmDb helper. This maintains backward compatibility - tests do not need changes. High-level functions (registerAgent, sendMessage) automatically use Drizzle through the wrappers.","created_at":"1766296542912.0","tags":"drizzle,migration,swarm-mail,testing"}
+ {"id":"556474e3-5398-46fd-9550-5f0744fcb198","information":"Walkthrough verification pattern for technical courses: Use .scratch/ directory as throwaway workspace for end-to-end lesson verification. Clone starter repos here, follow lessons step-by-step to verify code examples work on fresh clone. Directory should be gitignored. This enforces \"We Don't Ship Junk\" principle by testing lessons as students experience them. Example: .scratch/ai-sdk-walkthrough/ for verifying AI SDK course. Delete and recreate for each verification run to ensure clean environment.","created_at":"1766433551696.0","tags":"course-development,quality-assurance,verification,walkthrough,best-practices"}
  {"id":"56a594bf-f52e-4b28-9e8e-2a88c9745037","information":"TDD pattern for PGlite WAL auto-checkpoint during batch operations: \n1. Write failing tests first (getCheckpointInterval, shouldCheckpoint helpers)\n2. Implement minimal checkpoint interval logic (default 50 docs, configurable)\n3. Remove per-doc checkpoint from library.add() (wasteful for batch ops)\n4. Expose checkpoint() method on PDFLibrary service API\n5. Add checkpoint logic to batch ingest command (both TUI and console modes)\n6. Update TUI state to show checkpoint progress (checkpointInProgress, checkpointMessage, lastCheckpointAt fields)\n7. Use Effect.either() to handle checkpoint failures gracefully (log but continue)\n\nKey insight: Checkpointing every document adds 930MB WAL in real usage. Checkpointing every N documents (default 50) prevents WASM OOM while maintaining performance. Batch operations should own checkpointing, not individual operations.","created_at":"2025-12-19T17:28:31.265Z","tags":"tdd,pglite,wal,checkpoint,batch-operations,effect-ts"}
+ {"id":"571b2a05-aff5-493c-8db7-28dfadff501b","information":"{\"id\":\"test-1766259559212-clvgwqn44pc\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:39:19.212Z\",\"raw_value\":1}","created_at":"1766259559440.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:39:19.212Z\"}"}
+ {"id":"5776a7bb-00ca-4d6d-b82e-7216596da81c","information":"PostgreSQL ON CONFLICT clause must reference an actual unique constraint or exclusion constraint. In swarm_contexts table, migration v5 creates UNIQUE INDEX on (project_key, epic_id, bead_id), not on (id). Therefore ON CONFLICT (id) fails with \"no unique or exclusion constraint matching\". Fix: change to ON CONFLICT (project_key, epic_id, bead_id). Also: test queries must filter by ALL columns in the unique constraint when expecting single rows, otherwise queries span multiple projects and return unexpected counts.","created_at":"1766260293483.0","tags":"postgresql,upsert,on-conflict,unique-constraint,swarm-mail,testing"}
  {"id":"5822a985-22dd-4c52-aa57-3d048e376c1a","information":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:19.155Z\",\"updated_at\":\"2025-12-18T16:17:19.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:19.369Z","metadata":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"58a49e56-1ed4-421f-803c-b87908415356","information":"Built swarm-db CLI for analytics queries using TDD. Key learnings:\n\n1. **DatabaseAdapter returns QueryResult<T>**: The libSQL adapter's query() method returns `{ rows: T[] }`, not `T[]` directly. Always access result.rows, not result itself.\n\n2. **Query function type inference issue**: TypeScript incorrectly infers analytics query functions as `AnalyticsQuery & { buildQuery?: ... }` instead of function types. Use `as any` with biome-ignore comment when mapping command names to query functions.\n\n3. **CLI structure for analytics**: 3-tier command structure works well:\n - query <sql>: raw SQL (validated, max 1000 rows)\n - analytics <command>: pre-built queries with filters\n - list: discovery of available commands\n\n4. **Time range parsing pattern**: Regex `^(\\d+)(d|h|m)$` with switch on unit. Store as Date, not string.\n\n5. **Formatter integration**: Analytics formatters (table/json/csv/jsonl) accept QueryResult with columns/rows/rowCount/executionTimeMs. Execution time measured in CLI layer, not query layer.\n\n6. **Testing strategy**: Unit test validation/parsing logic, integration test CLI with in-memory DB (`:memory:`). Manual testing via bash script catches edge cases.\n\nFile locations:\n- packages/swarm-mail/bin/swarm-db.ts (entry point, shebang, parseArgs)\n- packages/swarm-mail/src/cli/db.ts (implementations)\n- packages/swarm-mail/src/cli/db.test.ts (19 tests)\n- package.json bin entry: \"swarm-db\": \"./bin/swarm-db.ts\"","created_at":"1766434650307.0","tags":"cli,analytics,tdd,libsql,swarm-db,typescript"}
+ {"id":"59429c7c-7ba1-49f6-933c-2a13e9fbb4b3","information":"Research phase integration testing pattern: Test each layer independently (tool discovery, lockfile parsing, prompt generation), then test integration between layers (runResearchPhase orchestrates all pieces). Use real repo as fixture for realistic testing. Key insight: extractTechStack returns normalized names (\"next\" not \"next.js\") - tests must match actual TECH_PATTERNS implementation. ResearchResult returns { tech_stack, summaries, memory_ids } not installed_versions.","created_at":"1766517197167.0","tags":"testing,integration-tests,research-phase,swarm,patterns"}
+ {"id":"598d9dbe-997f-4508-b29f-b5420cbe1631","information":"{\"id\":\"test-1766260239804-rg13d19mfd\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:50:39.804Z\",\"raw_value\":1}","created_at":"1766260240028.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:50:39.804Z\"}"}
  {"id":"5a7064a2-2a11-44e5-a1c9-455c4b30e18d","information":"ADR writing pattern for swarm plugin: Structure follows Context → Decision → Consequences → Implementation Notes → Alternatives Considered → References → Success Criteria. Key elements: (1) Context section must articulate current pain points with concrete examples, not just abstractions. (2) Decision section shows actual code/JSON structures, not just prose descriptions. (3) Consequences split into Positive/Negative/Neutral with specific tradeoffs. (4) Implementation phases are numbered and actionable. (5) Alternatives Considered documents rejected approaches with reasoning. (6) References link to inspirations and related ADRs. Format creates forcing function for clear thinking - if you can't fill in all sections cleanly, decision may not be ready. Used successfully for ADR-001 (monorepo), ADR-007 (worktree isolation), and ADR-008 (worker handoff protocol).","created_at":"2025-12-18T17:26:05.386Z","tags":"adr,architecture-decision-records,documentation,swarm-plugin,system-design"}
  {"id":"5afe465e-ef42-4240-aa44-136967baf239","information":"CLI flag pattern for conditional output formatting: Use boolean flag (e.g., --expand) parsed via custom parseArgs function. Store flag state (const expand = opts.expand === true), then use ternary operator for conditional content: const preview = expand ? fullContent : truncatedContent. This allows backward-compatible feature addition without breaking default behavior. Applied in semantic-memory CLI to toggle between truncated (60 chars) and full content display.","created_at":"2025-12-18T17:01:12.075Z","tags":"cli,typescript,bun,flags,conditional-output,backward-compatibility"}
  {"id":"5b117709-6a91-4237-a532-0f08909da9f7","information":"Kent C. Dodds Unified Accounts Use Case (Dec 2024) - Driving requirement for @badass auth architecture. Kent has EpicAI.pro, EpicWeb.dev, EpicReact.dev on different TLDs sharing a database. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Solution: epicweb.dev is the \"hive\" site for auth, other sites are \"spokes\" that redirect there. Workshop App uses device flow (RFC 8628) to authenticate against the hive. This validates hive+spoke model and device flow as core requirements.","created_at":"2025-12-18T15:42:16.703Z"}
  {"id":"5d2404b8-3635-42a2-bd63-ae623aba2a62","information":"@badass Auth Architecture Decision (Dec 2024): Creators with multiple sites MUST designate a central \"hive\" site for auth. For Kent, epicweb.dev is the hive - all auth flows redirect there. Other sites (epicreact.dev, epicai.pro) are \"spoke\" sites that trust the hive. This is a REQUIREMENT, not optional. Simplifies cross-domain SSO - standard OAuth/OIDC pattern where hive is the IdP. Spoke sites redirect to hive for login, receive tokens back. Shared database means session/user data is already unified, just need the auth handshake.","created_at":"2025-12-18T15:39:52.225Z"}
+ {"id":"5d4cccf4-0638-4c6c-8489-152f89c04f87","information":"Atomic File Writes Pattern: For crash-safe state persistence: 1) Create temp file in SAME directory (atomic rename requires same filesystem), 2) Write content to temp file, 3) sync to flush buffers, 4) chmod permissions, 5) mv -f temp to final (POSIX guarantees atomicity), 6) sync directory entry. Prevents state corruption on SSH disconnect or crash. Use for: swarm state, hive issues.jsonl, any file that must survive interruption. Source: Dicklesworthstone/agentic_coding_flywheel_setup state.sh:193-290","created_at":"1766591013349.0","tags":"persistence,atomic,crash-safe,state,patterns,acfs"}
  {"id":"5d871dd3-e45a-4237-8d79-12e568949c91","information":"AI SDK v6 Runtime Identity Pattern: Use callOptionsSchema with Zod to define type-safe per-request context (userId, tier, permissions). Implement prepareCall function that receives typed options and returns config overrides (tools, instructions, model, temperature). This enables tier-based feature gating, region-specific compliance, A/B testing, dynamic model selection. Key: prepareCall runs on EVERY invocation - keep it fast, avoid async DB lookups, use in-memory cache or extract from headers/JWT. In tier-one app: free (queryFAQ only), pro (adds searchDocs), enterprise (adds askV0). Always include respondToTicketTool for structured exit. Console.log in prepareCall provides observability.","created_at":"2025-12-16T21:12:38.912Z","tags":"ai-sdk,ai-sdk-v6,runtime-identity,callOptionsSchema,prepareCall,tier-filtering,tool-gating"}
+ {"id":"5da9c7c6-ad9b-4747-a9f1-8e47022227bc","information":"PDF Brain config CLI implementation pattern: For nested config access (e.g., \"embedding.model\"), use path.split(\".\") and navigate object tree iteratively. Type coercion critical for boolean/number values from string CLI args: check typeof oldValue to determine how to parse newValue. loadConfig() creates config.json with defaults if missing (good UX). Always show note about API keys in env vars when displaying config - users need to know keys aren't stored in JSON.","created_at":"1766261053511.0","tags":"cli,config,nested-paths,type-coercion,pdf-brain"}
+ {"id":"5e6a594e-adf0-424d-9490-847233492ae2","information":"D3 force simulation improvements for graph clustering visualization: Implemented 5-part enhancement strategy for better cluster discovery UX. (1) Custom cluster force: pulls nodes toward cluster centroids with strength 0.2, requires updating centroids on each tick via clusterResult.clusterCentroids. (2) Radial layout: forceRadial pushes concepts (radius=0) toward center, documents (radius=400) toward periphery with strength 0.1. (3) Link strength by relationship type: broader=0.7 (tight hierarchy), has_concept=0.2 (loose many-to-many tagging), related=0.4 (medium). (4) Weakened rigid centering: reduced forceX/forceY strength from 0.015 to 0.005 to avoid fighting natural clustering. (5) Slower alpha decay: alphaDecay from 0.015 to 0.008, alphaMin from 0.005 to 0.001 for better equilibrium settling. Result: STRONG visual cluster separation with clear boundaries, concepts centralized, hierarchies tight, exploration-friendly neighborhoods. Pattern used in pdf-brain-viewer force graph.","created_at":"1766347782107.0","tags":"d3,force-simulation,clustering,graph-visualization,radial-layout,link-strength"}
+ {"id":"5f688547-d56f-4951-9ad9-69e7ddf60590","information":"{\"id\":\"pattern-1766297016224-adki3f\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:03:36.224Z\",\"updated_at\":\"2025-12-21T06:03:36.224Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766297016462.0","metadata":"{\"id\":\"pattern-1766297016224-adki3f\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"5faca7a3-eefb-44bd-affb-3140d367c748","information":"PGlite daemon initialization pattern: After creating PGlite instance and calling waitReady, MUST initialize schema (CREATE TABLE IF NOT EXISTS) before starting socket server. Without schema init, daemon starts successfully but all database operations fail with \"relation does not exist\" errors. DatabaseClient connects to daemon socket but finds empty database. Schema initialization code should mirror Database.ts DirectDatabaseLive implementation exactly to ensure consistency between daemon and direct modes.","created_at":"2025-12-19T15:18:58.912Z","tags":"pglite,daemon,schema-initialization,database,socket-server"}
+ {"id":"6065c202-d39d-4a9c-a578-cbcc52e8f3b9","information":"{\"id\":\"test-1766263853476-920c0xetj4e\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:50:53.476Z\",\"raw_value\":1}","created_at":"1766263853706.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:50:53.476Z\"}"}
  {"id":"61b3acf6-2eaa-4670-b17d-401634a0e41e","information":"@badass Video Pipeline Extraction Plan (Dec 2024): Extract from @coursebuilder/core to @badass/video.\n\n**Files to Extract:**\n- packages/core/src/schemas/video-resource.ts - VideoResource schema\n- packages/core/src/schemas/mux.ts - Mux API response schemas\n- packages/core/src/lib/mux.ts:1-142 - Mux API client\n- packages/core/src/providers/deepgram.ts:1-200 - Transcription provider\n- packages/core/src/inngest/video-processing/functions/* - All Inngest functions\n- packages/core/src/inngest/video-processing/events/* - All event definitions\n- packages/core/src/inngest/video-processing/utils.ts - Mux thumbnail generation\n\n**Architecture:**\n- VideoResource is a ContentResource type (not embedded in posts)\n- Upload triggers Inngest job\n- Mux processes video\n- Deepgram transcribes\n- Webhooks update VideoResource with asset ID, playback info, transcript, SRT\n\n**API Design:**\nconst video = createVideoProcessor({ storage: mux, transcription: deepgram, jobs: inngest })\nawait video.process(uploadUrl) // Returns VideoResource ID","created_at":"2025-12-18T15:57:51.555Z"}
+ {"id":"6298607d-7d0d-4aaa-8ece-f53a208edfb9","information":"Effect-based SQLite retry pattern: Created withSqliteRetry() utility in swarm-mail/src/db/retry.ts following the pattern from lock.ts and ollama.ts. Key implementation detail: Use Effect.catchAllDefect() BEFORE Effect.retry() to convert defects (thrown exceptions) into failures that retry logic can handle. Without this, Effect.sync(() => throw error) creates a \"Die\" defect that bypasses retry. Retryable errors: SQLITE_BUSY, SQLITE_LOCKED. Non-retryable: SQLITE_CONSTRAINT, SQLITE_MISMATCH. Schedule: exponential(\"100 millis\").pipe(Schedule.compose(Schedule.recurs(3))) = 100ms, 200ms, 400ms, then fail. Exported from swarm-mail package for use in adapter write operations.","created_at":"1766592267105.0","metadata":"{\"module\":\"swarm-mail\",\"pattern\":\"effect-retry\",\"project\":\"opencode-swarm-plugin\",\"technology\":\"effect-ts,sqlite\"}"}
+ {"id":"62f31790-3897-4543-80e7-cf8a66061ece","information":"{\"id\":\"pattern-1766263088654-o2004a\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:38:08.654Z\",\"updated_at\":\"2025-12-20T20:38:08.654Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263088902.0","metadata":"{\"id\":\"pattern-1766263088654-o2004a\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"632a5d9d-c85f-4f2a-9e2a-28d348f30c0d","information":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:30.591Z\",\"updated_at\":\"2025-12-18T16:17:30.591Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:30.812Z","metadata":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"63fa903d-a63d-4c10-98ce-4d3aed8dad3b","information":"{\"id\":\"test-1765678583954-hm5prpbn31i\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T02:16:23.954Z\",\"raw_value\":1}","created_at":"2025-12-14T02:16:24.154Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T02:16:23.954Z\"}"}
  {"id":"647f6901-730d-49f0-9ed5-c9b97cf40319","information":"{\"id\":\"pattern-1765386363018-cqs6f7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:06:03.017Z\",\"updated_at\":\"2025-12-10T17:06:03.017Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:06:03.211Z","metadata":"{\"id\":\"pattern-1765386363018-cqs6f7\",\"kind\":\"pattern\",\"is_negative\":false}"}
@@ -95,127 +193,289 @@
  {"id":"657322ff-9f27-4d0d-a763-157a141b5741","information":"Swarm Enhancement Plan (ADR-007): Integrating patterns from nexxeln/opencode-config\n\nKey features to add:\n1. **Optional Worktree Isolation** - `swarm_init(isolation=\"worktree\")` for large refactors. Each worker gets isolated git worktree, cherry-pick commits back on completion. Overkill for most tasks, but perfect for big refactors.\n\n2. **Structured Review Step** - Coordinator reviews worker output before marking complete. Review prompt includes epic goal, task requirements, dependency context, downstream context. Max 3 review attempts before task fails. UBS scan still runs as additional safety.\n\n3. **Retry Options on Abort** - `/swarm --retry` (same plan), `/swarm --retry --edit` (modify plan), fresh start. Requires persisting session state (already have via Hive).\n\nDecision: Coordinator does review (not separate reviewer agent) because coordinator already has epic context loaded, avoids spawning another agent, keeps feedback loop tight.\n\nSkipped: Staged changes on finalize (our flow already has explicit commit step).\n\nEpic: bd-lf2p4u-mjaja96b9da\nCredit: Patterns from https://github.com/nexxeln/opencode-config","created_at":"2025-12-17T21:40:05.334Z"}
  {"id":"663b7198-ff84-42f5-9883-13e4f2d90b90","information":"{\"id\":\"test-1765386508116-mzoi3mqss5\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:08:28.116Z\",\"raw_value\":1}","created_at":"2025-12-10T17:08:28.300Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:08:28.116Z\"}"}
  {"id":"66c33f4a-e504-4601-bf36-7cafcc5c745c","information":"SWARM-MAIL ADAPTER PATTERN DECISION (Dec 2024): Extracting swarm-mail as standalone package using adapter pattern from coursebuilder. Key design: 1) DatabaseAdapter interface abstracts SQL operations (query, exec, transaction), 2) SwarmMailAdapter interface defines all swarm-mail operations, 3) createSwarmMailAdapter(db) factory accepts injected database, 4) PGLite convenience layer provides getSwarmMail() singleton for simple usage. Benefits: portable (works with PGLite, Postgres, Turso), testable (inject in-memory), shareable (one db across consumers), decoupled (swarm-mail doesn't own db lifecycle). Pattern learned from github.com/badass-courses/course-builder/tree/main/packages/adapter-drizzle which uses table function injection for multi-tenant prefixing.","created_at":"2025-12-14T23:57:56.403Z"}
+ {"id":"66cc17da-4bf2-4094-aca0-21e0682d7106","information":"AI SDK v6 Section 1 Fundamentals Validation 2025-12-22: Found 6 critical issues, 2 HIGH PRIORITY BLOCKERS.\n\n**BLOCKER #1 (Lesson 05)**: Lines 111 & 129 both use 'openai/gpt-5-mini' - defeats entire lesson purpose. Fast vs Reasoning comparison uses SAME MODEL for both examples. Students cannot experience timing difference. Should be: gpt-5-mini (fast) vs gpt-5.1 or o3 (reasoning).\n\n**BLOCKER #2 (Lesson 04 line 144)**: References 'openai/gpt-5-nano' which DOES NOT EXIST. Will cause runtime error. Should remove or replace with 'openai/gpt-5-mini'.\n\n**Other Issues**: 4 instances of 'gpt-5' should be 'gpt-5.1' (Lesson 02 line 182, Lesson 04 lines 50, 132, 142). 1 instance of 'gpt-5-mini-mini' should be 'gpt-4.1-mini' (Lesson 04 line 145).\n\n**Correct v6 patterns validated**: generateText destructuring { text }, Output.object() usage, import from 'ai' package, all correct.\n\n**Environment tested**: .scratch/fundamentals-validation workspace, pnpm install succeeded, all referenced files exist (extraction.ts, essay.txt, env-check.ts), package.json scripts validated.\n\n**Previous cells filed but NOT fixed**: cell-is13o5-mji2yj856tl, cell-is13o5-mji2ym6ttkx, cell-is13o5-mji2zh5ndeq still open with same issues.","created_at":"1766468942258.0","tags":"ai-sdk-v6,section-1,fundamentals,validation,lesson-05-blocker,model-naming,gpt-5-nano-bug"}
  {"id":"67453dce-ee7c-4102-acf6-ccf279264b32","information":"@badass Database Sharing Decision (Dec 2024): Creator-level database sharing enabled. Sites owned by same creator CAN share a database (like Kent's epic-web + epic-react in course-builder). Enables cross-site features: unified purchases, shared content library, single user identity per creator. Mux/Inngest/Stripe always per-site isolated. Adapter pattern must support both isolated and shared DB scenarios via site config.","created_at":"2025-12-18T15:30:13.232Z"}
  {"id":"67a8d3fd-7e06-40c8-b13b-0606f032ee0a","information":"Lesson polish pattern for technical course content: Always verify Fast Track presence (3 quick steps to get basics working), ensure real output examples in Try It sections (actual terminal logs and JSON responses, not placeholders), standardize section headers (Project Prompt not Hands-On Exercise, Done-When not Done). Common issues found: missing Fast Track (~40% of lessons), placeholder outputs instead of real examples (~30%), inconsistent section naming (~20%). For rubric scoring: Fast Track absence drops Progressive Disclosure score (-0.5), missing real outputs drops Practical Implementation (-0.5). Quick fix: read tier-one implementation first to get actual outputs, then add Fast Track based on solution key steps. Target: 8.0+ overall, 9.0+ for polished lessons.","created_at":"2025-12-16T21:43:36.151Z","metadata":"{\"topic\":\"lesson-authoring\",\"pattern\":\"polish\",\"quality\":\"rubric-scoring\"}"}
+ {"id":"67e4b962-cf1f-428d-9856-48833f4bf688","information":"## Session Context: PGLite to libSQL Migration (Dec 21, 2025)\n\n### Epic: Remove PGLite, Port Effect Primitives to libSQL\nEpic ID: opencode-swarm-monorepo-lf2p4u-mjfxpg2p165\n\n### Completed Tasks:\n1. **DurableLock** - Ported to DatabaseAdapter, 16 tests passing\n2. **DurableDeferred** - Ported to DatabaseAdapter, 11 tests passing \n3. **DurableCursor** - Ported to DatabaseAdapter, 9 tests passing\n4. **DurableMailbox + ask pattern** - Ported to DatabaseAdapter, 10 tests passing\n5. **Removed PGLite from streams/index.ts** - Removed getDatabase(), instance management, exports\n\n### Key Schema Change:\nCursors table changed from:\n```sql\n-- OLD (PGLite)\nCREATE TABLE cursors (stream_id TEXT PRIMARY KEY, position INTEGER, updated_at INTEGER)\n\n-- NEW (libSQL) \nCREATE TABLE cursors (\n id INTEGER PRIMARY KEY AUTOINCREMENT,\n stream TEXT NOT NULL,\n checkpoint TEXT NOT NULL,\n position INTEGER NOT NULL DEFAULT 0,\n updated_at INTEGER NOT NULL,\n UNIQUE(stream, checkpoint)\n)\n```\n\n### Migration Logic Added:\nIn libsql-schema.ts, added detection of old schema and auto-migration:\n- Check if cursors table has stream_id column\n- If old schema detected, DROP TABLE and recreate with new schema\n- Use PRAGMA table_xinfo (not table_info) to see generated columns\n\n### Pattern for Effect Primitives:\nAll primitives now follow this pattern:\n- Add `db: DatabaseAdapter` to config interface (required, not optional)\n- Use `await db.exec()` for DDL and writes\n- Use `await db.query<T>()` for reads with `?` placeholders\n- Ensure table exists with CREATE TABLE IF NOT EXISTS\n- Tests use `createInMemorySwarmMailLibSQL(testId)` for in-memory DB\n\n### Files Modified:\n- streams/effect/lock.ts, lock.test.ts\n- streams/effect/deferred.ts, deferred.test.ts\n- streams/effect/cursor.ts, cursor.integration-test.ts\n- streams/effect/mailbox.ts, mailbox.test.ts\n- streams/effect/ask.ts, ask.integration-test.ts\n- streams/index.ts (removed PGLite exports)\n- streams/libsql-schema.ts (added cursor migration)\n- db/schema/streams.ts (updated cursorsTable schema)\n\n### Remaining Work:\n- Fix remaining 4 test failures (unknown which tests)\n- Task 5: Remove PGLite from streams/index.ts exports (in progress)\n- Task 6: Integrate DurableLock into swarm file reservations\n- Task 7: Integrate DurableDeferred into swarm task completion\n\n### Related Bugs Filed:\n- opencode-swarm-monorepo-lf2p4u-mjfzgw9c7gd: SQLITE_ERROR no such column: stream in hive_create_epic\n\n### Branch: feat/drizzle-migration-and-tests","created_at":"1766337429683.0","tags":"pglite-removal,libsql-migration,effect-primitives,session-context,swarm-mail,schema-migration"}
  {"id":"68a25df0-9dd0-4483-b64e-0103f574a5c2","information":"Test memory after migration fix","created_at":"2025-12-09T18:25:30.759Z","tags":"test"}
+ {"id":"68b16262-7e72-4198-bc42-99ed5a5e8d09","information":"{\"id\":\"test-1766296936166-x0pmm8ur62d\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T06:02:16.166Z\",\"raw_value\":1}","created_at":"1766296936371.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T06:02:16.166Z\"}"}
+ {"id":"68eddc6f-c892-49df-9380-539339066673","information":"{\"id\":\"pattern-1766260867325-o1cwfl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:01:07.325Z\",\"updated_at\":\"2025-12-20T20:01:07.325Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260867640.0","metadata":"{\"id\":\"pattern-1766260867325-o1cwfl\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"69ca8677-c1bf-44c9-b4bf-f7fb08d2b1c0","information":"Integration test pattern for OpenCode plugin tools: Call tool.execute() directly with mock ToolContext to test complete flow. Focus on happy paths and real-world workflows, not exhaustive field validation (unit tests cover that). Key learnings: (1) Tool output structure differs from storage layer - check actual return JSON not internal types. (2) For mandate tools, mandate_file returns { success, mandate, message }, mandate_vote returns { success, vote, promotion }, mandate_query/list return { count, results }. (3) Use InMemoryMandateStorage/createInMemorySwarmMail for isolation. (4) Integration tests verify tools work end-to-end, unit tests verify implementation details.","created_at":"1766295120592.0","tags":"testing,integration-tests,opencode-plugin,tdd"}
+ {"id":"6a8feca2-6dfd-4ec6-bc3c-7f0b603594d9","information":"{\"id\":\"test-1766262134969-m7gzvr176qq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:22:14.969Z\",\"raw_value\":1}","created_at":"1766262135220.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:22:14.969Z\"}"}
+ {"id":"6a92690a-72ec-4c30-b79c-d1b6d60b35f1","information":"{\"id\":\"test-1766349591225-em81o0nl1jf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:39:51.225Z\",\"raw_value\":1}","created_at":"1766349591468.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:39:51.225Z\"}"}
+ {"id":"6ac8457e-e26e-481d-a51c-cfeeff54c151","information":"Tech stack extraction for swarm research phase: Use regex patterns to detect common frameworks/libraries in task descriptions. Patterns should match case-insensitively and handle variations (e.g., 'Next.js', 'nextjs', 'next'). Return normalized lowercase names. Deduplicate using Set. Fast pattern: /next\\.?js|nextjs/i for Next.js, /react(?!ive)/i for React (negative lookahead prevents matching 'reactive'). Store patterns in TECH_PATTERNS map for easy extension.","created_at":"1766516842895.0"}
+ {"id":"6af70186-7cbf-42dc-91bb-2420dda1a2d2","information":"{\"id\":\"pattern-1766260844892-qlihj4\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:00:44.892Z\",\"updated_at\":\"2025-12-20T20:00:44.892Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260845104.0","metadata":"{\"id\":\"pattern-1766260844892-qlihj4\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"6b335dab-3622-4a9a-a9a3-7464ae60a6e4","information":"{\"id\":\"test-1766265063306-07dckj8yk1gp\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T21:11:03.306Z\",\"raw_value\":1}","created_at":"1766265063516.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T21:11:03.306Z\"}"}
  {"id":"6b6b00c9-540b-4ef6-a908-47048d9589d1","information":"Cross-domain SSO architecture insight: Kent's use case (EpicAI.pro, EpicWeb.dev, EpicReact.dev) requires unified identity across different TLDs. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Current course-builder uses NextAuth.js per-site. Solution requires either: (1) Shared auth database with cross-domain session tokens, (2) Central identity provider (IdP) that all sites trust, or (3) Token exchange protocol between sites. BetterAuth may have better cross-domain support than NextAuth. Key constraint: different domains means cookies don't share - need explicit SSO flow.","created_at":"2025-12-18T15:32:50.696Z"}
+ {"id":"6b75df69-f6b5-4f91-84d4-c91dafcd29d0","information":"Documentation Pass Plan (Comprehensive - Full Sweep):\n\nEPIC: Comprehensive Documentation Pass\nScope: READMEs, web docs, code comments, AGENTS.md\nApproach: Code is truth, verify against implementations, focus on recent PGLite→libSQL migration\n\nSubtasks (file-based strategy):\n1. Update swarm-mail package README - libSQL storage, getSwarmMailLibSQL, createLibSQLAdapter, createMemoryAdapter signature, architecture diagram\n2. Update swarm-mail JSDoc and code comments - scan src/**/*.ts for PGLite/deprecated API references\n3. Update opencode-swarm-plugin README - tool names, APIs, storage references\n4. Update web docs - swarm-mail section (apps/web/content/docs/packages/swarm-mail/*.mdx) - depends on #1\n5. Update web docs - opencode-plugin section (apps/web/content/docs/packages/opencode-plugin/*.mdx) - depends on #3\n6. Update root README and AGENTS.md - storage refs, tool names, workflows - depends on #1, #3\n\nKey API changes to verify:\n- getSwarmMail → getSwarmMailLibSQL (deprecated)\n- createMemoryAdapter signature changed\n- PGLite references should be libSQL\n- Storage architecture diagrams need updating\n\nSemantic memory findings to incorporate:\n- PGlite database existence check patterns changed\n- LibSQL vector search requires explicit vector index\n- createMemoryAdapter signature changed in opencode-swarm-plugin\n\nFix docs + minor code issues, file beads for larger issues found.","created_at":"1766279661875.0","tags":"documentation,planning,swarm,libsql,migration,epic"}
  {"id":"6c7021dc-8b8f-4497-92c7-9693e04c42a0","information":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T19:53:03.777Z\",\"updated_at\":\"2025-12-17T19:53:03.777Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T19:53:04.822Z","metadata":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"6c93e56b-b3f6-4f5c-9d37-8e702fad2a0d","information":"HDBSCAN scaling bottleneck for 500k embeddings: Core issue is O(n²) distance matrix requirement. For 500k points × 1024 dims: 125 billion distance calculations, ~1TB RAM for dense matrix, ~35 hours compute time at 1μs/distance. The naive vis-utils JS implementation (github.com/rivulet-zhang/vis-utils) confirms this - it precomputes the full cachedDist matrix in mst.js precomputeDist() function using nested loops. SOLUTION: Leverage existing HNSW index (embeddings_idx in libSQL) for approximate k-NN queries. HNSW provides O(log n) queries vs O(n) brute force, reducing total complexity from O(n²) to O(n log n). For pdf-library: Use vector_top_k() queries to compute core distances, extract neighbor graph from HNSW (each point queries k=16 neighbors), then run agglomerative clustering on sparse graph instead of full MST. Memory drops from 1TB to ~100MB (64MB graph + 40MB dendrogram). Time drops from hours to ~11min for 3-level hierarchy. Key insight: Don't use HDBSCAN library - steal the concepts (hierarchical dendrogram, noise filtering, density-based clustering) and adapt to HNSW infrastructure we already have.","created_at":"1766426001603.0","tags":"hdbscan,clustering,scalability,hnsw,approximate-nearest-neighbor,500k-scale,distance-matrix,O(n²),performance,embeddings"}
+ {"id":"6d15ae24-2e07-4e22-bf23-dd846e900428","information":"Test isolation fix for .hive/ pollution: Tests MUST use `tmpdir()` from `node:os` instead of relative paths or hardcoded `/tmp/`. Pattern: `const TEST_DIR = join(tmpdir(), \\`test-name-${Date.now()}\\`)`. **Root cause**: memory/sync.test.ts was using `join(import.meta.dir, \".test-memory-sync\")` which created test directories in the source tree, polluting the repo. hive.integration.test.ts was using hardcoded `/tmp/` which works on Unix but fails on Windows. Always use `tmpdir()` for cross-platform temp directory handling. **Verification**: Run tests, check `git status .hive/` is clean, and `find packages -type d -name \".test-*\"` returns nothing.","created_at":"1766422059365.0","tags":"testing,test-isolation,tmpdir,hive,cross-platform"}
  {"id":"70613071-8231-49a5-bdca-a9b9f7e9c53c","information":"{\"id\":\"pattern-1765386530615-riuu0i\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:08:50.615Z\",\"updated_at\":\"2025-12-10T17:08:50.615Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:08:50.799Z","metadata":"{\"id\":\"pattern-1765386530615-riuu0i\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"7102b6c2-0338-48a2-b3be-6b263057a4ab","information":"SSE streaming with Bun.serve() requires sending initial data to flush headers. When using ReadableStream for SSE, if no existing events are available to send immediately, the client's fetch() will hang waiting for the first byte. Fix: Send an SSE comment (`: connected\\n\\n`) at the start of the stream to establish the connection. This is standard SSE practice - comments (lines starting with `:`) are ignored by clients but flush the response headers.","created_at":"1766597178157.0","tags":"bun,sse,server-sent-events,streaming,http,fetch,readablestream"}
  {"id":"713d8d68-90fa-4b2f-9ea0-5b06a0e6e50c","information":"{\"id\":\"test-1765771061095-2yd4dw3psvh\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:41.095Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:41.455Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:41.095Z\"}"}
  {"id":"7189cf77-2ceb-47c4-a354-0dc493876ded","information":"{\"id\":\"test-1765771127882-pdmhpieixbg\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:47.882Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:48.290Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:47.882Z\"}"}
  {"id":"71db34a5-29be-4431-98a9-e6a1e9416c8e","information":"PGlite WAL accumulation prevention pattern: Added `doctor` command to CLI that checks WAL file count and size (thresholds: 50 files OR 50MB). Also added graceful shutdown handlers (SIGINT, SIGTERM) that run CHECKPOINT before exit. Critical for MCP tool invocations which are separate processes that may not cleanly close database. Without these, WAL files accumulate over days causing WASM memory exhaustion (930 WAL files = 930MB crashed PGlite). Doctor command uses assessWALHealth() helper to warn users and suggest export/reimport. Shutdown handlers use dynamic import to avoid circular deps and check if DB exists before checkpointing.","created_at":"2025-12-19T04:03:22.627Z","tags":"pglite,wal,checkpoint,cli,graceful-shutdown,mcp,wasm-memory,prevention-pattern"}
  {"id":"729c2510-6ae1-4701-ba06-5faef13ec1f2","information":"postgres.js DatabaseAdapter wrapper pattern: postgres.js uses tagged template literals for queries (sql`SELECT...`) but DatabaseAdapter expects (sql, params) signature. Key implementation details: 1) Use sql.unsafe(sqlString, params) for raw SQL with parameters. 2) postgres.js returns Row[] directly (not wrapped in {rows:[]}), so wrap result: {rows: await sql.unsafe(...)}. 3) Type assertion needed: (await sql.unsafe(...)) as unknown as T[] because postgres.js unsafe returns Row[] but we need T[]. 4) Transaction support: sql.begin() callback receives TransactionSql that behaves like sql, wrap it recursively with wrapPostgres(). 5) sql.begin() returns Promise<UnwrapPromiseArray<T>>, need type assertion: result as T. 6) Factory pattern: createSocketAdapter validates options (either path OR host+port, not both), creates postgres client, validates with ping query, wraps and returns. 7) External postgres in build config to avoid bundling. Successfully implemented for swarm-mail socket adapter.","created_at":"2025-12-17T17:54:54.552Z"}
+ {"id":"72ea1de7-fa6e-4c40-b641-d9b40e86772c","information":"npm registry API for latest versions: Use https://registry.npmjs.org/{package}/latest endpoint. Returns JSON with version field. Works for scoped packages (@types/node). Graceful handling: return undefined on 404 or network errors - don't throw. Used in swarm-research.ts for optional upgrade checking when checkUpgrades=true parameter passed. Performance consideration: Promise.all for parallel fetches when checking multiple packages.","created_at":"1766517242442.0","tags":"npm,registry,api,versions,upgrades,swarm-research,network-resilience"}
  {"id":"738be6d8-6f06-45b5-9e48-f78c0689af64","information":"{\"id\":\"test-1765653641690-8bz4qvel2p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:20:41.690Z\",\"raw_value\":1}","created_at":"2025-12-13T19:20:41.892Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:20:41.690Z\"}"}
+ {"id":"7399bf68-936c-4129-bb20-dd9d332ddb1d","information":"{\"id\":\"pattern-1766263207762-zbob2h\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:40:07.762Z\",\"updated_at\":\"2025-12-20T20:40:07.762Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263208060.0","metadata":"{\"id\":\"pattern-1766263207762-zbob2h\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"73a330d8-15ea-4ea6-80cf-9a9bdf82ae6b","information":"Integration tests should always use isolated collections to prevent test pollution. Best pattern discovered:\n\nFor semantic-memory tests:\n- Use unique collection names with timestamps in beforeEach\n- Example: test-feedback-${testSuite}-${Date.now()}\n- Always cleanup with storage.close() in afterEach\n\nFor database tests (PGLite/streams):\n- Use unique temp paths with timestamps and UUIDs\n- Example: /tmp/test-${testSuite}-${Date.now()}-${randomUUID()}\n- Always cleanup with closeDatabase() and rm -rf in afterEach\n\nWHY: Without isolation tests can interfere with each other causing flaky failures. Each test needs its own collection/database that gets cleaned up after the test runs.","created_at":"2025-12-14T22:36:54.874Z"}
+ {"id":"751c12a5-7ce2-4ad7-b91f-31e0f54ed076","information":"Svelte 5 component pattern for GraphControls: Used $props() rune for reactive props, defined TypeScript interfaces inline, used Catppuccin color palette via CSS custom properties with fallbacks. Component is purely presentational - takes features object, zoomLevel number, and onToggle callback. Used {#each} over const array with 'as const' assertion for type safety. Positioned absolutely with z-index 100 to float over canvas. Key insight: CSS custom properties (var(--cat-*)) don't need imports in script - they're runtime values.","created_at":"1766343278132.0","tags":"svelte,svelte5,components,typescript,catppuccin,ui,props"}
  {"id":"753a6005-3ecb-4bae-bbd0-bd38cfb2ab55","information":"Lite model support implementation pattern: Add model selection based on file types to optimize swarm costs. Key learnings: (1) File-type inference is simple but effective - all .md/.mdx or all .test./.spec. files use lite model, (2) Priority system works well: explicit override > file inference > default, (3) Integration point is swarm_spawn_subtask which returns recommended_model in metadata for coordinator to use with Task(), (4) Used dynamic import for selectWorkerModel to avoid circular dependencies, (5) Added risks: [] to mock subtask to satisfy DecomposedSubtask schema. Pattern applies to any swarm optimization where different task types have different resource needs.","created_at":"2025-12-19T00:31:23.462Z","tags":"swarm,model-selection,optimization,cost-savings"}
  {"id":"75fc6779-42fe-4c60-9836-c4bc3e2ee3e7","information":"BetterAuth cross-domain limitation (Dec 2024): crossSubDomainCookies only works for SUBDOMAINS of the same root domain (e.g., app1.example.com and app2.example.com). It does NOT work for different TLDs (epicweb.dev vs epicreact.dev vs epicai.pro). For Kent's use case, need a different solution: either (1) Central IdP on a shared domain, (2) Token exchange protocol between sites, or (3) Custom SSO plugin. This is a gap in BetterAuth that @badass may need to solve.","created_at":"2025-12-18T15:35:21.461Z"}
+ {"id":"76b2273e-c06d-44ba-b243-bc6180af1149","information":"{\"id\":\"pattern-1766263405842-l534tr\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:43:25.842Z\",\"updated_at\":\"2025-12-20T20:43:25.842Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263406063.0","metadata":"{\"id\":\"pattern-1766263405842-l534tr\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"7792b139-5a37-44a9-9c6b-a5578ad93d48","information":"SWARM-MAIL EXTRACTION COMPLETE (Dec 2025): Successfully extracted swarm-mail as standalone npm package using adapter pattern. Key learnings: 1) Turborepo needs packageManager field in root package.json, 2) bun build doesn't resolve workspace:* - must build dependencies first with turbo, 3) TypeScript declarations need emitDeclarationOnly:true (not noEmit) plus tsc in build script, 4) Re-export everything from streams/index.ts for backward compatibility, 5) Coordinator should NOT reserve files - only workers reserve their own files. Architecture: createSwarmMailAdapter(db, projectKey) for DI, getSwarmMail(path) for convenience singleton. All 230 tests pass.","created_at":"2025-12-15T00:22:09.754Z"}
  {"id":"77e67fcb-446f-4444-8a27-624e43bc16c7","information":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T00:37:17.036Z\",\"updated_at\":\"2025-12-17T00:37:17.036Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T00:37:17.973Z","metadata":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"7809bc09-f952-4a0b-9e8b-d1787500a22d","information":"{\"id\":\"pattern-1766593303550-g2j1y9\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:21:43.550Z\",\"updated_at\":\"2025-12-24T16:21:43.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766593303830.0","metadata":"{\"id\":\"pattern-1766593303550-g2j1y9\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"7a145b41-f975-4b3f-b849-9b2a4d96568c","information":"{\"id\":\"pattern-1766341864753-8kv4c7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T18:31:04.753Z\",\"updated_at\":\"2025-12-21T18:31:04.753Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766341864995.0","metadata":"{\"id\":\"pattern-1766341864753-8kv4c7\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"7a3a796e-8a02-46b7-8c94-d9e0dc317127","information":"Successfully implemented 5 pre-built analytics queries for swarm-mail event sourcing system using TDD methodology. Queries built using QueryBuilder fluent API with parameterized SQL to prevent injection. \n\nQueries implemented:\n1. failed-decompositions: Groups subtask_outcome failures by strategy, shows failure counts and avg duration\n2. strategy-success-rates: Calculates success rate percentage per strategy with total/successful/failed counts\n3. lock-contention: Identifies files with most reservations using reservation_released events, computes avg hold time\n4. agent-activity: Tracks agent event counts, first/last timestamps, active time spans\n5. message-latency: Computes p50/p95/p99 percentiles using window functions (ROW_NUMBER OVER)\n\nKey patterns learned:\n- Use QueryBuilder for consistency but raw SQL acceptable for complex queries (percentiles)\n- Always use parameterized queries (? placeholders) for security\n- json_extract() for querying JSON data fields in libSQL\n- CAST(...AS REAL) for floating-point aggregates (AVG, percentage calculations)\n- CASE WHEN for conditional aggregation (counting successes/failures separately)\n- Window functions (ROW_NUMBER OVER) for percentile approximation in SQLite/libSQL\n- Each query exports typed filter interfaces for type-safe usage\n\nTesting approach:\n- RED: Write comprehensive test expectations first (38 tests)\n- GREEN: Implement minimal code to pass (5 query modules + index)\n- Tests verify SQL structure, parameter handling, filter support, export contracts\n- All tests passing, typecheck clean, UBS scan clean","created_at":"1766433854114.0","metadata":"{\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjhkium6rpy\",\"query_count\":5,\"files_created\":7,\"tests_written\":38}","tags":"analytics,tdd,query-builder,libsql,event-sourcing,sql,percentiles,window-functions"}
+ {"id":"7a44a74d-5688-46eb-87ad-1740f1a057ae","information":"TDD RED phase regression testing discovered issues beyond the target bugs: (1) appendEventDrizzle doesn't work with trigger-based sequence generation in test-libsql.ts - needs manual sequence assignment or different approach, (2) Hive operations pass undefined to libSQL which throws TypeError. For regression tests, better to use direct SQL inserts to test the query logic in isolation, not the full event sourcing stack.","created_at":"1766415554151.0","tags":"testing,tdd,regression-tests,pglite-migration,libSQL"}
  {"id":"7a7221a1-6e25-4b95-b2e3-ee2430b6e9e5","information":"Bun + Turborepo monorepo setup gotcha: The `--filter` flag for `bun add` is BROKEN as of Aug 2025 - it installs dependencies to the ROOT package.json instead of the target workspace. ALWAYS use `--cwd` flag instead: `bun add <package> --cwd apps/my-app`. This is critical for workspace-specific dependency management. Also requires `packageManager` field in root package.json for Turborepo to resolve workspaces.","created_at":"2025-12-16T19:58:30.121Z"}
  {"id":"7a960377-f74a-4152-8aac-c0f80409da0c","information":"PGlite test isolation pattern: When testing event stores with PGlite, avoid using getDatabase() singleton in tests as it returns a shared instance that persists across tests. Instead, create isolated in-memory instances per test: pglite = new PGlite() in beforeEach. This prevents PGlite is closed errors when afterEach closes the database. For schema initialization, manually CREATE TABLE for core tables like events and schema_version instead of calling initializeSchema() which may have side effects on singletons.","created_at":"2025-12-16T22:08:11.460Z","metadata":"{\"context\":\"swarm-mail test patterns\"}","tags":"testing,pglite,event-store,isolation"}
  {"id":"7abb34bd-3bcd-4d6b-b8d3-eb81f748418c","information":"{\"id\":\"pattern-1765386439151-fwvekq\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:07:19.151Z\",\"updated_at\":\"2025-12-10T17:07:19.151Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:07:19.337Z","metadata":"{\"id\":\"pattern-1765386439151-fwvekq\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"7b4210bb-cc70-4b93-b306-bd112f38ce53","information":"AI SDK v6 Lesson 02-04 (Structured Data Extraction) verification: generateText + Output.object() pattern works correctly. Key finding: .describe() on Zod schema fields is CRITICAL for quality extraction. Without descriptions: title includes names, dates are relative strings, time formats vary. WITH descriptions providing context (today's date, format specs, default logic): title properly excludes names, dates calculated correctly in YYYY-MM-DD, times in HH:MM 24-hour, endTime auto-calculates 1-hour duration. Example impact: \"Meeting with Guillermo Rauch about Next Conf Keynote Practice tomorrow at 2pm\" → title changed from full string to \"Next Conf Keynote Practice Meeting\", date from \"tomorrow\" to \"2025-12-24\", time from \"2pm\" to \"14:00 - 15:00\". The Output.object() API correctly passes schema descriptions to the model, making structured extraction production-ready with proper field guidance.","created_at":"1766455846116.0","tags":"ai-sdk-v6,structured-extraction,generateText,zod,schema-descriptions"}
  {"id":"7b9b35dd-1e6b-4a23-a8a4-48ac094116c2","information":"SUBTASK_PROMPT_V2 memory emphasis pattern: To make workers actually use semantic memory, the prompt needs:\n\n1. **Visual prominence** - emoji (🧠💾), CAPS (MANDATORY, CRITICAL), bold formatting\n2. **Concrete examples by task type** - workers need to see exactly what query to run for their specific task (bug fix → error message, new feature → domain concept, etc.)\n3. **Good vs Bad examples** - show what a useful memory looks like vs a useless one\n4. **Explicit triggers** - list specific situations that MUST trigger memory storage (>15min debugging, found gotcha, architectural decision)\n5. **Consequences of skipping** - explain the pain they'll cause themselves and future agents\n6. **Checklist position matters** - memory query MUST be Step 2 (before any work), storage MUST be near-last (Step 8)\n\nKey insight: Workers ignore long prose but respond to visual hierarchy and concrete examples. The phrase \"If you learned it the hard way, STORE IT\" is more effective than paragraphs explaining why.","created_at":"2025-12-19T02:52:33.987Z","tags":"swarm,prompts,memory,worker-template,emphasis,visual-hierarchy"}
  {"id":"7beb8d2f-152d-41db-90a6-a622c552e8a1","information":"{\"id\":\"test-1765386529833-7ou9lp7ra57\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:08:49.833Z\",\"raw_value\":1}","created_at":"2025-12-10T17:08:50.009Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:08:49.833Z\"}"}
  {"id":"7c58dd11-320f-4a84-8173-96dfa639c10b","information":"Testing Drizzle adapters: avoid mocking Drizzle's query builder directly - creates circular reference errors. Instead, create a Fake adapter that implements the same interface but uses simple in-memory storage. Fake pattern: 1) Create FakeDatabase with Maps for storage, 2) Create FakeAdapter that wraps FakeDatabase and implements same interface as real adapter, 3) Tests call FakeAdapter methods which call simplified storage methods. This avoids JSON.stringify() errors from Drizzle's internal structures while maintaining test realism. Tests run 10x faster than real DB and are more maintainable.","created_at":"2025-12-18T16:31:51.601Z","tags":"testing,drizzle,orm,fakes,tdd,adapter-pattern"}
+ {"id":"7d7eb1aa-560c-4be8-a767-6224a2ccee5a","information":"Progressive disclosure for data visualization: Use zoom-based detail levels to control what's shown. Three-tier approach: (1) overview (k<0.3) shows only hubs/important nodes with faded context, (2) mid (0.3-0.7) shows all nodes but labels only hubs, (3) detail (k>0.7) shows all with labels for readable sizes (screenRadius > 8px). Hub classification by degree (default: 10+ connections). Key insight: don't hide context entirely - fade it (opacity 0.2) to preserve structure while directing attention. This implements Tufte's \"macro/micro readings\" - graph readable at both aggregate and detailed levels. All pure functions with O(1) complexity for performance.","created_at":"1766343275089.0","tags":"visualization,progressive-disclosure,tufte,zoom,d3,ux-patterns"}
+ {"id":"7d97cae8-4f01-45f4-a5e9-607123364b45","information":"Drizzle INSERT with Auto-Increment Columns:\n\n**Problem:** Drizzle INSERT failed with \"null value violates not-null constraint\" on `sequence` column when explicitly setting `sequence: null`.\n\n**Root Cause Schema Difference:**\n- **PGlite schema:** `sequence SERIAL` (auto-incrementing, cannot be NULL)\n- **LibSQL/Drizzle schema:** `sequence INTEGER` (nullable, auto-assigned by trigger)\n\nWhen using Drizzle with PGlite, the query tried to insert `sequence: null` which violated SERIAL constraint.\n\n**Solution:** OMIT the column from INSERT instead of setting it to `null`:\n```typescript\n// BEFORE (fails on PGlite)\nawait db.insert(eventsTable).values({\n type, project_key, timestamp,\n data: JSON.stringify(rest),\n sequence: null, // ❌ Violates SERIAL constraint\n})\n\n// AFTER (works on both)\nawait db.insert(eventsTable).values({\n type, project_key, timestamp,\n data: JSON.stringify(rest),\n // sequence omitted - auto-assigned by DB\n})\n```\n\n**Why This Works:**\n- PGlite/PostgreSQL: SERIAL auto-increments when column is omitted\n- LibSQL/SQLite: Trigger assigns next value when column is NULL or omitted\n\n**Files Changed:**\n- `store-drizzle.ts`: Removed `sequence: null` from appendEventDrizzle\n\n**Pattern:** For auto-increment columns that differ between PG and SQLite, OMIT the column from INSERT rather than setting to NULL. Let the database handle it.","created_at":"1766331479194.0","tags":"drizzle,serial,auto-increment,pglite,libsql,insert"}
  {"id":"7e06f0d4-1231-4b91-943a-b55587178b6a","information":"Daemon-first architecture pattern for PGlite: Auto-start daemon on first database access with graceful fallback. Implementation uses ensureDaemonRunning() function that: 1) checks if daemon running, 2) attempts auto-start if not, 3) returns {success, mode, error?} result. DatabaseLive Layer calls ensureDaemonRunning() and routes based on result - success routes to DatabaseClient (socket), failure falls back to DirectDatabaseLive with warning. This solves PGlite single-connection limitation by default while maintaining backwards compatibility. Key insight: NEVER throw from ensureDaemonRunning - always return a result, even on failure. Caller handles fallback logic. TDD approach: wrote 4 tests first (RED), implemented ensureDaemonRunning (GREEN), added JSDoc (REFACTOR). All 32 tests passing.","created_at":"2025-12-19T17:22:37.415Z","tags":"pglite,daemon,auto-start,tdd,graceful-fallback,architecture"}
  {"id":"7ec67bba-2397-4eba-b563-7df4f17d02f5","information":"OpenCode plugin hook interface pattern: hooks use string literal keys with optional function signatures. Format: \"namespace.event\"?: (input: {...}, output: {...}) => Promise<void>. The output parameter is mutable - plugins append to arrays or modify properties. Single-line formatting is preferred by prettier for simple signatures. Session compaction hooks allow plugins to inject context before summarization.","created_at":"2025-12-17T18:01:37.726Z"}
+ {"id":"7f00daa2-8e7d-419b-9810-88647287e18d","information":"{\"id\":\"test-1766593254903-gipm8etumjg\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:20:54.903Z\",\"raw_value\":1}","created_at":"1766593255286.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:20:54.903Z\"}"}
  {"id":"803fddcb-ef84-4df9-8038-c69a6ebee9c5","information":"Course-builder OAuth Device Flow implementation reference (Dec 2024): Full RFC 8628 implementation exists in apps/ai-hero/src/app/oauth/device/. Key components: (1) POST /oauth/device/code - generates device_code + user_code with human-readable-ids, 10min expiry, (2) /activate page where user enters user_code, (3) device-verification tRPC router that marks verification with verifiedByUserId, (4) POST /oauth/token polls for access token. Schema in packages/adapter-drizzle with DeviceVerification table. This pattern should be extracted into @badass/auth for CLI and Workshop App authentication.","created_at":"2025-12-18T15:41:09.121Z"}
+ {"id":"8055fead-5592-40da-afa1-8a64d98b9afe","information":"{\"id\":\"test-1766350691029-ckp899oybls\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T20:58:11.029Z\",\"raw_value\":1}","created_at":"1766350691387.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T20:58:11.029Z\"}"}
+ {"id":"820d4c41-6de0-436b-a2e7-70a79830f959","information":"Implemented Four Golden Signals analytics queries for swarm-mail event store. Key learnings:\n\n**JSON boolean handling in SQLite/libSQL:** JSON stores booleans as 0/1, not strings. Use `json_extract(data, '$.success') = 0` for false, NOT `= 'false'`. This is because json_extract returns native SQLite types (0 for false, 1 for true), not JSON strings.\n\n**json_each() table aliasing:** When using json_each() in a FROM clause with other tables, ALWAYS alias it and qualify column names. `json_each(events.data, '$.paths') as paths` then use `paths.value`, not `value`. Without aliasing, SQLite throws \"ambiguous column name: type\" because json_each has its own \"type\" column.\n\n**Correct json_each syntax:** Use `json_each(table.column, '$.field')` with the JSON path, NOT `json_each(json_extract(...))`. Direct syntax: `FROM events, json_each(events.data, '$.paths') as paths` then `paths.value` for array elements.\n\n**Time filter parameterization:** For optional time filters, use pattern: `WHERE (? IS NULL OR timestamp >= ?) AND (? IS NULL OR timestamp <= ?)`. Pass same value twice: [sinceMs, sinceMs, untilMs, untilMs]. This allows NULL to skip the filter while still using parameterized queries.\n\n**Test patterns:** Integration tests use `createInMemorySwarmMailLibSQL()`, then `swarmMail.getDatabase()` for raw SQL. Insert test data with `db.query(sql, [params])`, not `db.execute()` (DatabaseAdapter only has query method).\n\n**Four Golden Signals mapping:**\n1. Latency = task duration by strategy (subtask_outcome events)\n2. Traffic = events per hour (time-series bucketing with strftime)\n3. Errors = failed tasks by agent (success=false filter)\n4. Saturation = active reservations (created but not released)\n5. Conflicts = most contested files (json_each over paths array)","created_at":"1766594928898.0","tags":"swarm-mail,analytics,libsql,sqlite,json,four-golden-signals,testing"}
+ {"id":"8228a158-ceba-4195-bbd4-66039caeee34","information":"{\"id\":\"pattern-1766259539198-8szypl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:38:59.198Z\",\"updated_at\":\"2025-12-20T19:38:59.198Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766259539429.0","metadata":"{\"id\":\"pattern-1766259539198-8szypl\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"825ccc37-c833-42e6-9069-4a531215cea2","information":"{\"id\":\"test-1765749524072-fs3i37vpoik\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T21:58:44.072Z\",\"raw_value\":1}","created_at":"2025-12-14T21:58:44.282Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T21:58:44.072Z\"}"}
  {"id":"82945143-4b25-418b-acaa-e3a02a2eb7b8","information":"{\"id\":\"test-1766104210635-2mewizal9aa\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-19T00:30:10.635Z\",\"raw_value\":1}","created_at":"2025-12-19T00:30:10.859Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-19T00:30:10.635Z\"}"}
  {"id":"8311ea42-e882-4b72-8f23-fc6e83250e5f","information":"{\"id\":\"test-1765751832219-4zgo42wxmyu\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:37:12.219Z\",\"raw_value\":1}","created_at":"2025-12-14T22:37:12.483Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:37:12.219Z\"}"}
  {"id":"834e33d4-b8d4-4c80-8a70-5d69d612efb0","information":"swarm_complete review gate UX fix: Changed review gate responses from { success: false, error: \"...\" } to { success: true, status: \"pending_review\" | \"needs_changes\", message: \"...\", next_steps: [...] }. This reframes the review gate as a workflow checkpoint, not an error state. Workers did nothing wrong - they just need to wait for coordinator review. The logic of when to check review status was already correct, only the response format needed fixing. Added 3 tests covering: (1) pending_review when no review attempted, (2) needs_changes when review rejected, (3) skip_review bypasses gate. Also added markReviewRejected() test helper to swarm-review.ts for simulating rejected reviews.","created_at":"2025-12-18T21:40:00.165Z","tags":"swarm,review-gate,ux-fix,workflow-state,testing"}
+ {"id":"83fad083-f9d7-4b0b-9434-3750b67c0ac8","information":"swarm-mail adapter instance mismatch bug RESOLVED: The getInbox empty bug was caused by TWO separate adapter caches. `libsql.convenience.ts` had its own `instances` map caching SwarmMailAdapter wrappers, while `store.ts` had `adapterCache` map for DatabaseAdapter instances. When tests called `getSwarmMailLibSQL(testProjectPath)`, it created an adapter cached in `instances`. When `sendSwarmMessage` called `appendEvent()`, it created a DIFFERENT adapter cached in `adapterCache`. Messages were written to one database instance and read from another = empty inbox.\n\n**Fix**: Made all adapter creation go through the SAME cache by:\n1. Exporting `getOrCreateAdapter` from `store.ts` (the one with caching logic)\n2. Making `store-drizzle.ts` delegate to `store.ts` for adapter creation (not create its own)\n3. Making `getSwarmMailLibSQL` use the shared cache from `store.ts` instead of creating adapters directly\n\n**Critical Insight**: Parameter order mattered - `store.ts` uses `(dbOverride, projectPath)` while `store-drizzle.ts` uses `(projectPath, dbOverride)`. Had to swap them when delegating.\n\n**Test Pattern**: Integration tests that use `getSwarmMailLibSQL` now share adapters with `sendSwarmMessage`, `appendEvent`, and `getInbox` - all operations use the same database instance as intended.\n\nThis was NOT the URL_INVALID bug (already fixed in commit 7bf9385). This was a separate instance mismatch issue discovered after URL normalization was resolved.","created_at":"1766423294803.0","metadata":"{\"files\":[\"swarm-mail/src/streams/store.ts\",\"swarm-mail/src/streams/store-drizzle.ts\",\"swarm-mail/src/libsql.convenience.ts\",\"swarm-mail/src/streams/swarm-mail.ts\"],\"pattern\":\"adapter-caching\",\"tests_fixed\":3}","tags":"swarm-mail,adapter-cache,bug-fix,integration-tests,database-instance"}
+ {"id":"8470c067-528b-43b6-a491-a9a5190c4c08","information":"{\"id\":\"pattern-1766264411599-5hpzj8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:00:11.599Z\",\"updated_at\":\"2025-12-20T21:00:11.599Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766264411818.0","metadata":"{\"id\":\"pattern-1766264411599-5hpzj8\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"8476d7c1-9768-44a6-a378-dcaca8447aae","information":"hive_sync git remote handling: Fixed bug where hive_sync would fail with \"No configured push destination\" error when no git remote is configured. Root cause: implementation unconditionally tried to push/pull even when no remote exists. Solution: Check if remote exists with `git remote` command before attempting pull/push operations. If no remote, return success message \"(no remote configured)\" instead of failing. This allows local-only git repos to use hive_sync without errors. Implementation detail: The commit of .hive changes happens BEFORE the pull check, ensuring .hive state is committed even if pull/push are skipped.","created_at":"2025-12-18T18:02:37.061Z"}
+ {"id":"84e01642-1c96-4f37-b6e8-e90a1b9aa081","information":"{\"id\":\"test-1766262893918-4hhjkqasji2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:34:53.918Z\",\"raw_value\":1}","created_at":"1766262894140.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:34:53.918Z\"}"}
+ {"id":"84f2229d-1f63-44d2-84f3-ba5884a13b32","information":"{\"id\":\"pattern-1766260911605-ad5ur8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:01:51.605Z\",\"updated_at\":\"2025-12-20T20:01:51.605Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260911893.0","metadata":"{\"id\":\"pattern-1766260911605-ad5ur8\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"85310b4f-fc98-4675-9b6d-ae6f1d593306","information":"Drizzle ORM PGlite Adapter Integration Pattern:\n\n**Problem:** Projection wrappers calling `toSwarmDb()` failed with \"DatabaseAdapter does not have getClient() method\" when passed PGlite instances. `toSwarmDb()` only worked with LibSQLAdapter (which has `getClient()` method).\n\n**Root Cause:** swarm-mail supports BOTH PGlite and LibSQL, but Drizzle wrappers assumed LibSQL-only. `getDatabase()` returns PGlite, not LibSQLAdapter.\n\n**Solution:** Universal `toDrizzleDb()` function that:\n1. Detects if database is LibSQLAdapter (has `getClient()` method) OR PGlite (has `query`/`exec` methods)\n2. For LibSQL: uses `drizzle-orm/libsql` with `getClient()`\n3. For PGlite: uses `drizzle-orm/pglite` adapter directly with PGlite instance\n\n**Implementation:**\n```typescript\nexport function toDrizzleDb(db: any): SwarmDb {\n // LibSQL path\n if (db && typeof db.getClient === 'function') {\n return createDrizzleClient(db.getClient());\n }\n \n // PGlite path \n if (db && typeof db.query === 'function' && typeof db.exec === 'function') {\n const { drizzle } = require('drizzle-orm/pglite');\n const { schema } = require('./db/schema/index.js');\n return drizzle(db, { schema });\n }\n \n throw new Error('Database must be LibSQLAdapter or PGlite');\n}\n```\n\n**Files Changed:**\n- `libsql.convenience.ts`: Added `toDrizzleDb()`, exported from index\n- `projections-drizzle.ts`: Changed `toSwarmDb()` to `toDrizzleDb()` in all wrappers\n- `store-drizzle.ts`: Changed `toSwarmDb()` to `toDrizzleDb()` in all wrappers\n\n**Testing:** All projection queries (getActiveReservations, appendEvent, etc.) now work with both PGlite AND LibSQL.","created_at":"1766331440420.0","tags":"drizzle,pglite,libsql,database-adapter,type-detection"}
+ {"id":"85d1e309-e76d-4617-86ec-bc6f556d9e87","information":"PGlite Schema Sync with Drizzle Schema:\n\n**Problem:** Drizzle INSERT queries failed with \"relation does not exist\" or \"no unique constraint\" errors when using PGlite. Tables like `swarm_contexts`, `cursors`, `eval_decompositions`, `eval_outcomes` were defined in Drizzle schema but missing from PGlite initialization.\n\n**Root Cause:** Drizzle doesn't auto-create tables - it's just a query builder. PGlite schema initialization (`initializeSchema()` in streams/index.ts) was incomplete. It only had core tables (events, agents, messages, reservations, locks).\n\n**Solution:** Added missing tables to PGlite schema to match Drizzle schema exactly:\n\n1. **swarm_contexts** - checkpoint/recovery tracking (needs PRIMARY KEY for ON CONFLICT)\n2. **cursors** - stream position tracking\n3. **eval_decompositions** - task decomposition tracking \n4. **eval_outcomes** - subtask outcome recording\n\n**Critical Detail:** PostgreSQL/PGlite `ON CONFLICT (column)` requires PRIMARY KEY or UNIQUE constraint on that column. Drizzle schema had `.primaryKey()` but PGlite SQL needed explicit `PRIMARY KEY` in CREATE TABLE.\n\n**Files Changed:**\n- `streams/index.ts`: Added 4 missing tables to `initializeSchema()`\n\n**Pattern:** When adding Drizzle tables, ALWAYS add equivalent CREATE TABLE to PGlite schema. Keep them in sync.","created_at":"1766331453627.0","tags":"pglite,schema-sync,drizzle,database-migration,on-conflict"}
+ {"id":"872f41e4-f752-4ed0-aca9-2c2222f27768","information":"DurableDeferred Integration in Swarm: swarm_complete now resolves a DurableDeferred keyed by bead_id to enable cross-agent task completion signaling. This allows coordinators to await worker completion without polling. Implementation: After closing the cell, swarm_complete checks for a deferred with URL `deferred:${bead_id}` and resolves it with {completed: true, summary} payload. Non-fatal if deferred doesn't exist (backward compatibility). Coordinators can create the deferred BEFORE spawning workers, then await its resolution. Uses libSQL database via getSwarmMailLibSQL(). Returns deferred_resolved: boolean and deferred_error: string in response for debugging. Future improvement: Use Effect-TS DurableDeferred service instead of raw SQL for type safety and error handling.","created_at":"1766341155834.0","tags":"swarm,durabledeferred,effect-ts,cross-agent-signaling,task-completion"}
+ {"id":"888e5037-d33c-4182-8ff5-7c1466977f38","information":"Debug package integration for swarm-mail: Successfully implemented debug logging with namespace filtering (swarm:events, swarm:reservations, swarm:messages, swarm:checkpoints). Key learnings: (1) debug package checks DEBUG env var at import time, so tests need to use debug.enable()/disable() programmatically, NOT process.env.DEBUG directly. (2) Capturing stderr in tests requires proper typing: `process.stderr.write = ((chunk: Buffer | string) => {...}) as typeof process.stderr.write` to satisfy TypeScript. (3) Dynamic import in tests (`await import(\"./debug.ts\")`) ensures debug state is picked up after enable/disable calls. (4) debug package automatically adds timestamps and subsystem prefixes - no manual formatting needed. (5) For human debugging only - AI agents should use structured errors instead. Console output bloats AI context.","created_at":"1766433142777.0","tags":"debug,logging,testing,typescript,swarm-mail,environment-variables"}
+ {"id":"899fb8b8-d5fb-464d-a493-a8e5131e3f0e","information":"Svelte 5 runes pattern for reactive config objects: Use `$derived()` for objects that depend on props. WRONG: `const config = { width, height }` (captures initial value). CORRECT: `const config = $derived({ width, height })`. This ensures reactivity when props change. Also applies to computed values derived from props.","created_at":"1766343401494.0","tags":"svelte,svelte5,runes,reactivity,derived,props"}
+ {"id":"8a14fcbb-5546-4bdb-9a0b-91ac985a85fb","information":"DurableLock integration pattern for event-sourced file reservations:\n\n**Architecture:**\n- Keep existing event+projection architecture (reserveFiles/releaseFiles)\n- Add DurableLock underneath for actual mutex\n- Store lock holder IDs in both event (lock_holder_ids array) and projection (lock_holder_id column)\n- Release locks using stored holder IDs\n\n**Implementation steps:**\n1. Extend event schemas with lock_holder_ids optional field\n2. Add lock_holder_id column to projection table schema\n3. Update projection handler to store lock holder IDs\n4. In reserve function: call DurableLock.acquire() for each path, store holders\n5. In release function: read holders from projection, call DurableLock.release()\n\n**Key learnings:**\n- DurableLock requires holder ID for release - must be persisted\n- Locks auto-expire via TTL if release fails (graceful degradation)\n- Effect.runPromise() pattern for calling Effect-based DurableLock from async code\n- Schema changes require updating BOTH Drizzle schema (db/schema) AND libsql-schema.ts DDL\n\n**Gotchas:**\n- Database adapter must be passed explicitly to all store/projection functions (dbOverride parameter)\n- Schema initialization (createLibSQLStreamsSchema) must be called on first DB access\n- Bulk INSERT with lock_holder_ids requires careful parameter indexing ($baseParamCount+4+i pattern)\n\n**Test pattern:**\nQuery locks table directly after reserve/release to verify DurableLock was used","created_at":"1766341450388.0","metadata":"{\"date\":\"2025-12-21\",\"epic\":\"opencode-swarm-monorepo-lf2p4u-mjg1elo0g21\",\"task\":\"opencode-swarm-monorepo-lf2p4u-mjg1elo9uoa\",\"agent\":\"BoldStone\"}","tags":"durablelock,event-sourcing,file-reservations,libsql,effect-ts,swarm-mail"}
  {"id":"8a396b22-7a39-489a-ae5d-b5332b8f350e","information":"Course Builder monorepo structure for shared database adapters:\n\n- packages/core - defines CourseBuilderAdapter interface with 100+ methods, domain schemas (Zod), business logic\n- packages/adapter-drizzle - implements adapter interface, exports schema factories (getCourseBuilderSchema(tableFn)), supports MySQL/PG/SQLite via type discrimination\n- apps/* - each app creates own db instance, own table prefix, calls schema factory, passes both to adapter\n\nKey files:\n- packages/core/src/adapters.ts - interface definition with generic TDatabaseInstance\n- packages/adapter-drizzle/src/lib/mysql/index.ts - mySqlDrizzleAdapter(client, tableFn) implementation\n- apps/*/src/db/mysql-table.ts - app-specific mysqlTableCreator with unique prefix\n- apps/*/src/db/schema.ts - calls getCourseBuilderSchema(mysqlTable) to get prefixed tables\n- apps/*/src/db/index.ts - creates db instance, exports courseBuilderAdapter = DrizzleAdapter(db, mysqlTable)\n\nPattern enables 15+ apps sharing same database with table isolation via prefixes like zER_, zEW_, EDAI_, AI_, etc.","created_at":"2025-12-14T23:56:13.303Z"}
  {"id":"8a59059a-7374-49a6-ad4e-4dc5a4160a5c","information":"Docker test infrastructure approach for egghead migration:\n\n1. Use pg_dump for REAL schemas - don't manually recreate Rails table definitions. The schema has 50+ columns per table with specific defaults, constraints, and indexes.\n\n2. Export strategy (pragmatic path):\n - Option 2 (now): Export 2 POC courses with full schema via pg_dump --schema-only + COPY for data\n - Option 1 (next): Generalize to N random courses with --courses=N flag\n - Option 3 (goal): Full sanitized production dump\n\n3. Data anonymization: Replace emails with instructor{id}@test.egghead.io, null out authentication_token, encrypted_password, confirmation_token, reset_password_token\n\n4. Key tables in dependency order: users → instructors → series → lessons → tags → taggings → playlists → tracklists\n\n5. Shell script approach (export-poc-courses.sh) is cleaner than TypeScript for pg_dump operations - native psql/pg_dump tools handle schema complexity better than manual SQL generation.","created_at":"2025-12-13T17:35:48.194Z"}
  {"id":"8b23681b-7dc8-4501-882e-1ef66174881f","information":"{\"id\":\"pattern-1765751936368-siqk3d\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T22:38:56.368Z\",\"updated_at\":\"2025-12-14T22:38:56.368Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T22:38:56.603Z","metadata":"{\"id\":\"pattern-1765751936368-siqk3d\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"8b93efb6-c350-4538-b258-99bc7acc1e63","information":"Tool-adapter integration test coverage completed for opencode-swarm-plugin. Added 4 new tests covering memory tools (semantic_memory_store, semantic_memory_find), swarm coordination tools (swarm_broadcast, swarm_checkpoint), and a comprehensive smoke test that exercises 9 tools in sequence (init → create → reserve → progress → memory → send → close → release). All 20 tests pass. Key learnings: (1) semantic_memory_store returns {id: string}, not {success, id, information}. (2) swarm_checkpoint requires epic_id, files_modified, progress_percent fields - it's not just a simple checkpoint. (3) Smoke test pattern is valuable for catching adapter lifecycle bugs that unit tests miss. (4) swarm_checkpoint failure with \"no such table: swarm_contexts\" is EXPECTED in test environments without full swarm coordination setup - the test verifies it does NOT fail with \"dbOverride required\" which was the original bug.","created_at":"1766364993661.0","tags":"testing,integration-tests,tool-adapter,swarm-plugin"}
  {"id":"8c4f7a27-e641-4657-9bbe-857e77cdd200","information":"{\"id\":\"pattern-1765653391843-hizz8c\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:16:31.843Z\",\"updated_at\":\"2025-12-13T19:16:31.843Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:16:32.050Z","metadata":"{\"id\":\"pattern-1765653391843-hizz8c\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"8ce81695-8086-41b9-91e3-5d0f0cbaab42","information":"{\"id\":\"test-1766265160089-n01q6j2tpv\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T21:12:40.089Z\",\"raw_value\":1}","created_at":"1766265160311.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T21:12:40.089Z\"}"}
  {"id":"8dc5ec29-38ca-441b-9304-841a8b87a553","information":"PGLite daemon mode flipped to default in swarm-mail getSwarmMail(). Change: `const useSocket = process.env.SWARM_MAIL_SOCKET !== 'false'` (was `=== 'true'`). This prevents multi-process PGLite corruption by defaulting to single-daemon architecture. Users opt OUT with SWARM_MAIL_SOCKET=false for embedded mode. Updated JSDoc, added log messages for both modes. Critical for any tests that call getSwarmMail() - they now need explicit SWARM_MAIL_SOCKET=false in beforeAll() to avoid daemon startup attempts. Exit code 0 = all tests pass.","created_at":"2025-12-19T14:52:50.665Z","tags":"pglite,daemon,swarm-mail,default-behavior,multi-process,testing"}
+ {"id":"8ef0eb21-97dc-4e6c-8d10-dba0361ead11","information":"Svelte 5 canvas refactoring pattern: When extracting render logic from large Svelte components with canvas/d3, create separate render modules with RenderContext interface. \n\nKey pattern:\n1. Create shared types.ts with context interface (RenderContext with ctx, transform, state)\n2. Extract render phases into pure functions (renderLinks, renderNodes, renderLabels)\n3. Pass sizeScale and thresholds as parameters - don't recreate them\n4. Keep color palette (cat) in types.ts for cross-module access\n5. Component keeps simulation, zoom, interaction logic - delegates rendering\n\nBenefits:\n- Deep modules (simple interface, rich functionality)\n- Each render phase becomes independently testable\n- Future features (fisheye, bundling, hulls) can be added without modifying component\n- Component render() becomes 3 lines: renderLinks/Nodes/Labels\n\nGotcha: TypeScript import sorting in Svelte requires value imports before type imports with blank line separator.","created_at":"1766342935643.0","tags":"svelte,refactoring,canvas,d3,force-graph,render-phases,deep-modules"}
+ {"id":"8f1ef3ea-7d99-4997-9dc2-805987cea648","information":"CRITICAL BUG: Coordinator loses identity after compaction\n\nRoot cause: The compaction hook injects generic \"you are a coordinator\" context but doesn't include:\n1. The SPECIFIC epic ID being coordinated\n2. Which subtasks are done/pending/in_progress \n3. The original task description\n4. Which workers were spawned\n\nThe agent wakes up knowing it's a coordinator but not WHAT it's coordinating. It then starts doing work directly instead of spawning workers.\n\nFix needed in compaction-hook.ts:\n- Query hive for in_progress epics\n- Include epic ID, title, and subtask status in injected context\n- Include last known worker activity from swarm-mail\n- Make the context actionable: \"Resume coordinating epic bd-xxx\"\n\nThis is P0 - breaks the entire swarm coordination model.","created_at":"1766595208571.0","tags":"swarm,compaction,coordinator,bug,p0,context-loss"}
+ {"id":"8f24dcef-12cd-464f-906f-d3847062abd5","information":"{\"id\":\"test-1766593302223-7tdtts4ohgp\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:21:42.223Z\",\"raw_value\":1}","created_at":"1766593302468.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:21:42.223Z\"}"}
  {"id":"9126fdf3-7090-4dda-bc3b-d66e14362291","information":"{\"id\":\"pattern-1765664125767-wxih0g\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:15:25.767Z\",\"updated_at\":\"2025-12-13T22:15:25.767Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:15:25.968Z","metadata":"{\"id\":\"pattern-1765664125767-wxih0g\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"91f6de54-cd46-46c4-a12b-7f80b2a887b9","information":"Test Isolation Pattern for semantic-memory: Use environment variable TEST_MEMORY_COLLECTIONS=true to suffix collection names with '-test'. Implemented via getCollectionNames() function that checks process.env.TEST_MEMORY_COLLECTIONS and conditionally appends '-test' to base collection names (swarm-feedback, swarm-patterns, swarm-maturity). Vitest integration config sets this env var automatically. Prevents test data from polluting production semantic-memory collections. Cleanup handled in vitest.integration.setup.ts teardown hook. Pattern enables running integration tests safely without affecting production learning data. Key insight: Dynamic collection naming at config resolution time (not runtime) ensures all storage instances in test mode automatically use test collections.","created_at":"2025-12-14T22:37:48.129Z","metadata":"{\"author\":\"WarmHawk\",\"pattern_type\":\"test_isolation\"}"}
+ {"id":"920ce3e0-5d5d-4cf4-be54-b5a450f6c18c","information":"pino-roll file rotation format: Uses NUMERIC rotation, not date-based. With frequency='daily' and extension='log', files are named {basename}.{number}log (e.g., swarm.1log, swarm.2log). The number increments with each rotation. The 'limit.count' option specifies how many OLD files to keep in addition to the current file. So limit: { count: 14 } means 14 rotated files + 1 current file = 15 total files max. Common misconception: thinking pino-roll will create date-based filenames like swarm-2024-12-24.log - it doesn't. That requires a custom transport or different package.","created_at":"1766592728219.0","tags":"pino,pino-roll,logging,rotation,file-naming,nodejs,bun"}
+ {"id":"921e7326-558a-4e7d-8f4d-c958541fdbf9","information":"{\"id\":\"pattern-1766262043345-lwfqkk\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:20:43.345Z\",\"updated_at\":\"2025-12-20T20:20:43.345Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262043583.0","metadata":"{\"id\":\"pattern-1766262043345-lwfqkk\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"92242548-6162-48c6-864a-0d099a503ff4","information":"Documentation pattern for PGLite WAL safety deployment: When documenting database deployment modes, structure as three sections: 1) Daemon Mode (Recommended) with SIGTERM handler showing graceful shutdown, 2) Safety Features (checkpoint + health monitoring with code examples), 3) Ephemeral Instances (Testing) with explicit production warning. Key insight: Users need to see WHY daemon mode matters (WAL accumulation from multiple instances) and WHEN to checkpoint manually (migrations, bulk writes). Cross-reference from developer docs (AGENTS.md) to package README for detailed deployment guidance. This pattern prevents the \"docs scattered across files\" anti-pattern.","created_at":"2025-12-19T03:43:48.319Z","tags":"documentation,pglite,wal,deployment,swarm-mail,pattern"}
  {"id":"93ea9444-8481-4987-af75-d504f29c4cda","information":"Course index pages MUST link to first lesson in each section, not the section index page. Rule from AGENTS.md line 137: \"Sections are not navigable in the UI; always link to the first lesson in a section from indexes.\" \n\nCommon mistake: linking to section index instead of first lesson.\n\nExample from AI SDK Intelligent Agents course:\n- WRONG: [Section 1: The Agentic Loop](./agentic-loop)\n- CORRECT: [Section 1: The Agentic Loop](./agentic-loop/from-chain-to-loop)\n\nThis affects main course index where sections are listed. Section index pages are fine linking between themselves, but main navigation must link directly to first lesson.","created_at":"2025-12-16T21:10:41.227Z","tags":"course-structure,navigation,index-pages,lesson-links,style-guide"}
  {"id":"949aae72-b5ac-4b3d-9ca2-3b0cfc6a9814","information":"CLI daemon command implementation pattern for Bun projects using Effect.\n\n**Pattern:**\nHandle special commands (like `daemon`) separately from main Effect program, similar to `migrate` command. This avoids needing to run full application layers for lifecycle management.\n\n**Implementation:**\n```typescript\n// At bottom of cli.ts, before main program\nconst args = process.argv.slice(2);\n\nif (args[0] === \"daemon\") {\n const daemonProgram = Effect.gen(function* () {\n // Handle daemon subcommands\n // Use Effect.promise() to wrap async daemon functions\n });\n \n Effect.runPromise(\n daemonProgram.pipe(\n Effect.catchAll((error) => /* error handling */)\n )\n );\n} else if (args[0] === \"migrate\") {\n // ...\n} else {\n // Run main program with full dependencies\n Effect.runPromise(\n program.pipe(Effect.provide(PDFLibraryLive))\n );\n}\n```\n\n**Background Process Spawning:**\n```typescript\n// Spawn detached background daemon\nconst proc = Bun.spawn(\n [\"bun\", \"run\", join(__dirname, \"cli.ts\"), \"daemon\", \"start\", \"--foreground\"],\n {\n cwd: process.cwd(),\n stdio: [\"ignore\", \"ignore\", \"ignore\"],\n detached: true,\n }\n);\nproc.unref();\n\n// Wait for socket availability with timeout\nconst timeout = 5000;\nwhile (Date.now() - startTime < timeout) {\n const running = yield* Effect.promise(() => isDaemonRunning(config));\n if (running) break;\n yield* Effect.sleep(\"100 millis\");\n}\n```\n\n**Why Separate?**\n- Daemon commands don't need full PDFLibrary dependencies\n- Avoids circular dependency issues\n- Faster startup for lifecycle commands\n- Cleaner separation of concerns","created_at":"2025-12-19T15:10:50.703Z","tags":"bun,effect,cli,daemon,background-process"}
+ {"id":"9627bcc4-47e7-4e86-b251-1dd22feb8567","information":"Applied withSqliteRetry() wrapper to SwarmMailAdapter write operations for SQLITE_BUSY handling. Key insight: The adapter is a factory function returning an object literal, so retry helper must be a module-level function, not a class method. Write operations that need retry: db.exec() (in resetDatabase), db.checkpoint() (in runMigrations). Pattern: `await withRetry(() => db.operation())`. The wrapper uses Effect.runPromise(withSqliteRetry(Effect.tryPromise(operation))) for exponential backoff (100ms, 200ms, 400ms, max 3 retries). Integration tests confirm concurrent resetDatabase and checkpoint operations don't fail with SQLITE_BUSY. This completes the 3-part retry strategy: 1) PRAGMA busy_timeout=5000 (SQLite-level), 2) withSqliteRetry utility (application-level), 3) adapter integration (usage).","created_at":"1766592649337.0","tags":"sqlite,retry,SQLITE_BUSY,adapter,effect-ts,withRetry,checkpoint,swarm-mail"}
+ {"id":"964d41bf-f7d7-41b5-84b4-c3e6002dcdaf","information":"ClusterSummarizer service implementation pattern: Uses Effect-based architecture with Context/Layer. Service interface defines operations as Effect<Result, Error>. Implementation uses Layer.succeed with Effect.try for error handling. For text summarization, started with extractive approach (first sentence from each chunk) as placeholder, marked with TODO for future LLM integration via generateObject pattern. Test suite validates empty arrays, chunk limiting, and basic summarization logic. Pattern matches Clustering service architecture.","created_at":"1766421010459.0","tags":"effect-ts,clustering,summarization,service-pattern,tdd"}
  {"id":"9776db4c-e14f-4495-b9fc-05954676abbb","information":"{\"id\":\"test-1766074436954-pj27gd4lso\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:13:56.954Z\",\"raw_value\":1}","created_at":"2025-12-18T16:13:57.169Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:13:56.954Z\"}"}
  {"id":"97ab28c1-c249-4144-937e-88f2b0f4b398","information":"{\"id\":\"pattern-1766085029743-5mj578\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T19:10:29.743Z\",\"updated_at\":\"2025-12-18T19:10:29.743Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T19:10:29.969Z","metadata":"{\"id\":\"pattern-1766085029743-5mj578\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"9891d1d6-0015-4983-83cd-bc27c1df0d43","information":"SQLite ALTER TABLE ADD COLUMN has strict limitations that Drizzle doesn't warn about:\n\n**The Problem:**\n- ALTER TABLE cannot use non-constant defaults like `datetime('now')`, `CURRENT_TIMESTAMP`\n- ALTER TABLE cannot add NOT NULL columns without a default\n- Drizzle schema allows these but they fail at runtime with ALTER TABLE\n\n**Root Cause:**\nSQLite's ALTER TABLE is more restrictive than CREATE TABLE. CREATE TABLE allows SQL function defaults, but ALTER TABLE only allows constant literals.\n\n**The Solution:**\nSeparate default handling for CREATE vs ALTER:\n- CREATE TABLE: use original defaults (functions OK)\n- ALTER TABLE: provide constant defaults based on type (TEXT='', INTEGER=0, REAL=0.0)\n\n**Code Pattern:**\n```typescript\nfunction getColumnDefaultForAlterTable(col: AnySQLiteColumn<any>): string {\n const config = (col as any).config;\n \n // Skip SQL functions - not allowed in ALTER TABLE\n if (defaultVal.includes(\"(\")) {\n // Fall through to constant default\n }\n \n // Provide type-appropriate constant defaults\n const sqlType = normalizeType(col.getSQLType());\n if (sqlType === \"TEXT\") return \"DEFAULT ''\";\n if (sqlType === \"INTEGER\") return \"DEFAULT 0\";\n if (sqlType === \"REAL\") return \"DEFAULT 0.0\";\n}\n```\n\n**When This Matters:**\n- Runtime schema migrations when columns are missing\n- ALTER TABLE operations on existing tables\n- Drizzle schema validation and auto-fixing\n\n**Prevention:**\nDocument in schema comments when a default is non-constant so migration code can handle it specially.","created_at":"1766294601043.0","tags":"sqlite,drizzle,alter-table,schema-migration,gotcha"}
  {"id":"99a8fa5a-2287-4665-bf88-972213bc754b","information":"{\"id\":\"test-1766080415739-14f1w45qthd9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:53:35.739Z\",\"raw_value\":1}","created_at":"2025-12-18T17:53:36.012Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:53:35.739Z\"}"}
  {"id":"9a004fda-9142-4e55-9447-db005493487e","information":"{\"id\":\"pattern-1765771064070-9few2m\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:57:44.070Z\",\"updated_at\":\"2025-12-15T03:57:44.070Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:57:44.420Z","metadata":"{\"id\":\"pattern-1765771064070-9few2m\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"9abd19da-6b27-40f1-a385-de69d0a0f55b","information":"Swarm coordination pattern for ADR writing (Dec 2024): When multiple ADRs need writing, spawn parallel workers with clear file ownership. Both workers may need to update shared index file (docs/adr/README.md) - coordinate via swarmmail to avoid conflicts. Pattern: first worker adds placeholder entries for both, second worker corrects titles. Workers should store learnings via semantic-memory_store after completing ADRs. Use swarm_complete (not hive_close) to auto-release reservations and record learning signals.","created_at":"2025-12-19T00:16:21.306Z","tags":"swarm,coordination,adr,parallel-work,file-conflicts,best-practice"}
143
290
  {"id":"9b55a76c-d07d-4a7c-b9c9-ea49f13c140f","information":"@badass Router Design Decision (Dec 2024): Hybrid approach combining uploadthing and course-builder patterns.\n\n**From Uploadthing (COPY):**\n1. Type-state builder pattern with UnsetMarker for compile-time safety\n2. Immutable chain - each method returns new builder\n3. Effect-TS at handler layer ONLY, not in builder API (builder stays pure TypeScript for DX)\n4. Two-phase adapter transformation: extract framework context then normalize to Web Request\n5. Subpath exports for tree-shaking: @badass/next, @badass/astro, @badass/server\n\n**From Course-Builder (KEEP):**\n1. Framework-agnostic core with single entry function\n2. Provider plugin system for integrations (payment, transcription, etc.)\n3. Adapter interface separating DB from business logic\n4. Inngest for background jobs\n\n**Changes from Course-Builder:**\n1. Switch-based routing becomes procedure registry with type inference\n2. String actions become type-safe procedures: router.checkout.call(input)\n3. Manual request/response becomes middleware chain\n4. Massive adapter interface splits into ContentAdapter, CommerceAdapter, VideoAdapter\n5. Video processing extracts to @badass/video\n\n**Key Files:**\n- uploadthing builder: packages/uploadthing/src/_internal/upload-builder.ts\n- uploadthing adapters: packages/uploadthing/src/next.ts, express.ts\n- course-builder core: packages/core/src/lib/index.ts:24\n- course-builder next: packages/next/src/lib/index.ts:50\n- course-builder astro: packages/astro/server.ts:44","created_at":"2025-12-18T15:57:47.086Z"}
144
291
  {"id":"9b7e2971-9b37-4783-8640-2c3504ae4450","information":"@badass CLI Architecture Decision (Dec 2024): Multi-site CLI pattern like PlanetScale/Stripe CLI. Sites are self-contained bounded contexts with own Mux/Inngest/Stripe accounts. CLI manages multiple sites via ~/.badass/config.json. Commands: badass auth login site, badass site use site, badass --site=site command. Each site provides its own API, CLI routes to appropriate site based on config.","created_at":"2025-12-18T15:30:12.361Z"}
292
+ {"id":"9b9c19de-bf95-4289-b9a2-7c8148069791","information":"{\"id\":\"pattern-1766261761595-um9s30\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:16:01.595Z\",\"updated_at\":\"2025-12-20T20:16:01.595Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261761860.0","metadata":"{\"id\":\"pattern-1766261761595-um9s30\",\"kind\":\"pattern\",\"is_negative\":false}"}
293
+ {"id":"9ba4910c-3d14-46b0-b6d0-009aa4d00f98","information":"Sparkline implementation for canvas data visualization: Use deterministic pseudo-random generation based on node ID hash for consistent sparklines across renders. Pattern: hash string → use as seed for Math.sin() to create deterministic noise. Key insight: sparklines should be deterministic (same input = same output) but unique per node. Implementation uses normalized data (0-1 range) with color gradient mapping (sky → teal → green based on thresholds). Canvas roundRect() API simplifies rounded bar chart rendering. For activity bars, use linear gradient (createLinearGradient) for visual polish. Always normalize values before rendering to ensure consistent visual scaling.","created_at":"1766343220020.0","tags":"canvas,sparklines,data-visualization,deterministic,pseudo-random,tufte"}
294
+ {"id":"9beb9a62-a39a-47ae-a296-c6b77493187f","information":"E2E swarm coordination integration test implementation complete (Dec 22, 2025).\n\n**Test Coverage:**\n- Epic creation with hive_create_epic (JSON response format, not prose)\n- Worker registration via SwarmMailAdapter\n- Parallel file reservations (2 workers, exclusive locks)\n- Multi-worker task completion via swarm_complete\n- Verification of closed cells via completion response\n\n**Key Learnings:**\n1. `hive_create_epic` returns JSON with `{ success, epic: {...}, subtasks: [{...}] }`\n2. `swarm_complete` returns JSON with `{ success, closed, bead_id, ... }`\n3. SwarmMail `reserveFiles` signature: `(projectKey, agentName, paths[], options?)`\n4. DatabaseAdapter `query()` returns `{ rows: T[] }`, not array directly\n5. Events table column is `type`, not `event_type`\n6. Reservations table uses `path_pattern` and `agent_name` columns\n7. `createInMemorySwarmMailLibSQL` creates streams + memory schemas only (no hive projections)\n\n**Test Pattern:**\n```typescript\n// Setup\nconst swarmMail = await createInMemorySwarmMailLibSQL(testProjectPath);\nsetHiveWorkingDirectory(testProjectPath);\n\n// Epic + subtasks\nconst result = await hive_create_epic.execute({...});\nconst { epic, subtasks } = JSON.parse(result);\n\n// Workers\nawait swarmMail.registerAgent(path, name, {program, model});\nawait swarmMail.reserveFiles(path, name, [files], {reason, exclusive});\n\n// Complete\nconst completion = await swarm_complete.execute({...skip_verification, skip_review});\nconst parsed = JSON.parse(completion);\nexpect(parsed.closed).toBe(true);\n```\n\n**Limitations Found:**\n- swarm_progress/complete try to create new adapters instead of using test instance\n- Reservation release fails in tests (uses different DB instance)\n- No hive projection tables in test DB (cells are event-sourced only)\n\n**Test verifies full coordination flow end-to-end without external dependencies.**","created_at":"1766380887793.0","tags":"e2e,integration-test,swarm-coordination,hive,swarm-mail,libSQL,testing-patterns"}
145
295
  {"id":"9c0a4991-16ab-4571-9010-f77741573540","information":"{\"id\":\"pattern-1765670644773-jturji\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T00:04:04.773Z\",\"updated_at\":\"2025-12-14T00:04:04.773Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T00:04:04.981Z","metadata":"{\"id\":\"pattern-1765670644773-jturji\",\"kind\":\"pattern\",\"is_negative\":false}"}
146
296
  {"id":"9d11a24b-119a-473d-b1d3-311602c6cbaa","information":"{\"id\":\"test-1766074742680-yt5vhmvkfzl\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:19:02.680Z\",\"raw_value\":1}","created_at":"2025-12-18T16:19:02.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:19:02.680Z\"}"}
297
+ {"id":"9d1875fc-6598-46fb-b297-b23656a8dbcb","information":"{\"id\":\"test-1766264315783-eqtqfr2j6y6\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:58:35.783Z\",\"raw_value\":1}","created_at":"1766264316016.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:58:35.783Z\"}"}
147
298
  {"id":"9d459798-bc90-4947-9c70-0b9bb9526e42","information":"Memory schema migrations in swarm-mail: Created v9 migration that adds memories and memory_embeddings tables to shared PGLite database. Critical: Must add \"CREATE EXTENSION IF NOT EXISTS vector;\" at start of migration SQL before using vector type. Integrated by importing memoryMigrations into streams/migrations.ts and spreading into main migrations array. Pattern: Module migrations append to main array (hive=v7-8, memory=v9). Tests verify table structure, indexes (HNSW, GIN, B-tree), cascade deletes, and 1024-dim vector storage. Memory schema uses TEXT ids, TIMESTAMPTZ timestamps, JSONB metadata, vector(1024) embeddings.","created_at":"2025-12-18T18:59:18.304Z","tags":"swarm-mail,migrations,pgvector,schema,pglite,memory"}
148
299
  {"id":"9ef20adf-0850-48e2-83b9-9af8f0976182","information":"Swarm Wave-based coordination pattern observed: When task instructions explicitly say \"WAIT for Wave1-X and Wave1-Y\", this indicates sequential dependency gates. If file reservation conflicts occur with expected dependencies, agent should:\n\n1. Check if prerequisite files/dirs exist in old state (confirms prereqs not done)\n2. Send BLOCKED message to coordinator with blocker details\n3. Update bead status to blocked\n4. Be patient - conflict holder likely working on prerequisite\n5. Don't attempt workarounds - the sequential ordering exists for a reason\n\nIn this case: bd-lf2p4u-mja6npihvzm (AdapterRename) correctly blocked waiting for Wave1-DirRename and Wave1-TypeRename. File conflict with GoldHawk on beads-adapter.ts was expected since that file needs to be moved/renamed by prereqs first.\n\nAnti-pattern: Trying to work around prerequisites by renaming imports before files are renamed - breaks everything.","created_at":"2025-12-17T15:51:25.825Z"}
149
300
  {"id":"9f18ab25-3898-4a71-866b-aad1627a6498","information":"Adapter factory pattern for event-sourced systems: createAdapter(db: DatabaseAdapter, projectKey: string) factory takes a DatabaseAdapter and returns interface with high-level operations. Delegates to store.ts for event operations (appendEvent, readEvents) and projections.ts for queries (getBead, queryBeads). This enables dependency injection and testing with different databases. Key: adapter methods create events with correct type, then call appendEvent(event, projectPath, db) to persist. Projections update automatically via event handlers. Example: createBead() generates bead_created event, appends it, then queries projection to return created bead.","created_at":"2025-12-16T22:08:24.450Z","metadata":"{\"context\":\"swarm-mail architecture\"}","tags":"adapter-pattern,event-sourcing,cqrs,dependency-injection"}
301
+ {"id":"a01b1e63-b02d-49fb-b0d4-48db482b6f22","information":"{\"id\":\"test-1766256912440-5hizpp3yl8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T18:55:12.440Z\",\"raw_value\":1}","created_at":"1766256912635.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T18:55:12.440Z\"}"}
302
+ {"id":"a02ef17d-6ac1-4575-8bd3-6d1854241f80","information":"checkSwarmHealth() and checkHealth() (agent-mail) were throwing \"has been removed\" errors instead of working. These were deprecated during PGlite → libSQL migration but never re-implemented.\n\nFix for checkSwarmHealth(): Use getSwarmMailLibSQL() adapter pattern, test connectivity with \"SELECT 1\", return { healthy: boolean, database: \"libsql\" }. Implemented in swarm-mail.ts.\n\nFix for checkHealth(): Delegate to checkSwarmHealth(). No need to duplicate logic. Implemented in agent-mail.ts.\n\nBoth functions are used by plugin tools (swarmmail_health) and internal health checks (tool-availability.ts, compaction-hook.ts). Leaving them broken would break plugin's health monitoring.\n\nPattern: When migrating infrastructure (PGlite → libSQL), don't just throw deprecation errors for public APIs. Either remove the API entirely or re-implement with new infrastructure. Half-deprecated functions break consumers.","created_at":"1766383554830.0","tags":"swarm-mail,health-check,deprecation,migration,libsql,pglite"}
150
303
  {"id":"a0921dff-b9b1-4cd1-b555-9acc6fe23e2f","information":"{\"id\":\"test-1766074455925-j3xb65rzg2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:14:15.925Z\",\"raw_value\":1}","created_at":"2025-12-18T16:14:16.152Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:14:15.925Z\"}"}
304
+ {"id":"a0fc22e7-2d15-4993-9fe4-e7af40e93cab","information":"{\"id\":\"test-1766260843953-vnyht2xat4p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:00:43.953Z\",\"raw_value\":1}","created_at":"1766260844173.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:00:43.953Z\"}"}
151
305
  {"id":"a1c9240a-b245-4109-a497-be818fa82127","information":"Effect.succeed() vs Effect.gen() in middleware: The @badass/core middleware implementation detects Effect objects by checking for \"_tag\" property. Effect.succeed() returns objects with \"_tag\", but Effect.gen() returns objects with \"_id\" and \"_op\" instead. Result: Effect.succeed() gets unwrapped properly via Effect.runPromise, but Effect.gen() returns the raw Effect object. Workaround: Use Effect.succeed() for simple context values in middleware, avoid Effect.gen() for middleware context functions.","created_at":"2025-12-18T16:32:14.305Z","tags":"effect-ts,middleware,badass-core,gotcha,effect-succeed,effect-gen"}
306
+ {"id":"a2e73118-c51b-411f-9ee6-fa11bb37a733","information":"{\"id\":\"pattern-1766263854559-5dy1gz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:50:54.559Z\",\"updated_at\":\"2025-12-20T20:50:54.559Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263854807.0","metadata":"{\"id\":\"pattern-1766263854559-5dy1gz\",\"kind\":\"pattern\",\"is_negative\":false}"}
307
+ {"id":"a43fb38d-b67c-40df-9a28-02d5e5ca529b","information":"PR triage context efficiency pattern: ALWAYS fetch metadata first (id, path, line, author) using `gh api --jq` to keep responses compact (~100 bytes per comment vs ~5KB with body). Only fetch full comment bodies for actionable items (human comments, high severity). This prevents context exhaustion on PRs with 50+ CodeRabbit comments. Triage into buckets: fix-with-code (implement + reply), won't-fix (acknowledge + explain), tracked-in-cell (create hive cell + link). Use batch acknowledgment for low-priority bot comments. Key insight: 50 metadata entries = ~5KB, 50 full bodies = ~500KB. Strategy is metadata-first categorization, then selective body fetches. Created pr-triage skill with full gh API patterns at .opencode/skills/pr-triage/","created_at":"1766424320611.0","tags":"pr-triage,github,context-efficiency,coderabbit,gh-api,workflow"}
308
+ {"id":"a46dc0eb-beae-4e1a-8261-a378ada89125","information":"{\"id\":\"test-1766262988210-2t45j8b22aw\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:36:28.210Z\",\"raw_value\":1}","created_at":"1766262988691.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:36:28.210Z\"}"}
309
+ {"id":"a4dbe094-d77b-4763-887f-13aee7dab5b6","information":"Implemented observability tools for OpenCode Swarm Plugin. KEY LEARNINGS: (1) swarm-mail analytics queries come in two forms - functions that take filters (failedDecompositions, strategySuccessRates, etc.) and objects with buildQuery methods (scopeViolations, taskDuration, etc.). Check for .buildQuery property before calling. (2) SwarmMailAdapter has getDatabase() method that returns the underlying DatabaseAdapter - use this instead of creating new libSQL adapters. (3) In-memory test databases work with createInMemorySwarmMailLibSQL(), no need for complex event creation in tests. (4) All analytics query functions must be exported from swarm-mail/src/index.ts, not just from analytics/index.ts, for plugin imports to work. (5) Plugin tools should use getSwarmMailLibSQL(projectPath) then .getDatabase() for consistent database access across tools.","created_at":"1766434941736.0","tags":"observability,analytics,plugin-tools,swarm-mail,testing"}
310
+ {"id":"a5f19ba7-d985-45b7-a3b3-d95e913d66fe","information":"Drizzle ORM Migration Pattern for Event-Sourced Projections\n\n**Context:** Migrated hive subsystem from raw SQL (DatabaseAdapter) to Drizzle ORM while maintaining backward compatibility with legacy code.\n\n**Key Pattern:**\n1. Convert projection layer (write operations) to Drizzle first - handles INSERTs, UPDATEs, DELETEs\n2. Convert event store operations (read/write events table) to Drizzle\n3. Create bidirectional adapters: `toSwarmDb()` (DatabaseAdapter → Drizzle) and `toDatabaseAdapter()` (Drizzle → DatabaseAdapter)\n4. Leave complex query layer (queries.ts) using raw SQL via DatabaseAdapter wrapper - avoid premature optimization\n\n**Why This Works:**\n- Event sourcing writes are simple (INSERT event, UPDATE projection) - perfect for Drizzle\n- Complex queries (CTEs, JSON operators, window functions) are messy in Drizzle - keep as raw SQL\n- Bidirectional adapters allow gradual migration without breaking existing code\n- Schema stays as single source of truth in Drizzle, but execution can be either\n\n**Implementation Details:**\n- `toDatabaseAdapter(db: SwarmDb)` wraps Drizzle with `.query()` and `.exec()` methods\n- Uses `sql.raw()` for executing raw SQL strings through Drizzle\n- Converts PostgreSQL `$1, $2` placeholders to SQLite `?` via `convertPlaceholders()`\n- Test helper schema MUST match Drizzle schema exactly (discovered `created_at` column mismatch)","created_at":"1766296492394.0","tags":"drizzle,migration,event-sourcing,adapter-pattern,backward-compatibility"}
152
311
  {"id":"a675722b-2e10-44c8-ac19-9525b53fe09c","information":"Investigation of \"Invalid Date\" error in hive JSONL parsing (Dec 19, 2025): The hypothesis was that jsonl.ts incorrectly casts ISO date strings as numbers. After thorough investigation, the code is CORRECT:\n\n1. Database stores dates as BIGINT (epoch milliseconds)\n2. Export (DB → JSONL): `new Date(epochNumber).toISOString()` ✅ Correct\n3. Import (JSONL → DB): `new Date(isoString).getTime()` ✅ Correct \n4. PGlite returns BIGINT as `number` type, not string ✅\n5. All 275 hive tests pass including new date-handling tests ✅\n\nThe task was based on incorrect hypothesis. The code at lines 207-210, 347-348, 465-468 in jsonl.ts and line 135 in merge.ts is working as designed. Added comprehensive date-handling tests to prevent future regressions.","created_at":"2025-12-19T17:41:17.868Z","tags":"investigation,dates,jsonl,hive,no-bug-found,test-coverage"}
312
+ {"id":"a719cd9c-39c1-4f01-8982-db6a86df02b0","information":"Drizzle ORM migration for hive/store.ts requires matching test schema in test-libsql.ts. When migrating event store operations to Drizzle, the Drizzle schema may define columns (like `created_at TEXT DEFAULT (datetime('now'))`) that aren't present in test database schemas. Solution: Update test schema in test-libsql.ts to include all columns from Drizzle schema, even if they're optional/nullable. Use simple `created_at TEXT` without DEFAULT function in test schemas to avoid SQLite syntax errors (SQLite doesn't support function calls in DEFAULT except CURRENT_TIMESTAMP). Pattern: Drizzle functions take `SwarmDb` as first parameter, wrapper functions match old signatures with `dbOverride` as last parameter, use `toDrizzleDb()` to convert DatabaseAdapter → SwarmDb.","created_at":"1766332024628.0","tags":"swarm-mail,drizzle,migration,testing,schema,event-store"}
313
+ {"id":"a73e01b4-a4e5-44ad-b9a6-41e7c7c8ca99","information":"{\"id\":\"pattern-1766265161035-2c4b4l\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:12:41.035Z\",\"updated_at\":\"2025-12-20T21:12:41.035Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766265161251.0","metadata":"{\"id\":\"pattern-1766265161035-2c4b4l\",\"kind\":\"pattern\",\"is_negative\":false}"}
314
+ {"id":"a74718c2-2788-4695-bd26-433d2e3ffdf4","information":"{\"id\":\"pattern-1766261666680-4qs1ny\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:14:26.680Z\",\"updated_at\":\"2025-12-20T20:14:26.680Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261666909.0","metadata":"{\"id\":\"pattern-1766261666680-4qs1ny\",\"kind\":\"pattern\",\"is_negative\":false}"}
315
+ {"id":"a78737c0-1f8f-4a24-a767-b875c5be5ba3","information":"{\"id\":\"pattern-1766260891441-scx84b\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:01:31.441Z\",\"updated_at\":\"2025-12-20T20:01:31.441Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260891678.0","metadata":"{\"id\":\"pattern-1766260891441-scx84b\",\"kind\":\"pattern\",\"is_negative\":false}"}
153
316
  {"id":"a7987c85-4b2e-4332-84ff-68d035606e5f","information":"Process exit hook pattern for PGLite flush safety net: Register process.on('beforeExit', async (code) => {...}) at module level to catch dirty cells before process exits. Pattern: iterate adapterCache, call FlushManager.flush() for each project. Critical: Use two flags for safety - exitHookRegistered (prevent duplicate registration) and exitHookRunning (prevent re-entry during async flush). Register hook immediately when module is imported via registerExitHook() call at module level. Non-fatal errors: wrap each flush in try/catch and log warnings. This is a safety net for the lazy write pattern where operations mark dirty and explicit flush writes to disk - catches any dirty cells that weren't explicitly synced before process exit. Tested with: beforeExit event emission, idempotency (multiple triggers), and graceful handling of no dirty cells.","created_at":"2025-12-19T17:06:21.423Z","tags":"process-exit-hook,safety-net,pglite,flush,idempotent,module-level"}
154
317
  {"id":"a7dcbbb8-af6b-45f1-b4d0-7fdefda3e99b","information":"When documenting plugin hooks in OpenCode, always add the hook event to the Events section list AND provide a complete example in the Examples section. The session.compacting hook allows plugins to inject custom context before LLM summarization during compaction - useful for preserving task state, decisions, and active work context across compaction boundaries.","created_at":"2025-12-17T17:59:20.017Z"}
155
318
  {"id":"a82a97ce-abb7-4d73-a288-5b49bf59ca74","information":"{\"id\":\"test-1766074638102-x5vrrbmco9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:18.102Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:18.330Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:18.102Z\"}"}
319
+ {"id":"a82ae6de-3d24-4851-9e02-87f2c5fb6e86","information":"## Swarm Decomposition: Remove PGLite, Port Effect Primitives to libSQL\n\n### Epic\n**Title:** Remove PGLite, Port Effect Primitives to libSQL, Integrate into Swarm\n\n**Description:** Complete removal of PGLite infrastructure (except migration tools), port all Effect-TS durable primitives to use libSQL/DatabaseAdapter, and integrate DurableLock + DurableDeferred into swarm worker coordination for file locking and task completion signals.\n\n**Upstream source:** https://github.com/durable-streams/durable-streams\n\n### Subtasks (7 total, validated)\n\n**Task 0: Port DurableLock to libSQL** (complexity: 3, parallel)\n- Files: lock.ts, lock.test.ts\n- Dependencies: none\n- Convert getDatabase() calls to accept DatabaseAdapter parameter\n\n**Task 1: Port DurableDeferred to libSQL** (complexity: 3, parallel)\n- Files: deferred.ts, deferred.test.ts\n- Dependencies: none\n- Convert getDatabase() calls to accept DatabaseAdapter parameter\n\n**Task 2: Port DurableCursor to libSQL** (complexity: 3, parallel)\n- Files: cursor.ts, cursor.integration-test.ts\n- Dependencies: none\n- Cursors table schema already updated (stream, checkpoint columns)\n\n**Task 3: Port DurableMailbox and ask pattern to libSQL** (complexity: 4, sequential)\n- Files: mailbox.ts, mailbox.test.ts, ask.ts, ask.integration-test.ts, layers.ts, index.ts\n- Dependencies: [0, 1, 2]\n- Update layers.ts for proper Effect service composition\n\n**Task 4: Remove PGLite from streams/index.ts** (complexity: 4, sequential)\n- Files: streams/index.ts, pglite.ts, src/index.ts\n- Dependencies: [0, 1, 2, 3]\n- Keep migrate-pglite-to-libsql.ts for migration CLI\n\n**Task 5: Integrate DurableLock into swarm file reservations** (complexity: 4, sequential)\n- Files: agent-mail.ts, swarm-mail.ts\n- Dependencies: [0, 4]\n- Replace current reservation system with DurableLock\n\n**Task 6: Integrate DurableDeferred into swarm task completion** (complexity: 4, sequential)\n- Files: swarm.ts, swarm-orchestrate.ts (in opencode-swarm-plugin)\n- Dependencies: [1, 4]\n- Enable cross-agent RPC pattern\n\n### Execution Order\n1. Spawn tasks 0, 1, 2 in parallel (Lock, Deferred, Cursor)\n2. Wait for all three, then spawn task 3 (Mailbox+ask)\n3. Wait for task 3, then spawn task 4 (Remove PGLite)\n4. Wait for task 4, then spawn tasks 5, 6 in parallel (Integration)\n\n### Blocker\nHive tools are broken due to cursors table schema change. Need to fix before spawning workers.","created_at":"1766333755376.0","tags":"swarm-decomposition,pglite-removal,effect-primitives,epic-plan,blocker"}
156
320
  {"id":"a849e675-58d3-4b5a-8c66-28e0dbbc297c","information":"{\"id\":\"test-1766001178291-jasc2x5op7s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-17T19:52:58.291Z\",\"raw_value\":1}","created_at":"2025-12-17T19:52:59.368Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-17T19:52:58.291Z\"}"}
321
+ {"id":"a8999f0a-57cb-450d-979a-ac8b122b7404","information":"{\"id\":\"pattern-1766260222122-wmr1cl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:22.118Z\",\"updated_at\":\"2025-12-20T19:50:22.118Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260222373.0","metadata":"{\"id\":\"pattern-1766260222122-wmr1cl\",\"kind\":\"pattern\",\"is_negative\":false}"}
157
322
  {"id":"a9034557-0634-45d0-b405-c0cdacd59c12","information":"{\"id\":\"test-1765386361375-9thynapgze\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:06:01.375Z\",\"raw_value\":1}","created_at":"2025-12-10T17:06:01.560Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:06:01.375Z\"}"}
323
+ {"id":"a921dc7d-4116-477d-98ee-dcf321eb1f75","information":"ACFS Contract Validation Pattern: Every swarm tool should call validateWorkerContract() FIRST before doing work. Check for: swarmmail_initialized, file reservations acquired, cell_id present, epic_id present. Fail fast with actionable error messages that explain HOW to fix, not just WHAT is missing. Example: \"Contract violation: swarmmail_init not called. Fix: Call swarmmail_init(project_path) before any file modifications.\" This prevents 80% of coordination bugs where workers call swarm_complete without proper setup. Source: Dicklesworthstone/agentic_coding_flywheel_setup contract.sh","created_at":"1766591003716.0","tags":"swarm,coordination,validation,contract,patterns,acfs"}
324
+ {"id":"ab7288ed-6ec8-4ff9-92ed-85c11445ddaf","information":"TDD pattern for structured error classes with context enrichment: Start with interface definition (ErrorContext), then write comprehensive tests covering construction, serialization, default values, and context population. Implement base class first with defaults (timestamp auto-populated, suggestions/recent_events default to empty arrays), then specialized error classes extend with just name override. Key insight: TypeScript's Partial<ErrorContext> allows flexible construction while maintaining type safety. Tests verify both minimal (message only) and maximal (all context fields) construction paths. The pattern scales well - 16 tests cover base + 4 specialized error classes comprehensively in under 200 lines.","created_at":"1766433215869.0","tags":"tdd,error-handling,typescript,observability,swarm-mail"}
325
+ {"id":"ac29eb86-4647-4d59-81c9-07bcfa7093bf","information":"PGlite dynamic import pattern for bundled code: When using PGlite in code that gets bundled with Bun, static imports cause WASM files to load at module import time, which fails if dist/ doesn't include the .data files. Solution: (1) Remove static imports: `import { PGlite } from \"@electric-sql/pglite\"`, (2) Add dynamic imports inside functions: `const { PGlite } = await import(\"@electric-sql/pglite\")`, (3) Use `any` type for db variable to avoid TypeScript generic type errors after dynamic import, (4) Use type assertions on query results: `await db.query(...) as { rows: MyType[] }`. This defers WASM loading until the function is actually called, preventing build-time ENOENT errors.","created_at":"1766259023346.0","tags":"pglite,dynamic-import,wasm,bundler,typescript,bun"}
158
326
  {"id":"acb950b8-656d-4488-a930-5176968d666f","information":"Integration testing auto-migration in createMemoryAdapter: Tests run against in-memory PGLite databases using createInMemorySwarmMail(). Key insight: If ~/.semantic-memory/memory exists on test machine, migration actually runs and imports real memories during tests. Tests must handle both scenarios (legacy DB exists vs doesn't exist) using toBeGreaterThanOrEqual(0) instead of toBe(0). This proved the migration works end-to-end in real conditions - 177 actual memories migrated successfully during test runs. Critical: Use resetMigrationCheck() in beforeEach() for test isolation (module-level flag persists across tests without reset). Access DatabaseAdapter via swarmMail.getDatabase(), not swarmMail.db (property doesn't exist).","created_at":"2025-12-18T21:26:02.233Z","metadata":"{\"cell_id\":\"mjbxj68dmtb\",\"epic_id\":\"mjbxj67vqil\",\"test_file\":\"memory.integration.test.ts\"}","tags":"testing,integration-tests,pglite,migration,memory,swarm-mail"}
327
+ {"id":"ad3f2d32-9a85-4298-986e-249a10f9a643","information":"Implemented `swarm log` CLI command with TDD approach. Key implementation details: 1) Log files are in ~/.config/swarm-tools/logs/ with .Nlog extension (e.g., swarm.1log, compaction.1log). 2) Log format is JSON lines with level (10=trace, 20=debug, 30=info, 40=warn, 50=error, 60=fatal), time (ISO), module (string), msg (string). 3) Filtering supports: module (positional arg), --level (warn/error/etc), --since (30s/5m/2h/1d format), --limit (default 50). 4) Output modes: colored formatted text (default) or --json for piping to jq. 5) Used parseArgs pattern from cli-builder skill - no dependencies, uses Node util module. 6) TDD pattern: wrote all test helpers first (parseLogLine, filterLogsByLevel, filterLogsByModule, etc) then implemented in swarm.ts. Tests verify parsing, filtering, formatting, and file reading logic.","created_at":"1766593177192.0","tags":"swarm,cli,logging,tdd,filtering,json"}
159
328
  {"id":"ad85dcb1-ae91-4b2f-8857-16a5d8747969","information":"3 High-Value Improvements for opencode-swarm-plugin (Dec 2024):\n\n1. **Prompt Template Registry with Hot-Reload**\n - Problem: Prompts hardcoded in swarm-prompts.ts, require rebuild to change\n - Solution: External templates in ~/.config/opencode/swarm/prompts/*.md with variable interpolation\n - Enables: A/B testing, project-specific customization, hot-reload during dev\n - Inspired by: mdflow template variables, Release It! \"configuration as UI\"\n\n2. **Worker Handoff Protocol with Structured Context** (RECOMMENDED FIRST)\n - Problem: Workers ignore 400-line SUBTASK_PROMPT_V2, confused about scope\n - Solution: Structured WorkerHandoff envelope with machine-readable contract (files_owned, success_criteria) + minimal prose\n - Enables: Contract validation in swarm_complete, automatic scope creep detection, smaller prompts\n - Inspired by: \"Patterns for Building AI Agents\" subagent handoff, Bellemare event contracts\n\n3. **Adaptive Decomposition with Feedback Loops**\n - Problem: Decomposition quality varies, learning system doesn't feed back into strategy selection\n - Solution: Strategy registry with outcome-weighted selection (confidence * success_rate / log(completion_time))\n - Enables: Self-improving decomposition, auto-deprecation of failing strategies, transparent reasoning\n - Inspired by: Bellemare event replay, mdflow adapter registry, existing pattern-maturity system\n\nImplementation order: #2 then #1 then #3 (handoff protocol creates structured signals needed for adaptive decomposition)","created_at":"2025-12-18T17:20:56.752Z"}
160
329
  {"id":"ae4ce932-255c-43bd-b4b0-64049d0afecf","information":"Database testing pattern for PGlite + pgvector in Effect-TS: Use isolated temp databases per test with makeTempDbPath() creating unique tmpdir paths. Critical: PGlite stores data in a DIRECTORY (not a file), so dbPath.replace(\".db\", \"\") gives the actual data dir. Cleanup with rmSync(dbDir, {recursive: true}). Effect services test via Effect.gen + Effect.provide(layer) + Effect.runPromise. Vector dimension errors (e.g., 1024 vs 3) throw from PGlite with \"expected N dimensions, not M\" - test with try/catch, not .rejects since Effect may wrap errors. Test decay by setting createdAt in past (Date.now() - 90*24*60*60*1000) and validating decayFactor < 0.6. Ordering tests need explicit timestamps, not Sleep delays.","created_at":"2025-12-18T17:16:46.245Z"}
161
330
  {"id":"ae77ee44-0037-451b-8465-3dce4630e18a","information":"{\"id\":\"pattern-1766080417904-ucxl91\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:53:37.904Z\",\"updated_at\":\"2025-12-18T17:53:37.904Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:53:38.137Z","metadata":"{\"id\":\"pattern-1766080417904-ucxl91\",\"kind\":\"pattern\",\"is_negative\":false}"}
331
+ {"id":"af97ab19-575c-4db2-9c60-3594d3698f5d","information":"{\"id\":\"test-1766259538220-8g5a5mcpk7e\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T19:38:58.220Z\",\"raw_value\":1}","created_at":"1766259538439.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T19:38:58.220Z\"}"}
332
+ {"id":"b14efc93-45be-4ec2-9ca8-ee14f23a88b4","information":"{\"id\":\"pattern-1766349513132-nmk7j3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:38:33.132Z\",\"updated_at\":\"2025-12-21T20:38:33.132Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766349513379.0","metadata":"{\"id\":\"pattern-1766349513132-nmk7j3\",\"kind\":\"pattern\",\"is_negative\":false}"}
333
+ {"id":"b37f55db-d1bf-4249-a757-39724bdf18f8","information":"AI SDK v6 Lesson 02-02 (Text Classification) verification: All steps pass cleanly on fresh clone. generateText + Output.array() pattern works as documented. Key progression: 1) Basic schema with z.enum for categories 2) Adding urgency field via schema extension 3) Multi-language with z.string() returns codes by default 4) Adding .describe() to language field produces full names. No compilation errors, outputs match lesson examples exactly. Students can follow this lesson without issues.","created_at":"1766455232378.0","tags":"ai-sdk,lesson-verification,text-classification,Output.array,zod,v6-patterns"}
162
334
  {"id":"b3c1b1c3-0c21-41a7-98cc-868df103875b","information":"When assigned a task to fix code that was already fixed: verify the current state first before making changes. In this case, projections.test.ts table names were already correct (bead_* not cell_*). The task description was outdated or the fix was already applied. Always read the file to confirm the problem exists before attempting fixes.","created_at":"2025-12-18T15:39:22.185Z"}
163
335
  {"id":"b3cbbf0c-981a-4f4f-8fa3-45175796e338","information":"{\"id\":\"test-1765386438362-dn6i6pzsef\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:07:18.362Z\",\"raw_value\":1}","created_at":"2025-12-10T17:07:18.549Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:07:18.362Z\"}"}
336
+ {"id":"b465f06f-ce75-47c7-84b5-567aa10e12b0","information":"AI SDK v6 Lesson 02-03 (Automatic Summarization) verification: All steps pass cleanly. generateText + Output.object() pattern works perfectly for summarization. Key progression: 1) Basic schema with 4 string fields (headline, context, discussionPoints, takeaways) 2) Adding .describe() to each field with specific constraints (Max 5 words, Max 2 sentences, **Include names**) produces dramatically better output. Evidence: headline went from 13 words to 5 words, takeaways correctly included names (Liam Johnson, James Smith, Emma Thompson). Minor issue: lesson uses any[] type parameter which triggers linting warning - this is a lesson code quality issue, not a verification blocker. Students can follow this lesson without issues.","created_at":"1766455545834.0","tags":"ai-sdk,lesson-verification,automatic-summarization,Output.object,generateText,zod,v6-patterns,schema-refinement,describe"}
337
+ {"id":"b495a2d7-8ba7-4004-b664-c26b299ebe8d","information":"{\"id\":\"test-1766265307855-6gleomdh3a7\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T21:15:07.855Z\",\"raw_value\":1}","created_at":"1766265308126.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T21:15:07.855Z\"}"}
338
+ {"id":"b4d79fa3-ec65-4a78-9da1-cd2e83e1b423","information":"gh API reply syntax for PR comments: Use `-F in_reply_to=COMMENT_ID` (not `-f in_reply_to_id`). The `-F` flag (capital F) and `in_reply_to` (not `in_reply_to_id`) are required for posting PR comment replies. Discovered when all 21 PR #54 comment replies failed with the old syntax.","created_at":"1766424662788.0","tags":"github,gh-cli,api,pr-comments,gotcha"}
164
339
  {"id":"b5c28f9e-6f13-40f3-8b7b-ed5191490723","information":"{\"id\":\"pattern-1765751833365-kd7r4x\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T22:37:13.365Z\",\"updated_at\":\"2025-12-14T22:37:13.365Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T22:37:13.577Z","metadata":"{\"id\":\"pattern-1765751833365-kd7r4x\",\"kind\":\"pattern\",\"is_negative\":false}"}
340
+ {"id":"b619f0d6-02c3-40b8-9924-b5b0b079e522","information":"PGlite database existence check: Don't just check if directory exists - check for PG_VERSION file. PGlite creates PostgreSQL-style database with PG_VERSION file in the root. Checking only directory existence with existsSync(dir) is insufficient because empty directories will pass the check, then PGlite.create() will fail with ENOENT trying to access missing database files. Correct check: const pgVersionFile = join(dbPath, \"PG_VERSION\"); return existsSync(pgVersionFile). This prevents migration code from attempting to open non-existent databases.","created_at":"1766257627744.0","metadata":"{\"pattern\":\"legacyDatabaseExists check\",\"location\":\"packages/swarm-mail/src/memory/migrate-legacy.ts\"}","tags":"pglite,database,file-check,migration,enoent"}
165
341
  {"id":"b6a9b8dc-0da0-43eb-ba32-14d4bb2bd88b","information":"@badass UI Components Reference (Dec 2024): Key extractable components from ai-hero:\n\n**High Priority (Ready to Extract):**\n1. DateTimePicker - apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/date-time-picker/date-time-picker.tsx:40 - React Aria based, self-contained\n2. CRUD Dialog Pattern - apps/ai-hero/src/app/admin/tags/tag-crud-dialog.tsx:34 - Generic pattern, 90% identical across uses\n3. Sidebar Layout - apps/ai-hero/src/app/(content)/cohorts/[slug]/_components/cohort-sidebar.tsx:13 - Sticky with mobile floating CTA\n\n**Medium Priority (Needs Refactoring):**\n4. withResourceForm HOC - apps/ai-hero/src/components/resource-form/with-resource-form.tsx:219 - Needs dependency injection to remove app-specific imports\n5. ListResourcesEdit - apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84 - Needs search provider abstraction (currently Typesense-coupled)\n\n**Shared UI Package (Already Extracted):**\n- packages/ui/resources-crud/edit-resources-form.tsx:28 - Mobile/desktop responsive form\n- packages/ui/resources-crud/create-resource-form.tsx - Resource creation\n\n**Architecture Patterns:**\n- Config-driven forms: Zod schema + config object equals full CRUD UI\n- Tool panel system: Pluggable tools with icon + component\n- Batch operations: Drag-and-drop with debounced batch saves\n- Factory pattern: createWorkshopFormConfig() for type-safe config","created_at":"2025-12-18T15:50:07.107Z"}
342
+ {"id":"b6b71724-e02b-42c5-8c34-e4ae6109aa00","information":"pdf-library AutoTagger auto-accept pattern: (1) Use extractRAGContext() to find relevant concepts via content embedding (threshold 0.5, limit 5) and add to LLM prompt - helps LLM match existing instead of proposing duplicates. (2) After LLM enrichment, call autoAcceptProposals() which generates embeddings for each proposal, checks findSimilarConcepts(embedding, 0.85) for duplicates, and auto-inserts novel concepts with taxonomy.addConcept() + storeConceptEmbedding(). (3) AutoTagger.enrich() now requires TaxonomyService | Ollama dependencies (updated interface). (4) validateProposedConcepts exported for testing. (5) JSON file workflow completely removed - no more manual proposal review, all automatic via embedding similarity.","created_at":"1766257443255.0","tags":"pdf-library,autotagger,taxonomy,embeddings,rag,auto-accept,deduplication"}
343
+ {"id":"b6e2cc14-5344-49a3-8ebc-3bad012f1d38","information":"FTS5 MATCH queries in libSQL/SQLite require quoting search terms to avoid operator parsing issues. Without quotes, hyphens are parsed as MINUS operators. Example: \"unique-keyword-12345\" → \"unique\" MINUS \"keyword\" → \"no such column: keyword\" error. Solution: Wrap query in double quotes, escaping existing quotes: `const quotedQuery = `\"${searchQuery.replace(/\"/g, '\"\"')}\"`;`. Affects all FTS5 full-text search implementations.","created_at":"1766260792853.0","metadata":"{\"file\":\"packages/swarm-mail/src/memory/store.ts\",\"function\":\"ftsSearch\",\"error_pattern\":\"no such column: keyword\"}","tags":"fts5,libsql,sqlite,full-text-search,query-syntax,gotcha"}
344
+ {"id":"b833bde6-8f1e-4d3a-948e-f0eef242cab3","information":"{\"id\":\"test-1766261005894-cuvagqzbes5\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:03:25.894Z\",\"raw_value\":1}","created_at":"1766261006168.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:03:25.894Z\"}"}
166
345
  {"id":"b89c6800-cc8a-477b-8bce-81ad325b1e87","information":"Enhanced doctor command in pdf-library with comprehensive health checks and --fix flag.\n\n**Implementation (TDD - all tests green):**\n\n1. **New Health Checks (5 total)**:\n - WAL files: existing assessWALHealth() (50 files/50MB thresholds)\n - Corrupted directories: checkCorruptedDirs() detects \" 2\" suffix pattern (\"base 2\", \"pg_multixact 2\")\n - Daemon status: async isDaemonRunning(daemonConfig) via Effect.promise\n - Ollama connectivity: library.checkReady() with try/catch\n - Orphaned data: library.repair() returns chunks/embeddings counts\n\n2. **New Functions**:\n - `checkCorruptedDirs(libraryPath, dirs)`: Returns CorruptedDirsResult with issues array\n - `assessDoctorHealth(data)`: Combines all checks into DoctorHealthResult with HealthCheck[] array\n\n3. **Auto-Repair with --fix flag**:\n - Parses opts.fix from args via parseArgs()\n - Removes corrupted directories with rmSync(path, { recursive: true, force: true })\n - Orphaned data auto-cleaned via existing repair() call\n - Shows recommendations when --fix not used\n\n4. **Key Patterns**:\n - Used Effect.gen for async flow (yield* Effect.promise for isDaemonRunning)\n - DaemonConfig requires: socketPath, pidPath, dbPath (all derived from config.libraryPath)\n - WAL health check handles non-existent pg_wal gracefully (assumes healthy)\n - All checks graceful-fail: database not existing doesn't crash, returns healthy defaults\n\n5. **Test Coverage**: 11 new tests covering checkCorruptedDirs edge cases and assessDoctorHealth combinations\n\n**Bug Prevention**: Always await isDaemonRunning with Effect.promise, never call synchronously (returns Promise<boolean>).","created_at":"2025-12-19T17:29:44.709Z","tags":"pdf-library,doctor-command,health-checks,tdd,effect-ts,cli,auto-repair"}
167
346
  {"id":"b8f28a17-d8a2-44e1-8b72-f74e2ae3a98a","information":"{\"id\":\"test-1765653517058-z98hhewgo3r\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:18:37.058Z\",\"raw_value\":1}","created_at":"2025-12-13T19:18:37.257Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:18:37.058Z\"}"}
347
+ {"id":"b9f53e2c-8086-4bab-95b1-0529595cb2f1","information":"## Hive Database Schema Bug - Root Cause and Fix\n\n**Error:** `SQLITE_ERROR: no such column: project_key` when running hive tools\n\n**Root Cause:** The libSQL database had tables with OLD schemas that were missing the `project_key` column. Specifically:\n- `messages` table was missing `project_key` column\n- `events` table had wrong schema (aggregate_id/aggregate_type/payload instead of project_key/timestamp/data)\n\n**Why it happened:**\n1. Tables were created by an older version of the code with different schema\n2. `CREATE TABLE IF NOT EXISTS` doesn't update existing tables\n3. `CREATE INDEX IF NOT EXISTS idx_messages_project ON messages(project_key)` failed because the column didn't exist\n4. The `schema_version` table was either missing or had incorrect entries\n\n**Debug approach that worked:**\n1. Added `SWARM_DEBUG=1` environment variable check\n2. Added console.error logging at each step of schema initialization\n3. Traced the exact SQL statement that failed\n4. Used `PRAGMA table_info(tablename)` to check actual column structure\n\n**Fix:**\n1. Drop and recreate tables with correct schema (safe if empty)\n2. Or use ALTER TABLE to add missing columns\n3. Ensure schema_version table accurately reflects applied migrations\n4. Delete fake schema_version entries and let migrations run properly\n\n**Prevention:**\n- Always check schema_version table matches actual database state\n- Use `swarm db` command to verify database health\n- Consider adding schema validation on startup that compares expected vs actual columns","created_at":"1766294004408.0","tags":"debugging,libsql,schema,migrations,hive,database,project_key"}
168
348
  {"id":"ba639de8-848f-4ced-92f5-9401dc270417","information":"{\"id\":\"test-1765664182311-clxw0y6xk4b\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:16:22.311Z\",\"raw_value\":1}","created_at":"2025-12-13T22:16:22.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:16:22.311Z\"}"}
349
+ {"id":"baaacd02-244f-4098-86b9-cd5c779c2e35","information":"{\"id\":\"pattern-1766263664511-zduc3o\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:47:44.511Z\",\"updated_at\":\"2025-12-20T20:47:44.511Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263664729.0","metadata":"{\"id\":\"pattern-1766263664511-zduc3o\",\"kind\":\"pattern\",\"is_negative\":false}"}
350
+ {"id":"bab8d96e-4698-48b2-a1ba-aa3252938028","information":"pdf-brain enrichment bug: concepts extracted but not stored in document_concepts join table.\n\nROOT CAUSE: AutoTagger.enrich() returns concepts array but never calls taxonomy.assignToDocument(). The concepts end up only in the tags array (as leaf names without category prefix, e.g., \"instructional-design\" instead of \"education/instructional-design\").\n\nDATA STATE:\n- documents.tags: [\"instructional-design\", \"cognitive-load\", ...] (leaf names only)\n- concepts.id: \"education/instructional-design\" (full path with category)\n- document_concepts: EMPTY (join table never populated)\n- concept_embeddings: 1641 rows (all concepts have embeddings)\n\nFIX REQUIRED:\n1. Backfill: Match tags to concepts by normalizing and comparing leaf portions\n2. Fix enrichment: After LLM returns concepts array, call taxonomy.assignToDocument() for each\n\nBACKFILL SCRIPT: scripts/migration/backfill-document-concepts.ts\n- Builds tag -> concept_id mapping (leaf + pref_label + alt_labels)\n- For each doc, matches tags to concepts\n- Inserts into document_concepts with confidence=0.8, source=\"backfill\"\n\nWHY THIS MATTERS: Without document_concepts populated, concept embeddings are useless for search expansion. The whole point is: query -> find similar concepts -> expand to all docs tagged with those concepts.","created_at":"1766331389808.0","tags":"pdf-brain,enrichment,bug,taxonomy,concepts,document_concepts,backfill"}
169
351
  {"id":"bad6e714-cf98-4609-a8f0-44c2e636901e","information":"Added legacy semantic-memory migration prompt to swarm setup CLI. Pattern follows existing .beads migration flow: 1) Check legacyDatabaseExists() after dependency checks, before model selection. 2) Call getMigrationStatus() to show counts (total, withEmbeddings). 3) Prompt user with p.confirm. 4) Create target DB with getSwarmMail(cwd). 5) Run migrateLegacyMemories({ targetDb, onProgress }) with spinner. 6) Show detailed results (migrated, skipped, failed). Key insight: Migration functions are exported from swarm-mail/src/memory/migrate-legacy.ts and re-exported from swarm-mail/src/index.ts. Needed to rebuild swarm-mail package after adding exports. Placement: lines 1672-1735 in bin/swarm.ts, right after .beads migration, before model selection.","created_at":"2025-12-18T21:09:36.891Z","tags":"cli,migration,semantic-memory,swarm-mail,legacy-migration,setup"}
352
+ {"id":"bbad7825-bc3b-4cc1-ad63-698d8a81889e","information":"TDD pattern for Pino logger instrumentation in existing code: Use lazy initialization (getLog() function instead of module-level const) to enable test mocking. Pattern: `let _logger: any | undefined; function getLog() { if (!_logger) { _logger = createChildLogger(\"module\"); } return _logger; }`. Mock in tests with: `mock.module(\"./logger\", () => ({ createChildLogger: () => mockLogger }))` BEFORE importing the module. This allows tests to capture log calls without hitting the actual file system. Applied successfully in compaction-hook.ts with 14 log points across START, GATHER (swarm-mail, hive), DETECT, INJECT, COMPLETE phases. All tests pass (18/18).","created_at":"1766593404339.0","tags":"tdd,testing,pino,logging,mocking,instrumentation,lazy-initialization"}
170
353
  {"id":"bc1b197e-9d63-4466-8c7d-d453e0949840","information":"BeadsAdapter interface pattern for swarm-mail: Interface split into 6 sub-adapters (BeadAdapter, DependencyAdapter, LabelAdapter, CommentAdapter, EpicAdapter, QueryAdapter, BeadsSchemaAdapter) combined into single BeadsAdapter, matching SwarmMailAdapter pattern. Migration v6 adds beads tables to shared PGLite database (shares schema_version with swarm-mail migrations v1-v5). Projections use updateProjections() dispatcher pattern to route events to handlers. Blocked cache uses recursive CTE for transitive blocker lookup with depth limit (10). Dirty tracking marks beads for incremental JSONL export. Key insight: Share same PGLite instance and migration system with swarm-mail - don't create separate database. Test pattern: wrapPGlite() creates DatabaseAdapter from PGlite instance for dependency injection in tests.","created_at":"2025-12-16T21:51:14.238Z"}
354
+ {"id":"bc574f69-e850-4327-b939-a8e2e96c08eb","information":"Workflow logging constraint VERIFIED: Files with \"use workflow\" or \"use step\" directives CANNOT import from ~/lib/logger (pino-based). They MUST use wlog from ~/lib/workflow-logger. The workflow bundler runs in a restricted environment without Node.js modules like pino or node:crypto. Initialize clients (LinearClient, Redis, Index, Search) inline in steps with explicit env var checks - do not import singletons from lib modules. Pattern: const apiKey = process.env.LINEAR_API_KEY; if (!apiKey) throw new Error(...); const linear = new LinearClient({ apiKey });","created_at":"1766517141969.0","tags":"workflow,vercel-workflow,logging,wlog,pino,linear-sdk"}
355
+ {"id":"bce57c41-f979-4cad-aae1-8def03a13bc2","information":"{\"id\":\"pattern-1766349592445-bafgdz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:39:52.445Z\",\"updated_at\":\"2025-12-21T20:39:52.445Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766349592662.0","metadata":"{\"id\":\"pattern-1766349592445-bafgdz\",\"kind\":\"pattern\",\"is_negative\":false}"}
356
+ {"id":"bcf542b5-34a8-4de4-8cc8-dc414784d0f5","information":"LibSQL vector search requires explicit vector index creation. Without the index, vector_top_k() fails with \"failed to parse vector index parameters\". \n\nThe required pattern for libSQL memory schema:\n1. Create table with F32_BLOB(1024) embedding column\n2. Create FTS5 virtual table for fallback search\n3. Create triggers (INSERT, UPDATE, DELETE) to sync FTS\n4. **CRITICAL**: CREATE INDEX idx_memories_embedding ON memories(libsql_vector_idx(embedding))\n\nThis pattern is now centralized in createTestMemoryDb() utility in swarm-mail/src/memory/test-utils.ts. Reference: adapter.test.ts createTestDb() function.\n\nCommon failure mode: Manual schema setup in tests often misses step 4, causing vector search to fail silently or with cryptic errors.","created_at":"1766257338726.0","metadata":"{\"source\":\"swarm-task\",\"cell_id\":\"opencode-swarm-monorepo-lf2p4u-mjenx80qhqn\",\"epic_id\":\"opencode-swarm-monorepo-lf2p4u-mjenx80mqiv\"}","tags":"libsql,vector-search,testing,memory,schema-setup"}
357
+ {"id":"bd5bcd8d-ac35-48ba-aee0-8efb657ab236","information":"{\"id\":\"pattern-1766264316797-qy4n51\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:58:36.797Z\",\"updated_at\":\"2025-12-20T20:58:36.797Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766264317047.0","metadata":"{\"id\":\"pattern-1766264316797-qy4n51\",\"kind\":\"pattern\",\"is_negative\":false}"}
171
358
  {"id":"bd7187c4-23be-4081-a315-c2c897fef72f","information":"## Session Context Capture (Dec 19, 2025)\n\n### Current Bug: \"Invalid Date\" error on hive_query\n\n**Symptom:** `hive_query` returns `{\"success\":false,\"error\":{\"code\":\"HiveError\",\"message\":\"Failed to query cells: Invalid Date\"}}`\n\n**Root Cause Investigation:**\n- JSONL file parses fine with jq\n- 17 lines in .hive/issues.jsonl, all status \"open\"\n- Date fields (created_at, updated_at) look valid: \"2025-12-19T17:14:05.371Z\"\n- Error comes from JavaScript Date constructor somewhere in swarm-mail/src/hive/\n\n**Likely culprits (from grep):**\n- `jsonl.ts:207-210` - `new Date(bead.created_at as number)` - casting string to number?\n- `jsonl.ts:347-348` - `new Date(cellExport.closed_at)` - closed_at might be undefined\n- `jsonl.ts:465-468` - same pattern\n- `merge.ts:135` - `new Date(cell.closed_at)` on potentially undefined\n\n**Hypothesis:** Code expects timestamps as numbers but JSONL has ISO strings, OR closed_at is undefined and being passed to Date constructor.\n\n### Open P1 Bugs (from earlier query)\n1. `mjd4pdh5651` - Make hive_sync bidirectional (import from JSONL after git pull)\n2. `mjd4pjujc7e` - Fix overly strict task_id regex requiring 3+ segments\n\n### Recent Completed Work\n- Smart ID resolution (resolvePartialId) - committed\n- Auto-sync at hive_create_epic, swarm_complete, process exit - committed \n- Removed max_subtasks limit of 10 - committed\n- Changeset pushed, waiting for CI to create version PR\n\n### Hive Viewer Epic Created\n- Epic ID: `mjd4yu2aguv` - 16 subtasks across 4 phases\n- Phase 1 (spike): OpenTUI hello world, JSONL parser, cell list component\n- Not yet started - was about to spawn workers\n\n### Files Modified This Session\n- packages/opencode-swarm-plugin/src/hive.ts (auto-sync)\n- packages/opencode-swarm-plugin/src/swarm-orchestrate.ts (auto-sync in swarm_complete)\n- packages/opencode-swarm-plugin/src/swarm-decompose.ts (removed max limit)\n- packages/opencode-swarm-plugin/src/swarm-prompts.ts (removed max limit)\n- .changeset/hive-smart-id-resolution.md (updated with all changes)","created_at":"2025-12-19T17:30:18.475Z","tags":"session-context,bug,invalid-date,hive-query,swarm-mail,jsonl,december-2025"}
172
359
  {"id":"be8c1c00-1128-4c4e-8984-6dc93db50610","information":"Auto-sync pattern in swarm_complete: When calling hive_sync from within a tool that operates on a specific project_key, you MUST temporarily set the hive working directory using setHiveWorkingDirectory(project_key) before calling hive_sync.execute(), then restore it in a finally block. Why: hive_sync uses getHiveWorkingDirectory() which defaults to process.cwd(), not the project_key argument. Without this, sync writes to wrong directory. Pattern: const prev = getHiveWorkingDirectory(); setHiveWorkingDirectory(projectKey); try { await hive_sync.execute({}, ctx); } finally { setHiveWorkingDirectory(prev); }","created_at":"2025-12-19T17:02:17.235Z","metadata":"{\"type\":\"gotcha\",\"pattern\":\"working-directory-context\",\"component\":\"swarm-orchestrate\"}","tags":"hive,sync,swarm,working-directory,context-management"}
+ {"id":"bf3948ec-720b-474f-a06b-463e142ca769","information":"{\"id\":\"pattern-1766296937208-dulm1q\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:02:17.208Z\",\"updated_at\":\"2025-12-21T06:02:17.208Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766296937442.0","metadata":"{\"id\":\"pattern-1766296937208-dulm1q\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"bffb1fa4-68f1-4c73-b4fb-909a1c5ee4d7","information":"{\"id\":\"pattern-1766260932025-539g3y\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:02:12.025Z\",\"updated_at\":\"2025-12-20T20:02:12.025Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260932299.0","metadata":"{\"id\":\"pattern-1766260932025-539g3y\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"c0144f56-dcd6-4aba-a19e-5f10b7f7c68b","information":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:50.318Z\",\"updated_at\":\"2025-12-15T03:58:50.318Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:50.643Z","metadata":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"c0c2dad4-c952-4a49-91d1-119fb33b477b","information":"SwarmMail database path migration to global location: Changed getDatabasePath() from project-local .opencode/streams.db to always return global ~/.opencode/swarm-mail.db. Added getOldProjectDbPaths() helper that returns both old libSQL path ({projectPath}/.opencode/streams.db) and old PGlite directory path ({projectPath}/.opencode/streams/) for migration detection. The getDatabasePath() signature remains backward-compatible - still accepts projectPath parameter but ignores it. This consolidates all SwarmMail data into a single global database for simpler management.","created_at":"1766343594886.0","tags":"swarm-mail,database-path,migration,global-db,libsql"}
  {"id":"c17bc88f-4015-4ab8-b0c6-cff0c7955eb5","information":"--information","created_at":"2025-12-14T22:42:53.190Z","tags":"documentation,semantic-memory,cli-syntax,gotcha,agent-reference"}
  {"id":"c1e3d77d-0183-4f45-80ba-a6d6318f0868","information":"Cell ID generation now uses project name from package.json as prefix instead of generic 'bd-'. Format is {slugified-name}-{hash}-{timestamp}{random}, e.g., swarm-mail-lf2p4u-mjbneh7mqah. Fallback is 'cell' prefix when package.json not found or has no name field. Implementation uses fs.readFileSync + fs.existsSync at ID generation time (lazy load), not adapter initialization. Slugification replaces @/spaces/special chars with dashes, removes leading/trailing dashes. Hash can be negative (use [-a-z0-9]+ regex pattern). Backward compatible - no changes to validation, existing bd-* IDs work fine. TDD approach: wrote failing tests first, implemented to pass, refactored to use ES module imports.","created_at":"2025-12-18T16:29:37.218Z"}
+ {"id":"c26bff59-2549-44e8-abf0-f8d7fe952889","information":"{\"id\":\"test-1766297015294-ot5uubgret\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T06:03:35.294Z\",\"raw_value\":1}","created_at":"1766297015498.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T06:03:35.294Z\"}"}
  {"id":"c27724f6-a65c-4641-830d-83a535f95c6b","information":"JSONL file format bug: `wc -l` showed 0 lines despite having content because records were concatenated with `lines.join(\"\\n\")` which doesn't add a trailing newline. The fix: (1) `serializeToJSONL()` now returns `JSON.stringify(cell) + \"\\n\"` and (2) `exportToJSONL()` uses `lines.join(\"\")` since each line already has `\\n`. Root cause: JSONL spec requires each line to end with newline, including the last line. Without trailing newline, `wc -l` returns 0 because it counts newline characters, not lines. Tests: verify `jsonl.endsWith(\"\\n\")` and `(jsonl.match(/\\n/g) || []).length === recordCount`.","created_at":"2025-12-19T16:18:17.706Z","tags":"jsonl,newlines,file-format,wc,unix-tools,bugs"}
+ {"id":"c373ecc8-5a84-44d7-8b5e-ba4f65f92a15","information":"createMemoryAdapter signature change in opencode-swarm-plugin: Changed from accepting `SwarmDb` (Drizzle client) to `DatabaseAdapter` for consistency with swarm-mail's getDatabase() return type. Internally converts using `toSwarmDb()` helper. This aligns with the pattern used throughout swarm-mail where DatabaseAdapter is the abstraction layer and Drizzle is an implementation detail. Callers now pass `swarmMail.getDatabase()` directly without needing to call `toSwarmDb()` themselves.\n\nCritical discovery: swarm-mail's `createLibSQLMemorySchema` in memory/libsql-schema.ts is outdated - missing columns: `tags TEXT DEFAULT '[]'`, `updated_at TEXT DEFAULT (datetime('now'))`, `decay_factor REAL DEFAULT 1.0`. The Drizzle schema in db/schema/memory.ts has these columns but the raw SQL schema doesn't. swarm-mail's own tests (store.drizzle.test.ts) work around this by creating the schema manually. This causes test failures when using `createLibSQLMemorySchema` - tests must create schema manually until swarm-mail is fixed.","created_at":"1766256829374.0","metadata":"{\"project\":\"opencode-swarm-plugin\",\"affected_files\":[\"packages/opencode-swarm-plugin/src/memory.ts\",\"packages/opencode-swarm-plugin/src/memory-tools.ts\"]}","tags":"typescript,swarm-mail,memory,database-adapter,drizzle,schema"}
+ {"id":"c39ece10-3ed4-4b70-9998-7da626aa96ec","information":"{\"id\":\"pattern-1766262544063-se5leq\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:29:04.063Z\",\"updated_at\":\"2025-12-20T20:29:04.063Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262544311.0","metadata":"{\"id\":\"pattern-1766262544063-se5leq\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"c46fb9d3-f659-4059-abac-181442f1502b","information":"Semantic zoom implementation pattern for canvas visualization: Create progressive content levels (minimal/standard/detailed/full) based on weighted formula (zoom * 0.7 + importance * 0.3). Extract different metadata fields at each level to avoid visual clutter at low zoom. Key insight: Text truncation needs both character-based (for non-canvas) and measure-based (using ctx.measureText) approaches. The measure-based approach accounts for actual rendered width. Render multi-line content with fontSize and lineHeight parameters for flexibility. Uses Catppuccin colors (cat.text, cat.subtext0, cat.teal, cat.subtext1) for semantic differentiation.","created_at":"1766343287635.0","tags":"canvas,semantic-zoom,visualization,progressive-disclosure,tufte"}
  {"id":"c48ccddf-2e1a-4f73-8e49-f89de6bd0877","information":"Bun monorepo publishing with changesets - COMPLETE SOLUTION (Dec 2024):\n\nPROBLEM: workspace:* protocol not resolved by npm publish or changeset publish\n\nROOT CAUSE: bun pm pack resolves workspace:* from LOCKFILE, not package.json. Stale lockfile = old versions.\n\nSOLUTION (from https://ianm.com/posts/2025-08-18-setting-up-changesets-with-bun-workspaces):\n1. ci:version script: `changeset version && bun update` - the bun update syncs lockfile after version bump\n2. ci:publish script: custom scripts/publish.ts using `bun pm pack` + `npm publish <tarball>`\n3. Setup .npmrc in CI: `echo \"//registry.npmjs.org/:_authToken=$NPM_TOKEN\" > .npmrc`\n\nWHY NOT:\n- `bunx changeset publish` - uses npm publish, doesn't resolve workspace:*\n- `bun publish` - no npm token support yet (track: github.com/oven-sh/bun/issues/15601)\n- OIDC trusted publishers - works but requires repository field in package.json for provenance\n\nWORKFLOW (.github/workflows/publish.yml):\n- Setup npmrc with NPM_TOKEN secret\n- version: bun run ci:version\n- publish: bun run ci:publish\n- changesets/action handles PR creation and tagging\n\nGOTCHAS:\n- CLI bin scripts need deps in dependencies, not devDependencies\n- Each package needs repository field for npm provenance\n- files field in package.json to include dist/\n\nFILES: scripts/publish.ts, .github/workflows/publish.yml, package.json (ci:version, ci:publish scripts)","created_at":"2025-12-15T05:07:27.735Z"}
+ {"id":"c48d5b39-7afc-4b3f-887c-e6e1ba5e6ed0","information":"{\"id\":\"pattern-1766262135955-k8e6k5\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:22:15.955Z\",\"updated_at\":\"2025-12-20T20:22:15.955Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262136180.0","metadata":"{\"id\":\"pattern-1766262135955-k8e6k5\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"c6121593-3fb2-4af2-b68a-ecfb5fe82a3d","information":"Nitro API route pattern for cron jobs with Vercel Workflow integration: Import `{ start } from \"workflow/api\"` (NOT \"workflow\") to trigger workflows from API routes. Use `defineEventHandler` wrapper, extract query params with `getQuery(event)`, and start workflow with `await start(workflowFn, [argsObject])`. The workflow args must be in an array even for a single object parameter. Return workflow run ID for tracking. Cron config in vercel.json: add to \"crons\" array with \"path\" and \"schedule\" (cron expression). Typecheck may fail on API routes outside build context - always verify with `pnpm build` instead. Logger from ~/lib/logger works in API routes (NOT workflow files which need wlog).","created_at":"1766517348451.0","tags":"nitro,vercel-workflow,cron,api-routes,pattern"}
  {"id":"c76fd51e-f15f-4f2c-9ca5-f3853806deef","information":"@badass/core TDD patterns successfully applied: Wrote characterization tests FIRST to document actual behavior (what IS) before behavior tests (what SHOULD). Key learnings: 1) z.coerce.date() creates new Date instances, so use .getTime() for equality checks not reference equality. 2) Zod .omit() strips fields silently, doesn't throw - test with .not.toHaveProperty(). 3) composeMiddleware in @badass/core runs middlewares sequentially (await first, then second), NOT in parallel - order matters. 4) Effect detection checks for \"_tag\" property, works for Effect.succeed() but NOT Effect.gen() which uses \"_id\". 5) Characterization tests caught 6 wrong assumptions about behavior before writing implementation-dependent tests. This validates the TDD pattern: write failing test, observe actual behavior, update test to match reality.","created_at":"2025-12-18T16:32:11.709Z","tags":"tdd,characterization-tests,badass-core,zod,effect-ts,middleware,testing-patterns"}
+ {"id":"c8a60a99-af35-450c-94c9-2e664b91ec71","information":"{\"id\":\"test-1766263568973-e0ugyob6fjf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:46:08.973Z\",\"raw_value\":1}","created_at":"1766263569199.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:46:08.973Z\"}"}
+ {"id":"c8c9d415-4351-40c4-8297-12d41043abcc","information":"{\"id\":\"test-1766261665641-byvzo7wnf4o\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:14:25.641Z\",\"raw_value\":1}","created_at":"1766261665906.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:14:25.641Z\"}"}
+ {"id":"c8d037e9-49ec-4d9d-9f4c-cea39464b754","information":"AI SDK v6 Section 3 (Conversational AI) Validation Results:\n\n**Critical v6 API discrepancies found:**\n\n1. **convertToModelMessages should be awaited** (Priority 1)\n - Affects lessons 03, 04\n - v6 docs show: `messages: await convertToModelMessages(messages)`\n - Course shows: `messages: convertToModelMessages(messages)` (synchronous)\n - Impact: Students learn incorrect async pattern for v6\n\n2. **useChat() missing transport configuration** (Priority 2)\n - Affects lessons 01, 02\n - v6 uses transport-based architecture (AI SDK 5.0+)\n - Default is DefaultChatTransport with /api/chat\n - Code works but doesn't teach v6 patterns explicitly\n - Should add callout explaining transport architecture\n\n3. **Tool definitions are CORRECT** ✅\n - `inputSchema` (not `parameters`) ✅\n - `execute` function pattern ✅\n - `tool()` helper usage ✅\n - Multi-step with `stepCountIs()` ✅\n\n4. **Elements integration is CORRECT** ✅\n - Message.parts array pattern ✅\n - Tool component usage ✅\n - Response component for markdown ✅\n - Generative UI patterns ✅\n\n**Validation methodology:**\n- Cross-referenced all code examples with /external/ai/content/docs/\n- Checked tool calling patterns against tools-and-tool-calling.mdx\n- Verified useChat patterns against chatbot.mdx and use-chat.mdx reference\n- Validated message.parts structure (v6 pattern)\n\n**Filed 5 bugs total:** All tagged with parent epic cell-is13o5-mji2v2bs6go","created_at":"1766464292957.0","tags":"ai-sdk-v6,course-validation,section-3,conversational-ai,usechat,tools"}
+ {"id":"c92c4517-d776-4860-b786-1e42cc25ade6","information":"{\"id\":\"test-1766256883769-ck73xya4rup\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T18:54:43.769Z\",\"raw_value\":1}","created_at":"1766256883975.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T18:54:43.769Z\"}"}
+ {"id":"c979cfb7-9372-4f6e-b4eb-0733c68fe515","information":"SQLite SQLITE_BUSY retry pattern for swarm tools: When multiple agents access the same libSQL database, SQLITE_BUSY errors occur. Three solutions in order of effort: 1) PRAGMA busy_timeout = 5000 (SQLite retries internally for 5 seconds), 2) Application-level withRetry() wrapper with exponential backoff (100ms * 2^attempt, max 3 retries), 3) Effect-based retry using Schedule.exponential().pipe(Schedule.recurs(3)). We already have Effect-based retry in streams/effect/lock.ts and memory/ollama.ts. Key insight from Release It!: \"Integration points are the number one killer of systems\" - every database call needs protection. Retryable errors: SQLITE_BUSY, SQLITE_LOCKED. Non-retryable: SQLITE_CONSTRAINT, SQLITE_MISMATCH.","created_at":"1766591275621.0","tags":"sqlite,retry,busy,database,locking,concurrency,patterns,effect"}
+ {"id":"c9940854-9a68-4661-8e23-89bebc38345b","information":"{\"id\":\"pattern-1766263761795-p01y8j\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:49:21.795Z\",\"updated_at\":\"2025-12-20T20:49:21.795Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263762019.0","metadata":"{\"id\":\"pattern-1766263761795-p01y8j\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"c9d0eaaf-afb7-4c54-87f0-8ecb79bfb8eb","information":"Git-synced memories implementation pattern: Export memories to JSONL without embeddings (too large, ~4KB per memory). Store id, information, metadata, tags, confidence, created_at. Import skips duplicates by ID. Bidirectional sync: import from file first, then export all to file. Integration with hive_sync: after flushing cells to issues.jsonl, also sync memories.jsonl. Memory sync is optional - wrapped in try/catch so it doesn't fail the main sync. Key insight: PGlite returns JSONB as object not string, need to handle both cases when parsing metadata.","created_at":"2025-12-19T03:01:14.081Z","metadata":"{\"files\":[\"packages/swarm-mail/src/memory/sync.ts\"],\"pattern\":\"git-synced-memories\"}","tags":"memory-sync,jsonl,git-sync,hive,swarm-mail"}
+ {"id":"ca35365f-9fd9-4889-a1bd-1a44c1bae7ab","information":"## PGLite Removal Investigation - Effect Primitives Status\n\n### Finding: Effect-TS Durable Primitives Are NOT Used\n\nSearched for usage of DurableCursor, DurableMailbox, DurableLock, DurableDeferred across the codebase:\n\n1. **opencode-swarm-plugin/src/** - ZERO imports or usage\n2. **Only references found** - in swarm-mail's own dist/*.d.ts files (self-referential)\n\n### Effect Primitives Location\n- `packages/swarm-mail/src/streams/effect/cursor.ts`\n- `packages/swarm-mail/src/streams/effect/mailbox.ts`\n- `packages/swarm-mail/src/streams/effect/lock.ts`\n- `packages/swarm-mail/src/streams/effect/deferred.ts`\n- `packages/swarm-mail/src/streams/effect/ask.ts`\n- `packages/swarm-mail/src/streams/effect/layers.ts`\n\n### Current Dependency Chain\nEffect primitives → `getDatabase()` from `streams/index.ts` → PGLite\n\n### Decision Context\nTask: Remove PGLite except for migration paths\n\nOptions considered:\na) Remove Effect primitives entirely - simplifies, not used\nb) Port Effect primitives to libSQL - keeps patterns, changes backend\nc) Keep behind migration flag\n\n### Recommendation\nOption (a) Remove entirely is safest since:\n- Zero actual usage in production code\n- Can re-add later if needed\n- Removes PGLite dependency cleanly\n\nBUT user asked \"how COULD we use them\" - suggesting interest in keeping the patterns for future use.","created_at":"1766333479399.0","tags":"pglite-removal,effect-primitives,investigation,swarm-mail,architecture-decision"}
+ {"id":"cab59350-8135-4df0-97d8-6bae5596585c","information":"{\"id\":\"pattern-1766265308860-qdb2d3\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T21:15:08.860Z\",\"updated_at\":\"2025-12-20T21:15:08.860Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766265309080.0","metadata":"{\"id\":\"pattern-1766265308860-qdb2d3\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"cc84f337-516e-40cc-9429-d557e4484d23","information":"@badass Implementation Decomposition Ready (Dec 2024) - Next steps after architecture questions resolved: Create epic with subtasks for (1) @badass/core - Effect-TS services, builder pattern from uploadthing, (2) @badass/db - Drizzle schemas, adapter interface supporting shared/isolated DB, (3) @badass/auth - BetterAuth with hive/spoke SSO, device flow for CLI/local apps, (4) @badass/next - createRouteHandler, site config, (5) @badass/cli - badass command with multi-site support, device flow auth, TUI for video uploads. Namespace is @badass/*, CLI binary is 'badass'. Reference repos: pingdotgg/uploadthing for Effect-TS router pattern, vercel/academy-content for CLI+Mux, badass-courses/course-builder for device flow and multi-site patterns.","created_at":"2025-12-18T15:42:12.574Z"}
  {"id":"cd179af2-3f9d-45ee-a349-8b7663f2078e","information":"JSONL sync architecture in swarm-mail hive module investigation (Dec 2024):\n\n**NO BUG FOUND** - System working as designed. 271/271 tests passing.\n\n**Architecture (Lazy Write Pattern)**:\n1. Operations (createCell, updateCell, closeCell) mark cells dirty via updateProjections() → markBeadDirty()\n2. Dirty tracking stored in dirty_beads table (cell_id, project_key, marked_at)\n3. User explicitly calls hive_sync tool to flush dirty cells to .hive/issues.jsonl\n4. FlushManager exports dirty cells via exportDirtyBeads() and writes to file\n\n**Key Implementation Details**:\n- updateProjections() in projections.ts line 118 marks ALL cells dirty after EVERY event\n- exportDirtyBeads() queries dirty_beads table, exports to JSONL\n- FlushManager.flush() writes JSONL to file, clears dirty flags\n- Table naming: \"beads\" is real table, \"cells\" is a view (migration v8) for compatibility\n- Both \"SELECT FROM beads\" and \"SELECT FROM cells\" work correctly\n\n**Why Tests All Pass**:\nFull integration test verifies: createCell → markDirty → exportDirtyBeads → FlushManager.flush() → file written correctly\n\n**Design Rationale**:\nLazy writes prevent excessive disk I/O. Operations mark dirty (cheap), user flushes when ready (expensive). Similar to git add/commit pattern.\n\n**If Asked \"Why Don't Cells Appear in JSONL?\"**:\nAnswer: Did you call hive_sync? Operations don't auto-flush. This is intentional.","created_at":"2025-12-19T16:28:00.031Z","tags":"hive,jsonl,sync,flush,dirty-tracking,swarm-mail,architecture"}
  {"id":"cd77b842-2aff-47c0-baba-97096aaf9322","information":"pdf-brain research session on memory systems for AI agents yielded 13 actionable patterns from cognitive science literature:\n\n1. **Testing Effect** (Range, 9853): Retrieval strengthens memory more than passive review. Query count should affect decay rate.\n\n2. **Interleaving** (Range): Mixed/varied practice leads to better transfer than blocked practice. Tag memories for cross-domain retrieval.\n\n3. **Self-Explanation** (e-Learning and Science of Instruction): Prompting \"WHY does this work?\" produces deeper learning than just storing facts.\n\n4. **Negative Examples** (Training Complex Cognitive Skills): Contrast correct with incorrect. Store anti-patterns alongside patterns.\n\n5. **Worked Examples** (Multimediabook): Before/after code snippets more valuable than abstract rules for novices.\n\n6. **Connection Strength** (Smart Notes, Zettelkasten): Well-connected notes decay slower. Cross-references surface unexpected insights.\n\n7. **Tacit Knowledge** (Nonaka/Takeuchi): Some knowledge is hard to articulate. Capture intuitions with examples, not just rules.\n\n8. **Chunking** (Kirschner): One transferable insight per memory. Too granular = noise, too broad = not actionable.\n\n9. **Metacognitive Prompts** (9853): \"Would you be able to apply this in a different context?\" encourages reflection on transferability.\n\n10. **Hierarchical Tags** (How Learning Works): Knowledge organization affects retrieval. Use domain/subdomain/topic structure.\n\n11. **Spaced Retrieval** (CodingCareer, Anki): Active scheduling beats passive decay. Surface due-for-review memories proactively.\n\n12. **Prior Knowledge Activation** (978-3-031-74661-1): New info connected to existing knowledge sticks longer. Link new memories to existing ones.\n\n13. **Schema Acquisition** (Training Complex Cognitive Skills): Store transferable patterns, not specific fixes. Schemas enable far transfer.\n\nKey sources: Training_Complex_Cognitive_Skills (360 pages), e-Learning and the Science of Instruction (759 pages), Range (366 pages), How Learning Works (274 pages), ten-steps-to-complex-learning (416 pages), Smart Notes (146 pages).","created_at":"2025-12-19T03:13:03.888Z","tags":"memory-systems,cognitive-science,pdf-brain,learning,spaced-repetition,schemas,research"}
+ {"id":"cdcf917a-f473-4d2d-9bb5-63baa35baa3b","information":"## Vision: Hive Viewer as Control Plane\n\nThe Hive/Cell Visualizer is evolving beyond read-only dashboards into a full **control plane** for swarm orchestration.\n\n### Current Scope (Visualizer Cell)\n- CLI Query Tool (`swarm viz`)\n- TanStack Start Web App (`swarm viz --serve`)\n- Static HTML Export (`swarm viz --export`)\n- Real-time cell/swarm status via Durable Streams\n\n### Logging Integration (New Cell)\n- Pino structured logging to `~/.config/swarm-tools/logs/`\n- `swarm log` CLI for querying/tailing\n- Compaction hook as first instrumentation target\n- Logs could feed into visualizer as a \"logs panel\"\n\n### Future Vision: Dynamic Configuration\nThe viewer could become a UI for managing/manipulating hives with **database-backed dynamic configuration**:\n\n1. **Coordinator Prompts** - Edit decomposition strategies, review criteria, spawn instructions\n2. **Worker Prompts** - Customize worker behavior, tool permissions, output formats\n3. **Compaction Instructions** - Tune what context gets preserved, priority ordering, token budgets\n4. **Skills Management** - Enable/disable skills, edit skill content, create project-specific skills\n5. **Learning Tuning** - Adjust confidence decay rates, pattern maturity thresholds, anti-pattern sensitivity\n\n### Architecture Implications\n- Prompts/instructions stored in libSQL (swarm.db), not hardcoded\n- Version history for prompts (event-sourced changes)\n- A/B testing capability (run different prompt variants)\n- Per-project overrides (global defaults + project customization)\n- Import/export for sharing configurations\n\n### Why This Matters\nCurrently all swarm behavior is hardcoded in TypeScript. Making it database-driven enables:\n- Non-developers to tune agent behavior\n- Rapid iteration without code deploys\n- Learning from what configurations work best\n- Sharing \"swarm recipes\" between projects/teams\n\n### Related Cells\n- Visualizer: `opencode-swarm-monorepo-lf2p4u-mjfzlbckh37`\n- Logging: `opencode-swarm-plugin--ys7z8-mjk6pwwn9nw`\n\nThis is a significant architectural evolution - from static code to dynamic control plane.","created_at":"1766591863016.0","metadata":"{\"timeframe\":\"long-term\",\"complexity\":\"high\",\"related_cells\":[\"opencode-swarm-monorepo-lf2p4u-mjfzlbckh37\",\"opencode-swarm-plugin--ys7z8-mjk6pwwn9nw\"]}","tags":"vision,architecture,hive-viewer,control-plane,dynamic-config,prompts,compaction,logging,future"}
  {"id":"cdeb1658-81dd-408b-b30e-ef1c36f9399c","information":"{\"id\":\"test-1766074660928-qaxaon6ib8i\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:40.928Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:41.163Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:40.928Z\"}"}
+ {"id":"cdfd91a4-b221-4939-bd6f-203a9827d29e","information":"swarm_spawn_retry tool pattern: Coordinators drive the retry loop by spawning NEW workers with retry context, not by messaging the completed worker. Workers are fire-and-forget - once swarm_complete runs, they can't receive messages. The retry prompt includes: (1) ⚠️ RETRY ATTEMPT {n}/3 header, (2) ISSUES FROM PREVIOUS ATTEMPT with file:line + suggestions, (3) PREVIOUS ATTEMPT diff (optional), (4) ORIGINAL TASK context, (5) Standard worker contract (swarmmail_init, reserve, fix, complete). Max 3 attempts enforced at tool level - throws error if attempt > 3. COORDINATOR_POST_WORKER_CHECKLIST now documents the retry flow: swarm_review_feedback(needs_changes) → swarm_spawn_retry() → Task(new worker). TDD pattern: wrote 8 tests FIRST covering prompt generation, attempt validation, diff inclusion, issues formatting, response structure, and worker contract. All tests passed after implementation. Tool exported in promptTools object alongside swarm_spawn_subtask and swarm_spawn_researcher.","created_at":"1766594566256.0","tags":"swarm,coordination,retry,tdd,prompt-generation,review-loop"}
+ {"id":"cea8a7e0-9252-42c3-94ab-7842063af1a6","information":"{\"id\":\"pattern-1766595000137-ttf692\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-24T16:50:00.137Z\",\"updated_at\":\"2025-12-24T16:50:00.137Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766595000417.0","metadata":"{\"id\":\"pattern-1766595000137-ttf692\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"cfc258d1-d420-444c-9532-ae46e8bcd619","information":"{\"id\":\"test-1766262799864-8dtsmvp6i13\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:33:19.864Z\",\"raw_value\":1}","created_at":"1766262800072.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:33:19.864Z\"}"}
  {"id":"cffea773-b97b-4582-b5d4-b0154bd12f83","information":"Lesson rating rubric application for AI SDK course: Setup lessons (00-) often score low on Hook & Motivation because they're functional rather than problem-focused. Fix: Add \"Why This Matters\" explaining infrastructure value (AI Gateway = unified multi-provider access, no vendor lock-in). Also, setup lessons need Fast Track even though they're procedural—format consistency matters for learner expectations. Real output examples critical (e.g., \"vc --version # Output: Vercel CLI 39.2.4\") because learners verify setup success by matching exact output. Changed \"Done\" to \"Done-When\" with unchecked boxes—learners check them off as they progress, improving engagement.","created_at":"2025-12-16T21:43:30.828Z"}
  {"id":"d0534c28-593b-40a1-998a-05cd7c82a32f","information":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:56:06.090Z\",\"updated_at\":\"2025-12-15T03:56:06.090Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:56:06.457Z","metadata":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"d26902c4-6cb2-4b10-9cb9-63cba428436d","information":"{\"id\":\"test-1766296855773-l0w0n6pv18d\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-21T06:00:55.773Z\",\"raw_value\":1}","created_at":"1766296855983.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-21T06:00:55.773Z\"}"}
+ {"id":"d2ad29ee-76d6-4eaf-a9c7-674e6990cd19","information":"## SQL CHECK Constraint Violation: Status='closed' Requires closed_at\n\n**Problem:** `changeCellStatus()` in hive adapter was changing status to 'closed' without setting `closed_at`, violating CHECK constraint:\n```sql\nCHECK ((status = 'closed') = (closed_at IS NOT NULL))\n```\n\n**Error:**\n```\nSQLITE_CONSTRAINT_CHECK: CHECK constraint failed: (status = 'closed') = (closed_at IS NOT NULL)\n```\n\n**Root Cause:** Event projection handler `handleCellStatusChangedDrizzle()` only updated `status` and `updated_at`, ignoring the bidirectional constraint between `status` and `closed_at`.\n\n**The CHECK Constraint Means:**\n- When `status='closed'`, `closed_at` MUST be non-NULL\n- When `status!='closed'`, `closed_at` MUST be NULL\n- It's a bidirectional equality constraint\n\n**Fix Pattern:**\n```typescript\nasync function handleCellStatusChangedDrizzle(db: SwarmDb, event: CellEvent) {\n const toStatus = event.to_status as string;\n const updates: Partial<typeof beads.$inferInsert> = {\n status: toStatus,\n updated_at: event.timestamp,\n };\n\n // Set closed_at when transitioning to 'closed'\n if (toStatus === \"closed\") {\n updates.closed_at = event.timestamp;\n updates.closed_reason = event.reason ?? null;\n } else {\n // Clear closed_at when transitioning away from 'closed'\n updates.closed_at = null;\n updates.closed_reason = null;\n }\n\n await db.update(beads).set(updates).where(eq(beads.id, event.cell_id));\n}\n```\n\n**Key Insight:** When an event handler changes one side of a CHECK constraint, it MUST update the other side. The constraint isn't just validation - it's a data integrity rule that requires coordinated updates.\n\n**TDD Test That Caught It:**\n```typescript\ntest(\"changeCellStatus to 'closed' sets closed_at\", async () => {\n const cell = await adapter.createCell(projectKey, {...});\n const updated = await adapter.changeCellStatus(projectKey, cell.id, \"closed\");\n expect(updated.closed_at).toBeGreaterThan(0); // FAILED before fix\n});\n```\n\n**Related Pattern:** `closeCell()` event handler was ALREADY doing this correctly - it set `status`, `closed_at`, and `closed_reason` together. The bug was that `changeCellStatus()` bypassed this coordination.\n\n**Files:**\n- packages/swarm-mail/src/hive/projections-drizzle.ts (fix location)\n- packages/swarm-mail/src/hive/migrations.ts (CHECK constraint definition)\n- packages/swarm-mail/src/hive/adapter.test.ts (TDD test)","created_at":"1766338304428.0","tags":"sql,check-constraint,event-sourcing,projections,data-integrity,sqlite,hive"}
+ {"id":"d330686c-fa2d-40f3-a231-9c9ed3c463f9","information":"{\"id\":\"pattern-1766260203398-aeogl6\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:03.398Z\",\"updated_at\":\"2025-12-20T19:50:03.398Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260203622.0","metadata":"{\"id\":\"pattern-1766260203398-aeogl6\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"d3e584bc-65cd-4692-9d5d-e38e791a97e4","information":"Drizzle Migration Decision Framework for Complex Queries:\n\n**Principle:** Don't force everything into Drizzle. Use raw SQL when it's clearer and more maintainable.\n\n**Convert to Drizzle if:**\n1. Simple SELECT with WHERE, ORDER BY, LIMIT\n2. Basic JOINs (1-2 tables)\n3. Standard aggregations (COUNT, SUM, AVG)\n4. No dynamic query building\n\n**Keep as raw SQL if:**\n1. **Dynamic query building** - Conditional WHERE clauses based on options (Drizzle gets verbose)\n2. **Materialized view queries** - Cache tables with EXISTS/NOT EXISTS subqueries\n3. **Complex GROUP BY + HAVING** - Conditional counts with CASE expressions\n4. **JSON column operations** - SQLite JSON parsing (Drizzle doesn't support well)\n5. **Recursive CTEs** - WITH RECURSIVE queries (Drizzle doesn't support)\n6. **Complex sorting logic** - Multiple CASE expressions in ORDER BY\n\n**Hybrid Approach Works:** It's OK to mix Drizzle and raw SQL in a single function. Example from `getStatistics`:\n- Simple aggregations (status counts, type counts) → Drizzle\n- Cache table queries (blocked count, ready count) → Raw SQL\n\n**Real Example from hive/queries.ts migration:**\n- ✅ Migrated: `resolvePartialId`, `getStaleIssues`, `getStatistics` (partial)\n- ❌ Kept raw: `getReadyWork` (dynamic WHERE + EXISTS + CASE sorting), `getBlockedIssues` (cache JOIN + JSON), `getEpicsEligibleForClosure` (self-JOIN + GROUP BY + HAVING)\n\n**Why This Works:** Drizzle is great for simple CRUD, raw SQL is great for complex analytics. Using both maximizes readability.\n\n**Documentation is Key:** When keeping raw SQL, add inline comments explaining WHY (not just WHAT). Example: \"❌ KEPT AS RAW SQL: Requires cache table JOIN and JSON parsing. Drizzle doesn't have great JSON column support for SQLite.\"\n\nApplies to: Any Drizzle migration project, not just swarm-mail.\n","created_at":"1766332016278.0","tags":"drizzle,migration-strategy,raw-sql,hybrid-approach,decision-framework"}
+ {"id":"d4196c9c-c05f-4ec5-b38f-73f88edad3b1","information":"Edge bundling implementation for graph visualization: Created edge bundling feature that routes cross-cluster edges through cluster centroids using quadratic bezier curves. Key algorithm: (1) Same-cluster edges remain straight lines, (2) Cross-cluster edges curve through the midpoint of source/target cluster centroids, (3) Bundling strength parameter (0-1) interpolates between direct line and full bundling. Implementation uses canvas quadraticCurveTo for smooth curves. Includes configurable strength, tension, and minClusterSize parameters. Gracefully handles missing node positions and centroids by falling back to straight lines. Tested with 13 test cases covering edge cases like missing nodes, missing positions, custom strength values. Exports: bundleEdges(), renderBundledEdges(), shouldEnableBundling() heuristic (>50 nodes, >100 links). Visual effect reduces \"spaghetti\" in complex graphs by grouping edges into visual bundles.","created_at":"1766343866422.0","tags":"graph-visualization,edge-bundling,canvas,d3,clustering,bezier-curves,tufte"}
  {"id":"d47252c9-654a-4dea-913b-991951101d2a","information":"PGlite Socket Server Implementation Pattern for pdf-brain daemon:\n\nROOT CAUSE: PGlite is single-connection only. Multiple CLI invocations creating their own instances corrupt the database.\n\nSOLUTION: Daemon process that owns ONE PGlite instance and exposes it via Unix socket using @electric-sql/pglite-socket.\n\nKEY IMPLEMENTATION DETAILS:\n1. Package: @electric-sql/pglite-socket (not /server subpath - exports from main)\n2. Correct class name: PGLiteSocketServer (capital L)\n3. Constructor options: { db: PGlite, path: string } (not socketPath)\n4. Server lifecycle: call server.start() after construction, server.stop() in shutdown\n5. Use Unix socket (path option) instead of TCP for local-only daemon\n\nGRACEFUL SHUTDOWN PATTERN:\n```typescript\n// MANDATORY: CHECKPOINT before close to flush WAL\nawait db.exec(\"CHECKPOINT\");\nawait server.stop();\nawait db.close();\n// Then remove PID file and socket\n```\n\nPID FILE VALIDATION:\n- Check file exists\n- Parse PID as integer with Number.isNaN (not isNaN)\n- Verify process alive with process.kill(pid, 0) - signal 0 doesn't kill, just checks existence\n- Handle errors (process doesn't exist) by returning false\n\nTDD APPROACH EFFECTIVENESS:\n- Wrote 14 tests first covering all lifecycle states\n- Tests caught import path error (/server vs main)\n- Tests caught API differences (close vs stop, constructor args)\n- All tests green after implementation\n\nThis pattern prevents the PGlite multi-connection corruption bug (semantic memory 48610ac6-d52f-4505-8b06-9df2fad353aa) without implementing complex leader election.","created_at":"2025-12-19T14:51:31.659Z","tags":"pglite,daemon,socket-server,multi-connection,checkpoint,lifecycle-management"}
+ {"id":"d4ee7d0d-993b-43a2-aa45-73b5776745d5","information":"sendSwarmMessage URL_INVALID blocker RESOLVED by commit 7bf9385. createLibSQLAdapter now normalizes bare filesystem paths (e.g., '/Users/joel/.config/swarm-tools/swarm.db') to file: URLs ('file:/Users/joel/.config/swarm-tools/swarm.db') automatically. Tests now run without \"URL_INVALID\" errors.\n\nNEW ISSUE DISCOVERED: swarm-review.integration.test.ts tests pass the sendSwarmMessage call but fail on message retrieval. getInbox returns empty array even though sendSwarmMessage succeeded. Root cause still unknown - could be:\n1. Database connection not shared (sendSwarmMessage creates new adapter, test uses different instance)\n2. Message projection not materializing properly\n3. Database path mismatch between send and receive\n\nThis is a DIFFERENT bug from the URL normalization issue.\n","created_at":"1766422197892.0","tags":"swarm-mail,libsql,sendSwarmMessage,URL_INVALID,file-urls,integration-tests"}
+ {"id":"d536ec42-3015-4906-a849-9b6e07738be5","information":"PGLite cleanup pattern: When removing deprecated infrastructure, check both the main export file AND consumer test files. In swarm-mail PGLite removal, cleaned streams/index.ts and src/index.ts of all PGLite comment noise (9 references), but found 3 test files (projections.test.ts, debug.test.ts, agent-mail.test.ts) still importing removed functions (getDatabase, closeDatabase, resetDatabase). \n\nKey lesson: Export cleanup is TWO phases:\n1. Remove dead code from exports (THIS cell)\n2. Migrate consumers to new API (FOLLOW-UP cells)\n\nDon't expand scope silently - coordinate with swarm lead to create follow-up cells for consumer migration.\n\nTest-first approach worked well: wrote index.test.ts that verified NO PGLite imports, NO old functions, and confirmed expected exports exist. Test stayed GREEN throughout cleanup.","created_at":"1766340639996.0","tags":"pglite,cleanup,refactoring,test-driven,swarm-coordination"}
  {"id":"d624440f-abd3-4152-9243-7f8c7ad9c964","information":"Port Ollama embedding service from semantic-memory to swarm-mail successfully completed. Key patterns:\n\n**Effect-TS Service Pattern (Context.Tag)**:\n- Define service with Context.Tag(\"namespace/ServiceName\") extending tag class\n- Service interface specifies Effect signatures with explicit error types\n- Implementation uses Layer.succeed() to provide concrete implementation\n- Retry logic: Schedule.exponential(Duration.millis(100)).pipe(Schedule.compose(Schedule.recurs(3))) for 100ms→200ms→400ms backoff\n\n**Batch Processing Pattern**:\n- Use Stream.fromIterable(items).pipe(Stream.mapEffect(fn, { concurrency })) for controlled concurrency\n- Stream.runCollect + Effect.map(Chunk.toArray) to materialize results\n- Each item in batch gets independent retry logic from embedSingle\n\n**Health Check Pattern**:\n- Check both server availability AND model availability\n- Support version suffix matching (model name can have :latest, :v1, etc)\n- Provide actionable error messages (e.g., \"Run: ollama pull model-name\")\n\n**Testing with Mocked Fetch**:\n- Mock global.fetch for unit tests of Effect-based HTTP calls\n- Use Effect.flip to test error cases (converts failure to success for assertions)\n- Test retry behavior by tracking attempt count in mock\n- Test batch concurrency by tracking concurrent calls with counters\n\n**OllamaError Definition**:\n- Use Schema.TaggedError pattern for type-safe errors\n- Single reason field for error messages\n- Integrates with Effect error handling (Effect.fail, Effect.tryPromise catch)\n\nLocation: packages/swarm-mail/src/memory/ollama.ts\nTests: packages/swarm-mail/src/memory/ollama.test.ts (16 tests, all passing)\nConfig: MemoryConfig with ollamaHost and ollamaModel, defaults from env vars","created_at":"2025-12-18T18:57:57.759Z","tags":"effect-ts,ollama,embeddings,swarm-mail,context-tag,retry-pattern,testing"}
+ {"id":"d6614b59-70bb-4b86-9d20-14774faa9f5a","information":"Config file pattern for Effect Schema classes: When creating config with Schema.Class, define a static Default property for the default config instance, and implement loadConfig/saveConfig helpers outside the class. Use Schema.decodeSync for validation when loading from JSON. For simple serialization, JSON.stringify works directly on Schema instances without needing Schema.encode. File structure: imports at top (fs, path), Schema class definition with static Default, then standalone load/save functions that use the Default instance. This keeps the Schema class clean and separates IO concerns.","created_at":"1766260781808.0","tags":"effect,schema,config,patterns,typescript"}
+ {"id":"d66c650b-d7d5-4c06-89fe-1b5fc0d1dbee","information":"{\"id\":\"pattern-1766296858244-xlcat5\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T06:00:58.244Z\",\"updated_at\":\"2025-12-21T06:00:58.244Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766296858493.0","metadata":"{\"id\":\"pattern-1766296858244-xlcat5\",\"kind\":\"pattern\",\"is_negative\":false}"}
  {"id":"d6759351-07a1-40f2-9c3e-c49022039786","information":"Testing Zod schemas pattern: For date coercion tests, z.coerce.date() always creates NEW Date instances even when input is already a Date. This means reference equality (toBe) fails. Solution: use .toBeInstanceOf(Date) + .getTime() comparison for date values. Also, Zod .omit() doesn't reject extra fields, it silently strips them during parsing. Test with expect(result).not.toHaveProperty('omittedField') not expect().toThrow().","created_at":"2025-12-18T16:32:12.902Z","tags":"zod,testing,dates,schemas,validation,gotcha"}
+ {"id":"d70554ab-1551-4f5d-982e-425b65e191dc","information":"{\"id\":\"pattern-1766261425493-154cx7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:10:25.493Z\",\"updated_at\":\"2025-12-20T20:10:25.493Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261425745.0","metadata":"{\"id\":\"pattern-1766261425493-154cx7\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"d72166d4-f000-4748-bbd7-26196e7205d7","information":"Evalite Framework for Compaction Hook Testing\n\nCreated comprehensive eval suite for testing coordinator resumption after compaction. Key patterns:\n\n**Fixture Structure:**\n- Test cases include hive cells (simulated state) and swarm-mail state (agents, reservations, messages)\n- Expected includes confidence level, context type, mustContain/mustNotContain patterns\n- 5 test cases covering: active epic, multiple epics, no swarm, empty hive, blocked epic\n\n**Custom Scorers:**\n- confidenceAccuracy - validates detection confidence (high/medium/low/none)\n- contextInjectionCorrectness - validates context type (full/fallback/none) \n- requiredPatternsPresent - checks for required patterns (swarm_status, COORDINATOR, etc)\n- forbiddenPatternsAbsent - ensures no placeholders (bd-xxx, <epic>, <path>)\n- compactionQuality - weighted composite (25% confidence, 25% injection, 30% required, 20% forbidden)\n\n**Import Issue Workaround:**\n- Importing from src/compaction-hook.ts triggers OpenCode plugin chain with module resolution errors\n- Solution: Copy context constants directly into eval file to avoid deep imports\n- This keeps evals independent and runnable without full build\n\n**Results:**\n- 77% overall score detects the bug correctly\n- Test \"Epic ID must be specific\" scores 50% - shows placeholders in context (the actual bug)\n- Run with: bunx evalite run evals/compaction-resumption.eval.ts\n\nFile locations:\n- evals/fixtures/compaction-cases.ts\n- evals/scorers/compaction-scorers.ts \n- evals/compaction-resumption.eval.ts","created_at":"1766596294978.0","tags":"evalite,testing,compaction-hook,coordinator,swarm,eval-framework"}
+ {"id":"d7e4cdc5-87d6-49d6-a2d7-f0b3291152da","information":"Analyzed Dicklesworthstone/agentic_coding_flywheel_setup for swarm coordination patterns. Key findings:\n\n**1. Manifest-Driven Generation Pattern (acfs.manifest.yaml):**\n- YAML manifest defines modules with metadata: id, phase, dependencies, install commands, verify commands, installed_check\n- TypeScript generator (packages/manifest/src/generate.ts) compiles YAML → shell scripts (scripts/generated/)\n- Each module becomes an idempotent bash function with skip logic via installed_check\n- `installed_check: { run_as: target_user, command: \"test -x ~/.bun/bin/bun\" }` → skips if already installed\n- verified_installer pattern: delegates to checksummed upstream install scripts, no inline commands needed\n\n**2. State Persistence with Stable IDs (scripts/lib/state.sh):**\n- state.json v2 uses stable phase IDs ([\"user_setup\", \"filesystem\", \"shell_setup\"...]) NOT numbers\n- Why: if phases reorder, resume logic doesn't skip wrong phases\n- Atomic writes: temp file → sync → rename (prevents corruption on crash/disconnect)\n- Tracks completed_phases, current_phase, current_step, phase_durations, failed_phase + error\n- JSON schema versioning for migrations (v2 → v3 added ubuntu_upgrade section)\n\n**3. Checksum-Verified Installers (scripts/lib/security.sh):**\n- checksums.yaml: maps tool names to upstream URL + SHA256\n- fetch_and_run_with_recovery(): fetches, verifies checksum, pipes to runner if match, skips or aborts if mismatch\n- HTTPS enforcement: curl --proto '=https' --proto-redir '=https' (prevents downgrade attacks)\n- Sentinel-based fetching preserves trailing newlines (appends __ACFS_EOF_SENTINEL__, strips after hash)\n- Retry logic with exponential backoff for transient network errors (exit codes 6,7,28,35,52,56)\n\n**4. Contract Validation (scripts/lib/contract.sh):**\n- acfs_require_contract() validates required env vars (TARGET_USER, TARGET_HOME, MODE) and helper functions before generated modules run\n- Prevents runtime errors from missing context\n- Explicit dependencies over implicit coupling\n\n**5. Doctor Checks with Caching + Timeouts (scripts/lib/doctor.sh):**\n- Three-tier checks: binary existence, shallow verification, deep functional tests (--deep flag)\n- Cache successful deep checks for 5min to avoid slow re-runs\n- Per-check timeout (15s default) prevents indefinite hangs, returns special \"timeout\" status\n- JSON output mode for parsing, gum UI for humans\n- Skipped tools tracking from state.json to differentiate \"not installed\" vs \"skipped by user\"\n\n**6. AGENTS.md Destructive Command Controls:**\n- RULE 1: NEVER delete files without explicit approval in same session\n- Forbidden: git reset --hard, git clean -fd, rm -rf without user providing exact command\n- Audit trail required: user text, command run, timestamp\n- Bun-only mandate: no npm/yarn/pnpm, only bun.lock\n\n**7. 
Generated File Convention:**\n- scripts/generated/ NEVER edited manually (stamped with generator metadata)\n- Modify generator (packages/manifest/src/generate.ts) → regenerate → shellcheck\n- Clear separation: hand-written libs in scripts/lib/, generated modules in scripts/generated/\n\n**Implementation for swarm:**\n- Adopt manifest-driven plugin tool generation (YAML → TypeScript compiler → MCP tools)\n- Use stable IDs for swarm phases/subtasks (not array indices) in decomposition\n- Add checksum verification to skill downloads and external script execution\n- Contract validation for swarm workers (require swarmmail_init, file reservations before work)\n- Doctor-style health checks for swarm coordination (detect stale reservations, blocked agents)\n- AGENTS.md-style mandate for destructive operations (NEVER close cells without completion criteria met)","created_at":"1766590813558.0","tags":"agentic_coding_flywheel_setup,manifest-generation,state-persistence,idempotency"}
  {"id":"d7efe68a-3a5d-42c6-b203-d77ea9c61961","information":"Successfully completed Bead→Cell event schema rename with backward compatibility. Key pattern: Export new names as primary exports, then add deprecated type aliases and const aliases for all old names (schemas, types, and helper functions). For imports, use only the new names and don't try to create aliases in the import statement - create them as separate exports after. This allows existing code to continue using BeadEvent types while new code uses CellEvent types. Total renames: 20 schemas, 20 types, 3 helper functions - all with backward compat aliases marked with @deprecated JSDoc tags.","created_at":"2025-12-17T16:40:48.872Z"}
  {"id":"d8320ad2-425b-4c27-a854-ef5ce49a2e55","information":"{\"id\":\"pattern-1765771080299-rxkeql\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:00.299Z\",\"updated_at\":\"2025-12-15T03:58:00.299Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:01.723Z","metadata":"{\"id\":\"pattern-1765771080299-rxkeql\",\"kind\":\"pattern\",\"is_negative\":false}"}
+ {"id":"da3010a8-76fb-4eb2-ba5e-743b0d63baec","information":"## 🧠 Brain Chat Feature Decomposition\n\n**Project:** pdf-brain-viewer (SvelteKit)\n**Epic:** Full RAG Chat with Knowledge Graph Memory\n\n### Architecture\n```\n┌─────────────────────────────────────────────────────────────────────┐\n│ pdf-brain-viewer (SvelteKit) │\n├─────────────────────────────────────────────────────────────────────┤\n│ ┌──────────────┐ ┌──────────────────────────┐ ┌───────────────┐ │\n│ │ Chat Panel │ │ Force Graph │ │ Info Panel │ │\n│ │ (left) │ │ (center) │ │ (right) │ │\n│ └──────────────┘ └──────────────────────────┘ └───────────────┘ │\n└─────────────────────────────────────────────────────────────────────┘\n```\n\n### Data Model\n```\nthreads: id, title, created_at, updated_at, selected_node_id\nmessages: id, thread_id, role, content, created_at, embedding F32_BLOB(1024)\nmemories: id, content, type (fact|preference|insight|question), embedding F32_BLOB(1024)\nmemory_sources: memory_id → message_id\nmemory_concepts: memory_id → concept_id + confidence\nmemory_documents: memory_id → doc_id + confidence\nmemory_links: source_memory_id → target_memory_id + relation_type\n```\n\n### Tech Stack\n- AI SDK 6 beta + Vercel AI Gateway (anthropic/claude-opus-4-5)\n- ai-elements Svelte for chat UI\n- Vercel Workflow for durable memory extraction\n- LibSQL with F32_BLOB vectors + libsql_vector_idx\n- Ollama mxbai-embed-large (1024 dims)\n- Catppuccin Mocha theme\n\n### Subtasks (8 total, validated)\n\n**Wave 1 - Parallel (no deps):**\n1. Schema & Types [src/lib/db.ts, src/lib/types.ts] - complexity 4\n2. Ollama Embedding Service [src/lib/services/embedding.ts] - complexity 2\n\n**Wave 2 - Depends on Wave 1:**\n3. RAG Service: Hybrid Reranking [src/lib/services/rag.ts] - complexity 4 (deps: 0,1)\n4. Chat API: Streaming [src/routes/api/chat/+server.ts, src/lib/services/chat.ts] - complexity 4 (deps: 0,2)\n5. Vercel Workflow: Memory Extraction [src/lib/workflows/extract-memories.ts, vite.config.ts, API route] - complexity 5 (deps: 0,1)\n\n**Wave 3 - Depends on Wave 2:**\n6. Chat Panel Component [ChatPanel.svelte, MessageBubble.svelte, ThreadList.svelte] - complexity 4 (deps: 3)\n\n**Wave 4 - Depends on Wave 3:**\n7. IDE Layout: Three-Panel [+page.svelte, selection store, ResizeHandle] - complexity 3 (deps: 5)\n\n**Wave 5 - Final polish:**\n8. Global Catppuccin Theme [app.css, +layout.svelte, theme.ts] - complexity 2 (deps: 6)\n\n### RAG Strategy (Hybrid Reranking)\n1. Embed query with Ollama\n2. Parallel search: selected node context + embeddings + concept_embeddings + memories\n3. Combine, deduplicate, rerank by cosine similarity\n4. Return top-k with source attribution\n\n### Memory Extraction (Vercel Workflow)\n- Explicit: User says \"remember X\" → immediate extraction\n- Automatic: Background workflow after assistant responses\n- Extract: facts, preferences, insights, questions\n- Auto-link to concepts and similar memories\n\n### Key Decisions from Socratic Planning\n- Full knowledge graph (option C) - conversations as first-class citizens\n- Thread → Messages → Memories architecture (option A)\n- Hybrid memory extraction (option D) - explicit + background\n- Persisted chat with embeddings from day one (option B)","created_at":"1766336899420.0","metadata":"{\"epic\":\"brain-chat\",\"project\":\"pdf-brain-viewer\",\"strategy\":\"feature-based\",\"subtask_count\":8,\"total_complexity\":28}","tags":"pdf-brain-viewer,chat,rag,knowledge-graph,memory,decomposition,swarm,sveltekit,ai-sdk,vercel-workflow,catppuccin"}
  {"id":"da4dbfc8-fbd1-4a12-b0ed-8b262529953c","information":"@badass Effect Router Decision (Dec 2024): Build a router/builder pattern using Effect-TS, similar to uploadthing's approach. Reference implementation: pingdotgg/uploadthing/packages/uploadthing/src/effect-platform.ts and _internal/upload-builder.ts. This provides type-safe, composable route definitions with Effect's error handling and dependency injection. The router pattern will be used across @badass packages for consistent API design.","created_at":"2025-12-18T15:51:55.079Z"}
+ {"id":"da756adb-a188-41fa-a8cc-67a961a73bf2","information":"swarm_review_feedback retry_context pattern: When review status is needs_changes, return retry_context in the response for coordinators to use with swarm_spawn_retry. Workers are fire-and-forget Task subagents - once they complete, they're dead and can't receive messages. The retry_context includes: (1) task_id, (2) attempt number, (3) max_attempts (3), (4) structured issues array (file, line, issue, suggestion), (5) next_action hint (\"Use swarm_spawn_retry to spawn new worker\"). CRITICAL: DO NOT send sendSwarmMessage for needs_changes status - worker is dead. KEEP sendSwarmMessage for approved status (audit trail). After 3 failed attempts, task is marked blocked and no retry_context is returned. TDD pattern: wrote 6 failing tests FIRST covering retry_context structure, next_action hint, max_attempts, no message to dead worker, message kept for approved, no retry_context after failure. All tests passed after removing sendSwarmMessage calls and adding retry_context to response.","created_at":"1766595048679.0","tags":"swarm,review,retry,coordinator,worker,fire-and-forget,tdd"}
  {"id":"db9ed7ab-6599-4b62-b12d-276836a633cc","information":"Shared PGlite test server pattern for swarm-mail dramatically speeds up test suite execution. \n\n**ROOT CAUSE:** Each test creating new PGlite instance requires ~500ms WASM initialization. With 50+ tests, this adds 25+ seconds of pure overhead.\n\n**SOLUTION:** Share ONE PGlite instance across entire test suite via test-server.ts module-level state:\n\n```typescript\n// test-server.ts\nlet db: PGlite | null = null;\n\nexport async function startTestServer() {\n if (db) return { db }; // Reuse existing\n db = await PGlite.create({ extensions: { vector } });\n await runMigrations(db);\n return { db };\n}\n\nexport async function resetTestDatabase() {\n if (!db) throw new Error(\"Test server not started\");\n await db.exec(\"TRUNCATE agents, messages, beads, ... CASCADE\");\n}\n\nexport function getTestDb() {\n if (!db) throw new Error(\"Test server not started\");\n return db;\n}\n```\n\n**Test Pattern:**\n```typescript\nbeforeAll(async () => {\n await startTestServer(); // ONE init\n});\n\nbeforeEach(async () => {\n await resetTestDatabase(); // TRUNCATE (~10ms) instead of recreate (~500ms)\n});\n\nafterAll(async () => {\n await stopTestServer();\n});\n```\n\n**MEASURED RESULTS (hive/adapter.test.ts, 25 tests):**\n- Before: 8.63s (345ms per test)\n- After: 0.96s (38ms per test)\n- **~9x speedup, 90% reduction in test time**\n\n**KEY DECISIONS:**\n1. Abandoned PGLiteSocketServer approach - socket overhead added complexity without benefit\n2. Direct shared PGlite instance is simpler and faster\n3. TRUNCATE CASCADE between tests provides clean isolation\n4. Module-level state works perfectly for process-scoped test suites\n\n**GOTCHAS:**\n- Must TRUNCATE in correct order due to foreign keys (use CASCADE)\n- Must run migrations once at startup, not per test\n- Close cleanup is critical: `db.exec(\"CHECKPOINT\")` before `db.close()`\n\n**APPLICABILITY:** This pattern works for any test suite using PGlite where WASM init dominates test time. Expected 10-20x speedup for larger test suites (100+ tests).","created_at":"2025-12-19T15:12:21.422Z","tags":"testing,pglite,performance,test-patterns,swarm-mail,speedup"}
415
+ {"id":"dbba7b08-3fc3-4ccd-b51f-827770d11717","information":"Script-to-workflow integration pattern for Vercel Workflow in Nitro apps: Add --workflow flag to existing scripts to trigger workflow via cron API endpoint instead of inline processing. Pattern: (1) Parse --workflow flag, (2) Build URL with query params (full, team, etc), (3) Fetch http://localhost:3000/api/cron/sync-<name> endpoint, (4) Handle JSON response with runId, (5) Exit early before local processing. Keep existing --dry-run mode for local testing. Update script header docs to show both modes. Replace TODO ingestion comments with notes that workflow handles production ingestion. This allows scripts to serve dual purpose: local debugging AND workflow trigger without code duplication.","created_at":"1766517572228.0","tags":"vercel-workflow,script-patterns,api-integration,nitro"}
193
416
  {"id":"dc749a41-96ec-4ab2-a163-f1639857f9bd","information":"{\"id\":\"pattern-1766074743915-fstlv8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:19:03.915Z\",\"updated_at\":\"2025-12-18T16:19:03.915Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:19:04.142Z","metadata":"{\"id\":\"pattern-1766074743915-fstlv8\",\"kind\":\"pattern\",\"is_negative\":false}"}
417
+ {"id":"dcbf2f31-eab0-4e0b-8884-41c288908d9d","information":"**agentmail_release test \"failures\" were already fixed in commit eb2ff6d**: Task opencode-swarm-monorepo-lf2p4u-mjg00go0fga reported 3 failing agentmail_release integration tests. Investigation found all 3 tests passing (100% success). The tests were fixed in prior commit \"fix(swarm-mail): fix 32 failing tests - schema alignment and test infrastructure\" (eb2ff6d). The three tests verify: (1) releasing all reservations, (2) releasing specific paths only, and (3) releasing by reservation IDs. All verify the `released` count correctly matches expectations. **Key learning:** When a task describes failing tests, ALWAYS run them first to verify current state before investigating. Task descriptions can be outdated if based on pre-fix snapshots. Don't waste time fixing what's already fixed.","created_at":"1766338566116.0","tags":"testing,agentmail_release,drizzle-migration,swarm-coordination,already-fixed"}
194
418
  {"id":"dda2aaf9-9eb3-4a54-8eb8-9894743448af","information":"Kent C. Dodds unified accounts feature request (Dec 2024): Kent wants to unify accounts across EpicAI.pro, EpicWeb.dev, and EpicReact.dev. Use case: User buys Epic React, starts Epic Workshop App tutorial, shouldn't have to create a separate EpicWeb.dev account. Current pain: suboptimal experience forcing account creation on different domain. Alternative considered: local tracking (also suboptimal). This validates the need for creator-scoped unified identity in @badass architecture.","created_at":"2025-12-18T15:32:32.673Z"}
419
+ {"id":"df2fcb8c-ccbd-401b-8d19-9fc00927eece","information":"{\"id\":\"test-1766260866255-c66a1una25\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:01:06.255Z\",\"raw_value\":1}","created_at":"1766260866491.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:01:06.255Z\"}"}
195
420
  {"id":"e03ede8f-8cc6-467f-bc19-60a54cb07e2e","information":"WorkerHandoff integration: task_id must have 3+ segments (project-slug-hash). Tests with bd-123 format fail. Use test-swarm-plugin-lf2p4u-name123 instead.","created_at":"2025-12-18T17:36:15.053Z"}
421
+ {"id":"e04dfef8-c513-4557-8b6e-cee18253e17d","information":"## Session Context: PGLite to libSQL Migration (Dec 21, 2025)\n\n### Epic: Drizzle Migration + Plugin Integration Tests\n**Branch:** feat/drizzle-migration-and-tests\n**Cell ID:** opencode-swarm-monorepo-lf2p4u-mjf9zd9kgo7\n\n### Completed Work\n1. **Streams subsystem** - ✅ Fully converted to Drizzle with wrappers\n2. **Memory subsystem** - ✅ Already uses Drizzle (raw SQL only for vector/FTS5)\n3. **32 failing tests fixed** - Schema alignment and test infrastructure\n4. **PGLite → libSQL migration tool** - Created migrate-pglite-to-libsql.ts\n\n### In Progress\n1. **Hive subsystem conversion** - Still uses DatabaseAdapter with raw SQL\n2. **Remove PGLite from streams/index.ts exports** - Cleanup task\n\n### Key Technical Decisions\n- Use toSwarmDb() to convert DatabaseAdapter → SwarmDb (Drizzle client)\n- Keep complex CTEs as raw SQL via sql.raw() if Drizzle cannot express them\n- Schema source of truth: packages/swarm-mail/src/db/schema/*.ts\n- FTS5 and vector operations MUST stay as raw SQL (Drizzle does not support)\n\n### Test Status (Last Known)\n- swarm-mail: 595 pass, 15 skip, 0 fail\n- opencode-swarm-plugin: 423 pass, 0 fail\n- Integration tests: 440 pass, 18 skip, 6 fail (agentmail_release, swarm_checkpoint)\n\n### Files Modified (Key)\n- hive/store.ts - Event store operations\n- hive/projections.ts, projections-drizzle.ts - Query projections\n- hive/queries.ts, queries-drizzle.ts - Complex queries\n- streams/index.ts - Export cleanup needed\n- db/migrate.ts - Migration runner","created_at":"1766337614267.0","tags":"drizzle,migration,pglite,libsql,swarm-mail,hive,session-context"}
196
422
  {"id":"e0a89793-1dd3-4061-9621-524a5ae92841","information":"Documentation audit for BeadsAdapter migration completed 2025-01-16. Searched all docs in packages/opencode-swarm-plugin/docs/ for stale references to: bd CLI commands, Go implementation, SQLite, old architecture. Found 1 stale reference: swarm-mail-architecture.md line 519 incorrectly compared Agent Mail's \"SQLite file\" to Swarm Mail's PGLite. Fixed to \"PGLite (embedded Postgres)\" for accuracy. All other docs (ADR-001, ADR-002, ADR-003, ROADMAP, subagent-coordination-patterns.md, swarm-mail-architecture.md) correctly reference: PGLite event sourcing, BeadsAdapter from swarm-mail package, .beads/issues.jsonl sync. No references to deprecated bd CLI or Go implementation found.","created_at":"2025-12-17T01:00:46.822Z","tags":"documentation,audit,BeadsAdapter,migration,PGLite,swarm-mail"}
423
+ {"id":"e0e9227d-51b0-4943-8ba3-e5de88cda39c","information":"{\"id\":\"pattern-1766262232471-56tbqa\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:23:52.471Z\",\"updated_at\":\"2025-12-20T20:23:52.471Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262232691.0","metadata":"{\"id\":\"pattern-1766262232471-56tbqa\",\"kind\":\"pattern\",\"is_negative\":false}"}
197
424
  {"id":"e122ede7-8a62-4489-9742-3234b89a8fb2","information":"SWARM-MAIL ADAPTER PATTERN DECISION (Dec 2025): Extracting swarm-mail as standalone package using adapter pattern from coursebuilder. Key design: 1) DatabaseAdapter interface abstracts SQL operations (query, exec, transaction), 2) SwarmMailAdapter interface defines all swarm-mail operations, 3) createSwarmMailAdapter(db) factory accepts injected database, 4) PGLite convenience layer provides getSwarmMail() singleton for simple usage. Benefits: portable (works with PGLite, Postgres, Turso), testable (inject in-memory), shareable (one db across consumers), decoupled (swarm-mail doesn't own db lifecycle). Pattern learned from github.com/badass-courses/course-builder/tree/main/packages/adapter-drizzle which uses table function injection for multi-tenant prefixing.","created_at":"2025-12-15T00:02:39.759Z"}
425
+ {"id":"e12e683e-1053-457e-b9f8-a1b5fdb57f83","information":"{\"id\":\"pattern-1766350692476-lw871g\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-21T20:58:12.476Z\",\"updated_at\":\"2025-12-21T20:58:12.476Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766350692715.0","metadata":"{\"id\":\"pattern-1766350692476-lw871g\",\"kind\":\"pattern\",\"is_negative\":false}"}
426
+ {"id":"e13ca098-be54-4e93-af56-8dc579d01dcf","information":"{\"id\":\"test-1766260930824-4ncf1lztwvj\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:02:10.824Z\",\"raw_value\":1}","created_at":"1766260931067.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:02:10.824Z\"}"}
198
427
  {"id":"e18f64a6-d971-4ef8-8d09-02f3f7a445a5","information":"Schema file renaming with backward compatibility pattern: When renaming core schema files like Bead to Cell, create new file with updated names first, export all primary types and schemas with new names, then add backward compatibility section at bottom with deprecated JSDoc tags. Use pattern: export const OldName equals NewName and export type OldType equals NewType. This allows gradual migration across codebase without breaking existing imports. Delete old file only after new file is complete with aliases. For opencode-swarm-plugin Bead to Cell hive metaphor migration.","created_at":"2025-12-17T16:39:35.501Z"}
199
428
  {"id":"e1eb1c68-a71a-4c00-beb6-7310deffc166","information":"Documentation file rename with terminology update pattern: Renamed beads.mdx → hive.mdx in docs, updated all tool names (beads_* → hive_*), changed terminology (bead/beads → cell/cells), updated directory references (.beads/ → .hive/), and added backward compatibility note mentioning beads_* aliases still work but are deprecated. Key insight: When renaming documentation for deprecated APIs, ALWAYS include a migration note at the top explaining the old names still work but show warnings. This helps users transition smoothly without breaking existing code. File path was apps/web/content/docs/packages/opencode-plugin/","created_at":"2025-12-18T18:37:20.197Z","metadata":"{\"context\":\"v0.31 beads→hive rename\"}"}
200
429
  {"id":"e23e3f30-6e9f-4eb4-858f-2ac50f6e17ad","information":"@badass Multi-Database Testing Pattern (Dec 2024): Adopted from course-builder. Key pattern is PARAMETERIZED TEST SUITES.\n\n**Core Pattern:**\n```typescript\n// Write once in packages/db/test/adapter-tests.ts\nexport function runAdapterTests(options: {\n adapter: Adapter\n db: { connect, disconnect, user, session, ... }\n fixtures: TestFixtures\n}) {\n beforeAll(() => options.db.connect())\n afterAll(() => options.db.disconnect())\n \n test('creates user', async () => {\n const user = await options.adapter.createUser(options.fixtures.user)\n const dbUser = await options.db.user(user.id)\n expect(dbUser).toEqual(user)\n })\n}\n\n// Run against Postgres\nrunAdapterTests({ adapter: postgresAdapter, db: postgresHelpers, fixtures })\n\n// Run against SQLite\nrunAdapterTests({ adapter: sqliteAdapter, db: sqliteHelpers, fixtures })\n```\n\n**Key Files from course-builder:**\n- packages/utils/adapter.ts:84 - runBasicTests() (766 lines)\n- packages/adapter-drizzle/test/fixtures.ts - Shared test data\n- packages/adapter-drizzle/test/mysql/test.sh - Shell script for DB lifecycle\n\n**DRY Patterns:**\n1. Parameterized test suites (write once, run against multiple DBs)\n2. Shared fixtures file (single source of truth for test data)\n3. Shell scripts for database lifecycle (Docker container management)\n4. Shared vitest config via tooling package\n5. Optional test methods pattern (core required, extended optional)\n\n**Gotchas:**\n- Drizzle truncates milliseconds - zero them out in fixtures\n- Cleanup order matters - delete children before parents (FK constraints)\n- Test suite functions use vitest globals (side effects, not pure)","created_at":"2025-12-18T16:36:29.114Z"}
430
+ {"id":"e333f398-4fee-41d8-8edb-c0fc30376305","information":"AI SDK v6 Section 1 Fundamentals validation complete. Found 3 model naming bugs, all other v6 patterns CORRECT.\n\n**CORRECT v6 Patterns:**\n- Import: `import { generateText, Output } from 'ai'` ✅\n- Structured output: `Output.object({ schema })` with destructuring `{ output }` ✅\n- Basic text generation: `generateText({ model, prompt })` with destructuring `{ text }` ✅\n- No deprecated `generateObject` or `experimental_generateObject` references ✅\n\n**Bugs Filed:**\n1. cell-is13o5-mji2yj856tl: Lesson 04 line 132 - 'openai/gpt-5' should be 'openai/gpt-5.1'\n2. cell-is13o5-mji2ym6ttkx: Lesson 05 line 182 - 'openai/gpt-5' should be 'openai/gpt-5.1'\n3. cell-is13o5-mji2zh5ndeq: Lesson 04 Model Selection Guide - 'gpt-5' → 'gpt-5.1' and 'gpt-5-nano' → 'gpt-5-mini'\n\n**Model Names v6:**\n- Fast models: `gpt-4.1`, `gpt-4.1-mini`, `gpt-4o`, `gpt-4o-mini`\n- Reasoning models: `gpt-5.1`, `gpt-5-mini`, `o3`, `o1-mini`\n\n**Lessons Validated:**\n- 01-introduction-to-llms.mdx: PASS (conceptual example uses correct v6 Output.object pattern)\n- 02-prompting-fundamentals.mdx: PASS (basic generateText examples, no structured output)\n- 03-ai-sdk-dev-setup.mdx: PASS (setup instructions, no code validation issues)\n- 04-data-extraction.mdx: 3 bugs (model naming in code example + Model Selection Guide)\n- 05-model-types-and-performance.mdx: 1 bug (model naming in code example)\n\nAll imports, API calls, and destructuring patterns match official v6 docs exactly.","created_at":"1766463910105.0","tags":"ai-sdk-v6,section-1,fundamentals,validation,model-naming,Output.object,generateText"}
431
+ {"id":"e45a9f1d-12fa-4dbf-b6ff-f5d2b15abd27","information":"Drizzle Migration Pattern for Subsystem-Specific Queries:\n\n**Problem:** When migrating queries to Drizzle, using the full schema (via `toDrizzleDb()` or `createDrizzleClient()`) breaks tests when test databases only contain tables from one subsystem (e.g., hive tables but not streams tables).\n\n**Root Cause:** `createDrizzleClient()` loads ALL schemas from `db/schema/index.js` (streams, memory, hive). Drizzle validates schema on instantiation, causing \"table X has no column Y\" errors when tables don't exist.\n\n**Solution:** Create subsystem-specific Drizzle client factories that only load relevant schemas:\n\n```typescript\nfunction getHiveDrizzle(db: DatabaseAdapter) {\n // Import only hive schema tables\n const hiveSchema = { beads };\n \n // For LibSQL Client, get the client and wrap with Drizzle\n if (typeof (db as any).getClient === 'function') {\n const client = (db as any).getClient();\n return drizzle(client, { schema: hiveSchema });\n }\n \n // For PGlite or raw client, wrap directly\n return drizzle(db as any, { schema: hiveSchema });\n}\n```\n\n**Benefits:**\n- Tests work with minimal schema setup (only tables needed for subsystem)\n- Faster Drizzle instantiation (fewer tables to validate)\n- Clear separation of concerns (hive code only sees hive schema)\n\n**Pattern:** When migrating subsystems to Drizzle, create `get{Subsystem}Drizzle()` helpers in subsystem-specific files (e.g., `hive/queries-drizzle.ts`, `streams/store-drizzle.ts`).\n\n**Applies to:** swarm-mail hive subsystem, but pattern is universal for any Drizzle migration with multiple schemas.\n","created_at":"1766331998014.0","tags":"drizzle,testing,schema-isolation,subsystem-migration,hive"}
432
+ {"id":"e488f52a-f43c-49b2-be81-57f3e9c57d50","information":"{\"id\":\"pattern-1766260240802-kxdynu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T19:50:40.802Z\",\"updated_at\":\"2025-12-20T19:50:40.802Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766260241042.0","metadata":"{\"id\":\"pattern-1766260240802-kxdynu\",\"kind\":\"pattern\",\"is_negative\":false}"}
201
433
  {"id":"e490aaba-d992-4f89-9fca-9855979a86e5","information":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:16:25.895Z\",\"updated_at\":\"2025-12-14T02:16:25.895Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:16:26.095Z","metadata":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"kind\":\"pattern\",\"is_negative\":false}"}
434
+ {"id":"e5cb0bfa-a3b7-451e-a7f1-3bc13caa1b2f","information":"{\"id\":\"pattern-1766262989524-goyxtd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:36:29.524Z\",\"updated_at\":\"2025-12-20T20:36:29.524Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766262989741.0","metadata":"{\"id\":\"pattern-1766262989524-goyxtd\",\"kind\":\"pattern\",\"is_negative\":false}"}
435
+ {"id":"e6cd509d-b29d-4a32-9e45-1fb9587017a0","information":"Canvas rendering in Svelte 5: Always check getContext(\"2d\") returns non-null before using. Pattern: `const ctx = canvas.getContext(\"2d\"); if (!ctx) return;` instead of `const ctx = canvas.getContext(\"2d\")!;`. Also check node.x and node.y together in one condition: `if (node.x == null || node.y == null) continue;` not `if (node.x == null) continue; ... node.y!`.","created_at":"1766343409432.0","tags":"svelte,canvas,null-safety,typescript,rendering"}
202
436
  {"id":"e6ed4f0d-5c0a-46c0-871f-dbfd9167e0ba","information":"ADR-008 App Template Architecture decision: CLI scaffolding via bunx create-badass-app using Bun's native file I/O (no degit/giget dependencies). Database-backed email list via adapter pattern (default: SQLite, swappable to ConvertKit/etc). ContentResource pattern from ADR-003 for blog posts + collections. Next.js 16 + Tailwind + @badass/ui default stack. Template lives in packages/create-badass-app/templates/. Coordinated with ADR009Writer on shared docs/adr/README.md - both agents added their entries to avoid conflicts.","created_at":"2025-12-18T23:56:41.130Z","tags":"adr,app-template,scaffolding,cli,email-capture,adapter-pattern,swarm-coordination"}
203
437
  {"id":"e77b7ee9-ceea-4f0f-8314-30e64330d6c3","information":"DRIZZLE ORM + PGLITE FEASIBILITY ANALYSIS (Dec 2025):\n\nVERDICT: FEASIBLE via hybrid/coexistence approach.\n\nKEY FINDINGS:\n1. Drizzle has first-class PGLite support (drizzle-orm/pglite driver since v0.30.6)\n2. Can wrap existing PGLite instance: drizzle({ client: existingPGlite })\n3. Same API works on full PostgreSQL - future migration is trivial\n4. All PostgreSQL features work: JSONB, SERIAL, indexes, foreign keys, transactions\n\nRECOMMENDED APPROACH:\n- Keep existing migrations.ts for current tables\n- Use Drizzle for new features going forward\n- Implement DrizzleDatabaseAdapter wrapper to satisfy existing DatabaseAdapter interface\n- Gradual migration of high-churn tables over time\n\nEFFORT ESTIMATE: ~87 hours (2-3 weeks) for full migration\n\nWRAPPER PATTERN:\nclass DrizzleDatabaseAdapter implements DatabaseAdapter {\n constructor(private db: PgliteDatabase) {}\n async query<T>(sql, params) { return { rows: (await this.db.execute(sql.raw(sql, ...params))).rows }; }\n async transaction<T>(fn) { return this.db.transaction(tx => fn(new DrizzleDatabaseAdapter(tx))); }\n}\n\nREFERENCE: Course Builder has working adapter-drizzle package at badass-courses/course-builder\n\nGOTCHAS:\n- Drizzle doesn't auto-generate down migrations (rollback support is partial)\n- Drizzle uses template literals not $1,$2 params - wrapper must translate\n- Bundle size adds ~50kb (negligible for Node.js)","created_at":"2025-12-16T20:23:38.983Z"}
204
438
  {"id":"e7e92b71-82db-4a4f-a9b0-b4b4549c5a0e","information":"Beads validation and operations implementation completed for opencode-swarm-plugin-it2ke.19. Ported validation rules from steveyegge/beads internal/types/types.go: title max 500 chars, priority 0-4, status transition state machine (open->in_progress/blocked/closed, closed->open reopen, tombstone permanent). Operations layer provides high-level CRUD (createBead, getBead, updateBead, closeBead, reopenBead, deleteBead, searchBeads) wrapping BeadsAdapter with validation. All 41 validation tests pass. Operations tests reveal priority=0 handling issue - event stores priority correctly but projection defaults to 2, likely due to event.priority OR 2 treating 0 as falsy. Fix: use nullish coalescing instead for proper undefined handling.","created_at":"2025-12-16T22:19:50.241Z","tags":"beads,validation,operations,event-sourcing,priority-handling,steveyegge-port"}
439
+ {"id":"e84c9135-1eb8-417b-a753-6ff71b0becda","information":"Stable IDs for Subtasks: Use generated string identifiers (e.g., \"auth-setup-f3a2\") instead of array indices for subtask dependencies. Problem: If subtasks are reordered or new ones inserted, numeric indices break resume logic and dependency tracking. ACFS uses stable phase IDs in state.json v2 schema: completed_phases: [\"user_setup\", \"filesystem\"] NOT [1, 2]. Apply to hive epic subtasks - generate stable IDs at creation time, reference by ID not position. Source: Dicklesworthstone/agentic_coding_flywheel_setup state.sh","created_at":"1766591006754.0","tags":"swarm,hive,subtasks,ids,dependencies,patterns,acfs"}
440
+ {"id":"e8d7af1e-8896-4937-ac4d-08c8decc67fa","information":"{\"id\":\"pattern-1766263570127-xkxp9j\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:46:10.127Z\",\"updated_at\":\"2025-12-20T20:46:10.127Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263570370.0","metadata":"{\"id\":\"pattern-1766263570127-xkxp9j\",\"kind\":\"pattern\",\"is_negative\":false}"}
205
441
  {"id":"e9133cb2-0d3a-4ab6-8528-3b1f4a2ad306","information":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:48:36.548Z\",\"updated_at\":\"2025-12-13T22:48:36.548Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:48:36.768Z","metadata":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"kind\":\"pattern\",\"is_negative\":false}"}
442
+ {"id":"e97c791c-93a0-447c-9dbc-a46bd503f183","information":"Schema consolidation for libsql-schema.ts files: DO NOT use migrateDatabase() for initial schema creation. The migration system is designed for schema evolution (ALTER TABLE), not initial CREATE TABLE. libsql-schema.ts files serve as convenience helpers for tests/migrations and should keep explicit CREATE TABLE statements for clarity.\n\n**Why duplication is acceptable:**\n- libsql-schema.ts = convenience for tests (fast in-memory setup)\n- db/schema/*.ts = Drizzle schema (source of truth for structure)\n- FTS5/vector DDL MUST be in libsql-schema.ts (Drizzle can't create these)\n\n**Approach taken:**\n1. Keep CREATE TABLE in libsql-schema.ts for convenience\n2. Add prominent comments: \"MUST match db/schema/*.ts (source of truth)\"\n3. Remove duplicate logic, keep only FTS5/vector/index DDL that Drizzle can't handle\n4. Tests verify sync between schemas\n\n**Anti-pattern:** Trying to auto-generate CREATE TABLE from Drizzle schema via migrateDatabase() - causes quote escaping issues with defaults like \"'{}'\", fails for SQL function defaults like \"(datetime('now'))\".\n\nApplies to: swarm-mail package, memory/streams subsystems","created_at":"1766339063434.0","tags":"schema,consolidation,drizzle,libsql,fts5,vector,migration,source-of-truth"}
206
443
  {"id":"e9809d04-44d9-4ecb-9eef-a9a9ad45f4d8","information":"Verbose CLI output pattern for file operations: Created writeFileWithStatus(), mkdirWithStatus(), and rmWithStatus() helpers for swarm setup command. Each helper logs operation status (created/updated/unchanged for files, directory creation, file removal) using @clack/prompts logger. Pattern includes FileStats tracking to show summary at end: \"Setup complete: X files (Y created, Z updated, A unchanged)\". Key insight: Users need visibility into what changes during setup, especially for \"reinstall\" scenarios. Implementation: Check if file exists, compare content if exists, return status, log with appropriate p.log method (success for changes, message/dim for unchanged). This pattern is reusable for any CLI command that manipulates files.","created_at":"2025-12-18T16:52:09.530Z"}
207
444
  {"id":"ea487488-f609-4deb-b9f3-41282259a99d","information":"{\"id\":\"test-1765770963304-2pbmfn58gpr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:56:03.304Z\",\"raw_value\":1}","created_at":"2025-12-15T03:56:03.678Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:56:03.304Z\"}"}
445
+ {"id":"ea948134-c3b7-4733-89ae-8b6bb64970c7","information":"{\"id\":\"pattern-1766516102924-qsjzgw\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-23T18:55:02.924Z\",\"updated_at\":\"2025-12-23T18:55:02.924Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766516103152.0","metadata":"{\"id\":\"pattern-1766516102924-qsjzgw\",\"kind\":\"pattern\",\"is_negative\":false}"}
446
+ {"id":"eb883c3f-3b2c-4111-9a0a-76d1a5cf2d04","information":"{\"id\":\"test-1766261424533-y2mnd1sfsol\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:10:24.533Z\",\"raw_value\":1}","created_at":"1766261424789.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:10:24.533Z\"}"}
447
+ {"id":"ebee2566-7e95-470f-a05a-37e08b0bcad8","information":"DurableLock wiring in agent-mail reservation functions complete (Dec 21, 2025).\n\n**Problem:** 11 skipped tests in agent-mail.test.ts because DurableLock calls timed out. Root cause was missing `dbOverride` parameter in reserveAgentFiles(), releaseAgentFiles(), and initAgent().\n\n**Solution:**\n1. Added `dbOverride?: DatabaseAdapter` to ReserveFilesOptions, ReleaseFilesOptions, and InitAgentOptions interfaces\n2. Updated reserveAgentFiles() to use `const db = dbOverride ?? await getProjectDatabase(projectPath)`\n3. Updated releaseAgentFiles() same pattern\n4. Updated initAgent() same pattern\n5. Prevented closing db connection when dbOverride was provided (test owns the connection)\n6. Updated all 11 tests to create in-memory adapter with createInMemorySwarmMailLibSQL(testId) and pass db to all functions\n\n**Critical fix:** DurableLock should only be acquired for EXCLUSIVE reservations. Non-exclusive reservations should skip lock acquisition entirely. Added `if (exclusive)` guard around DurableLock.acquireLock() calls.\n\n**Test pattern:**\n```typescript\nconst { createInMemorySwarmMailLibSQL } = await import(\"../libsql.convenience\");\nconst testId = `unique-test-id-${Date.now()}`;\nconst swarmMail = await createInMemorySwarmMailLibSQL(testId);\nconst db = await swarmMail.getDatabase();\n\n// Pass db to ALL functions: initAgent, reserveAgentFiles, releaseAgentFiles\nawait initAgent({ projectPath, agentName, dbOverride: db });\nawait reserveAgentFiles({ projectPath, agentName, paths, dbOverride: db });\nawait releaseAgentFiles({ projectPath, agentName, dbOverride: db });\n\nawait swarmMail.close();\n```\n\n**Result:** All 28 tests pass, 0 skip, 0 fail. Tests run in 310ms.\n\n**Files modified:**\n- packages/swarm-mail/src/streams/agent-mail.ts\n- packages/swarm-mail/src/streams/agent-mail.test.ts","created_at":"1766379506193.0","tags":"drizzle-migration,DurableLock,agent-mail,libSQL,testing,exclusive-locks"}
448
+ {"id":"ec71c151-e9c5-4857-8f62-aa21be1fb8ee","information":"{\"id\":\"pattern-1766261102546-j31s5j\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:05:02.546Z\",\"updated_at\":\"2025-12-20T20:05:02.546Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766261102777.0","metadata":"{\"id\":\"pattern-1766261102546-j31s5j\",\"kind\":\"pattern\",\"is_negative\":false}"}
208
449
  {"id":"ed0f1389-a05d-4324-9770-ba00ecaae6b5","information":"@badass Payments Decision (Dec 2024): Creator-scoped payments. Creators sharing a database share a Stripe account. Purchase on epicreact.dev is visible on epicweb.dev. Entitlements sync across sites within a creator's ecosystem. This matches Kent's unified accounts use case.","created_at":"2025-12-18T15:53:59.795Z"}
209
450
  {"id":"ed97f39e-de6e-4f29-a608-2d13235d57ae","information":"Implemented swarm_plan_interactive tool for Socratic planning phase before decomposition. Tool has 4 modes: (1) socratic - full interactive with one question at a time, alternatives, recommendation, (2) fast - skip questions, go straight to decomposition, (3) auto - auto-select based on keywords, (4) confirm-only - show decomposition then yes/no. Key implementation notes: semantic memory is accessed via OpenCode global tools not direct import, uses formatMemoryQueryForDecomposition from learning module which returns {query, limit, instruction} object, integrates with existing selectStrategy and STRATEGIES from swarm-strategies module. Phase state machine: questioning → alternatives → recommendation → ready. Each phase returns SocraticPlanOutput JSON with phase, mode, ready_to_decompose flag, and next_action instruction.","created_at":"2025-12-16T16:21:02.123Z","tags":"swarm-planning,socratic-questioning,interactive-planning"}
210
451
  {"id":"edeae6d6-7656-4a0a-9481-7b295b98dcb7","information":"GREMLIN project structure (Dec 2024): Monorepo at /Users/joel/Code/badass-courses/gremlin with 9 ADRs documenting architecture. Three prime directives in AGENTS.md: (1) README Commandment - keep it current, it's marketing (2) ADR Commandment - document decisions BEFORE implementing (3) TDD Commandment - Red→Green→Refactor, no exceptions. Stack: Bun runtime, Turborepo, Vitest+Effect, Playwright, Biome, Next.js 16. Packages: @badass/core (router, schemas), @badass/db (Drizzle adapter). 159 unit tests + 2 E2E tests. CI/CD with intelligent E2E (Playwright sharding, change detection). Legacy course-builder as git submodule for reference patterns.","created_at":"2025-12-19T00:16:10.866Z","tags":"gremlin,project-structure,monorepo,agents-md,prime-directives,stack"}
211
452
  {"id":"ee586e5a-5aa2-4b71-904a-a4aee468076d","information":"{\"id\":\"pattern-1766074457007-guqdx7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:14:17.007Z\",\"updated_at\":\"2025-12-18T16:14:17.007Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:14:17.299Z","metadata":"{\"id\":\"pattern-1766074457007-guqdx7\",\"kind\":\"pattern\",\"is_negative\":false}"}
453
+ {"id":"ef0007e8-632e-41b9-bca5-4f22547500b1","information":"SQL injection prevention in libSQL/SQLite requires using `db.query()` with parameterized queries instead of `db.exec()` with string interpolation.\n\n**Vulnerable pattern:**\n```typescript\nawait db.exec(`\n INSERT INTO table (col1, col2)\n VALUES ('${userInput}', ${numericInput})\n`);\n```\n\n**Secure pattern:**\n```typescript\nawait db.query(\n `INSERT INTO table (col1, col2) VALUES (?, ?)`,\n [userInput, numericInput]\n);\n```\n\n**Why it matters:**\n- String interpolation allows SQL injection: malicious input like `\"'; DROP TABLE users; --\"` gets executed\n- Parameterized queries bind values safely - database treats them as data, not SQL code\n- Works for all parameter types (string, number, boolean)\n\n**Testing strategy:**\n- Test with malicious SQL in string parameters\n- Test with special characters (quotes, backslashes)\n- Verify malicious strings are stored literally, not executed\n- Check tables/data weren't modified by injection attempts\n\n**Affected locations in swarm-mail:**\n- `packages/swarm-mail/src/streams/effect/cursor.ts` lines 134-138 (loadCursorPosition)\n- `packages/swarm-mail/src/streams/effect/cursor.ts` lines 154-159 (saveCursorPosition)\n\nFixed by replacing `db.exec()` with string interpolation with `db.query()` using `?` placeholders and parameter arrays.","created_at":"1766375809350.0","tags":"security,sql-injection,libsql,sqlite,parameterized-queries,cursor,swarm-mail"}
212
454
  {"id":"ef25dc27-ef8f-41c9-8f44-4ef31ababa22","information":"Course Builder Drizzle Adapter Pattern for \"bring your own database\" sharing:\n\n1. **Table Function Injection**: Adapter accepts BOTH db instance AND table creator function. `DrizzleAdapter(db, tableFn)` - db is shared, tableFn is consumer-specific for namespacing.\n\n2. **Schema Factory Pattern**: Export `getSchema(tableFn)` factory, NOT concrete tables. Consumer calls factory with their prefixed table creator. Adapter never owns concrete table definitions.\n\n3. **Database Instance Injection**: Adapter stores reference to consumer's db instance, uses it for all queries. Adapter doesn't create db - consumer creates and passes it in.\n\n4. **Multi-Project Schema via Drizzle's tableCreator**: `mysqlTableCreator((name) => 'prefix_${name}')` enables table prefixing. Multiple apps share same database with isolated namespaces (e.g., `zER_users`, `zEW_users` in same db).\n\n5. **Consumer Usage Pattern**: Consumer creates pgTable with prefix, calls schema factory, creates db with merged schemas, passes db+tableFn to adapter.\n\nThis enables extracting packages like swarm-mail as pure libraries that integrate into consumer's database rather than owning their own instance. Key insight: the library is a \"guest\" in the consumer's database, not a \"host\".","created_at":"2025-12-14T23:56:11.298Z"}
455
+ {"id":"ef97f001-87ae-47c1-bfcb-f513cf991a23","information":"Researcher prompt template pattern for swarm documentation phase: Created RESEARCHER_PROMPT template following SUBTASK_PROMPT_V2 structure with [IDENTITY], [MISSION], [WORKFLOW], and [CRITICAL REQUIREMENTS] sections. Key design: coordinator provides EXPLICIT tech list (researcher doesn't discover what to research), researcher dynamically discovers TOOLS available (nextjs_docs, context7, fetch, pdf-brain). Two-output pattern: detailed findings to semantic-memory (searchable by future agents), condensed summary to coordinator via swarmmail_send for shared_context. Supports --check-upgrades flag for comparing installed vs latest versions. Tool signature: swarm_spawn_researcher(research_id, epic_id, tech_stack[], project_path, check_upgrades?). Returns JSON with prompt, subagent_type=\"swarm/researcher\", and expected_output schema. Exported via promptTools in swarmTools.","created_at":"1766515129291.0","tags":"swarm,researcher,documentation,prompt-template,epic-opencode-swarm-monorepo-lf2p4u-mjix9j5ssyz"}
456
+ {"id":"f13bc295-104a-4c65-af90-7cfa7eab1539","information":"swarm-mail getDatabase() migration: The old PGLite-style `getDatabase(projectPath)` standalone export was removed. Now use `getSwarmMailLibSQL(projectPath)` to get a SwarmMailAdapter, then call `adapter.getDatabase()` to get the DatabaseAdapter for raw queries. Example: `const swarmMail = await getSwarmMailLibSQL(projectPath); const db = await swarmMail.getDatabase(); await db.query(...)`","created_at":"1766345263816.0","tags":"swarm-mail,migration,getDatabase,libsql,api-change"}
457
+ {"id":"f1e4ec49-2123-46c4-9dfb-3bc334734e25","information":"{\"id\":\"test-1766593217747-1ure5lmoryr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-24T16:20:17.747Z\",\"raw_value\":1}","created_at":"1766593218085.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-24T16:20:17.747Z\"}"}
213
458
  {"id":"f24305d6-ce9c-4b32-84f1-cc6fbafa5899","information":"Effect-TS Layer routing pattern for daemon-aware connection fallback in pdf-library project.\n\n**Problem**: Database service needs to support both daemon mode (Unix socket via DatabaseClient) and single-process mode (direct PGlite) transparently.\n\n**Solution**: Use Layer.unwrapEffect to check daemon status at Layer creation time and route to appropriate implementation:\n\n```typescript\nexport const DatabaseLive = Layer.unwrapEffect(\n Effect.gen(function* () {\n const config = LibraryConfig.fromEnv();\n \n const daemonConfig = {\n socketPath: config.libraryPath,\n pidPath: `${config.libraryPath}/daemon.pid`,\n dbPath: config.dbPath,\n };\n\n const running = yield* Effect.promise(() => isDaemonRunning(daemonConfig));\n\n if (running) {\n // Route to DatabaseClient (Unix socket connection)\n return Layer.effect(\n Database,\n DatabaseClient.make(config.libraryPath).pipe(\n Layer.build,\n Effect.flatMap((context) => Effect.succeed(Context.get(context, DatabaseClient)))\n )\n );\n } else {\n // Route to direct PGlite implementation\n return DirectDatabaseLive;\n }\n })\n);\n```\n\n**Key insights**:\n- Layer.unwrapEffect allows decision at runtime (daemon check)\n- Layer.build + Context.get extracts DatabaseClient implementation\n- Compatible interfaces (Database and DatabaseClient) allow transparent routing\n- Tests verify fallback works when daemon not running\n\n**Why Layer.effect + Layer.build**:\nNeed to \"convert\" DatabaseClient layer to provide Database service. Pattern:\n1. Build DatabaseClient layer to get context\n2. Extract DatabaseClient implementation from context via Context.get\n3. Wrap in Layer.effect(Database, ...) to provide Database tag\n\nThis provides multi-process safety via daemon while maintaining single-process simplicity as fallback.","created_at":"2025-12-19T15:15:49.858Z","tags":"effect-ts,layer,routing,daemon,fallback,unix-socket,pglite"}
459
+ {"id":"f2b63c56-11dd-4e37-aa59-57d15987bf69","information":"LibSQL AsyncGenerator pattern: When implementing async generators in Effect-based services, the generator function must be called WITHIN the Effect scope to prevent CLIENT_CLOSED errors. The client is scoped to the Effect layer and closes when the scope ends.\n\n**WRONG**:\n```typescript\nconst db = await Effect.runPromise(Effect.provide(program, layer));\nconst batches = await collectGenerator(db.streamEmbeddings(10)); // CLIENT_CLOSED!\n```\n\n**CORRECT**:\n```typescript\nconst batches = await Effect.runPromise(\n Effect.gen(function* () {\n const db = yield* Database;\n // setup data...\n return yield* Effect.promise(() => collectGenerator(db.streamEmbeddings(10)));\n }).pipe(Effect.provide(layer))\n);\n```\n\nThe async generator holds a reference to the client, so it must be consumed before the Effect scope closes. Use Effect.promise() to wrap the async generator consumption inside the Effect scope.","created_at":"1766423830017.0","tags":"effect-ts,libsql,async-generators,scoping,client-lifecycle"}
460
+ {"id":"f2c0bac0-6db1-4453-acc1-4b2c56b2df32","information":"{\"id\":\"test-1766262543105-r7bm19lkujf\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:29:03.105Z\",\"raw_value\":1}","created_at":"1766262543323.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:29:03.105Z\"}"}
461
+ {"id":"f3514329-eb61-4447-b242-1f3e05d9bdcd","information":"AI SDK v6 Section 2 Validation Complete: API patterns are correct (generateText + Output.object/array, correct destructuring), but found systematic model naming bugs. All instances of `openai/gpt-4.1` should be `openai/gpt-4o-mini` and `openai/gpt-5` should be `openai/o1-mini`. Found across lessons 1-4. Lesson 5 (v0 UI) has no AI SDK code (just v0 integration tutorial). The core teaching is correct - only model identifiers need updating.","created_at":"1766464081509.0","tags":"ai-sdk-v6,validation,invisible-ai,model-names,bugs"}
462
+ {"id":"f3b50100-0bb4-4ff0-a9f4-440447b8aa94","information":"ADR writing pattern for opencode-swarm-plugin: Follow git-sync-distributed-coordination.md format with these sections: Context (problem statement with ASCII diagrams), Decision (architecture with detailed flow diagrams), Consequences (Positive/Negative/Risks), Implementation (files, functions, pseudocode), Alternatives Considered (rejected options with reasoning), Future Work (next steps), References. Use ASCII box diagrams for processes, state machines, and architecture. Include TypeScript pseudocode for key workflows. Reference specific OpenCode constraints and issues. Match existing ADR tone: technical, detailed, opinionated (\"this is the right architecture\").","created_at":"1766595569344.0","tags":"adr,documentation,architecture,opencode-swarm-plugin,writing-patterns"}
463
+ {"id":"f4e32f4b-6b15-4458-b904-e8cdf5d310cb","information":"{\"id\":\"test-1766263760686-zzafifmiqr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:49:20.686Z\",\"raw_value\":1}","created_at":"1766263760949.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:49:20.686Z\"}"}
464
+ {"id":"f519b624-497d-4115-a62a-fc3d637238ef","information":"{\"id\":\"test-1766261101180-gd9l9iem91g\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-20T20:05:01.180Z\",\"raw_value\":1}","created_at":"1766261101433.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-20T20:05:01.180Z\"}"}
465
+ {"id":"f51c6faf-225c-4a28-96d4-df2fe8849549","information":"{\"id\":\"test-1766516101566-59iepjl7xqy\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-23T18:55:01.566Z\",\"raw_value\":1}","created_at":"1766516101846.0","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-23T18:55:01.566Z\"}"}
466
+ {"id":"f5a5d45a-a679-4edd-8628-310cc639b109","information":"{\"id\":\"pattern-1766263310054-b2w7ig\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-20T20:41:50.054Z\",\"updated_at\":\"2025-12-20T20:41:50.054Z\",\"tags\":[],\"example_beads\":[]}","created_at":"1766263310316.0","metadata":"{\"id\":\"pattern-1766263310054-b2w7ig\",\"kind\":\"pattern\",\"is_negative\":false}"}
467
+ {"id":"f7a49f2c-9f9d-4e25-b910-973a703ebc99","information":"Plugin runtime migration from standalone getDatabase() to adapter pattern: The old PGLite-style `getDatabase(projectPath)` standalone export was removed from swarm-mail. Tests that called `const { getDatabase } = await import(\"swarm-mail\"); const db = await getDatabase(projectPath)` must migrate to adapter pattern: `const { getSwarmMailLibSQL } = await import(\"swarm-mail\"); const swarmMail = await getSwarmMailLibSQL(projectPath); const db = await swarmMail.getDatabase()`.\n\n**Why the change:** The standalone function was tightly coupled to PGLite. The adapter pattern (SwarmMailAdapter) provides database-agnostic interface.\n\n**Migration steps:**\n1. Replace `getDatabase, closeDatabase` imports with `getSwarmMailLibSQL, closeSwarmMailLibSQL`\n2. Replace `const db = await getDatabase(path)` with `const swarmMail = await getSwarmMailLibSQL(path); const db = await swarmMail.getDatabase()`\n3. Replace `await closeDatabase(path)` with `await closeSwarmMailLibSQL(path)`\n\n**Key insight:** Plugin code in swarm-orchestrate.ts, hive.ts, memory-tools.ts was ALREADY correctly using `swarmMail.getDatabase()`. They didn't need fixes - they were never broken. The issue Worker 1 fixed was in swarm-mail's store functions (appendEvent, readEvents) requiring explicit dbOverride parameter. Those now auto-create adapters via getOrCreateAdapter().","created_at":"1766349125655.0","tags":"swarm-mail,migration,adapter-pattern,database,getDatabase"}
214
468
  {"id":"f7f941bd-2467-49a2-b948-bba33ee263b1","information":"@badass Inngest Decision (Dec 2024): Site-isolated Inngest. Each site has its own Inngest app despite database sharing. Simpler blast radius, no cross-site event coordination complexity. Video processing, email jobs, etc. are site-scoped.","created_at":"2025-12-18T15:54:00.825Z"}
469
+ {"id":"f85ae083-b6c3-40d8-9599-c7a9c591069f","information":"HDBSCAN concepts yoinkable for pdf-library without full algorithm implementation: (1) Core distance via HNSW k-NN - compute core_k(x) = distance to k-th neighbor using existing vector_top_k(), provides noise robustness O(n log n) instead of O(n²). (2) Hierarchical clustering on HNSW graph - extract neighbor connections as sparse graph, run agglomerative with average linkage, single dendrogram contains all hierarchy levels (eliminates BIC k-selection). (3) Noise point filtering - minimum cluster size threshold (e.g., 5 chunks) + late merge detection (height > threshold × 1.5), filters OCR errors and outliers without forcing into clusters. (4) Height-based dendrogram cutting - cut at fixed distance thresholds (0.3, 0.5, 0.7 for cosine) for RAPTOR levels, simpler than stability optimization. SKIP: (1) Full MST construction via Prim's/Boruvka - even O(n log n) too expensive, HNSW graph IS the sparse MST approximation. (2) Stability-based cluster extraction - overkill for \"good enough\" clusters, height-based cutting sufficient. Implementation gains: 35% faster (17min → 11min), better cluster quality (noise filtering), single clustering run vs 3 independent k-means per level.","created_at":"1766426011488.0","tags":"hdbscan,clustering,raptor,hierarchical,noise-filtering,dendrogram,hnsw,agglomerative-clustering,k-selection,pdf-library"}
470
+ {"id":"f96dbbf1-bdea-4200-b6be-2ef8b64f80c7","information":"Fixed swarm-mail store.ts auto-adapter resolution: Removed requireDbOverride() error by implementing getOrCreateAdapter() function that auto-creates DatabaseAdapter instances when dbOverride is not provided. \n\n**Problem:** All store functions (appendEvent, readEvents, etc.) threw \"dbOverride parameter is required\" error when called without explicit DatabaseAdapter. This broke the API - callers shouldn't need to manually create adapters.\n\n**Root Cause:** requireDbOverride() function threw error if dbOverride was undefined. Legacy from PGlite removal.\n\n**Solution:**\n1. Added adapter cache (Map<string, DatabaseAdapter>) to avoid creating multiple instances\n2. Replaced requireDbOverride() with async getOrCreateAdapter(dbOverride?, projectPath?)\n3. Auto-creates adapter using getDatabasePath() + createLibSQLAdapter() when not provided\n4. Calls createLibSQLStreamsSchema() to initialize schema on new adapters\n5. Exported clearAdapterCache() for test isolation\n\n**Files Changed:**\n- store.ts: Added getOrCreateAdapter(), clearAdapterCache(), schema init\n- store.integration-test.ts: Added clearAdapterCache() + deleteGlobalDatabase() in afterEach\n- store-auto-adapter.test.ts: New test file proving fix works (2/2 pass)\n\n**Test Results:**\n- Integration tests: 21/24 pass (3 failures are pre-existing bugs unrelated to fix)\n- New focused tests: 2/2 pass\n- Original \"dbOverride required\" error completely eliminated\n\n**Key Insight:** getDatabasePath() ignores projectPath parameter and always returns global ~/.config/swarm-tools/swarm.db. Tests need to clear adapter cache + delete global DB for isolation.","created_at":"1766348469011.0","tags":"swarm-mail,store,database-adapter,auto-resolution,caching,libsql"}
471
+ {"id":"f9c44e94-1fc1-49e3-b0a9-28d09a1fa976","information":"Tool discovery pattern for researchers in swarm coordination: Created runtime detection of available documentation tools (MCP servers, CLI tools) using `discoverDocTools()`. Returns structured `DiscoveredTool[]` with name, type (mcp/cli/skill), capabilities array, and availability boolean.\n\nKey insight: Researchers discover HOW to fetch docs (available tools), not WHAT to research (coordinator provides tech list). This separation of concerns allows researchers to adapt to different environments.\n\nImplementation pattern:\n1. Define TOOL_DEFINITIONS with capabilities\n2. Check availability via isToolAvailable() for CLI, assume true for MCP (runtime detection)\n3. Return structured list with availability status\n4. Export as plugin tool with summary stats\n\nTDD approach worked well: 9 tests written first, all passing. Tests verify structure, availability detection, capability mapping, and graceful degradation.\n\nIntegration: Exported from swarm-research.ts → swarm.ts → index.ts (public API). Tool registered as `swarm_discover_tools` in plugin.\n\nFuture enhancement: OpenCode doesn't yet expose MCP server list, so we assume availability. When that's available, add actual MCP detection.","created_at":"1766515823304.0","tags":"swarm,research,tool-discovery,mcp,runtime-detection,tdd"}
215
472
  {"id":"fa0ede27-8993-4b8f-af9e-a1496684107e","information":"{\"id\":\"test-1765664066304-cw34qmxbxjm\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:14:26.304Z\",\"raw_value\":1}","created_at":"2025-12-13T22:14:26.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:14:26.304Z\"}"}
473
+ {"id":"facf6e03-d4c3-42da-8cf0-758434d4748f","information":"pino-roll async file creation timing: Files created via pino.transport() with pino-roll are written asynchronously. In tests, need to wait 500ms+ after logger.info() before checking if files exist with fs.readdir(). 100ms is too short and causes flaky tests. The transport spawns a worker thread that handles file writes, so the write operation doesn't complete synchronously.","created_at":"1766592745715.0","tags":"pino,pino-roll,testing,async,timing,flaky-tests"}
216
474
  {"id":"fb2f3480-9e10-443c-b9e9-755e83f648d8","information":"@badass Architecture Session Checkpoint (Dec 2024) - Ready to decompose into implementation. LOCKED DECISIONS: (1) CLI: Multi-site PlanetScale/Stripe pattern, ~/.badass/config.json, (2) DB: Creator-level sharing enabled, (3) Auth: Hive+Spoke model - creator designates one site as auth hive, spokes redirect there, (4) Cross-domain SSO: Hive acts as IdP since BetterAuth crossSubDomainCookies only works for subdomains not different TLDs, (5) Local app auth: RFC 8628 device flow (reference impl in course-builder ai-hero), (6) All core framework features in @badass/* packages. OPEN QUESTIONS for next session: (1) Content Model - posts vs courses/modules/lessons schema, (2) Video Pipeline - Mux integration (academy-content reference), (3) Payments - Stripe integration, cross-site purchases, (4) Event System - Inngest patterns. KEY REFERENCES: course-builder apps/ai-hero/src/app/oauth/device/ for device flow, vercel/academy-content for CLI+video pipeline, Kent's unified accounts request as driving use case.","created_at":"2025-12-18T15:42:07.722Z"}
475
+ {"id":"fb3b0250-8f3b-4b2d-804f-120254c70b0c","information":"LibSQL concept embeddings implementation for pdf-library: (1) Use F32_BLOB(768) for nomic-embed-text vectors - MUST match document embeddings dimension. (2) Store with vector32(JSON.stringify(embedding)), query with vector_top_k('concept_embeddings_idx', vector32(?), limit) joined to concepts table. (3) Distance to similarity: score = 1 - distance/2, threshold filter: distance <= 2*(1-threshold). (4) Index with compress_neighbors=float8 for 4x space savings, minimal recall loss. (5) TaxonomyService needs Layer.scoped (not Layer.effect) because addFinalizer requires Scope for cleanup. (6) Migration pattern: create table IF NOT EXISTS, create index IF NOT EXISTS, query for missing rows, batch process with progress reporting. (7) Concept embedding text format: \"prefLabel: definition\" or just \"prefLabel\" to match document chunk semantics.","created_at":"1766257019013.0","tags":"libsql,vector-search,embeddings,nomic-embed-text,taxonomy,effect-ts,migration"}
217
476
  {"id":"fb7adce8-e2f1-493c-beb6-8d3736a00b17","information":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:18:30.523Z\",\"updated_at\":\"2025-12-14T02:18:30.523Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:18:30.785Z","metadata":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"kind\":\"pattern\",\"is_negative\":false}"}
218
477
  {"id":"fbdec046-f92e-47dc-a80a-26e1a6c5fe8f","information":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:47:52.119Z\",\"updated_at\":\"2025-12-18T17:47:52.119Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:47:52.415Z","metadata":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"kind\":\"pattern\",\"is_negative\":false}"}
478
+ {"id":"fc9c8976-85c3-48d1-a5cf-88d05be9c5ca","information":"Pino logger singleton pattern for tests: When writing tests that create loggers with different directories, use a Map-based cache instead of a single module-level variable. Pattern: const loggerCache = new Map<string, Logger>() with cache keys like `${module}:${logDir}`. This allows tests to create isolated logger instances per test directory without interference. Also: clear require.cache[require.resolve(\"./logger\")] in beforeEach to force module reimport and reset singletons between tests.","created_at":"1766592738314.0","tags":"pino,testing,singleton,bun,typescript,cache-management"}
219
479
  {"id":"fdf514c6-3fba-4361-b5f4-fd7b5d023985","information":"{\"id\":\"test-1765771077694-7w6dasddwz8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:57.694Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:58.059Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:57.694Z\"}"}
220
480
  {"id":"ffb8e28a-303d-4941-afe7-bf21f69656fb","information":"{\"id\":\"test-1765666114922-71ihlfel1gc\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:48:34.922Z\",\"raw_value\":1}","created_at":"2025-12-13T22:48:35.124Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:48:34.922Z\"}"}
221
481
  {"id":"mem_mjbteazb_g1swqjm","information":"Test memory for tools integration","created_at":"2025-12-18T19:09:38.711Z","tags":"test"}
@@ -261,4 +521,17 @@
261
521
  {"id":"mem_mjk91ge8_39uareg","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T16:49:42.464Z"}
262
522
  {"id":"mem_mjk91k4k_a255x4y","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T16:49:47.300Z","tags":"test,memory"}
263
523
  {"id":"mem_mjk91kac_7tn2d8n","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T16:49:47.508Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
264
- {"id":"mem_mjk91knd_unxg7d7","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:49:47.977Z","tags":"test,verification"}
524
+ {"id":"mem_mjk91knd_unxg7d7","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:49:47.977Z","tags":"test,verification"}
525
+ {"id":"mem_mjkaz1iv_p4ibore","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T17:43:49.111Z","tags":"test,memory"}
526
+ {"id":"mem_mjkaz1ol_pkwgcn8","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T17:43:49.317Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
527
+ {"id":"mem_mjkaz1qv_n08jk1c","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T17:43:49.399Z","tags":"test,verification"}
528
+ {"id":"mem_mjkbf6a1_mhqaezf","information":"Test memory for tools integration","created_at":"2025-12-24T17:56:21.769Z","tags":"test"}
529
+ {"id":"mem_mjkbf6i0_77offf0","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T17:56:22.056Z"}
530
+ {"id":"mem_mjkbf97k_db6bxq6","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T17:56:25.568Z","tags":"test,memory"}
531
+ {"id":"mem_mjkbf99t_2duh0og","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T17:56:25.650Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
532
+ {"id":"mem_mjkbf9br_n4tg90b","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T17:56:25.719Z","tags":"test,verification"}
533
+ {"id":"mem_mjkbhr0d_3h2ejy4","information":"Test memory for tools integration","created_at":"2025-12-24T17:58:21.949Z","tags":"test"}
534
+ {"id":"mem_mjkbhrfa_hzcy46u","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T17:58:22.486Z"}
535
+ {"id":"mem_mjkbhu60_nv8kufy","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T17:58:26.040Z","tags":"test,memory"}
536
+ {"id":"mem_mjkbhubx_bdx22vh","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T17:58:26.253Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
537
+ {"id":"mem_mjkbhuf0_0s26nsz","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T17:58:26.364Z","tags":"test,verification"}