opencode-swarm-plugin 0.33.0 → 0.35.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.hive/issues.jsonl +12 -0
- package/.hive/memories.jsonl +255 -1
- package/.turbo/turbo-build.log +4 -4
- package/.turbo/turbo-test.log +289 -289
- package/CHANGELOG.md +133 -0
- package/README.md +29 -1
- package/bin/swarm.test.ts +342 -1
- package/bin/swarm.ts +351 -4
- package/dist/compaction-hook.d.ts +1 -1
- package/dist/compaction-hook.d.ts.map +1 -1
- package/dist/index.d.ts +95 -0
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +11848 -124
- package/dist/logger.d.ts +34 -0
- package/dist/logger.d.ts.map +1 -0
- package/dist/plugin.js +11722 -112
- package/dist/swarm-orchestrate.d.ts +105 -0
- package/dist/swarm-orchestrate.d.ts.map +1 -1
- package/dist/swarm-prompts.d.ts +54 -2
- package/dist/swarm-prompts.d.ts.map +1 -1
- package/dist/swarm-research.d.ts +127 -0
- package/dist/swarm-research.d.ts.map +1 -0
- package/dist/swarm-review.d.ts.map +1 -1
- package/dist/swarm.d.ts +56 -1
- package/dist/swarm.d.ts.map +1 -1
- package/evals/compaction-resumption.eval.ts +289 -0
- package/evals/coordinator-behavior.eval.ts +307 -0
- package/evals/fixtures/compaction-cases.ts +350 -0
- package/evals/scorers/compaction-scorers.ts +305 -0
- package/evals/scorers/index.ts +12 -0
- package/package.json +5 -2
- package/src/compaction-hook.test.ts +639 -1
- package/src/compaction-hook.ts +488 -18
- package/src/index.ts +29 -0
- package/src/logger.test.ts +189 -0
- package/src/logger.ts +135 -0
- package/src/swarm-decompose.ts +0 -7
- package/src/swarm-prompts.test.ts +164 -1
- package/src/swarm-prompts.ts +179 -12
- package/src/swarm-review.test.ts +177 -0
- package/src/swarm-review.ts +12 -47
package/.hive/memories.jsonl
CHANGED
@@ -1,3 +1,223 @@
{"id":"03864e7d-2f09-4779-8619-eaba5e98cb46","information":"PGlite WAL management solution for pdf-library project: Added checkpoint() method to Database service (Database.ts). PGlite supports standard PostgreSQL CHECKPOINT command - no special configuration needed. Implementation: checkpoint() => Effect.tryPromise({ try: async () => { await db.exec(\"CHECKPOINT\"); }, catch: ... }). This prevents WAL accumulation that caused 930 WAL files (930MB) and WASM OOM crash. CHECKPOINT forces WAL to be written to data files, allowing WAL recycling. Transaction safety for addChunks/addEmbeddings already existed (BEGIN/COMMIT/ROLLBACK pattern). Tests verify checkpoint can be called and transactions roll back on failure. Pattern applies to any PGlite project with batch operations.","created_at":"2025-12-19T03:41:35.101Z","metadata":"{\"file\":\"src/services/Database.ts\",\"project\":\"pdf-library\",\"test_file\":\"src/services/Database.test.ts\",\"tests_passing\":10}","tags":"pglite,wal,checkpoint,database,pdf-library,transaction,wasm,oom"}
{"id":"03fb1085-e349-47d3-9e2e-084e129a7fdb","information":"@badass Content Model Decision (Dec 2024): Use ContentResource + ContentResourceResource pattern from course-builder. Key files:\n\n**Database Schema:**\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource.ts:19` - Core ContentResource table with flexible JSON `fields` column\n- `packages/adapter-drizzle/src/lib/mysql/schemas/content/content-resource-resource.ts:14` - Join table for parent-child relationships with `position` (double for fractional ordering)\n\n**Collection Management:**\n- `apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84` - Main collection editor with drag-and-drop, search, tier selection\n- `apps/ai-hero/src/components/list-editor/lesson-list/tree.tsx:103` - Nested tree using Atlassian Pragmatic DnD\n- `apps/ai-hero/src/lib/lists-query.ts:268` - addPostToList for resource association\n\n**Resource Form Pattern:**\n- `apps/ai-hero/src/components/resource-form/with-resource-form.tsx:78` - HOC for config-driven resource editing\n- `apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/cohort-form-config.tsx:8` - Example config\n\n**Key Gotchas:**\n- Position is `double` not `int` - allows fractional positions for insertion without reordering\n- Nested loading hardcoded to 3 levels in adapter (line 2689-2723)\n- Slug format: `{slugified-title}~{guid}` for uniqueness\n- JSON fields validated by Zod at app layer, not DB level\n\n**Patterns to Extract to @badass:**\n1. ContentResource base model to @badass/core\n2. ResourceFormConfig pattern to @badass/core\n3. CollectionEditor component to @badass/ui\n4. Position management utilities to @badass/core/utils","created_at":"2025-12-18T15:50:04.300Z"}
{"id":"05ab4b37-7772-4e98-9c5d-34dfdee9da95","information":"{\"id\":\"pattern-1765653517980-ywilgz\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:18:37.980Z\",\"updated_at\":\"2025-12-13T19:18:37.980Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:18:38.186Z","metadata":"{\"id\":\"pattern-1765653517980-ywilgz\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"05e32452-500a-4365-bf06-2cddac413184","information":"@badass Cross-Domain SSO Decision (Dec 2024): Use BetterAuth crossSite plugin as core framework feature. Enables unified identity across different TLDs (like Kent's EpicAI.pro, EpicWeb.dev, EpicReact.dev). Configuration: trustedOrigins array lists sibling sites. This is a CORE feature built into @badass/auth, not per-creator config. All sites in a creator's ecosystem automatically trust each other when sharing a database. Solves Kent's Workshop App tutorial flow - user with Epic React purchase doesn't need separate EpicWeb.dev account.","created_at":"2025-12-18T15:34:52.718Z"}
{"id":"06e8b34d-6400-4b4c-85bd-e74102c29a12","information":"SQL alias typo in getBlockedCells: JOIN clause defined alias bbc for blocked_beads_cache but ON clause incorrectly referenced bcc.cell_id. Root cause: typo during initial implementation. Prevention: verify alias consistency between JOIN and ON clauses.","created_at":"2025-12-18T15:42:50.822Z"}
{"id":"0712fa64-54a7-4b3c-9b53-93f6f626f38b","information":"ADR-009 Local Dev Database decision (Dec 2024): Docker Compose + MySQL 8.0 for local development. Matches PlanetScale production (MySQL-compatible). Scripts: bun db:up/down/reset/migrate/seed/studio. Drizzle Kit for migrations. Hybrid seed data approach: SQL bootstrap files for static data + TypeScript factories for dynamic test data. Port 3309 to avoid conflicts with local MySQL. Rejected alternatives: manual MySQL install (version fragmentation), PostgreSQL (PlanetScale is MySQL-only), SQLite local (dialect mismatch causes prod bugs), PlanetScale branches (network latency, cost), shared dev database (conflicts).","created_at":"2025-12-19T00:16:16.546Z","tags":"adr,database,docker,mysql,drizzle,local-dev,planetscale"}
{"id":"07b07817-d654-4f13-880f-1c43592c6bc5","information":"Updated swarm-coordination skill with 4 critical new patterns: Worker Survival Checklist (mandatory 9-step pattern), Socratic Planning Flow (interactive modes), Coordinator File Ownership Rule (coordinators never reserve files), Context Survival Patterns (checkpoint before risky ops, store learnings immediately, auto-checkpoints, delegate to subagents). These prevent common failures: silent workers, context exhaustion, ownership confusion, lost learnings.","created_at":"2025-12-16T16:26:19.718Z","tags":"swarm,coordination,patterns,documentation,skills"}
{"id":"0973178b-96f0-4fe2-bc39-6fbf5d5361c7","information":"Vitest workspace auto-discovery gotcha in monorepos. Even with vitest.workspace.ts configured with explicit project paths vitest still auto-discovers and tries to run ALL test files in the repository by default. This causes failures when legacy or archived code has missing dependencies. Solution add --dir scope flag to package.json test scripts to limit vitest search scope. Example test vitest --dir packages ensures only packages directory is scanned. Why workspace config alone is not enough the workspace file defines separate test projects but does not prevent auto-discovery. Vitest will still find and attempt to load test files outside the workspace unless you explicitly limit the search directory. Affects Bun Turborepo monorepos with archived legacy code.","created_at":"2025-12-18T16:48:31.583Z"}
{"id":"0b9184ca-cd44-42f1-ae5b-28c6aad6d368","information":"{\"id\":\"test-1766080068974-jpovvl8fce\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:47:48.974Z\",\"raw_value\":1}","created_at":"2025-12-18T17:47:49.178Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:47:48.974Z\"}"}
{"id":"0c44c18e-b76e-4d3c-a6a7-6bfe9836c795","information":"bd daemon creates git worktrees that block branch switching. The beads daemon (bd daemon) runs in background and creates worktrees at .git/beads-worktrees/main for syncing. When switching branches, git fails with \"fatal: 'main' is already used by worktree\". Solution: 1) Stop daemon with `bd daemon --stop`, 2) Remove .git/beads-worktrees and .git/worktrees directories, 3) Run `git worktree prune`, 4) Then checkout works. The daemon auto-starts and recreates worktrees, so stop it before branch operations. Config shows sync.branch = main which is the branch it tracks.","created_at":"2025-12-16T19:52:14.153Z"}
{"id":"0d062d9b-68a4-47f6-899d-a08d899d48c5","information":"swarm-mail daemon mode is now the default. Implementation change: `const useSocket = process.env.SWARM_MAIL_SOCKET !== 'false'` (was `=== 'true'`). This prevents multi-process PGLite corruption by defaulting to single-daemon architecture.\n\nLog messages are critical for user guidance:\n- Daemon mode: \"Using daemon mode (set SWARM_MAIL_SOCKET=false for embedded)\"\n- Embedded mode: \"Using embedded mode (unset SWARM_MAIL_SOCKET to use daemon)\"\n\nTesting default behavior: Test unsets env var with `delete process.env.SWARM_MAIL_SOCKET`, then verifies getSwarmMail() attempts daemon mode (which falls back to embedded if no daemon running). This proves the default without requiring actual daemon.\n\nTests that call getSwarmMail() directly MUST set `SWARM_MAIL_SOCKET=false` in setup to avoid daemon startup attempts during tests.","created_at":"2025-12-19T15:17:10.442Z","tags":"swarm-mail,daemon,socket,pglite,default-behavior,testing"}
{"id":"0d34c323-6962-40ed-87fd-3d954e8e8524","information":"{\"id\":\"test-1766074649441-2bahri75eeq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:29.441Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:29.716Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:29.441Z\"}"}
{"id":"0d5c110a-f9b9-457c-b4f3-e877d5051baa","information":"Zod schema pattern for structured contracts: WorkerHandoff replaces 400-line prose with machine-readable contracts. Key design decisions: (1) task_id regex requires minimum 3 segments (project-slug-hash) to prevent \"invalid-format\" matching - use /^[a-z0-9]+(-[a-z0-9]+){2,}(\\.[\\w-]+)?$/ not /^[a-z0-9]+(-[a-z0-9]+)+(\\.[\\w-]+)?$/. (2) Empty arrays valid for files_owned (read-only tasks) and files_readonly, but success_criteria must have at least one item (.min(1)) to prevent ambiguous completion. (3) Nested schemas (Contract, Context, Escalation) compose cleanly - validate each independently then combine. (4) Export all schemas AND types from index.ts for proper TypeScript inference. Pattern proven in cell.ts, task.ts, evaluation.ts schemas.","created_at":"2025-12-18T17:27:12.651Z"}
{"id":"0e2654bb-47e5-4a0e-9738-427712dee767","information":"{\"id\":\"test-1766085028669-e33njleg6ak\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T19:10:28.669Z\",\"raw_value\":1}","created_at":"2025-12-18T19:10:28.913Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T19:10:28.669Z\"}"}
{"id":"0e7acef9-5500-4342-9c12-ef50c5997dee","information":"{\"id\":\"pattern-1765664067335-e68cvl\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:14:27.335Z\",\"updated_at\":\"2025-12-13T22:14:27.335Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:14:27.567Z","metadata":"{\"id\":\"pattern-1765664067335-e68cvl\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"1005d5c0-ac5e-4658-a555-3089c642fac5","information":"SWARM COORDINATION BUG: Coordinators must NEVER call swarmmail_reserve(). File reservation is exclusively for worker agents who are actually modifying files. When coordinator reserves files before spawning workers, it blocks the workers from accessing their assigned files. Correct flow: coordinator creates beads + spawns workers → workers call swarmmail_init() → workers call swarmmail_reserve() for their assigned files → workers do work → workers call swarm_complete() which auto-releases. The coordinator only monitors via swarmmail_inbox() and swarm_status().","created_at":"2025-12-14T23:18:17.346Z"}
{"id":"11c9e111-bf66-44e9-84d0-6c9a338bf290","information":"OpenCode command flags use simple prefix parsing (--flag-name). The /swarm command now supports planning modes: --fast (skip brainstorming), --auto (minimal Q&A), --confirm-only (show plan + yes/no), and default (full Socratic). These map to swarm_plan_interactive modes: 'fast', 'auto', 'confirm-only', 'socratic'. Key pattern: parse flags from command string, pass mode to swarm_plan_interactive, handle multi-turn conversation until ready_to_decompose=true, then delegate to swarm/planner subagent. The command documentation includes clear behavior table showing Questions/User Input/Confirmation for each mode.","created_at":"2025-12-16T16:25:10.423Z"}
{"id":"128aed42-765e-4958-9645-5031d57c60d2","information":"Context hygiene pattern for RAG systems: Implement reranking pipeline with rerankDocuments(), selectTopN(), and rerankAndSelect(). Start with keyword-based scoring for lessons (title 3x, content 2x, keywords 1x, term frequency 0.5x), then show production alternatives (Cohere, Together AI). Log token reduction metrics to demonstrate impact (~80% reduction typical). This teaches the concept while being runnable without external API keys.","created_at":"2025-12-16T21:29:47.790Z","metadata":"{\"type\":\"pattern\",\"domain\":\"rag-systems\"}","tags":"context-hygiene,reranking,ai-sdk,education"}
{"id":"131c9006-eb18-4fce-a248-359c9571032c","information":"Lesson authoring pattern for production-ready technical courses: Start with working implementation, then polish lesson content to match. For AI SDK courses, use @ts-expect-error for Vercel-only packages (like 'workflow') to avoid local TypeScript errors while maintaining educational value. Include Fast Track (3 quick steps), Project Prompt (requirements + hints), Try It (real output), and Solution (complete working code). Always create git tags for checkpoints (lesson-X.Y-solution) and push them.","created_at":"2025-12-16T21:29:34.961Z","metadata":"{\"type\":\"pattern\",\"domain\":\"lesson-authoring\"}","tags":"education,vercel,ai-sdk,workflows"}
{"id":"132ee45b-67b0-4499-8401-bf761432a9f0","information":"Drizzle ORM PostgreSQL ContentResource pattern: (1) NeonHttpDatabase type needs explicit schema object with tables AND relations - relations required for db.query to work. (2) Multi-column where: use and() helper not && operator. (3) Fractional positions: doublePrecision() not double(). (4) JSONB: Record string unknown not any. (5) Nested loading: recursively build Drizzle query objects for each depth level. (6) Slug format: slugified-title~guid for uniqueness.","created_at":"2025-12-18T16:06:01.731Z"}
{"id":"13557e2b-154a-45ae-bad9-291357d15536","information":"Durable Streams Protocol (Electric SQL) - The open protocol for real-time sync to client applications. Key concepts:\n\n1. **Offset format**: `<read-seq>_<byte-offset>` - 16-char zero-padded hex for each part, lexicographically sortable\n2. **Operations**: PUT (create), POST (append), GET (read with offset), DELETE, HEAD (metadata)\n3. **Read modes**: catch-up (from offset), long-poll (wait for new data), SSE (streaming)\n4. **Headers**: Stream-Next-Offset, Stream-Up-To-Date, Stream-Seq (writer coordination), Stream-TTL/Expires-At\n5. **Storage pattern**: LMDB for metadata + append-only log files for data\n6. **Recovery**: Scan files to compute true offset, reconcile with metadata on startup\n7. **File handle pooling**: SIEVE cache eviction for LRU file handles\n\nImplementation repo: github.com/durable-streams/durable-streams\n- @durable-streams/client - TypeScript client\n- @durable-streams/server - Reference implementation\n- @durable-streams/conformance-tests - Protocol compliance tests\n\nCritical for Agent Mail: Provides crash recovery, offset-based resumability, and long-poll for live tailing. Better than custom event sourcing because battle-tested at Electric SQL for 1.5 years.","created_at":"2025-12-13T16:52:31.021Z"}
{"id":"135aa45e-e41f-4864-b075-a8ff658ae9ae","information":"{\"id\":\"pattern-1766074438727-1olr11\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:13:58.727Z\",\"updated_at\":\"2025-12-18T16:13:58.727Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:13:58.949Z","metadata":"{\"id\":\"pattern-1766074438727-1olr11\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"140dbeef-29c1-4abd-8bd3-cadc264f3169","information":"ADR-009 Local Dev Database Decision (Dec 2024):\n\nVERDICT: Docker Compose + MySQL 8.0 for local development\n\nRATIONALE:\n- PlanetScale production target is MySQL-compatible (Vitess-backed)\n- Local-to-production parity prevents \"works on my machine\" dialect issues\n- Docker Compose provides declarative, version-controlled database setup\n- Zero MySQL administration knowledge required for developers\n\nKEY DECISIONS:\n1. MySQL 8.0 (not Postgres, not SQLite) - matches PlanetScale production dialect\n2. Docker Compose (not manual install, not PlanetScale branches) - version consistency + easy onboarding\n3. Port 3309 (not 3306) - avoids conflict with local MySQL installations\n4. Hybrid seed strategy: SQL files for bootstrap + TypeScript factories for test data\n5. Drizzle Kit integration: drizzle-kit push for migrations, drizzle-kit studio for GUI\n\nREJECTED ALTERNATIVES:\n- SQLite local + MySQL prod: Dialect mismatch causes production bugs (AUTOINCREMENT vs AUTO_INCREMENT, date handling, foreign keys)\n- Postgres: PlanetScale is MySQL-only, migration later would be painful\n- PlanetScale branches: Network latency, internet dependency, cost, no offline work\n- Manual MySQL install: Version fragmentation, config drift, M1/M2 issues, onboarding friction\n\nSCRIPTS INTERFACE:\n- bun db:up - Start container\n- bun db:down - Stop container\n- bun db:reset - Wipe + recreate + seed\n- bun db:migrate - Drizzle Kit push\n- bun db:seed - Run TypeScript seed script\n- bun db:studio - Drizzle Kit GUI\n\nCOURSE-BUILDER PRECEDENT:\nLegacy apps use identical pattern: MySQL 8.0 + Docker Compose + Drizzle Kit + seed_data volume mount\n\nGOTCHA: SQLite local testing is tempting for speed but creates false confidence - queries that work in SQLite fail in production MySQL due to dialect differences. Always match production database locally.","created_at":"2025-12-18T23:57:41.853Z","tags":"adr,database,docker,mysql,drizzle,planetscale,local-dev"}
{"id":"14e46924-baf7-4d30-8361-532404832c3f","information":"README showcase structure for developer tools: Lead with the unique innovation (learning system), not features. Use ASCII art liberally for visual impact on GitHub. Structure: Hero (what/why different) → Quick start → Deep dive by category → Scale metrics → Credits. For multi-agent systems, emphasize cost optimization (coordinator-worker split) and learning mechanisms (confidence decay, anti-pattern inversion). Include architecture diagrams showing information flow, not just component boxes.","created_at":"2025-12-18T15:34:49.143Z","tags":"documentation,readme,showcase,portfolio,ascii-art,developer-tools,architecture"}
{"id":"154c5c23-f0e1-47b1-8d17-d27ee198f943","information":"Enhanced swarm setup command with comprehensive verbose logging using @clack/prompts p.log.* methods. Pattern: Use p.log.step() to announce major operations (e.g., \"Checking existing configuration...\", \"Writing agent configuration...\"), p.log.success() for successful completions, p.log.message(dim()) for detailed status info, and p.log.warn() for non-critical issues. This pattern leverages existing writeFileWithStatus(), mkdirWithStatus(), and rmWithStatus() helpers which already output their own status. The key is to add context-setting log.step() calls BEFORE sections that contain multiple file operations. Example: p.log.step(\"Writing configuration files...\") followed by multiple writeFileWithStatus() calls that each log their own status (created/updated/unchanged). Users see the overall flow while helper functions show granular file-level details. This creates a clear hierarchy: step announcements → operation details → success summaries.","created_at":"2025-12-18T21:36:18.393Z","tags":"cli,verbose-output,ux,clack-prompts,swarm-setup"}
{"id":"16323a37-5d59-4c0b-a27e-5ffdea930cf1","information":"{\"id\":\"pattern-1765771111190-acdzga\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:31.190Z\",\"updated_at\":\"2025-12-15T03:58:31.190Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:31.512Z","metadata":"{\"id\":\"pattern-1765771111190-acdzga\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"167d7034-c725-4eda-96f9-7efd8f050c6b","information":"{\"id\":\"test-1765771108697-kiz3s5fu2v\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:28.697Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:29.165Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:28.697Z\"}"}
{"id":"16e62f42-bd4a-464a-aad5-31b4ac04797a","information":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:42.155Z\",\"updated_at\":\"2025-12-18T16:17:42.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:42.421Z","metadata":"{\"id\":\"pattern-1766074662155-kdgzzg\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"18e1fd32-ef6a-4332-88e2-b19dfff2e230","information":"JSONL export/import implementation for swarm-mail beads package: Export works well with hash-based deduplication and dirty tracking. Import has issues when creating beads via direct SQL INSERT to preserve IDs - subsequent adapter calls for dependencies/labels/comments may fail silently. 13/29 tests passing. Working: serialize/parse JSONL, content hashing, full export, dirty export, new bead import. Failing: dependency/label/comment import for new beads created via direct INSERT.","created_at":"2025-12-16T23:05:17.663Z","tags":"typescript,beads,jsonl,event-sourcing"}
{"id":"19c70339-3281-4311-9e7f-591b264624ea","information":"Bead Event Store Integration completed 75%. Implemented beads/store.ts (336 lines) with appendBeadEvent readBeadEvents replayBeadEvents following streams/store.ts pattern. Created beads/events.ts (215 lines) with 20 bead event type definitions to avoid TypeScript cross-package import issues. Key learnings: Cross-package TS imports fail with not under rootDir error - duplicate type definitions in consuming package. PGLite schema initialization happens in initializeSchema not migrations - tests must call getDatabase or manually init schema. Projection update functions expect loose event types with index signatures - need cast to any. Remaining work: Fix test setup initialize core schema, implement beads/adapter.ts factory update beads/index.ts exports.","created_at":"2025-12-16T22:00:19.988Z"}
{"id":"1b0b1b73-196c-499b-9db7-530645d6749f","information":"GOTCHA: bun publish doesn't support npm OIDC trusted publishers (requires npm login). \n\nSOLUTION: Use bun pack + npm publish combo:\n1. `bun pm pack` - creates tarball WITH workspace:* resolved to actual versions\n2. `npm publish <tarball>` - publishes tarball with OIDC support\n\nThis is implemented in scripts/publish.ts for opencode-swarm-plugin monorepo.\n\nAlso: bin scripts that import external packages need those packages in dependencies, not just devDependencies. The bin/swarm.ts was missing @clack/prompts.","created_at":"2025-12-15T04:46:30.825Z"}
{"id":"1b236fab-235c-426d-b2cf-d9c54d051724","information":"MarkdownExtractor testing patterns for Effect-based services: Use Effect.runPromise() in test helpers to properly execute Effects. For file-based tests, use temp directories (mkdtempSync) with beforeAll/afterAll cleanup. When testing Effect error types (like MarkdownNotFoundError), catch the FiberFailure wrapper and check error string contains the error name - don't use instanceof on the wrapped error. Gray-matter parses YAML dates as Date objects, not strings. Code blocks in chunking get replaced with placeholders then restored, so test for content presence not exact backtick syntax.","created_at":"2025-12-16T21:41:26.968Z"}
{"id":"1d034b17-20ee-4442-927a-3943288153d0","information":"Test learning about swarm patterns","created_at":"2025-12-16T16:21:07.411Z","tags":"swarm,test"}
{"id":"1f1a19f9-485b-4344-8efa-390f0d0cc42b","information":"BeadsAdapter migration from bd CLI to event sourcing complete. All 9 beads_* tools migrated to direct BeadsAdapter calls. Key patterns: (1) getBeadsAdapter() singleton with lazy init via getSwarmMail()->createBeadsAdapter(), (2) formatBeadForOutput() maps adapter fields to schema (type->issue_type, timestamps->ISO strings), (3) markDirty() after every mutation for incremental export, (4) FlushManager for beads_sync instead of bd sync --flush-only, (5) deleteBead() for rollback in beads_create_epic instead of bd close. Critical: export beads from swarm-mail/src/index.ts via 'export * from ./beads' then rebuild.","created_at":"2025-12-16T23:40:04.564Z"}
{"id":"1fcf004e-9ffd-4949-b83c-8e043dc80536","information":"PGLite WAL Health Monitoring Implementation: Added proactive WAL size monitoring to prevent WASM OOM crashes.\n\nRoot cause from pdf-brain: 930 WAL files accumulated to 930MB, causing WASM crash. Solution: monitor BEFORE it reaches critical size.\n\nImplementation (TDD approach - all tests green):\n1. Added to DatabaseAdapter interface:\n - `getWalStats(): Promise<{ walSize: number, walFileCount: number }>` - scans pg_wal directory\n - `checkWalHealth(thresholdMb = 100): Promise<{ healthy: boolean, message: string }>` - warns when exceeds threshold\n\n2. Implemented in wrapPGlite():\n - getWalDirectoryStats() helper scans pg_wal directory recursively\n - Returns { walSize: 0, walFileCount: 0 } for in-memory databases\n - Default 100MB threshold (10x safety margin before 930MB crisis point)\n - Message includes actual size, file count, and threshold\n\n3. Integrated with SwarmMailAdapter:\n - Enhanced healthCheck() to return `{ connected: boolean, walHealth?: { healthy, message } }`\n - Enhanced getDatabaseStats() to include `wal?: { size, fileCount }`\n - Graceful fallback when WAL stats not available (other database types)\n\nTesting: 15 tests covering getWalStats, checkWalHealth, adapter integration, in-memory fallback, custom thresholds.\n\nKey insight: Filesystem-based monitoring works better than pg_stat_wal queries for PGLite since pg_stat_wal may not be fully supported in embedded mode.\n\nUsage pattern:\n```typescript\nconst health = await adapter.healthCheck({ walThresholdMb: 100 });\nif (!health.walHealth?.healthy) {\n console.warn(health.walHealth?.message);\n await adapter.checkpoint?.(); // Trigger WAL flush\n}\n```","created_at":"2025-12-19T03:41:05.238Z","metadata":"{\"files\":[\"pglite.ts\",\"adapter.ts\",\"types/database.ts\",\"types/adapter.ts\"],\"package\":\"swarm-mail\",\"test_count\":15}","tags":"pglite,wal,health-monitoring,prevention-pattern,tdd,wasm-oom"}
{"id":"1ffca519-0ca2-4df7-b6bf-603c2001327f","information":"Beads query implementation: Blocked cache must be invalidated in event handlers. handleBeadClosed must call invalidateBlockedCache for dependents - closing a blocker unblocks dependent beads. Without this the blocked cache returns stale data. Cache enables 25x faster ready work queries by avoiding recursive CTEs.","created_at":"2025-12-16T22:51:44.210Z"}
{"id":"2061a77a-3eb7-4d52-a3d5-2a2314622ede","information":"Successfully completed index.ts rename from beads to hive. Pattern: 1) Import both hiveTools and beadsTools (plus directory setters) from \"./hive\", 2) Use setHiveWorkingDirectory() in plugin init, 3) Spread hiveTools in tool registration (includes beads aliases), 4) Update hook to check both \"hive_close\" and \"beads_close\", 5) Update all JSDoc to mention hive as primary and beads as deprecated. Build and typecheck pass. Backward compatibility maintained through aliases exported from hive module.","created_at":"2025-12-17T16:48:43.284Z"}
{"id":"20c5ee43-3389-42bd-b125-7da87c55445c","information":"{\"id\":\"test-1765670643103-ac1htt8yv4s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T00:04:03.103Z\",\"raw_value\":1}","created_at":"2025-12-14T00:04:03.299Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T00:04:03.103Z\"}"}
{"id":"20fb300c-80b9-400c-8125-258e1ddbba9b","information":"Session compaction hook implementation: Plugin.trigger(\"session.compacting\", { sessionID }, { context: [] }) allows plugins to inject additional context into the compaction prompt. The hook returns { context: string[] } which gets spread into the prompt text array and joined with \\n\\n. Hook is called BEFORE processor.process() to ensure context is available during compaction. Located in packages/opencode/src/session/compaction.ts process() function.","created_at":"2025-12-17T18:01:32.282Z"}
{"id":"22174fd3-71ad-4e49-ac02-67bd38e89db6","information":"opencode-swarm-plugin CI/CD status (Dec 2024):\n\nPACKAGES:\n- swarm-mail@0.1.2 - published, has dist/, repository field, ASCII art README\n- opencode-swarm-plugin@0.23.4 - published but has swarm-mail@0.1.0 dep (stale lockfile issue)\n\nPENDING FIX: \n- Updated scripts/publish.ts to use bun pm pack + npm publish\n- Updated package.json with ci:version and ci:publish scripts \n- Updated publish.yml to setup .npmrc and use new scripts\n- Need to push and merge release PR to get swarm-mail@0.1.2 as dependency\n\nOPEN BEADS:\n- opencode-swarm-plugin-whh1n (P1 bug): swarm_complete fails silently - NOT ADDRESSED\n- opencode-swarm-plugin-gde33 (P2): Swarm Mail Generalization Analysis - NOT ADDRESSED\n\nNEXT SESSION:\n1. Commit and push the publish workflow fixes\n2. Merge release PR when it appears\n3. Verify npm install works with correct swarm-mail version\n4. Then tackle the swarm_complete bug or the skill creation swarm task","created_at":"2025-12-15T05:07:35.356Z"}
{"id":"2300685b-e672-461f-9846-5ba2b78c4ac0","information":"Daemon process lifecycle management pattern for Node.js: Use child_process.spawn with detached true and stdio ignore for background daemons. Unref child process to allow parent exit. Store PID in file system. Use process.kill(pid, 0) to check if process is alive without sending signal - ESRCH error means dead. Wait for daemon ready by polling health check. SIGTERM for graceful shutdown, SIGKILL as fallback. Clean up PID file after process exit. Dynamic import of optional dependencies like postgres to avoid bundling in library consumers.","created_at":"2025-12-17T17:54:13.019Z"}
{"id":"258e9231-4bf7-4dbd-809f-3a16de6908f7","information":"When renaming tools in tool-availability.ts, must update 4 places: 1) ToolName type union, 2) toolCheckers object with async checker function, 3) fallbackBehaviors Record with description, 4) tools array in checkAllTools(). Keep deprecated tools for backward compatibility by adding both old and new names to all 4 locations. Mark deprecated with comments.","created_at":"2025-12-17T16:41:27.639Z"}
{"id":"265444da-937e-4fa7-9f5a-0d551b5fcc32","information":"Auto-migration implementation in createMemoryAdapter: Added module-level flag `migrationChecked` to track if legacy memory migration has been checked. First call to createMemoryAdapter() checks: (1) legacyDatabaseExists() from swarm-mail, (2) target DB is empty (COUNT(*) FROM memories = 0), (3) if both true, runs migrateLegacyMemories() with console logging. Subsequent calls skip check (performance optimization). Critical: Export resetMigrationCheck() for test isolation - without it, module-level flag persists across tests causing false failures. Test pattern: beforeEach(() => resetMigrationCheck()) ensures each test starts with fresh state. Graceful degradation: migration failures log warnings but don't throw - adapter continues working. Migrated 176 real memories successfully in production test. Migration functions were added to swarm-mail/src/index.ts exports (legacyDatabaseExists, migrateLegacyMemories, getMigrationStatus, getDefaultLegacyPath).","created_at":"2025-12-18T21:12:31.305Z","metadata":"{\"file\":\"src/memory.ts\",\"pattern\":\"auto-migration-on-first-use\",\"project\":\"opencode-swarm-plugin\"}","tags":"auto-migration,memory,pglite,testing,module-state,swarm-mail"}
{"id":"27928bec-546f-4a77-a32f-53415771c127","information":"PGlite WAL accumulation root cause: \"different vector dimensions 1024 and 0\" error from failed embedding operations. Solution: Validate embeddings BEFORE database insert in Ollama service. Added validateEmbedding() function that checks: 1) dimension not 0 (empty), 2) dimension matches expected (1024 for nomic-embed-text), 3) no NaN/Infinity values. Integrated into embedSingle() which is used by both embed() and embedBatch(). This prevents pgvector corruption that causes WAL buildup since PGlite never checkpoints. Test coverage: 6 tests covering all validation cases in Ollama.test.ts.","created_at":"2025-12-19T03:30:20.283Z","tags":"pglite,pgvector,embeddings,validation,ollama,wal,database-corruption,pdf-library"}
{"id":"291f3101-82dc-41f8-b077-fbce25dfd767","information":"@badass Video Pipeline Decision (Dec 2024): Videos are ALWAYS separate ContentResource types, never embedded fields. Video resources link to posts/lessons via ContentResourceResource join table. This enables video reuse across multiple collections. \n\nCourse-builder has a full web-based, Inngest-backed video pipeline currently in @coursebuilder/core - but core is bloated and this needs extraction. Video processing should be its own package (@badass/video or @badass/mux).\n\nKey reference files for video pipeline:\n- course-builder core video processing (needs extraction, location TBD)\n- academy-content Mux integration: vercel/academy-content/plans/video-upload-processing-plan.md\n\nArchitecture: Upload triggers Inngest job, Mux processes video, webhook updates VideoResource with asset ID and playback info.","created_at":"2025-12-18T15:51:59.366Z"}
{"id":"2add0e53-1dba-4191-bea0-0451e681f898","information":"{\"id\":\"test-1765751935012-epiln8ycyte\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:38:55.012Z\",\"raw_value\":1}","created_at":"2025-12-14T22:38:55.304Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:38:55.012Z\"}"}
{"id":"2d87e08a-4fed-450a-9aa5-ee09cc8848d7","information":"{\"id\":\"pattern-1765733413093-1ct6rt\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T17:30:13.093Z\",\"updated_at\":\"2025-12-14T17:30:13.093Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T17:30:13.345Z","metadata":"{\"id\":\"pattern-1765733413093-1ct6rt\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"2fcbad1b-56f1-471b-bc04-72a8765fe6c3","information":"Partial ID resolution in hive plugin tools: resolvePartialId from swarm-mail uses SQL LIKE pattern `%-{partialHash}%-%` to match the hash segment (middle portion of cell IDs). Cell ID format is `{prefix}-{hash}-{timestamp}{random}` where hash is 6 chars (can include negative sign creating consecutive hyphens like `cell--gcel4-mjd...`). In tests with many cells, short hashes (3-5 chars) often collide, causing ambiguous matches - use full hash or full ID for reliable resolution. The function returns null for no match, full ID for unique match, throws error for ambiguous. Integration: import resolvePartialId from swarm-mail, call before adapter operations with `const cellId = await resolvePartialId(adapter, projectKey, inputId) || inputId`. Add helpful error handling for \"Ambiguous hash\" and \"Cell not found\" messages.","created_at":"2025-12-19T16:30:14.215Z","tags":"hive,partial-id,resolution,swarm-mail,testing"}
{"id":"2febbadd-de6d-43e2-9e0a-ac3856755792","information":"Auto-sync pattern for hive_create_epic: After successfully creating epic + subtasks, immediately flush to JSONL using FlushManager so spawned workers can see cells without waiting for manual hive_sync. Implementation: ensureHiveDirectory() → new FlushManager({adapter, projectKey, outputPath}) → flush(). Wrapped in try/catch as non-fatal (log warning if fails). This mirrors the pattern in hive_sync but happens automatically after epic creation. Critical for swarm coordination - workers spawned after epic creation need to query cells from JSONL, not wait for coordinator to manually sync.","created_at":"2025-12-19T16:58:31.668Z","tags":"hive,swarm,auto-sync,epic-creation,flush-manager,coordination"}
{"id":"310ca5c8-13b1-483d-a28f-1140c9aa5d05","information":"{\"id\":\"pattern-1765386508923-acld7i\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:08:28.923Z\",\"updated_at\":\"2025-12-10T17:08:28.923Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:08:29.108Z","metadata":"{\"id\":\"pattern-1765386508923-acld7i\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"317c723e-6240-4a5f-b810-da9c274b3ece","information":"BUN MONOREPO DEPENDENCY INSTALLATION - COMPLETE GUIDE (Dec 2025)\n\nPROBLEM: `bun add --filter <workspace>` is BROKEN - installs to ROOT package.json, not the target workspace. Support is in beta as of Aug 2025.\n\nSOLUTION: Use `--cwd` flag instead:\n```bash\nbun add <package> --cwd <workspace-path>\nbun add -d <package> --cwd <workspace-path> # dev dependency\n```\n\nEXAMPLES:\n```bash\n# Install to specific workspace\nbun add express --cwd apps/server\nbun add -d @types/node --cwd apps/server\nbun add express cors helmet --cwd apps/server\n\n# Install to shared package\nbun add lodash --cwd packages/shared\n```\n\nWHY --cwd WORKS:\n- Tells Bun to pretend it's inside that folder\n- Dependencies go to correct package.json\n- Lockfile (bun.lockb) stays centralized at root\n- No local node_modules pollution\n\nANTI-PATTERN (don't do this):\n```bash\ncd apps/server && bun add express && cd ../..\n# Creates local node_modules, breaks monorepo hoisting\n```\n\nPRO TIP - Add helper scripts to root package.json:\n```json\n{\n \"scripts\": {\n \"add:web\": \"bun add --cwd apps/wizardshit-ai\",\n \"add:server\": \"bun add --cwd apps/server\"\n }\n}\n```\n\nTURBOREPO COMPATIBILITY:\n- `turbo build --filter=server` works fine\n- `bun add --filter` is the broken one, not turbo's --filter\n\nSource: fgbyte.com blog post, verified in wizardshit.ai monorepo setup Dec 2025","created_at":"2025-12-16T19:59:16.995Z"}
{"id":"32577e43-8ceb-481c-a8ee-874cfd49dd00","information":"{\"id\":\"pattern-1765749526038-65vu4n\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T21:58:46.038Z\",\"updated_at\":\"2025-12-14T21:58:46.038Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T21:58:46.288Z","metadata":"{\"id\":\"pattern-1765749526038-65vu4n\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"3287144c-e3f1-46fd-b6e1-ce4b82b35448","information":"PGLite BIGINT to Date conversion fix: PGLite can return BIGINT columns as JavaScript `bigint` type (version/environment-dependent). The Date constructor throws TypeError on bigint: `new Date(1234n)` fails with \"Cannot convert a BigInt value to a number\". \n\nSOLUTION: Wrap database timestamps in Number() before passing to Date constructor. Number() handles both number and bigint safely:\n- Number(1234) → 1234\n- Number(1234n) → 1234\n\nAPPLIED TO: packages/opencode-swarm-plugin/src/hive.ts, formatCellForOutput() function:\n- Line 590: created_at → new Date(Number(adapterCell.created_at))\n- Line 591: updated_at → new Date(Number(adapterCell.updated_at))\n- Line 593: closed_at → new Date(Number(adapterCell.closed_at))\n\nNOTE: Only affects READ path (database → output). WRITE path (JSONL → database) uses new Date(isoString).getTime() which is fine because input is string, not bigint.\n\nTESTING: Added integration tests in hive.integration.test.ts to verify dates parse correctly. All 66 hive tests pass with fix.","created_at":"2025-12-19T17:50:46.475Z","tags":"pglite,bigint,date,hive,database,type-safety"}
{"id":"330b1dcc-e675-4b49-8a0d-83c79ff9445e","information":"UTF-8 null byte sanitization for PostgreSQL: PostgreSQL TEXT columns crash with \"invalid byte sequence for encoding UTF8: 0x00\" when null bytes (\\x00) are present. Solution: sanitizeText(text: string) function using text.replace(/\\x00/g, \"\") to strip null bytes. Applied early in processing pipeline (before chunking) in both PDFExtractor and MarkdownExtractor. Critical to sanitize BEFORE other text processing to prevent null bytes from propagating through chunks into database. Used biome-ignore comment for noControlCharactersInRegex lint rule since we intentionally use \\x00 pattern.","created_at":"2025-12-19T17:16:53.764Z","tags":"postgresql,utf-8,null-bytes,sanitization,pdf-extraction,markdown-extraction,database-errors"}
{"id":"332273cc-89e1-477d-be2b-c0aee9fb08cd","information":"opencode-swarm-plugin v0.22.0 release - Major improvements to semantic memory and swarm mail coordination:\n\n1. MANDATORY semantic memory usage - agents now auto-capture learnings after every swarm_complete, with MANDATORY triggers documented in AGENTS.md for when to store memories (after bugs, architectural decisions, patterns discovered, debugging sessions)\n\n2. MANDATORY swarm mail coordination - comprehensive error handling in swarm_complete pushes failures to swarm mail for coordinator visibility, preventing silent failures\n\n3. Test isolation - TEST_MEMORY_COLLECTIONS env var prevents integration tests from polluting production semantic-memory (identified 32 test artifacts, 86% pollution rate)\n\n4. Swarm Mail architecture documentation - complete 3-tier stack (primitives, patterns, coordination) inlined into README with diagrams, clarified Agent Mail is inspiration vs Swarm Mail implementation\n\n5. Learning improvements - debug logging, session stats tracking, low usage alerts if <1 store operation in 10 minutes\n\nKey files changed: src/storage.ts (test isolation + logging), src/swarm-orchestrate.ts (auto-capture + error handling), AGENTS.md (+358 lines of MANDATORY usage), docs/swarm-mail-architecture.md (1,147 lines), README.md (architecture diagrams)\n\nThis release makes semantic-memory and swarm mail usage non-optional, forcing agents to coordinate and learn proactively.","created_at":"2025-12-14T22:50:23.470Z"}
{"id":"332822ce-746f-4f79-9283-3cfebc98dea7","information":"## Publishing Workflow Fix - In Progress (Dec 15, 2025)\n\n### Problem\nCI builds failing because @swarmtools/web (fumadocs docs site) has type errors. The .source/ directory with generated types doesn't exist in CI.\n\n### Root Cause\nFumadocs-mdx generates .source/ directory with TypeScript types at dev/build time. In CI, this directory doesn't exist when TypeScript runs.\n\n### What We Tried\n1. Committing .source/ - Reverted. Not best practice.\n2. postinstall script - Added postinstall fumadocs-mdx to package.json. Still failing.\n\n### Current State\n- Changeset exists for swarm-mail patch (fix-pglite-external.md)\n- swarm-mail and opencode-swarm-plugin build successfully\n- @swarmtools/web build fails on TypeScript\n- GitHub Actions Release workflow failing\n\n### Quick Fix Option\nEdit .github/workflows/publish.yml to exclude web from build:\n run: bun turbo build --filter=!@swarmtools/web\n\nThis is valid because @swarmtools/web is private and not published to npm.","created_at":"2025-12-15T16:16:33.265Z"}
{"id":"3597596f-e755-4e7b-963b-92995aec0ccc","information":"Refactored swarm-mail daemon from spawning external `pglite-server` binary to in-process PGLiteSocketServer. \n\n**Key Pattern:**\n```typescript\nimport { PGlite } from \"@electric-sql/pglite\"\nimport { vector } from \"@electric-sql/pglite/vector\"\nimport { PGLiteSocketServer } from \"@electric-sql/pglite-socket\"\n\n// Module-level state (one server per process)\nlet activeServer: PGLiteSocketServer | null = null\nlet activeDb: PGlite | null = null\n\n// Start in-process\nconst db = await PGlite.create({ dataDir, extensions: { vector } })\nconst server = new PGLiteSocketServer({ db, port, host })\nawait server.start()\n\n// Graceful shutdown (CRITICAL ORDER)\nawait db.exec(\"CHECKPOINT\") // Flush WAL first\nawait server.stop()\nawait db.close()\n```\n\n**Benefits:**\n- No external binary dependency (pglite-server)\n- Same process = simpler lifecycle management\n- PID file tracks current process.pid\n- Server reuse: check activeServer before creating new one\n\n**TDD Approach Worked:**\n- 4 new tests written first (RED)\n- Implementation made them pass (GREEN)\n- Refactored with JSDoc (REFACTOR)\n- All 12 tests passing in 519ms\n\n**Gotcha:** Constructor is `{ db, port, host }` for TCP or `{ db, path }` for Unix socket, not separate args.","created_at":"2025-12-19T15:00:35.753Z","tags":"pglite,daemon,in-process,tdd,swarm-mail,refactoring"}
{"id":"3667bbf3-77fa-4beb-868e-61164dd85081","information":"npm Trusted Publishers setup for opencode-swarm-plugin monorepo:\n\nPROBLEM SOLVED: npm token management is a mess. Trusted Publishers use OIDC - no tokens needed.\n\nSETUP:\n1. Workflow needs `permissions: id-token: write` \n2. Each npm package configured at npmjs.com/package/PKG/access with Trusted Publisher:\n - Organization: joelhooks\n - Repository: opencode-swarm-plugin \n - Workflow: publish.yml\n3. Use `bunx changeset publish` NOT `npm publish` directly - changeset publish is smarter, only publishes packages with new versions not yet on npm\n\nKEY GOTCHA: Using `bun turbo publish:pkg` with individual `npm publish --provenance` scripts FAILED because:\n- turbo tried to publish ALL packages including ones already at same version on npm\n- OIDC token detection didn't work through bun→npm chain properly\n\nSOLUTION: `bunx changeset publish` handles everything:\n- Checks npm registry for each package version\n- Only publishes packages where local version > npm version\n- Creates git tags automatically\n- Works with OIDC out of the box\n\nWORKFLOW FILE: .github/workflows/publish.yml\n- Triggers on push to main\n- Uses changesets/action@v1\n- publish command: `bun run release` which runs `bunx changeset publish`\n\nDOCS: https://docs.npmjs.com/trusted-publishers","created_at":"2025-12-15T04:34:51.427Z"}
{"id":"36e760a9-9737-4f1e-8159-a739f679af77","information":"Monorepo publishing with workspace:* protocol and npm OIDC trusted publishers:\n\nPROBLEM: workspace:* doesn't get resolved by npm publish or changeset publish, causing \"Unsupported URL Type workspace:*\" errors on install.\n\nSOLUTION (scripts/publish.ts):\n1. bun pm pack - creates tarball with workspace:* resolved to actual versions\n2. npm publish <tarball> - publishes with OIDC support\n\nWHY NOT bun publish? It resolves workspace:* but doesn't support npm OIDC trusted publishers (requires npm login).\n\nWHY NOT npm publish directly? It doesn't resolve workspace:* protocol.\n\nWHY NOT changeset publish? Uses npm under the hood, same problem.\n\nADDITIONAL GOTCHA: CLI bin scripts (like bin/swarm.ts) need external imports in dependencies, not devDependencies. Users installing globally won't have devDeps, causing \"Cannot find module\" errors.\n\nFILES:\n- scripts/publish.ts - custom publish script\n- .github/workflows/publish.yml - calls bun run release which runs scripts/publish.ts","created_at":"2025-12-15T04:47:41.617Z"}
{"id":"370b4da1-176a-4975-a58d-9cd46d515918","information":"TDD workflow for JSONL merge function: Write tests FIRST that verify behavior (empty files, overlaps, missing files), then implement minimal code to pass. For JSONL deduplication, use Set to track existing IDs, filter base records, append new ones, write back. Testing pattern: mkdirSync temp project, writeFileSync JSONL fixtures, run function, readFileSync + parse to verify. All 6 test cases passed on first implementation - TDD prevented edge case bugs.","created_at":"2025-12-18T00:56:09.189Z"}
{"id":"3a7be2a6-36d7-40b5-a14f-fedefadb4608","information":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:20:42.550Z\",\"updated_at\":\"2025-12-13T19:20:42.550Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:20:42.749Z","metadata":"{\"id\":\"pattern-1765653642550-rsyjbg\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"3e88ec34-2b29-406f-8352-cd434ac23b68","information":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-19T00:30:11.784Z\",\"updated_at\":\"2025-12-19T00:30:11.784Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-19T00:30:11.993Z","metadata":"{\"id\":\"pattern-1766104211784-1ruqjf\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"3eabd321-1ad6-4fa9-bf11-8fad2a57ea83","information":"{\"id\":\"test-1765733411282-pzqyaldzdya\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T17:30:11.282Z\",\"raw_value\":1}","created_at":"2025-12-14T17:30:11.541Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T17:30:11.282Z\"}"}
{"id":"3ec7f612-4075-48f0-b63e-ba46f646f577","information":"POC Migration Learnings (December 2025):\n\n1. SCHEMA PATTERNS:\n- Coursebuilder uses type='post' + fields.postType='course', but migration can use type='course' directly\n- Query files must support BOTH patterns with OR clause\n- Use .passthrough() on Zod schemas to allow extra migration fields (migratedAt, collaborators, legacyRailsId)\n- Remove 'use server' from files that export types/schemas (Next.js constraint)\n\n2. DATABASE CONSTRAINTS:\n- createdById is NOT NULL - must provide system user ID for migrations\n- Use Joel's ID: c903e890-0970-4d13-bdee-ea535aaaf69b for migration scripts\n\n3. VIDEO INTEGRATION:\n- Rails current_video_hls_url contains Mux playback IDs (extract with regex)\n- 97.5% of lessons have Mux coverage (193 missing = mark as retired)\n- VideoResource links to Lesson via ContentResourceResource table\n\n4. MIGRATION SCRIPTS:\n- investigation/poc-migrate-modern-course.ts - Sanity source\n- investigation/poc-migrate-legacy-course.ts - Rails source\n- investigation/src/lib/migration-utils.ts - Shared utilities\n\n5. TDD APPROACH NEEDED:\n- Unit tests for schema validation and field mapping\n- Docker containers for integration tests (postgres + mysql)\n- E2E verification with browser automation","created_at":"2025-12-13T17:07:15.655Z"}
{"id":"3faa59da-150b-4c02-a257-515df507fdbe","information":"{\"id\":\"test-1765664124701-aa17ylzydnq\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:15:24.701Z\",\"raw_value\":1}","created_at":"2025-12-13T22:15:24.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:15:24.701Z\"}"}
{"id":"40e45c96-514c-4f5e-a010-96215895a455","information":"{\"id\":\"test-1766076692243-0mib94hstes\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:51:32.243Z\",\"raw_value\":1}","created_at":"2025-12-18T16:51:32.478Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:51:32.243Z\"}"}
{"id":"41308199-3761-485f-a7a6-567f97417f95","information":"{\"id\":\"pattern-1765664183401-tex4za\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:16:23.401Z\",\"updated_at\":\"2025-12-13T22:16:23.401Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:16:23.600Z","metadata":"{\"id\":\"pattern-1765664183401-tex4za\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"41453b78-e33f-41c2-aedd-3d521af2a2c4","information":"SUBTASK_PROMPT_V2 survival checklist pattern: Workers need 9-step mandatory workflow: 1) swarmmail_init (coordination), 2) semantic-memory_find (query past learnings BEFORE starting), 3) skills_list/skills_use (load domain knowledge), 4) swarmmail_reserve (worker reserves own files, NOT coordinator), 5) do work, 6) swarm_progress at 25/50/75% milestones (triggers auto-checkpoint), 7) swarm_checkpoint before risky ops (refactors, deletions), 8) semantic-memory_store (capture learnings), 9) swarm_complete (closes, releases, scans). KEY INSIGHT: Workers reserve their own files (step 4) - coordinator no longer does this. Past mistake: coordinators reserving caused confusion about who owns what. Worker self-reservation makes ownership explicit. Applies to all swarm worker agents.","created_at":"2025-12-16T16:21:16.745Z","metadata":"{\"context\":\"opencode-swarm-plugin\"}","tags":"swarm,coordination,worker-patterns,file-reservation,semantic-memory,skills,checkpointing,learning-loops"}
{"id":"429da23f-c274-4d2c-93ed-88eee75c4b20","information":"{\"id\":\"test-1765678709593-34lfj5t3x44\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T02:18:29.593Z\",\"raw_value\":1}","created_at":"2025-12-14T02:18:29.809Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T02:18:29.593Z\"}"}
{"id":"47e272e2-37c4-4ea1-b724-ec68de3c3bf1","information":"TDD pattern for database query functions: Write tests that use the actual database adapter (not mocks) to verify query behavior. For swarm-mail hive queries, tests use in-memory PGlite with full migrations. This catches SQL syntax errors, constraint violations, and index issues that mocks would miss. Pattern: beforeEach creates fresh PGlite instance, afterEach closes it. Each test creates necessary cells via adapter, then queries them. Fast enough (12s for 36 tests) because PGlite is in-memory.","created_at":"2025-12-19T16:17:46.254Z","tags":"tdd,testing,database,pglite,swarm-mail"}
{"id":"48610ac6-d52f-4505-8b06-9df2fad353aa","information":"CRITICAL BUG: PGLite database corruption when multiple swarm agents access shared database concurrently.\n\nROOT CAUSE: PGLite is single-connection only. When multiple parallel swarm worker agents each create their own PGLite instance pointing to the same database file, they corrupt each other's writes. This manifests as:\n- 'PGlite is closed' errors\n- Missing data after writes\n- Inconsistent query results\n- Database file corruption requiring deletion\n\nSOLUTION: Implement PGLite leader election pattern from multi-tab-worker docs (https://pglite.dev/docs/multi-tab-worker).\n\nThe pattern works by:\n1. Each worker/agent creates a PGliteWorker instead of PGlite directly\n2. Workers run an election to nominate ONE as the leader\n3. ONLY the leader starts the actual PGlite instance\n4. All other workers proxy their queries through the leader\n5. When leader dies, new election runs and new leader takes over\n\nKey APIs:\n- PGliteWorker - client that proxies to leader\n- worker({ init: () => PGlite }) - wrapper that handles election\n- onLeaderChange(callback) - subscribe to leader changes\n- isLeader: boolean - check if this instance is leader\n\nFor swarm-mail specifically:\n- The singleton pattern in pglite.ts is NOT sufficient for parallel agents\n- Each Task subagent runs in a separate process, not just separate async contexts\n- Need to implement a coordinator pattern where ONE agent owns the DB connection\n- Other agents communicate via IPC/file locks/Agent Mail instead of direct DB access\n\nWORKAROUND (current): Tests use isolated in-memory PGLite instances per test to avoid singleton conflicts.","created_at":"2025-12-17T17:18:27.494Z","tags":"pglite,database,corruption,swarm,parallel-agents,leader-election,critical-bug,P0"}
{"id":"4945b847-6fd0-42fe-aebd-6ee0d415b1cb","information":"CRITICAL SCHEMA FIX (Dec 2025): egghead-rails `series` table is DEPRECATED. Official courses are in `playlists` with `visibility_state='indexed'` (437 courses). Lessons link via `tracklists` polymorphic join table (tracklistable_type='Lesson', tracklistable_id=lesson.id), NOT via lessons.series_id. Standalone lessons (~1,650) are published lessons NOT in any indexed playlist. Use DISTINCT ON (l.id) when querying lessons to handle 36 lessons that appear in multiple courses.","created_at":"2025-12-13T23:17:05.679Z"}
{"id":"4a9929ba-3860-4ebe-8ea9-89688d79d348","information":"{\"id\":\"test-1765653389932-an49coy8vg4\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:16:29.932Z\",\"raw_value\":1}","created_at":"2025-12-13T19:16:30.132Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:16:29.932Z\"}"}
{"id":"4c47409c-83a4-4e85-87ed-1ee7445a3b09","information":"swarm-mail socket adapter hybrid pattern: getSwarmMail() now checks SWARM_MAIL_SOCKET=true env var to enable socket mode with graceful PGLite fallback on any failure. Close methods need conditional logic for pglite vs socket adapters. Env vars: SWARM_MAIL_SOCKET_PATH (unix socket), SWARM_MAIL_SOCKET_PORT (TCP, default 5433), SWARM_MAIL_SOCKET_HOST (TCP, default 127.0.0.1).","created_at":"2025-12-17T18:03:01.543Z"}
{"id":"4d167832-70e4-46b0-85ba-170e5826b9c8","information":"PGLite WAL Safety Pattern: Add checkpoint() to DatabaseAdapter interface and call after batch operations to prevent WAL bloat.\n\nRoot cause from pdf-brain: PGLite accumulated 930 WAL files (930MB) without explicit CHECKPOINT, causing WASM OOM crash. PostgreSQL CHECKPOINT command forces WAL to be written to data files, allowing WAL to be recycled.\n\nImplementation:\n1. Add `checkpoint?(): Promise<void>` to DatabaseAdapter interface (optional method)\n2. Implement in wrapPGlite: `async checkpoint() { await pglite.query(\"CHECKPOINT\"); }`\n3. Call after batch operations:\n - After runMigrations() in adapter.runMigrations()\n - After bulk event appends (if batching)\n - After large projection updates\n\nTDD approach confirmed effectiveness:\n- Write failing test expecting checkpoint() method\n- Implement checkpoint in interface + wrapper\n- Call from adapters after migrations\n- All tests green (29 tests passing)\n\nKey insight: CHECKPOINT is a PostgreSQL command, not PGLite-specific. Works for any PostgreSQL-compatible database but critical for embedded databases without automatic checkpointing.\n\nPattern applies to any PGLite usage with batch operations: migrations, bulk writes, large transactions.","created_at":"2025-12-19T03:34:00.966Z","tags":"pglite,wal,checkpoint,database-adapter,batch-operations,memory-management,wasm"}
{"id":"4df79169-bae1-4942-bfc3-8a0c5ba038de","information":"MemoryAdapter implementation pattern for Effect-TS + PGlite semantic memory: High-level adapter wraps low-level services (Ollama + MemoryStore) with graceful degradation. Key insights: (1) Use Effect.runPromise with Effect.either for optional Ollama - returns Left on failure, enabling FTS fallback. (2) Store decay calculation (90-day half-life) in adapter layer, not DB - keeps store generic. (3) validate() resets timestamp via direct SQL UPDATE, not store.store() which preserves original timestamps on conflict. (4) Tags parsed from comma-separated string and merged into metadata.tags array for searchability. (5) TDD with 22 tests first caught 3 design issues: metadata structure, embedding similarity mocking, timestamp update semantics. Integration test verifies full lifecycle: store→find→get→validate→remove with FTS fallback.","created_at":"2025-12-18T19:09:34.653Z","metadata":"{\"pattern\":\"high-level-adapter\",\"testing\":\"tdd-integration\",\"component\":\"swarm-mail/memory\"}","tags":"effect-ts,pglite,semantic-memory,adapter-pattern,graceful-degradation,tdd"}
{"id":"4f6a7e08-fa47-4f23-bca2-6e7edb72a702","information":"PGLite DatabaseAdapter wrapper pattern: PGLite's exec() method returns Promise<Results[]> but DatabaseAdapter interface expects Promise<void>. Solution: wrap with async function that awaits exec() but doesn't return the value. Example: exec: async (sql: string) => { await pglite.exec(sql); }. This matches the adapter contract without leaking PGLite-specific types. Used in swarm-mail package for database abstraction layer.","created_at":"2025-12-15T00:18:10.156Z","tags":"pglite,adapter-pattern,database,typescript,type-compatibility,swarm-mail"}
{"id":"4fca4eb1-e967-4992-8c48-502ea5596cde","information":"{\"id\":\"pattern-1766076693301-vgiike\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:51:33.301Z\",\"updated_at\":\"2025-12-18T16:51:33.301Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:51:33.529Z","metadata":"{\"id\":\"pattern-1766076693301-vgiike\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"509ddf29-54c9-4d65-8610-dfc76321aadc","information":"--information","created_at":"2025-12-14T22:41:51.321Z","tags":"swarm,edge-case,workaround"}
{"id":"516a8144-80fc-4fdf-beb1-ab9a2a95ba36","information":"Swarm coordinator enforcement rules added to swarm.md: (1) CRITICAL section \"Coordinator Role Boundaries\" with explicit list of what coordinators DO (clarify, decompose, spawn, monitor, verify) and DO NOT (edit code, run tests, make quick fixes). (2) Sequential task pattern: spawn workers in order, await each before next - still get checkpointing, recovery, learning benefits. (3) Anti-patterns section with three examples: Mega-Coordinator (doing work inline), Sequential Work Without Workers, and \"Just This One Small Thing\". (4) Updated checklist with \"Coordinator did NOT edit any files\" and \"ALL subtasks spawned as workers\". Key insight from Event-Driven Microservices: \"orchestrator is responsible ONLY for orchestrating the business logic\".","created_at":"2025-12-18T00:31:38.099Z"}
{"id":"5458bfe9-fc9d-4a1d-9373-18615a01cf86","information":"PGlite daemon crashes under heavy embedding load due to WASM memory constraints (~2GB limit). Root cause: unbounded WAL growth when processing many embeddings without checkpoints.\n\nSOLUTION: Gated batch processing with periodic checkpoints.\n\nImplementation in pdf-library:\n1. Created EmbeddingQueue service (src/services/EmbeddingQueue.ts) with:\n - processInBatches() - core primitive for gated processing\n - createEmbeddingProcessor() - high-level API with checkpoint callback\n - getAdaptiveBatchSize() - reduces batch size under memory pressure\n - DEFAULT_QUEUE_CONFIG: batchSize=50, concurrency=5, batchDelayMs=10\n\n2. Modified PDFLibrary.add() to process embeddings in batches:\n - Generate 50 embeddings at a time (not all at once)\n - Write batch to DB\n - CHECKPOINT after each batch (flushes WAL)\n - Small delay between batches for GC\n\nKey insight: The problem wasn't Ollama concurrency, it was WAL accumulation. Each embedding write adds to WAL, and without CHECKPOINT, WAL grows unbounded until WASM OOM.\n\nMemory math:\n- 1024-dim embedding = 4KB\n- 5000 embeddings = 20MB vectors\n- Plus WAL overhead = can exceed WASM limits\n- With batching: 50 embeddings = 200KB + checkpoint = bounded\n\nConfig options:\n- batchSize: 50 (lower = more checkpoints, less memory)\n- concurrency: 5 (Ollama parallelism within batch)\n- batchDelayMs: 10 (backpressure for GC)\n- checkpointAfterBatch: true (essential)\n- adaptiveBatchSize: true (reduces batch under memory pressure)","created_at":"2025-12-19T17:50:57.157Z","tags":"pglite,wasm,embedding,oom,checkpoint,backpressure,queue,daemon,memory"}
{"id":"56a594bf-f52e-4b28-9e8e-2a88c9745037","information":"TDD pattern for PGlite WAL auto-checkpoint during batch operations: \n1. Write failing tests first (getCheckpointInterval, shouldCheckpoint helpers)\n2. Implement minimal checkpoint interval logic (default 50 docs, configurable)\n3. Remove per-doc checkpoint from library.add() (wasteful for batch ops)\n4. Expose checkpoint() method on PDFLibrary service API\n5. Add checkpoint logic to batch ingest command (both TUI and console modes)\n6. Update TUI state to show checkpoint progress (checkpointInProgress, checkpointMessage, lastCheckpointAt fields)\n7. Use Effect.either() to handle checkpoint failures gracefully (log but continue)\n\nKey insight: Checkpointing every document adds 930MB WAL in real usage. Checkpointing every N documents (default 50) prevents WASM OOM while maintaining performance. Batch operations should own checkpointing, not individual operations.","created_at":"2025-12-19T17:28:31.265Z","tags":"tdd,pglite,wal,checkpoint,batch-operations,effect-ts"}
{"id":"5822a985-22dd-4c52-aa57-3d048e376c1a","information":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:19.155Z\",\"updated_at\":\"2025-12-18T16:17:19.155Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:19.369Z","metadata":"{\"id\":\"pattern-1766074639155-9dtj9a\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"5a7064a2-2a11-44e5-a1c9-455c4b30e18d","information":"ADR writing pattern for swarm plugin: Structure follows Context → Decision → Consequences → Implementation Notes → Alternatives Considered → References → Success Criteria. Key elements: (1) Context section must articulate current pain points with concrete examples, not just abstractions. (2) Decision section shows actual code/JSON structures, not just prose descriptions. (3) Consequences split into Positive/Negative/Neutral with specific tradeoffs. (4) Implementation phases are numbered and actionable. (5) Alternatives Considered documents rejected approaches with reasoning. (6) References link to inspirations and related ADRs. Format creates forcing function for clear thinking - if you can't fill in all sections cleanly, decision may not be ready. Used successfully for ADR-001 (monorepo), ADR-007 (worktree isolation), and ADR-008 (worker handoff protocol).","created_at":"2025-12-18T17:26:05.386Z","tags":"adr,architecture-decision-records,documentation,swarm-plugin,system-design"}
{"id":"5afe465e-ef42-4240-aa44-136967baf239","information":"CLI flag pattern for conditional output formatting: Use boolean flag (e.g., --expand) parsed via custom parseArgs function. Store flag state (const expand = opts.expand === true), then use ternary operator for conditional content: const preview = expand ? fullContent : truncatedContent. This allows backward-compatible feature addition without breaking default behavior. Applied in semantic-memory CLI to toggle between truncated (60 chars) and full content display.","created_at":"2025-12-18T17:01:12.075Z","tags":"cli,typescript,bun,flags,conditional-output,backward-compatibility"}
{"id":"5b117709-6a91-4237-a532-0f08909da9f7","information":"Kent C. Dodds Unified Accounts Use Case (Dec 2024) - Driving requirement for @badass auth architecture. Kent has EpicAI.pro, EpicWeb.dev, EpicReact.dev on different TLDs sharing a database. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Solution: epicweb.dev is the \"hive\" site for auth, other sites are \"spokes\" that redirect there. Workshop App uses device flow (RFC 8628) to authenticate against the hive. This validates hive+spoke model and device flow as core requirements.","created_at":"2025-12-18T15:42:16.703Z"}
{"id":"5d2404b8-3635-42a2-bd63-ae623aba2a62","information":"@badass Auth Architecture Decision (Dec 2024): Creators with multiple sites MUST designate a central \"hive\" site for auth. For Kent, epicweb.dev is the hive - all auth flows redirect there. Other sites (epicreact.dev, epicai.pro) are \"spoke\" sites that trust the hive. This is a REQUIREMENT, not optional. Simplifies cross-domain SSO - standard OAuth/OIDC pattern where hive is the IdP. Spoke sites redirect to hive for login, receive tokens back. Shared database means session/user data is already unified, just need the auth handshake.","created_at":"2025-12-18T15:39:52.225Z"}
{"id":"5d871dd3-e45a-4237-8d79-12e568949c91","information":"AI SDK v6 Runtime Identity Pattern: Use callOptionsSchema with Zod to define type-safe per-request context (userId, tier, permissions). Implement prepareCall function that receives typed options and returns config overrides (tools, instructions, model, temperature). This enables tier-based feature gating, region-specific compliance, A/B testing, dynamic model selection. Key: prepareCall runs on EVERY invocation - keep it fast, avoid async DB lookups, use in-memory cache or extract from headers/JWT. In tier-one app: free (queryFAQ only), pro (adds searchDocs), enterprise (adds askV0). Always include respondToTicketTool for structured exit. Console.log in prepareCall provides observability.","created_at":"2025-12-16T21:12:38.912Z","tags":"ai-sdk,ai-sdk-v6,runtime-identity,callOptionsSchema,prepareCall,tier-filtering,tool-gating"}
{"id":"5faca7a3-eefb-44bd-affb-3140d367c748","information":"PGlite daemon initialization pattern: After creating PGlite instance and calling waitReady, MUST initialize schema (CREATE TABLE IF NOT EXISTS) before starting socket server. Without schema init, daemon starts successfully but all database operations fail with \"relation does not exist\" errors. DatabaseClient connects to daemon socket but finds empty database. Schema initialization code should mirror Database.ts DirectDatabaseLive implementation exactly to ensure consistency between daemon and direct modes.","created_at":"2025-12-19T15:18:58.912Z","tags":"pglite,daemon,schema-initialization,database,socket-server"}
{"id":"61b3acf6-2eaa-4670-b17d-401634a0e41e","information":"@badass Video Pipeline Extraction Plan (Dec 2024): Extract from @coursebuilder/core to @badass/video.\n\n**Files to Extract:**\n- packages/core/src/schemas/video-resource.ts - VideoResource schema\n- packages/core/src/schemas/mux.ts - Mux API response schemas\n- packages/core/src/lib/mux.ts:1-142 - Mux API client\n- packages/core/src/providers/deepgram.ts:1-200 - Transcription provider\n- packages/core/src/inngest/video-processing/functions/* - All Inngest functions\n- packages/core/src/inngest/video-processing/events/* - All event definitions\n- packages/core/src/inngest/video-processing/utils.ts - Mux thumbnail generation\n\n**Architecture:**\n- VideoResource is a ContentResource type (not embedded in posts)\n- Upload triggers Inngest job\n- Mux processes video\n- Deepgram transcribes\n- Webhooks update VideoResource with asset ID, playback info, transcript, SRT\n\n**API Design:**\nconst video = createVideoProcessor({ storage: mux, transcription: deepgram, jobs: inngest })\nawait video.process(uploadUrl) // Returns VideoResource ID","created_at":"2025-12-18T15:57:51.555Z"}
{"id":"632a5d9d-c85f-4f2a-9e2a-28d348f30c0d","information":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:17:30.591Z\",\"updated_at\":\"2025-12-18T16:17:30.591Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:17:30.812Z","metadata":"{\"id\":\"pattern-1766074650591-rgfdz0\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"63fa903d-a63d-4c10-98ce-4d3aed8dad3b","information":"{\"id\":\"test-1765678583954-hm5prpbn31i\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T02:16:23.954Z\",\"raw_value\":1}","created_at":"2025-12-14T02:16:24.154Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T02:16:23.954Z\"}"}
{"id":"647f6901-730d-49f0-9ed5-c9b97cf40319","information":"{\"id\":\"pattern-1765386363018-cqs6f7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:06:03.017Z\",\"updated_at\":\"2025-12-10T17:06:03.017Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:06:03.211Z","metadata":"{\"id\":\"pattern-1765386363018-cqs6f7\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"64b3dfb1-9e4b-4eb5-a410-d80ad459de43","information":"ROOT CAUSE ANALYSIS: Semantic Memory Test Pollution (Dec 2025)\n\nPROBLEM: Integration tests polluted production semantic-memory with 32 test artifacts (86% of all memories) across collections: test-patterns (16), test-feedback (16). Only 5 legitimate production memories remained.\n\nWHY IT HAPPENED:\n1. Tests wrote to shared MCP server - no test/prod isolation\n2. No collection naming convention - tests used arbitrary names\n3. No cleanup hooks in test teardown - pollution accumulated\n4. MCP server lacks delete/remove API - no automated cleanup possible\n\nIMPACT:\n- semantic-memory_find returns 86% test noise\n- Production knowledge base unreliable for semantic search\n- Wasted storage and embedding generation costs\n- Developers lose trust in knowledge base accuracy\n\nPREVENTION IMPLEMENTED (Dec 2025 via opencode-swarm-plugin-7x3pk):\n1. ✅ Subtask 1: Collection prefix isolation - test-*, temp-* reserved for tests\n2. ✅ Subtask 2: Cleanup hooks - afterEach() deletes test collections\n3. ✅ Subtask 3: Added mock semantic-memory for unit tests (avoid MCP)\n4. ✅ Subtask 5: Cleanup script at scripts/cleanup-test-memories.ts\n\nMANUAL CLEANUP REQUIRED:\nsemantic-memory MCP lacks delete API. Must use direct PostgreSQL access:\n```\npsql -h /Users/joel/.semantic-memory/memory -c \"DELETE FROM memories WHERE collection IN ('test-patterns', 'test-feedback');\"\n```\n\nFUTURE CONSIDERATIONS:\n- Request delete/remove tool from @opencode/semantic-memory maintainers\n- Add CI check: fail if test collections found in production\n- Document production collection naming: 'default' for general, domain-specific for specialized\n\nVERIFICATION:\nAfter manual cleanup, verify with semantic-memory_list - should show ~5 memories, all in 'default' collection.","created_at":"2025-12-14T22:39:53.177Z"}
{"id":"657322ff-9f27-4d0d-a763-157a141b5741","information":"Swarm Enhancement Plan (ADR-007): Integrating patterns from nexxeln/opencode-config\n\nKey features to add:\n1. **Optional Worktree Isolation** - `swarm_init(isolation=\"worktree\")` for large refactors. Each worker gets isolated git worktree, cherry-pick commits back on completion. Overkill for most tasks, but perfect for big refactors.\n\n2. **Structured Review Step** - Coordinator reviews worker output before marking complete. Review prompt includes epic goal, task requirements, dependency context, downstream context. Max 3 review attempts before task fails. UBS scan still runs as additional safety.\n\n3. **Retry Options on Abort** - `/swarm --retry` (same plan), `/swarm --retry --edit` (modify plan), fresh start. Requires persisting session state (already have via Hive).\n\nDecision: Coordinator does review (not separate reviewer agent) because coordinator already has epic context loaded, avoids spawning another agent, keeps feedback loop tight.\n\nSkipped: Staged changes on finalize (our flow already has explicit commit step).\n\nEpic: bd-lf2p4u-mjaja96b9da\nCredit: Patterns from https://github.com/nexxeln/opencode-config","created_at":"2025-12-17T21:40:05.334Z"}
{"id":"663b7198-ff84-42f5-9883-13e4f2d90b90","information":"{\"id\":\"test-1765386508116-mzoi3mqss5\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:08:28.116Z\",\"raw_value\":1}","created_at":"2025-12-10T17:08:28.300Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:08:28.116Z\"}"}
{"id":"66c33f4a-e504-4601-bf36-7cafcc5c745c","information":"SWARM-MAIL ADAPTER PATTERN DECISION (Dec 2024): Extracting swarm-mail as standalone package using adapter pattern from coursebuilder. Key design: 1) DatabaseAdapter interface abstracts SQL operations (query, exec, transaction), 2) SwarmMailAdapter interface defines all swarm-mail operations, 3) createSwarmMailAdapter(db) factory accepts injected database, 4) PGLite convenience layer provides getSwarmMail() singleton for simple usage. Benefits: portable (works with PGLite, Postgres, Turso), testable (inject in-memory), shareable (one db across consumers), decoupled (swarm-mail doesn't own db lifecycle). Pattern learned from github.com/badass-courses/course-builder/tree/main/packages/adapter-drizzle which uses table function injection for multi-tenant prefixing.","created_at":"2025-12-14T23:57:56.403Z"}
{"id":"67453dce-ee7c-4102-acf6-ccf279264b32","information":"@badass Database Sharing Decision (Dec 2024): Creator-level database sharing enabled. Sites owned by same creator CAN share a database (like Kent's epic-web + epic-react in course-builder). Enables cross-site features: unified purchases, shared content library, single user identity per creator. Mux/Inngest/Stripe always per-site isolated. Adapter pattern must support both isolated and shared DB scenarios via site config.","created_at":"2025-12-18T15:30:13.232Z"}
{"id":"67a8d3fd-7e06-40c8-b13b-0606f032ee0a","information":"Lesson polish pattern for technical course content: Always verify Fast Track presence (3 quick steps to get basics working), ensure real output examples in Try It sections (actual terminal logs and JSON responses, not placeholders), standardize section headers (Project Prompt not Hands-On Exercise, Done-When not Done). Common issues found: missing Fast Track (~40% of lessons), placeholder outputs instead of real examples (~30%), inconsistent section naming (~20%). For rubric scoring: Fast Track absence drops Progressive Disclosure score (-0.5), missing real outputs drops Practical Implementation (-0.5). Quick fix: read tier-one implementation first to get actual outputs, then add Fast Track based on solution key steps. Target: 8.0+ overall, 9.0+ for polished lessons.","created_at":"2025-12-16T21:43:36.151Z","metadata":"{\"topic\":\"lesson-authoring\",\"pattern\":\"polish\",\"quality\":\"rubric-scoring\"}"}
{"id":"68a25df0-9dd0-4483-b64e-0103f574a5c2","information":"Test memory after migration fix","created_at":"2025-12-09T18:25:30.759Z","tags":"test"}
{"id":"6b6b00c9-540b-4ef6-a908-47048d9589d1","information":"Cross-domain SSO architecture insight: Kent's use case (EpicAI.pro, EpicWeb.dev, EpicReact.dev) requires unified identity across different TLDs. User buys Epic React, starts Workshop App tutorial, shouldn't need separate EpicWeb.dev account. Current course-builder uses NextAuth.js per-site. Solution requires either: (1) Shared auth database with cross-domain session tokens, (2) Central identity provider (IdP) that all sites trust, or (3) Token exchange protocol between sites. BetterAuth may have better cross-domain support than NextAuth. Key constraint: different domains means cookies don't share - need explicit SSO flow.","created_at":"2025-12-18T15:32:50.696Z"}
{"id":"6c7021dc-8b8f-4497-92c7-9693e04c42a0","information":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T19:53:03.777Z\",\"updated_at\":\"2025-12-17T19:53:03.777Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T19:53:04.822Z","metadata":"{\"id\":\"pattern-1766001183777-5zk1l8\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"70613071-8231-49a5-bdca-a9b9f7e9c53c","information":"{\"id\":\"pattern-1765386530615-riuu0i\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:08:50.615Z\",\"updated_at\":\"2025-12-10T17:08:50.615Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:08:50.799Z","metadata":"{\"id\":\"pattern-1765386530615-riuu0i\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"713d8d68-90fa-4b2f-9ea0-5b06a0e6e50c","information":"{\"id\":\"test-1765771061095-2yd4dw3psvh\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:41.095Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:41.455Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:41.095Z\"}"}
{"id":"7189cf77-2ceb-47c4-a354-0dc493876ded","information":"{\"id\":\"test-1765771127882-pdmhpieixbg\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:58:47.882Z\",\"raw_value\":1}","created_at":"2025-12-15T03:58:48.290Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:58:47.882Z\"}"}
{"id":"71db34a5-29be-4431-98a9-e6a1e9416c8e","information":"PGlite WAL accumulation prevention pattern: Added `doctor` command to CLI that checks WAL file count and size (thresholds: 50 files OR 50MB). Also added graceful shutdown handlers (SIGINT, SIGTERM) that run CHECKPOINT before exit. Critical for MCP tool invocations which are separate processes that may not cleanly close database. Without these, WAL files accumulate over days causing WASM memory exhaustion (930 WAL files = 930MB crashed PGlite). Doctor command uses assessWALHealth() helper to warn users and suggest export/reimport. Shutdown handlers use dynamic import to avoid circular deps and check if DB exists before checkpointing.","created_at":"2025-12-19T04:03:22.627Z","tags":"pglite,wal,checkpoint,cli,graceful-shutdown,mcp,wasm-memory,prevention-pattern"}
{"id":"729c2510-6ae1-4701-ba06-5faef13ec1f2","information":"postgres.js DatabaseAdapter wrapper pattern: postgres.js uses tagged template literals for queries (sql`SELECT...`) but DatabaseAdapter expects (sql, params) signature. Key implementation details: 1) Use sql.unsafe(sqlString, params) for raw SQL with parameters. 2) postgres.js returns Row[] directly (not wrapped in {rows:[]}), so wrap result: {rows: await sql.unsafe(...)}. 3) Type assertion needed: (await sql.unsafe(...)) as unknown as T[] because postgres.js unsafe returns Row[] but we need T[]. 4) Transaction support: sql.begin() callback receives TransactionSql that behaves like sql, wrap it recursively with wrapPostgres(). 5) sql.begin() returns Promise<UnwrapPromiseArray<T>>, need type assertion: result as T. 6) Factory pattern: createSocketAdapter validates options (either path OR host+port, not both), creates postgres client, validates with ping query, wraps and returns. 7) External postgres in build config to avoid bundling. Successfully implemented for swarm-mail socket adapter.","created_at":"2025-12-17T17:54:54.552Z"}
{"id":"738be6d8-6f06-45b5-9e48-f78c0689af64","information":"{\"id\":\"test-1765653641690-8bz4qvel2p\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:20:41.690Z\",\"raw_value\":1}","created_at":"2025-12-13T19:20:41.892Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:20:41.690Z\"}"}
{"id":"73a330d8-15ea-4ea6-80cf-9a9bdf82ae6b","information":"Integration tests should always use isolated collections to prevent test pollution. Best pattern discovered:\n\nFor semantic-memory tests:\n- Use unique collection names with timestamps in beforeEach\n- Example: test-feedback-${testSuite}-${Date.now()}\n- Always cleanup with storage.close() in afterEach\n\nFor database tests (PGLite/streams):\n- Use unique temp paths with timestamps and UUIDs\n- Example: /tmp/test-${testSuite}-${Date.now()}-${randomUUID()}\n- Always cleanup with closeDatabase() and rm -rf in afterEach\n\nWHY: Without isolation tests can interfere with each other causing flaky failures. Each test needs its own collection/database that gets cleaned up after the test runs.","created_at":"2025-12-14T22:36:54.874Z"}
{"id":"753a6005-3ecb-4bae-bbd0-bd38cfb2ab55","information":"Lite model support implementation pattern: Add model selection based on file types to optimize swarm costs. Key learnings: (1) File-type inference is simple but effective - all .md/.mdx or all .test./.spec. files use lite model, (2) Priority system works well: explicit override > file inference > default, (3) Integration point is swarm_spawn_subtask which returns recommended_model in metadata for coordinator to use with Task(), (4) Used dynamic import for selectWorkerModel to avoid circular dependencies, (5) Added risks: [] to mock subtask to satisfy DecomposedSubtask schema. Pattern applies to any swarm optimization where different task types have different resource needs.","created_at":"2025-12-19T00:31:23.462Z","tags":"swarm,model-selection,optimization,cost-savings"}
{"id":"75fc6779-42fe-4c60-9836-c4bc3e2ee3e7","information":"BetterAuth cross-domain limitation (Dec 2024): crossSubDomainCookies only works for SUBDOMAINS of the same root domain (e.g., app1.example.com and app2.example.com). It does NOT work for different TLDs (epicweb.dev vs epicreact.dev vs epicai.pro). For Kent's use case, need a different solution: either (1) Central IdP on a shared domain, (2) Token exchange protocol between sites, or (3) Custom SSO plugin. This is a gap in BetterAuth that @badass may need to solve.","created_at":"2025-12-18T15:35:21.461Z"}
{"id":"7792b139-5a37-44a9-9c6b-a5578ad93d48","information":"SWARM-MAIL EXTRACTION COMPLETE (Dec 2025): Successfully extracted swarm-mail as standalone npm package using adapter pattern. Key learnings: 1) Turborepo needs packageManager field in root package.json, 2) bun build doesn't resolve workspace:* - must build dependencies first with turbo, 3) TypeScript declarations need emitDeclarationOnly:true (not noEmit) plus tsc in build script, 4) Re-export everything from streams/index.ts for backward compatibility, 5) Coordinator should NOT reserve files - only workers reserve their own files. Architecture: createSwarmMailAdapter(db, projectKey) for DI, getSwarmMail(path) for convenience singleton. All 230 tests pass.","created_at":"2025-12-15T00:22:09.754Z"}
{"id":"77e67fcb-446f-4444-8a27-624e43bc16c7","information":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-17T00:37:17.036Z\",\"updated_at\":\"2025-12-17T00:37:17.036Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-17T00:37:17.973Z","metadata":"{\"id\":\"pattern-1765931837036-hbxgw2\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"7a7221a1-6e25-4b95-b2e3-ee2430b6e9e5","information":"Bun + Turborepo monorepo setup gotcha: The `--filter` flag for `bun add` is BROKEN as of Aug 2025 - it installs dependencies to the ROOT package.json instead of the target workspace. ALWAYS use `--cwd` flag instead: `bun add <package> --cwd apps/my-app`. This is critical for workspace-specific dependency management. Also requires `packageManager` field in root package.json for Turborepo to resolve workspaces.","created_at":"2025-12-16T19:58:30.121Z"}
{"id":"7a960377-f74a-4152-8aac-c0f80409da0c","information":"PGlite test isolation pattern: When testing event stores with PGlite, avoid using getDatabase() singleton in tests as it returns a shared instance that persists across tests. Instead, create isolated in-memory instances per test: pglite = new PGlite() in beforeEach. This prevents PGlite is closed errors when afterEach closes the database. For schema initialization, manually CREATE TABLE for core tables like events and schema_version instead of calling initializeSchema() which may have side effects on singletons.","created_at":"2025-12-16T22:08:11.460Z","metadata":"{\"context\":\"swarm-mail test patterns\"}","tags":"testing,pglite,event-store,isolation"}
{"id":"7abb34bd-3bcd-4d6b-b8d3-eb81f748418c","information":"{\"id\":\"pattern-1765386439151-fwvekq\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-10T17:07:19.151Z\",\"updated_at\":\"2025-12-10T17:07:19.151Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-10T17:07:19.337Z","metadata":"{\"id\":\"pattern-1765386439151-fwvekq\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"7b9b35dd-1e6b-4a23-a8a4-48ac094116c2","information":"SUBTASK_PROMPT_V2 memory emphasis pattern: To make workers actually use semantic memory, the prompt needs:\n\n1. **Visual prominence** - emoji (🧠💾), CAPS (MANDATORY, CRITICAL), bold formatting\n2. **Concrete examples by task type** - workers need to see exactly what query to run for their specific task (bug fix → error message, new feature → domain concept, etc.)\n3. **Good vs Bad examples** - show what a useful memory looks like vs a useless one\n4. **Explicit triggers** - list specific situations that MUST trigger memory storage (>15min debugging, found gotcha, architectural decision)\n5. **Consequences of skipping** - explain the pain they'll cause themselves and future agents\n6. **Checklist position matters** - memory query MUST be Step 2 (before any work), storage MUST be near-last (Step 8)\n\nKey insight: Workers ignore long prose but respond to visual hierarchy and concrete examples. The phrase \"If you learned it the hard way, STORE IT\" is more effective than paragraphs explaining why.","created_at":"2025-12-19T02:52:33.987Z","tags":"swarm,prompts,memory,worker-template,emphasis,visual-hierarchy"}
{"id":"7beb8d2f-152d-41db-90a6-a622c552e8a1","information":"{\"id\":\"test-1765386529833-7ou9lp7ra57\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:08:49.833Z\",\"raw_value\":1}","created_at":"2025-12-10T17:08:50.009Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:08:49.833Z\"}"}
{"id":"7c58dd11-320f-4a84-8173-96dfa639c10b","information":"Testing Drizzle adapters: avoid mocking Drizzle's query builder directly - creates circular reference errors. Instead, create a Fake adapter that implements the same interface but uses simple in-memory storage. Fake pattern: 1) Create FakeDatabase with Maps for storage, 2) Create FakeAdapter that wraps FakeDatabase and implements same interface as real adapter, 3) Tests call FakeAdapter methods which call simplified storage methods. This avoids JSON.stringify() errors from Drizzle's internal structures while maintaining test realism. Tests run 10x faster than real DB and are more maintainable.","created_at":"2025-12-18T16:31:51.601Z","tags":"testing,drizzle,orm,fakes,tdd,adapter-pattern"}
{"id":"7e06f0d4-1231-4b91-943a-b55587178b6a","information":"Daemon-first architecture pattern for PGlite: Auto-start daemon on first database access with graceful fallback. Implementation uses ensureDaemonRunning() function that: 1) checks if daemon running, 2) attempts auto-start if not, 3) returns {success, mode, error?} result. DatabaseLive Layer calls ensureDaemonRunning() and routes based on result - success routes to DatabaseClient (socket), failure falls back to DirectDatabaseLive with warning. This solves PGlite single-connection limitation by default while maintaining backwards compatibility. Key insight: NEVER throw from ensureDaemonRunning - always return a result, even on failure. Caller handles fallback logic. TDD approach: wrote 4 tests first (RED), implemented ensureDaemonRunning (GREEN), added JSDoc (REFACTOR). All 32 tests passing.","created_at":"2025-12-19T17:22:37.415Z","tags":"pglite,daemon,auto-start,tdd,graceful-fallback,architecture"}
{"id":"7ec67bba-2397-4eba-b563-7df4f17d02f5","information":"OpenCode plugin hook interface pattern: hooks use string literal keys with optional function signatures. Format: \"namespace.event\"?: (input: {...}, output: {...}) => Promise<void>. The output parameter is mutable - plugins append to arrays or modify properties. Single-line formatting is preferred by prettier for simple signatures. Session compaction hooks allow plugins to inject context before summarization.","created_at":"2025-12-17T18:01:37.726Z"}
{"id":"803fddcb-ef84-4df9-8038-c69a6ebee9c5","information":"Course-builder OAuth Device Flow implementation reference (Dec 2024): Full RFC 8628 implementation exists in apps/ai-hero/src/app/oauth/device/. Key components: (1) POST /oauth/device/code - generates device_code + user_code with human-readable-ids, 10min expiry, (2) /activate page where user enters user_code, (3) device-verification tRPC router that marks verification with verifiedByUserId, (4) POST /oauth/token polls for access token. Schema in packages/adapter-drizzle with DeviceVerification table. This pattern should be extracted into @badass/auth for CLI and Workshop App authentication.","created_at":"2025-12-18T15:41:09.121Z"}
{"id":"825ccc37-c833-42e6-9069-4a531215cea2","information":"{\"id\":\"test-1765749524072-fs3i37vpoik\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T21:58:44.072Z\",\"raw_value\":1}","created_at":"2025-12-14T21:58:44.282Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T21:58:44.072Z\"}"}
{"id":"82945143-4b25-418b-acaa-e3a02a2eb7b8","information":"{\"id\":\"test-1766104210635-2mewizal9aa\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-19T00:30:10.635Z\",\"raw_value\":1}","created_at":"2025-12-19T00:30:10.859Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-19T00:30:10.635Z\"}"}
{"id":"8311ea42-e882-4b72-8f23-fc6e83250e5f","information":"{\"id\":\"test-1765751832219-4zgo42wxmyu\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-14T22:37:12.219Z\",\"raw_value\":1}","created_at":"2025-12-14T22:37:12.483Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-14T22:37:12.219Z\"}"}
{"id":"834e33d4-b8d4-4c80-8a70-5d69d612efb0","information":"swarm_complete review gate UX fix: Changed review gate responses from { success: false, error: \"...\" } to { success: true, status: \"pending_review\" | \"needs_changes\", message: \"...\", next_steps: [...] }. This reframes the review gate as a workflow checkpoint, not an error state. Workers did nothing wrong - they just need to wait for coordinator review. The logic of when to check review status was already correct, only the response format needed fixing. Added 3 tests covering: (1) pending_review when no review attempted, (2) needs_changes when review rejected, (3) skip_review bypasses gate. Also added markReviewRejected() test helper to swarm-review.ts for simulating rejected reviews.","created_at":"2025-12-18T21:40:00.165Z","tags":"swarm,review-gate,ux-fix,workflow-state,testing"}
{"id":"8476d7c1-9768-44a6-a378-dcaca8447aae","information":"hive_sync git remote handling: Fixed bug where hive_sync would fail with \"No configured push destination\" error when no git remote is configured. Root cause: implementation unconditionally tried to push/pull even when no remote exists. Solution: Check if remote exists with `git remote` command before attempting pull/push operations. If no remote, return success message \"(no remote configured)\" instead of failing. This allows local-only git repos to use hive_sync without errors. Implementation detail: The commit of .hive changes happens BEFORE the pull check, ensuring .hive state is committed even if pull/push are skipped.","created_at":"2025-12-18T18:02:37.061Z"}
{"id":"8a396b22-7a39-489a-ae5d-b5332b8f350e","information":"Course Builder monorepo structure for shared database adapters:\n\n- packages/core - defines CourseBuilderAdapter interface with 100+ methods, domain schemas (Zod), business logic\n- packages/adapter-drizzle - implements adapter interface, exports schema factories (getCourseBuilderSchema(tableFn)), supports MySQL/PG/SQLite via type discrimination\n- apps/* - each app creates own db instance, own table prefix, calls schema factory, passes both to adapter\n\nKey files:\n- packages/core/src/adapters.ts - interface definition with generic TDatabaseInstance\n- packages/adapter-drizzle/src/lib/mysql/index.ts - mySqlDrizzleAdapter(client, tableFn) implementation\n- apps/*/src/db/mysql-table.ts - app-specific mysqlTableCreator with unique prefix\n- apps/*/src/db/schema.ts - calls getCourseBuilderSchema(mysqlTable) to get prefixed tables\n- apps/*/src/db/index.ts - creates db instance, exports courseBuilderAdapter = DrizzleAdapter(db, mysqlTable)\n\nPattern enables 15+ apps sharing same database with table isolation via prefixes like zER_, zEW_, EDAI_, AI_, etc.","created_at":"2025-12-14T23:56:13.303Z"}
{"id":"8a59059a-7374-49a6-ad4e-4dc5a4160a5c","information":"Docker test infrastructure approach for egghead migration:\n\n1. Use pg_dump for REAL schemas - don't manually recreate Rails table definitions. The schema has 50+ columns per table with specific defaults, constraints, and indexes.\n\n2. Export strategy (pragmatic path):\n - Option 2 (now): Export 2 POC courses with full schema via pg_dump --schema-only + COPY for data\n - Option 1 (next): Generalize to N random courses with --courses=N flag\n - Option 3 (goal): Full sanitized production dump\n\n3. Data anonymization: Replace emails with instructor{id}@test.egghead.io, null out authentication_token, encrypted_password, confirmation_token, reset_password_token\n\n4. Key tables in dependency order: users → instructors → series → lessons → tags → taggings → playlists → tracklists\n\n5. Shell script approach (export-poc-courses.sh) is cleaner than TypeScript for pg_dump operations - native psql/pg_dump tools handle schema complexity better than manual SQL generation.","created_at":"2025-12-13T17:35:48.194Z"}
{"id":"8b23681b-7dc8-4501-882e-1ef66174881f","information":"{\"id\":\"pattern-1765751936368-siqk3d\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T22:38:56.368Z\",\"updated_at\":\"2025-12-14T22:38:56.368Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T22:38:56.603Z","metadata":"{\"id\":\"pattern-1765751936368-siqk3d\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"8c4f7a27-e641-4657-9bbe-857e77cdd200","information":"{\"id\":\"pattern-1765653391843-hizz8c\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T19:16:31.843Z\",\"updated_at\":\"2025-12-13T19:16:31.843Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T19:16:32.050Z","metadata":"{\"id\":\"pattern-1765653391843-hizz8c\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"8dc5ec29-38ca-441b-9304-841a8b87a553","information":"PGLite daemon mode flipped to default in swarm-mail getSwarmMail(). Change: `const useSocket = process.env.SWARM_MAIL_SOCKET !== 'false'` (was `=== 'true'`). This prevents multi-process PGLite corruption by defaulting to single-daemon architecture. Users opt OUT with SWARM_MAIL_SOCKET=false for embedded mode. Updated JSDoc, added log messages for both modes. Critical for any tests that call getSwarmMail() - they now need explicit SWARM_MAIL_SOCKET=false in beforeAll() to avoid daemon startup attempts. Exit code 0 = all tests pass.","created_at":"2025-12-19T14:52:50.665Z","tags":"pglite,daemon,swarm-mail,default-behavior,multi-process,testing"}
{"id":"9126fdf3-7090-4dda-bc3b-d66e14362291","information":"{\"id\":\"pattern-1765664125767-wxih0g\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:15:25.767Z\",\"updated_at\":\"2025-12-13T22:15:25.767Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:15:25.968Z","metadata":"{\"id\":\"pattern-1765664125767-wxih0g\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"91f6de54-cd46-46c4-a12b-7f80b2a887b9","information":"Test Isolation Pattern for semantic-memory: Use environment variable TEST_MEMORY_COLLECTIONS=true to suffix collection names with '-test'. Implemented via getCollectionNames() function that checks process.env.TEST_MEMORY_COLLECTIONS and conditionally appends '-test' to base collection names (swarm-feedback, swarm-patterns, swarm-maturity). Vitest integration config sets this env var automatically. Prevents test data from polluting production semantic-memory collections. Cleanup handled in vitest.integration.setup.ts teardown hook. Pattern enables running integration tests safely without affecting production learning data. Key insight: Dynamic collection naming at config resolution time (not runtime) ensures all storage instances in test mode automatically use test collections.","created_at":"2025-12-14T22:37:48.129Z","metadata":"{\"author\":\"WarmHawk\",\"pattern_type\":\"test_isolation\"}"}
{"id":"92242548-6162-48c6-864a-0d099a503ff4","information":"Documentation pattern for PGLite WAL safety deployment: When documenting database deployment modes, structure as three sections: 1) Daemon Mode (Recommended) with SIGTERM handler showing graceful shutdown, 2) Safety Features (checkpoint + health monitoring with code examples), 3) Ephemeral Instances (Testing) with explicit production warning. Key insight: Users need to see WHY daemon mode matters (WAL accumulation from multiple instances) and WHEN to checkpoint manually (migrations, bulk writes). Cross-reference from developer docs (AGENTS.md) to package README for detailed deployment guidance. This pattern prevents the \"docs scattered across files\" anti-pattern.","created_at":"2025-12-19T03:43:48.319Z","tags":"documentation,pglite,wal,deployment,swarm-mail,pattern"}
{"id":"93ea9444-8481-4987-af75-d504f29c4cda","information":"Course index pages MUST link to first lesson in each section, not the section index page. Rule from AGENTS.md line 137: \"Sections are not navigable in the UI; always link to the first lesson in a section from indexes.\" \n\nCommon mistake: linking to section index instead of first lesson.\n\nExample from AI SDK Intelligent Agents course:\n- WRONG: [Section 1: The Agentic Loop](./agentic-loop)\n- CORRECT: [Section 1: The Agentic Loop](./agentic-loop/from-chain-to-loop)\n\nThis affects main course index where sections are listed. Section index pages are fine linking between themselves, but main navigation must link directly to first lesson.","created_at":"2025-12-16T21:10:41.227Z","tags":"course-structure,navigation,index-pages,lesson-links,style-guide"}
{"id":"949aae72-b5ac-4b3d-9ca2-3b0cfc6a9814","information":"CLI daemon command implementation pattern for Bun projects using Effect.\n\n**Pattern:**\nHandle special commands (like `daemon`) separately from main Effect program, similar to `migrate` command. This avoids needing to run full application layers for lifecycle management.\n\n**Implementation:**\n```typescript\n// At bottom of cli.ts, before main program\nconst args = process.argv.slice(2);\n\nif (args[0] === \"daemon\") {\n const daemonProgram = Effect.gen(function* () {\n // Handle daemon subcommands\n // Use Effect.promise() to wrap async daemon functions\n });\n \n Effect.runPromise(\n daemonProgram.pipe(\n Effect.catchAll((error) => /* error handling */)\n )\n );\n} else if (args[0] === \"migrate\") {\n // ...\n} else {\n // Run main program with full dependencies\n Effect.runPromise(\n program.pipe(Effect.provide(PDFLibraryLive))\n );\n}\n```\n\n**Background Process Spawning:**\n```typescript\n// Spawn detached background daemon\nconst proc = Bun.spawn(\n [\"bun\", \"run\", join(__dirname, \"cli.ts\"), \"daemon\", \"start\", \"--foreground\"],\n {\n cwd: process.cwd(),\n stdio: [\"ignore\", \"ignore\", \"ignore\"],\n detached: true,\n }\n);\nproc.unref();\n\n// Wait for socket availability with timeout\nconst timeout = 5000;\nwhile (Date.now() - startTime < timeout) {\n const running = yield* Effect.promise(() => isDaemonRunning(config));\n if (running) break;\n yield* Effect.sleep(\"100 millis\");\n}\n```\n\n**Why Separate?**\n- Daemon commands don't need full PDFLibrary dependencies\n- Avoids circular dependency issues\n- Faster startup for lifecycle commands\n- Cleaner separation of concerns","created_at":"2025-12-19T15:10:50.703Z","tags":"bun,effect,cli,daemon,background-process"}
{"id":"9776db4c-e14f-4495-b9fc-05954676abbb","information":"{\"id\":\"test-1766074436954-pj27gd4lso\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:13:56.954Z\",\"raw_value\":1}","created_at":"2025-12-18T16:13:57.169Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:13:56.954Z\"}"}
{"id":"97ab28c1-c249-4144-937e-88f2b0f4b398","information":"{\"id\":\"pattern-1766085029743-5mj578\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T19:10:29.743Z\",\"updated_at\":\"2025-12-18T19:10:29.743Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T19:10:29.969Z","metadata":"{\"id\":\"pattern-1766085029743-5mj578\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"99a8fa5a-2287-4665-bf88-972213bc754b","information":"{\"id\":\"test-1766080415739-14f1w45qthd9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T17:53:35.739Z\",\"raw_value\":1}","created_at":"2025-12-18T17:53:36.012Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T17:53:35.739Z\"}"}
{"id":"9a004fda-9142-4e55-9447-db005493487e","information":"{\"id\":\"pattern-1765771064070-9few2m\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:57:44.070Z\",\"updated_at\":\"2025-12-15T03:57:44.070Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:57:44.420Z","metadata":"{\"id\":\"pattern-1765771064070-9few2m\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"9abd19da-6b27-40f1-a385-de69d0a0f55b","information":"Swarm coordination pattern for ADR writing (Dec 2024): When multiple ADRs need writing, spawn parallel workers with clear file ownership. Both workers may need to update shared index file (docs/adr/README.md) - coordinate via swarmmail to avoid conflicts. Pattern: first worker adds placeholder entries for both, second worker corrects titles. Workers should store learnings via semantic-memory_store after completing ADRs. Use swarm_complete (not hive_close) to auto-release reservations and record learning signals.","created_at":"2025-12-19T00:16:21.306Z","tags":"swarm,coordination,adr,parallel-work,file-conflicts,best-practice"}
{"id":"9b55a76c-d07d-4a7c-b9c9-ea49f13c140f","information":"@badass Router Design Decision (Dec 2024): Hybrid approach combining uploadthing and course-builder patterns.\n\n**From Uploadthing (COPY):**\n1. Type-state builder pattern with UnsetMarker for compile-time safety\n2. Immutable chain - each method returns new builder\n3. Effect-TS at handler layer ONLY, not in builder API (builder stays pure TypeScript for DX)\n4. Two-phase adapter transformation: extract framework context then normalize to Web Request\n5. Subpath exports for tree-shaking: @badass/next, @badass/astro, @badass/server\n\n**From Course-Builder (KEEP):**\n1. Framework-agnostic core with single entry function\n2. Provider plugin system for integrations (payment, transcription, etc.)\n3. Adapter interface separating DB from business logic\n4. Inngest for background jobs\n\n**Changes from Course-Builder:**\n1. Switch-based routing becomes procedure registry with type inference\n2. String actions become type-safe procedures: router.checkout.call(input)\n3. Manual request/response becomes middleware chain\n4. Massive adapter interface splits into ContentAdapter, CommerceAdapter, VideoAdapter\n5. Video processing extracts to @badass/video\n\n**Key Files:**\n- uploadthing builder: packages/uploadthing/src/_internal/upload-builder.ts\n- uploadthing adapters: packages/uploadthing/src/next.ts, express.ts\n- course-builder core: packages/core/src/lib/index.ts:24\n- course-builder next: packages/next/src/lib/index.ts:50\n- course-builder astro: packages/astro/server.ts:44","created_at":"2025-12-18T15:57:47.086Z"}
{"id":"9b7e2971-9b37-4783-8640-2c3504ae4450","information":"@badass CLI Architecture Decision (Dec 2024): Multi-site CLI pattern like PlanetScale/Stripe CLI. Sites are self-contained bounded contexts with own Mux/Inngest/Stripe accounts. CLI manages multiple sites via ~/.badass/config.json. Commands: badass auth login site, badass site use site, badass --site=site command. Each site provides its own API, CLI routes to appropriate site based on config.","created_at":"2025-12-18T15:30:12.361Z"}
{"id":"9c0a4991-16ab-4571-9010-f77741573540","information":"{\"id\":\"pattern-1765670644773-jturji\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T00:04:04.773Z\",\"updated_at\":\"2025-12-14T00:04:04.773Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T00:04:04.981Z","metadata":"{\"id\":\"pattern-1765670644773-jturji\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"9d11a24b-119a-473d-b1d3-311602c6cbaa","information":"{\"id\":\"test-1766074742680-yt5vhmvkfzl\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:19:02.680Z\",\"raw_value\":1}","created_at":"2025-12-18T16:19:02.906Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:19:02.680Z\"}"}
{"id":"9d459798-bc90-4947-9c70-0b9bb9526e42","information":"Memory schema migrations in swarm-mail: Created v9 migration that adds memories and memory_embeddings tables to shared PGLite database. Critical: Must add \"CREATE EXTENSION IF NOT EXISTS vector;\" at start of migration SQL before using vector type. Integrated by importing memoryMigrations into streams/migrations.ts and spreading into main migrations array. Pattern: Module migrations append to main array (hive=v7-8, memory=v9). Tests verify table structure, indexes (HNSW, GIN, B-tree), cascade deletes, and 1024-dim vector storage. Memory schema uses TEXT ids, TIMESTAMPTZ timestamps, JSONB metadata, vector(1024) embeddings.","created_at":"2025-12-18T18:59:18.304Z","tags":"swarm-mail,migrations,pgvector,schema,pglite,memory"}
{"id":"9ef20adf-0850-48e2-83b9-9af8f0976182","information":"Swarm Wave-based coordination pattern observed: When task instructions explicitly say \"WAIT for Wave1-X and Wave1-Y\", this indicates sequential dependency gates. If file reservation conflicts occur with expected dependencies, agent should:\n\n1. Check if prerequisite files/dirs exist in old state (confirms prereqs not done)\n2. Send BLOCKED message to coordinator with blocker details\n3. Update bead status to blocked\n4. Be patient - conflict holder likely working on prerequisite\n5. Don't attempt workarounds - the sequential ordering exists for a reason\n\nIn this case: bd-lf2p4u-mja6npihvzm (AdapterRename) correctly blocked waiting for Wave1-DirRename and Wave1-TypeRename. File conflict with GoldHawk on beads-adapter.ts was expected since that file needs to be moved/renamed by prereqs first.\n\nAnti-pattern: Trying to work around prerequisites by renaming imports before files are renamed - breaks everything.","created_at":"2025-12-17T15:51:25.825Z"}
{"id":"9f18ab25-3898-4a71-866b-aad1627a6498","information":"Adapter factory pattern for event-sourced systems: createAdapter(db: DatabaseAdapter, projectKey: string) factory takes a DatabaseAdapter and returns interface with high-level operations. Delegates to store.ts for event operations (appendEvent, readEvents) and projections.ts for queries (getBead, queryBeads). This enables dependency injection and testing with different databases. Key: adapter methods create events with correct type, then call appendEvent(event, projectPath, db) to persist. Projections update automatically via event handlers. Example: createBead() generates bead_created event, appends it, then queries projection to return created bead.","created_at":"2025-12-16T22:08:24.450Z","metadata":"{\"context\":\"swarm-mail architecture\"}","tags":"adapter-pattern,event-sourcing,cqrs,dependency-injection"}
{"id":"a0921dff-b9b1-4cd1-b555-9acc6fe23e2f","information":"{\"id\":\"test-1766074455925-j3xb65rzg2\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:14:15.925Z\",\"raw_value\":1}","created_at":"2025-12-18T16:14:16.152Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:14:15.925Z\"}"}
{"id":"a1c9240a-b245-4109-a497-be818fa82127","information":"Effect.succeed() vs Effect.gen() in middleware: The @badass/core middleware implementation detects Effect objects by checking for \"_tag\" property. Effect.succeed() returns objects with \"_tag\", but Effect.gen() returns objects with \"_id\" and \"_op\" instead. Result: Effect.succeed() gets unwrapped properly via Effect.runPromise, but Effect.gen() returns the raw Effect object. Workaround: Use Effect.succeed() for simple context values in middleware, avoid Effect.gen() for middleware context functions.","created_at":"2025-12-18T16:32:14.305Z","tags":"effect-ts,middleware,badass-core,gotcha,effect-succeed,effect-gen"}
{"id":"a675722b-2e10-44c8-ac19-9525b53fe09c","information":"Investigation of \"Invalid Date\" error in hive JSONL parsing (Dec 19, 2025): The hypothesis was that jsonl.ts incorrectly casts ISO date strings as numbers. After thorough investigation, the code is CORRECT:\n\n1. Database stores dates as BIGINT (epoch milliseconds)\n2. Export (DB → JSONL): `new Date(epochNumber).toISOString()` ✅ Correct\n3. Import (JSONL → DB): `new Date(isoString).getTime()` ✅ Correct \n4. PGlite returns BIGINT as `number` type, not string ✅\n5. All 275 hive tests pass including new date-handling tests ✅\n\nThe task was based on incorrect hypothesis. The code at lines 207-210, 347-348, 465-468 in jsonl.ts and line 135 in merge.ts is working as designed. Added comprehensive date-handling tests to prevent future regressions.","created_at":"2025-12-19T17:41:17.868Z","tags":"investigation,dates,jsonl,hive,no-bug-found,test-coverage"}
{"id":"a7987c85-4b2e-4332-84ff-68d035606e5f","information":"Process exit hook pattern for PGLite flush safety net: Register process.on('beforeExit', async (code) => {...}) at module level to catch dirty cells before process exits. Pattern: iterate adapterCache, call FlushManager.flush() for each project. Critical: Use two flags for safety - exitHookRegistered (prevent duplicate registration) and exitHookRunning (prevent re-entry during async flush). Register hook immediately when module is imported via registerExitHook() call at module level. Non-fatal errors: wrap each flush in try/catch and log warnings. This is a safety net for the lazy write pattern where operations mark dirty and explicit flush writes to disk - catches any dirty cells that weren't explicitly synced before process exit. Tested with: beforeExit event emission, idempotency (multiple triggers), and graceful handling of no dirty cells.","created_at":"2025-12-19T17:06:21.423Z","tags":"process-exit-hook,safety-net,pglite,flush,idempotent,module-level"}
{"id":"a7dcbbb8-af6b-45f1-b4d0-7fdefda3e99b","information":"When documenting plugin hooks in OpenCode, always add the hook event to the Events section list AND provide a complete example in the Examples section. The session.compacting hook allows plugins to inject custom context before LLM summarization during compaction - useful for preserving task state, decisions, and active work context across compaction boundaries.","created_at":"2025-12-17T17:59:20.017Z"}
{"id":"a82a97ce-abb7-4d73-a288-5b49bf59ca74","information":"{\"id\":\"test-1766074638102-x5vrrbmco9\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:18.102Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:18.330Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:18.102Z\"}"}
{"id":"a849e675-58d3-4b5a-8c66-28e0dbbc297c","information":"{\"id\":\"test-1766001178291-jasc2x5op7s\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-17T19:52:58.291Z\",\"raw_value\":1}","created_at":"2025-12-17T19:52:59.368Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-17T19:52:58.291Z\"}"}
{"id":"a9034557-0634-45d0-b405-c0cdacd59c12","information":"{\"id\":\"test-1765386361375-9thynapgze\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:06:01.375Z\",\"raw_value\":1}","created_at":"2025-12-10T17:06:01.560Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:06:01.375Z\"}"}
{"id":"acb950b8-656d-4488-a930-5176968d666f","information":"Integration testing auto-migration in createMemoryAdapter: Tests run against in-memory PGLite databases using createInMemorySwarmMail(). Key insight: If ~/.semantic-memory/memory exists on test machine, migration actually runs and imports real memories during tests. Tests must handle both scenarios (legacy DB exists vs doesn't exist) using toBeGreaterThanOrEqual(0) instead of toBe(0). This proved the migration works end-to-end in real conditions - 177 actual memories migrated successfully during test runs. Critical: Use resetMigrationCheck() in beforeEach() for test isolation (module-level flag persists across tests without reset). Access DatabaseAdapter via swarmMail.getDatabase(), not swarmMail.db (property doesn't exist).","created_at":"2025-12-18T21:26:02.233Z","metadata":"{\"cell_id\":\"mjbxj68dmtb\",\"epic_id\":\"mjbxj67vqil\",\"test_file\":\"memory.integration.test.ts\"}","tags":"testing,integration-tests,pglite,migration,memory,swarm-mail"}
{"id":"ad85dcb1-ae91-4b2f-8857-16a5d8747969","information":"3 High-Value Improvements for opencode-swarm-plugin (Dec 2024):\n\n1. **Prompt Template Registry with Hot-Reload**\n - Problem: Prompts hardcoded in swarm-prompts.ts, require rebuild to change\n - Solution: External templates in ~/.config/opencode/swarm/prompts/*.md with variable interpolation\n - Enables: A/B testing, project-specific customization, hot-reload during dev\n - Inspired by: mdflow template variables, Release It! \"configuration as UI\"\n\n2. **Worker Handoff Protocol with Structured Context** (RECOMMENDED FIRST)\n - Problem: Workers ignore 400-line SUBTASK_PROMPT_V2, confused about scope\n - Solution: Structured WorkerHandoff envelope with machine-readable contract (files_owned, success_criteria) + minimal prose\n - Enables: Contract validation in swarm_complete, automatic scope creep detection, smaller prompts\n - Inspired by: \"Patterns for Building AI Agents\" subagent handoff, Bellemare event contracts\n\n3. **Adaptive Decomposition with Feedback Loops**\n - Problem: Decomposition quality varies, learning system doesn't feed back into strategy selection\n - Solution: Strategy registry with outcome-weighted selection (confidence * success_rate / log(completion_time))\n - Enables: Self-improving decomposition, auto-deprecation of failing strategies, transparent reasoning\n - Inspired by: Bellemare event replay, mdflow adapter registry, existing pattern-maturity system\n\nImplementation order: #2 then #1 then #3 (handoff protocol creates structured signals needed for adaptive decomposition)","created_at":"2025-12-18T17:20:56.752Z"}
{"id":"ae4ce932-255c-43bd-b4b0-64049d0afecf","information":"Database testing pattern for PGlite + pgvector in Effect-TS: Use isolated temp databases per test with makeTempDbPath() creating unique tmpdir paths. Critical: PGlite stores data in a DIRECTORY (not a file), so dbPath.replace(\".db\", \"\") gives the actual data dir. Cleanup with rmSync(dbDir, {recursive: true}). Effect services test via Effect.gen + Effect.provide(layer) + Effect.runPromise. Vector dimension errors (e.g., 1024 vs 3) throw from PGlite with \"expected N dimensions, not M\" - test with try/catch, not .rejects since Effect may wrap errors. Test decay by setting createdAt in past (Date.now() - 90*24*60*60*1000) and validating decayFactor < 0.6. Ordering tests need explicit timestamps, not Sleep delays.","created_at":"2025-12-18T17:16:46.245Z"}
{"id":"ae77ee44-0037-451b-8465-3dce4630e18a","information":"{\"id\":\"pattern-1766080417904-ucxl91\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:53:37.904Z\",\"updated_at\":\"2025-12-18T17:53:37.904Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:53:38.137Z","metadata":"{\"id\":\"pattern-1766080417904-ucxl91\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"b3c1b1c3-0c21-41a7-98cc-868df103875b","information":"When assigned a task to fix code that was already fixed: verify the current state first before making changes. In this case, projections.test.ts table names were already correct (bead_* not cell_*). The task description was outdated or the fix was already applied. Always read the file to confirm the problem exists before attempting fixes.","created_at":"2025-12-18T15:39:22.185Z"}
{"id":"b3cbbf0c-981a-4f4f-8fa3-45175796e338","information":"{\"id\":\"test-1765386438362-dn6i6pzsef\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-10T17:07:18.362Z\",\"raw_value\":1}","created_at":"2025-12-10T17:07:18.549Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-10T17:07:18.362Z\"}"}
{"id":"b5c28f9e-6f13-40f3-8b7b-ed5191490723","information":"{\"id\":\"pattern-1765751833365-kd7r4x\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T22:37:13.365Z\",\"updated_at\":\"2025-12-14T22:37:13.365Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T22:37:13.577Z","metadata":"{\"id\":\"pattern-1765751833365-kd7r4x\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"b6a9b8dc-0da0-43eb-ba32-14d4bb2bd88b","information":"@badass UI Components Reference (Dec 2024): Key extractable components from ai-hero:\n\n**High Priority (Ready to Extract):**\n1. DateTimePicker - apps/ai-hero/src/app/(content)/cohorts/[slug]/edit/_components/date-time-picker/date-time-picker.tsx:40 - React Aria based, self-contained\n2. CRUD Dialog Pattern - apps/ai-hero/src/app/admin/tags/tag-crud-dialog.tsx:34 - Generic pattern, 90% identical across uses\n3. Sidebar Layout - apps/ai-hero/src/app/(content)/cohorts/[slug]/_components/cohort-sidebar.tsx:13 - Sticky with mobile floating CTA\n\n**Medium Priority (Needs Refactoring):**\n4. withResourceForm HOC - apps/ai-hero/src/components/resource-form/with-resource-form.tsx:219 - Needs dependency injection to remove app-specific imports\n5. ListResourcesEdit - apps/ai-hero/src/components/list-editor/list-resources-edit.tsx:84 - Needs search provider abstraction (currently Typesense-coupled)\n\n**Shared UI Package (Already Extracted):**\n- packages/ui/resources-crud/edit-resources-form.tsx:28 - Mobile/desktop responsive form\n- packages/ui/resources-crud/create-resource-form.tsx - Resource creation\n\n**Architecture Patterns:**\n- Config-driven forms: Zod schema + config object equals full CRUD UI\n- Tool panel system: Pluggable tools with icon + component\n- Batch operations: Drag-and-drop with debounced batch saves\n- Factory pattern: createWorkshopFormConfig() for type-safe config","created_at":"2025-12-18T15:50:07.107Z"}
{"id":"b89c6800-cc8a-477b-8bce-81ad325b1e87","information":"Enhanced doctor command in pdf-library with comprehensive health checks and --fix flag.\n\n**Implementation (TDD - all tests green):**\n\n1. **New Health Checks (5 total)**:\n - WAL files: existing assessWALHealth() (50 files/50MB thresholds)\n - Corrupted directories: checkCorruptedDirs() detects \" 2\" suffix pattern (\"base 2\", \"pg_multixact 2\")\n - Daemon status: async isDaemonRunning(daemonConfig) via Effect.promise\n - Ollama connectivity: library.checkReady() with try/catch\n - Orphaned data: library.repair() returns chunks/embeddings counts\n\n2. **New Functions**:\n - `checkCorruptedDirs(libraryPath, dirs)`: Returns CorruptedDirsResult with issues array\n - `assessDoctorHealth(data)`: Combines all checks into DoctorHealthResult with HealthCheck[] array\n\n3. **Auto-Repair with --fix flag**:\n - Parses opts.fix from args via parseArgs()\n - Removes corrupted directories with rmSync(path, { recursive: true, force: true })\n - Orphaned data auto-cleaned via existing repair() call\n - Shows recommendations when --fix not used\n\n4. **Key Patterns**:\n - Used Effect.gen for async flow (yield* Effect.promise for isDaemonRunning)\n - DaemonConfig requires: socketPath, pidPath, dbPath (all derived from config.libraryPath)\n - WAL health check handles non-existent pg_wal gracefully (assumes healthy)\n - All checks graceful-fail: database not existing doesn't crash, returns healthy defaults\n\n5. **Test Coverage**: 11 new tests covering checkCorruptedDirs edge cases and assessDoctorHealth combinations\n\n**Bug Prevention**: Always await isDaemonRunning with Effect.promise, never call synchronously (returns Promise<boolean>).","created_at":"2025-12-19T17:29:44.709Z","tags":"pdf-library,doctor-command,health-checks,tdd,effect-ts,cli,auto-repair"}
{"id":"b8f28a17-d8a2-44e1-8b72-f74e2ae3a98a","information":"{\"id\":\"test-1765653517058-z98hhewgo3r\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T19:18:37.058Z\",\"raw_value\":1}","created_at":"2025-12-13T19:18:37.257Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T19:18:37.058Z\"}"}
{"id":"ba639de8-848f-4ced-92f5-9401dc270417","information":"{\"id\":\"test-1765664182311-clxw0y6xk4b\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:16:22.311Z\",\"raw_value\":1}","created_at":"2025-12-13T22:16:22.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:16:22.311Z\"}"}
{"id":"bad6e714-cf98-4609-a8f0-44c2e636901e","information":"Added legacy semantic-memory migration prompt to swarm setup CLI. Pattern follows existing .beads migration flow: 1) Check legacyDatabaseExists() after dependency checks, before model selection. 2) Call getMigrationStatus() to show counts (total, withEmbeddings). 3) Prompt user with p.confirm. 4) Create target DB with getSwarmMail(cwd). 5) Run migrateLegacyMemories({ targetDb, onProgress }) with spinner. 6) Show detailed results (migrated, skipped, failed). Key insight: Migration functions are exported from swarm-mail/src/memory/migrate-legacy.ts and re-exported from swarm-mail/src/index.ts. Needed to rebuild swarm-mail package after adding exports. Placement: lines 1672-1735 in bin/swarm.ts, right after .beads migration, before model selection.","created_at":"2025-12-18T21:09:36.891Z","tags":"cli,migration,semantic-memory,swarm-mail,legacy-migration,setup"}
{"id":"bc1b197e-9d63-4466-8c7d-d453e0949840","information":"BeadsAdapter interface pattern for swarm-mail: Interface split into 6 sub-adapters (BeadAdapter, DependencyAdapter, LabelAdapter, CommentAdapter, EpicAdapter, QueryAdapter, BeadsSchemaAdapter) combined into single BeadsAdapter, matching SwarmMailAdapter pattern. Migration v6 adds beads tables to shared PGLite database (shares schema_version with swarm-mail migrations v1-v5). Projections use updateProjections() dispatcher pattern to route events to handlers. Blocked cache uses recursive CTE for transitive blocker lookup with depth limit (10). Dirty tracking marks beads for incremental JSONL export. Key insight: Share same PGLite instance and migration system with swarm-mail - don't create separate database. Test pattern: wrapPGlite() creates DatabaseAdapter from PGlite instance for dependency injection in tests.","created_at":"2025-12-16T21:51:14.238Z"}
{"id":"bd7187c4-23be-4081-a315-c2c897fef72f","information":"## Session Context Capture (Dec 19, 2025)\n\n### Current Bug: \"Invalid Date\" error on hive_query\n\n**Symptom:** `hive_query` returns `{\"success\":false,\"error\":{\"code\":\"HiveError\",\"message\":\"Failed to query cells: Invalid Date\"}}`\n\n**Root Cause Investigation:**\n- JSONL file parses fine with jq\n- 17 lines in .hive/issues.jsonl, all status \"open\"\n- Date fields (created_at, updated_at) look valid: \"2025-12-19T17:14:05.371Z\"\n- Error comes from JavaScript Date constructor somewhere in swarm-mail/src/hive/\n\n**Likely culprits (from grep):**\n- `jsonl.ts:207-210` - `new Date(bead.created_at as number)` - casting string to number?\n- `jsonl.ts:347-348` - `new Date(cellExport.closed_at)` - closed_at might be undefined\n- `jsonl.ts:465-468` - same pattern\n- `merge.ts:135` - `new Date(cell.closed_at)` on potentially undefined\n\n**Hypothesis:** Code expects timestamps as numbers but JSONL has ISO strings, OR closed_at is undefined and being passed to Date constructor.\n\n### Open P1 Bugs (from earlier query)\n1. `mjd4pdh5651` - Make hive_sync bidirectional (import from JSONL after git pull)\n2. `mjd4pjujc7e` - Fix overly strict task_id regex requiring 3+ segments\n\n### Recent Completed Work\n- Smart ID resolution (resolvePartialId) - committed\n- Auto-sync at hive_create_epic, swarm_complete, process exit - committed \n- Removed max_subtasks limit of 10 - committed\n- Changeset pushed, waiting for CI to create version PR\n\n### Hive Viewer Epic Created\n- Epic ID: `mjd4yu2aguv` - 16 subtasks across 4 phases\n- Phase 1 (spike): OpenTUI hello world, JSONL parser, cell list component\n- Not yet started - was about to spawn workers\n\n### Files Modified This Session\n- packages/opencode-swarm-plugin/src/hive.ts (auto-sync)\n- packages/opencode-swarm-plugin/src/swarm-orchestrate.ts (auto-sync in swarm_complete)\n- packages/opencode-swarm-plugin/src/swarm-decompose.ts (removed max limit)\n- packages/opencode-swarm-plugin/src/swarm-prompts.ts (removed max limit)\n- .changeset/hive-smart-id-resolution.md (updated with all changes)","created_at":"2025-12-19T17:30:18.475Z","tags":"session-context,bug,invalid-date,hive-query,swarm-mail,jsonl,december-2025"}
{"id":"be8c1c00-1128-4c4e-8984-6dc93db50610","information":"Auto-sync pattern in swarm_complete: When calling hive_sync from within a tool that operates on a specific project_key, you MUST temporarily set the hive working directory using setHiveWorkingDirectory(project_key) before calling hive_sync.execute(), then restore it in a finally block. Why: hive_sync uses getHiveWorkingDirectory() which defaults to process.cwd(), not the project_key argument. Without this, sync writes to wrong directory. Pattern: const prev = getHiveWorkingDirectory(); setHiveWorkingDirectory(projectKey); try { await hive_sync.execute({}, ctx); } finally { setHiveWorkingDirectory(prev); }","created_at":"2025-12-19T17:02:17.235Z","metadata":"{\"type\":\"gotcha\",\"pattern\":\"working-directory-context\",\"component\":\"swarm-orchestrate\"}","tags":"hive,sync,swarm,working-directory,context-management"}
{"id":"c0144f56-dcd6-4aba-a19e-5f10b7f7c68b","information":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:50.318Z\",\"updated_at\":\"2025-12-15T03:58:50.318Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:50.643Z","metadata":"{\"id\":\"pattern-1765771130318-zvu1uu\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"c17bc88f-4015-4ab8-b0c6-cff0c7955eb5","information":"--information","created_at":"2025-12-14T22:42:53.190Z","tags":"documentation,semantic-memory,cli-syntax,gotcha,agent-reference"}
{"id":"c1e3d77d-0183-4f45-80ba-a6d6318f0868","information":"Cell ID generation now uses project name from package.json as prefix instead of generic 'bd-'. Format is {slugified-name}-{hash}-{timestamp}{random}, e.g., swarm-mail-lf2p4u-mjbneh7mqah. Fallback is 'cell' prefix when package.json not found or has no name field. Implementation uses fs.readFileSync + fs.existsSync at ID generation time (lazy load), not adapter initialization. Slugification replaces @/spaces/special chars with dashes, removes leading/trailing dashes. Hash can be negative (use [-a-z0-9]+ regex pattern). Backward compatible - no changes to validation, existing bd-* IDs work fine. TDD approach: wrote failing tests first, implemented to pass, refactored to use ES module imports.","created_at":"2025-12-18T16:29:37.218Z"}
{"id":"c27724f6-a65c-4641-830d-83a535f95c6b","information":"JSONL file format bug: `wc -l` showed 0 lines despite having content because records were concatenated with `lines.join(\"\\n\")` which doesn't add a trailing newline. The fix: (1) `serializeToJSONL()` now returns `JSON.stringify(cell) + \"\\n\"` and (2) `exportToJSONL()` uses `lines.join(\"\")` since each line already has `\\n`. Root cause: JSONL spec requires each line to end with newline, including the last line. Without trailing newline, `wc -l` returns 0 because it counts newline characters, not lines. Tests: verify `jsonl.endsWith(\"\\n\")` and `(jsonl.match(/\\n/g) || []).length === recordCount`.","created_at":"2025-12-19T16:18:17.706Z","tags":"jsonl,newlines,file-format,wc,unix-tools,bugs"}
{"id":"c48ccddf-2e1a-4f73-8e49-f89de6bd0877","information":"Bun monorepo publishing with changesets - COMPLETE SOLUTION (Dec 2024):\n\nPROBLEM: workspace:* protocol not resolved by npm publish or changeset publish\n\nROOT CAUSE: bun pm pack resolves workspace:* from LOCKFILE, not package.json. Stale lockfile = old versions.\n\nSOLUTION (from https://ianm.com/posts/2025-08-18-setting-up-changesets-with-bun-workspaces):\n1. ci:version script: `changeset version && bun update` - the bun update syncs lockfile after version bump\n2. ci:publish script: custom scripts/publish.ts using `bun pm pack` + `npm publish <tarball>`\n3. Setup .npmrc in CI: `echo \"//registry.npmjs.org/:_authToken=$NPM_TOKEN\" > .npmrc`\n\nWHY NOT:\n- `bunx changeset publish` - uses npm publish, doesn't resolve workspace:*\n- `bun publish` - no npm token support yet (track: github.com/oven-sh/bun/issues/15601)\n- OIDC trusted publishers - works but requires repository field in package.json for provenance\n\nWORKFLOW (.github/workflows/publish.yml):\n- Setup npmrc with NPM_TOKEN secret\n- version: bun run ci:version\n- publish: bun run ci:publish\n- changesets/action handles PR creation and tagging\n\nGOTCHAS:\n- CLI bin scripts need deps in dependencies, not devDependencies\n- Each package needs repository field for npm provenance\n- files field in package.json to include dist/\n\nFILES: scripts/publish.ts, .github/workflows/publish.yml, package.json (ci:version, ci:publish scripts)","created_at":"2025-12-15T05:07:27.735Z"}
{"id":"c76fd51e-f15f-4f2c-9ca5-f3853806deef","information":"@badass/core TDD patterns successfully applied: Wrote characterization tests FIRST to document actual behavior (what IS) before behavior tests (what SHOULD). Key learnings: 1) z.coerce.date() creates new Date instances, so use .getTime() for equality checks not reference equality. 2) Zod .omit() strips fields silently, doesn't throw - test with .not.toHaveProperty(). 3) composeMiddleware in @badass/core runs middlewares sequentially (await first, then second), NOT in parallel - order matters. 4) Effect detection checks for \"_tag\" property, works for Effect.succeed() but NOT Effect.gen() which uses \"_id\". 5) Characterization tests caught 6 wrong assumptions about behavior before writing implementation-dependent tests. This validates the TDD pattern: write failing test, observe actual behavior, update test to match reality.","created_at":"2025-12-18T16:32:11.709Z","tags":"tdd,characterization-tests,badass-core,zod,effect-ts,middleware,testing-patterns"}
{"id":"c9d0eaaf-afb7-4c54-87f0-8ecb79bfb8eb","information":"Git-synced memories implementation pattern: Export memories to JSONL without embeddings (too large, ~4KB per memory). Store id, information, metadata, tags, confidence, created_at. Import skips duplicates by ID. Bidirectional sync: import from file first, then export all to file. Integration with hive_sync: after flushing cells to issues.jsonl, also sync memories.jsonl. Memory sync is optional - wrapped in try/catch so it doesn't fail the main sync. Key insight: PGlite returns JSONB as object not string, need to handle both cases when parsing metadata.","created_at":"2025-12-19T03:01:14.081Z","metadata":"{\"files\":[\"packages/swarm-mail/src/memory/sync.ts\"],\"pattern\":\"git-synced-memories\"}","tags":"memory-sync,jsonl,git-sync,hive,swarm-mail"}
{"id":"cc84f337-516e-40cc-9429-d557e4484d23","information":"@badass Implementation Decomposition Ready (Dec 2024) - Next steps after architecture questions resolved: Create epic with subtasks for (1) @badass/core - Effect-TS services, builder pattern from uploadthing, (2) @badass/db - Drizzle schemas, adapter interface supporting shared/isolated DB, (3) @badass/auth - BetterAuth with hive/spoke SSO, device flow for CLI/local apps, (4) @badass/next - createRouteHandler, site config, (5) @badass/cli - badass command with multi-site support, device flow auth, TUI for video uploads. Namespace is @badass/*, CLI binary is 'badass'. Reference repos: pingdotgg/uploadthing for Effect-TS router pattern, vercel/academy-content for CLI+Mux, badass-courses/course-builder for device flow and multi-site patterns.","created_at":"2025-12-18T15:42:12.574Z"}
{"id":"cd179af2-3f9d-45ee-a349-8b7663f2078e","information":"JSONL sync architecture in swarm-mail hive module investigation (Dec 2024):\n\n**NO BUG FOUND** - System working as designed. 271/271 tests passing.\n\n**Architecture (Lazy Write Pattern)**:\n1. Operations (createCell, updateCell, closeCell) mark cells dirty via updateProjections() → markBeadDirty()\n2. Dirty tracking stored in dirty_beads table (cell_id, project_key, marked_at)\n3. User explicitly calls hive_sync tool to flush dirty cells to .hive/issues.jsonl\n4. FlushManager exports dirty cells via exportDirtyBeads() and writes to file\n\n**Key Implementation Details**:\n- updateProjections() in projections.ts line 118 marks ALL cells dirty after EVERY event\n- exportDirtyBeads() queries dirty_beads table, exports to JSONL\n- FlushManager.flush() writes JSONL to file, clears dirty flags\n- Table naming: \"beads\" is real table, \"cells\" is a view (migration v8) for compatibility\n- Both \"SELECT FROM beads\" and \"SELECT FROM cells\" work correctly\n\n**Why Tests All Pass**:\nFull integration test verifies: createCell → markDirty → exportDirtyBeads → FlushManager.flush() → file written correctly\n\n**Design Rationale**:\nLazy writes prevent excessive disk I/O. Operations mark dirty (cheap), user flushes when ready (expensive). Similar to git add/commit pattern.\n\n**If Asked \"Why Don't Cells Appear in JSONL?\"**:\nAnswer: Did you call hive_sync? Operations don't auto-flush. This is intentional.","created_at":"2025-12-19T16:28:00.031Z","tags":"hive,jsonl,sync,flush,dirty-tracking,swarm-mail,architecture"}
{"id":"cd77b842-2aff-47c0-baba-97096aaf9322","information":"pdf-brain research session on memory systems for AI agents yielded 13 actionable patterns from cognitive science literature:\n\n1. **Testing Effect** (Range, 9853): Retrieval strengthens memory more than passive review. Query count should affect decay rate.\n\n2. **Interleaving** (Range): Mixed/varied practice leads to better transfer than blocked practice. Tag memories for cross-domain retrieval.\n\n3. **Self-Explanation** (e-Learning and Science of Instruction): Prompting \"WHY does this work?\" produces deeper learning than just storing facts.\n\n4. **Negative Examples** (Training Complex Cognitive Skills): Contrast correct with incorrect. Store anti-patterns alongside patterns.\n\n5. **Worked Examples** (Multimediabook): Before/after code snippets more valuable than abstract rules for novices.\n\n6. **Connection Strength** (Smart Notes, Zettelkasten): Well-connected notes decay slower. Cross-references surface unexpected insights.\n\n7. **Tacit Knowledge** (Nonaka/Takeuchi): Some knowledge is hard to articulate. Capture intuitions with examples, not just rules.\n\n8. **Chunking** (Kirschner): One transferable insight per memory. Too granular = noise, too broad = not actionable.\n\n9. **Metacognitive Prompts** (9853): \"Would you be able to apply this in a different context?\" encourages reflection on transferability.\n\n10. **Hierarchical Tags** (How Learning Works): Knowledge organization affects retrieval. Use domain/subdomain/topic structure.\n\n11. **Spaced Retrieval** (CodingCareer, Anki): Active scheduling beats passive decay. Surface due-for-review memories proactively.\n\n12. **Prior Knowledge Activation** (978-3-031-74661-1): New info connected to existing knowledge sticks longer. Link new memories to existing ones.\n\n13. **Schema Acquisition** (Training Complex Cognitive Skills): Store transferable patterns, not specific fixes. Schemas enable far transfer.\n\nKey sources: Training_Complex_Cognitive_Skills (360 pages), e-Learning and the Science of Instruction (759 pages), Range (366 pages), How Learning Works (274 pages), ten-steps-to-complex-learning (416 pages), Smart Notes (146 pages).","created_at":"2025-12-19T03:13:03.888Z","tags":"memory-systems,cognitive-science,pdf-brain,learning,spaced-repetition,schemas,research"}
{"id":"cdeb1658-81dd-408b-b30e-ef1c36f9399c","information":"{\"id\":\"test-1766074660928-qaxaon6ib8i\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-18T16:17:40.928Z\",\"raw_value\":1}","created_at":"2025-12-18T16:17:41.163Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-18T16:17:40.928Z\"}"}
{"id":"cffea773-b97b-4582-b5d4-b0154bd12f83","information":"Lesson rating rubric application for AI SDK course: Setup lessons (00-) often score low on Hook & Motivation because they're functional rather than problem-focused. Fix: Add \"Why This Matters\" explaining infrastructure value (AI Gateway = unified multi-provider access, no vendor lock-in). Also, setup lessons need Fast Track even though they're procedural—format consistency matters for learner expectations. Real output examples critical (e.g., \"vc --version # Output: Vercel CLI 39.2.4\") because learners verify setup success by matching exact output. Changed \"Done\" to \"Done-When\" with unchecked boxes—learners check them off as they progress, improving engagement.","created_at":"2025-12-16T21:43:30.828Z"}
{"id":"d0534c28-593b-40a1-998a-05cd7c82a32f","information":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:56:06.090Z\",\"updated_at\":\"2025-12-15T03:56:06.090Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:56:06.457Z","metadata":"{\"id\":\"pattern-1765770966090-vw9ofv\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"d47252c9-654a-4dea-913b-991951101d2a","information":"PGlite Socket Server Implementation Pattern for pdf-brain daemon:\n\nROOT CAUSE: PGlite is single-connection only. Multiple CLI invocations creating their own instances corrupt the database.\n\nSOLUTION: Daemon process that owns ONE PGlite instance and exposes it via Unix socket using @electric-sql/pglite-socket.\n\nKEY IMPLEMENTATION DETAILS:\n1. Package: @electric-sql/pglite-socket (not /server subpath - exports from main)\n2. Correct class name: PGLiteSocketServer (capital L)\n3. Constructor options: { db: PGlite, path: string } (not socketPath)\n4. Server lifecycle: call server.start() after construction, server.stop() in shutdown\n5. Use Unix socket (path option) instead of TCP for local-only daemon\n\nGRACEFUL SHUTDOWN PATTERN:\n```typescript\n// MANDATORY: CHECKPOINT before close to flush WAL\nawait db.exec(\"CHECKPOINT\");\nawait server.stop();\nawait db.close();\n// Then remove PID file and socket\n```\n\nPID FILE VALIDATION:\n- Check file exists\n- Parse PID as integer with Number.isNaN (not isNaN)\n- Verify process alive with process.kill(pid, 0) - signal 0 doesn't kill, just checks existence\n- Handle errors (process doesn't exist) by returning false\n\nTDD APPROACH EFFECTIVENESS:\n- Wrote 14 tests first covering all lifecycle states\n- Tests caught import path error (/server vs main)\n- Tests caught API differences (close vs stop, constructor args)\n- All tests green after implementation\n\nThis pattern prevents the PGlite multi-connection corruption bug (semantic memory 48610ac6-d52f-4505-8b06-9df2fad353aa) without implementing complex leader election.","created_at":"2025-12-19T14:51:31.659Z","tags":"pglite,daemon,socket-server,multi-connection,checkpoint,lifecycle-management"}
{"id":"d624440f-abd3-4152-9243-7f8c7ad9c964","information":"Port Ollama embedding service from semantic-memory to swarm-mail successfully completed. Key patterns:\n\n**Effect-TS Service Pattern (Context.Tag)**:\n- Define service with Context.Tag(\"namespace/ServiceName\") extending tag class\n- Service interface specifies Effect signatures with explicit error types\n- Implementation uses Layer.succeed() to provide concrete implementation\n- Retry logic: Schedule.exponential(Duration.millis(100)).pipe(Schedule.compose(Schedule.recurs(3))) for 100ms→200ms→400ms backoff\n\n**Batch Processing Pattern**:\n- Use Stream.fromIterable(items).pipe(Stream.mapEffect(fn, { concurrency })) for controlled concurrency\n- Stream.runCollect + Effect.map(Chunk.toArray) to materialize results\n- Each item in batch gets independent retry logic from embedSingle\n\n**Health Check Pattern**:\n- Check both server availability AND model availability\n- Support version suffix matching (model name can have :latest, :v1, etc)\n- Provide actionable error messages (e.g., \"Run: ollama pull model-name\")\n\n**Testing with Mocked Fetch**:\n- Mock global.fetch for unit tests of Effect-based HTTP calls\n- Use Effect.flip to test error cases (converts failure to success for assertions)\n- Test retry behavior by tracking attempt count in mock\n- Test batch concurrency by tracking concurrent calls with counters\n\n**OllamaError Definition**:\n- Use Schema.TaggedError pattern for type-safe errors\n- Single reason field for error messages\n- Integrates with Effect error handling (Effect.fail, Effect.tryPromise catch)\n\nLocation: packages/swarm-mail/src/memory/ollama.ts\nTests: packages/swarm-mail/src/memory/ollama.test.ts (16 tests, all passing)\nConfig: MemoryConfig with ollamaHost and ollamaModel, defaults from env vars","created_at":"2025-12-18T18:57:57.759Z","tags":"effect-ts,ollama,embeddings,swarm-mail,context-tag,retry-pattern,testing"}
{"id":"d6759351-07a1-40f2-9c3e-c49022039786","information":"Testing Zod schemas pattern: For date coercion tests, z.coerce.date() always creates NEW Date instances even when input is already a Date. This means reference equality (toBe) fails. Solution: use .toBeInstanceOf(Date) + .getTime() comparison for date values. Also, Zod .omit() doesn't reject extra fields, it silently strips them during parsing. Test with expect(result).not.toHaveProperty('omittedField') not expect().toThrow().","created_at":"2025-12-18T16:32:12.902Z","tags":"zod,testing,dates,schemas,validation,gotcha"}
{"id":"d7efe68a-3a5d-42c6-b203-d77ea9c61961","information":"Successfully completed Bead→Cell event schema rename with backward compatibility. Key pattern: Export new names as primary exports, then add deprecated type aliases and const aliases for all old names (schemas, types, and helper functions). For imports, use only the new names and don't try to create aliases in the import statement - create them as separate exports after. This allows existing code to continue using BeadEvent types while new code uses CellEvent types. Total renames: 20 schemas, 20 types, 3 helper functions - all with backward compat aliases marked with @deprecated JSDoc tags.","created_at":"2025-12-17T16:40:48.872Z"}
{"id":"d8320ad2-425b-4c27-a854-ef5ce49a2e55","information":"{\"id\":\"pattern-1765771080299-rxkeql\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-15T03:58:00.299Z\",\"updated_at\":\"2025-12-15T03:58:00.299Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-15T03:58:01.723Z","metadata":"{\"id\":\"pattern-1765771080299-rxkeql\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"da4dbfc8-fbd1-4a12-b0ed-8b262529953c","information":"@badass Effect Router Decision (Dec 2024): Build a router/builder pattern using Effect-TS, similar to uploadthing's approach. Reference implementation: pingdotgg/uploadthing/packages/uploadthing/src/effect-platform.ts and _internal/upload-builder.ts. This provides type-safe, composable route definitions with Effect's error handling and dependency injection. The router pattern will be used across @badass packages for consistent API design.","created_at":"2025-12-18T15:51:55.079Z"}
{"id":"db9ed7ab-6599-4b62-b12d-276836a633cc","information":"Shared PGlite test server pattern for swarm-mail dramatically speeds up test suite execution. \n\n**ROOT CAUSE:** Each test creating new PGlite instance requires ~500ms WASM initialization. With 50+ tests, this adds 25+ seconds of pure overhead.\n\n**SOLUTION:** Share ONE PGlite instance across entire test suite via test-server.ts module-level state:\n\n```typescript\n// test-server.ts\nlet db: PGlite | null = null;\n\nexport async function startTestServer() {\n if (db) return { db }; // Reuse existing\n db = await PGlite.create({ extensions: { vector } });\n await runMigrations(db);\n return { db };\n}\n\nexport async function resetTestDatabase() {\n if (!db) throw new Error(\"Test server not started\");\n await db.exec(\"TRUNCATE agents, messages, beads, ... CASCADE\");\n}\n\nexport function getTestDb() {\n if (!db) throw new Error(\"Test server not started\");\n return db;\n}\n```\n\n**Test Pattern:**\n```typescript\nbeforeAll(async () => {\n await startTestServer(); // ONE init\n});\n\nbeforeEach(async () => {\n await resetTestDatabase(); // TRUNCATE (~10ms) instead of recreate (~500ms)\n});\n\nafterAll(async () => {\n await stopTestServer();\n});\n```\n\n**MEASURED RESULTS (hive/adapter.test.ts, 25 tests):**\n- Before: 8.63s (345ms per test)\n- After: 0.96s (38ms per test)\n- **~9x speedup, 90% reduction in test time**\n\n**KEY DECISIONS:**\n1. Abandoned PGLiteSocketServer approach - socket overhead added complexity without benefit\n2. Direct shared PGlite instance is simpler and faster\n3. TRUNCATE CASCADE between tests provides clean isolation\n4. Module-level state works perfectly for process-scoped test suites\n\n**GOTCHAS:**\n- Must TRUNCATE in correct order due to foreign keys (use CASCADE)\n- Must run migrations once at startup, not per test\n- Close cleanup is critical: `db.exec(\"CHECKPOINT\")` before `db.close()`\n\n**APPLICABILITY:** This pattern works for any test suite using PGlite where WASM init dominates test time. Expected 10-20x speedup for larger test suites (100+ tests).","created_at":"2025-12-19T15:12:21.422Z","tags":"testing,pglite,performance,test-patterns,swarm-mail,speedup"}
{"id":"dc749a41-96ec-4ab2-a163-f1639857f9bd","information":"{\"id\":\"pattern-1766074743915-fstlv8\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:19:03.915Z\",\"updated_at\":\"2025-12-18T16:19:03.915Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:19:04.142Z","metadata":"{\"id\":\"pattern-1766074743915-fstlv8\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"dda2aaf9-9eb3-4a54-8eb8-9894743448af","information":"Kent C. Dodds unified accounts feature request (Dec 2024): Kent wants to unify accounts across EpicAI.pro, EpicWeb.dev, and EpicReact.dev. Use case: User buys Epic React, starts Epic Workshop App tutorial, shouldn't have to create a separate EpicWeb.dev account. Current pain: suboptimal experience forcing account creation on different domain. Alternative considered: local tracking (also suboptimal). This validates the need for creator-scoped unified identity in @badass architecture.","created_at":"2025-12-18T15:32:32.673Z"}
{"id":"e03ede8f-8cc6-467f-bc19-60a54cb07e2e","information":"WorkerHandoff integration: task_id must have 3+ segments (project-slug-hash). Tests with bd-123 format fail. Use test-swarm-plugin-lf2p4u-name123 instead.","created_at":"2025-12-18T17:36:15.053Z"}
{"id":"e0a89793-1dd3-4061-9621-524a5ae92841","information":"Documentation audit for BeadsAdapter migration completed 2025-01-16. Searched all docs in packages/opencode-swarm-plugin/docs/ for stale references to: bd CLI commands, Go implementation, SQLite, old architecture. Found 1 stale reference: swarm-mail-architecture.md line 519 incorrectly compared Agent Mail's \"SQLite file\" to Swarm Mail's PGLite. Fixed to \"PGLite (embedded Postgres)\" for accuracy. All other docs (ADR-001, ADR-002, ADR-003, ROADMAP, subagent-coordination-patterns.md, swarm-mail-architecture.md) correctly reference: PGLite event sourcing, BeadsAdapter from swarm-mail package, .beads/issues.jsonl sync. No references to deprecated bd CLI or Go implementation found.","created_at":"2025-12-17T01:00:46.822Z","tags":"documentation,audit,BeadsAdapter,migration,PGLite,swarm-mail"}
{"id":"e122ede7-8a62-4489-9742-3234b89a8fb2","information":"SWARM-MAIL ADAPTER PATTERN DECISION (Dec 2025): Extracting swarm-mail as standalone package using adapter pattern from coursebuilder. Key design: 1) DatabaseAdapter interface abstracts SQL operations (query, exec, transaction), 2) SwarmMailAdapter interface defines all swarm-mail operations, 3) createSwarmMailAdapter(db) factory accepts injected database, 4) PGLite convenience layer provides getSwarmMail() singleton for simple usage. Benefits: portable (works with PGLite, Postgres, Turso), testable (inject in-memory), shareable (one db across consumers), decoupled (swarm-mail doesn't own db lifecycle). Pattern learned from github.com/badass-courses/course-builder/tree/main/packages/adapter-drizzle which uses table function injection for multi-tenant prefixing.","created_at":"2025-12-15T00:02:39.759Z"}
{"id":"e18f64a6-d971-4ef8-8d09-02f3f7a445a5","information":"Schema file renaming with backward compatibility pattern: When renaming core schema files like Bead to Cell, create new file with updated names first, export all primary types and schemas with new names, then add backward compatibility section at bottom with deprecated JSDoc tags. Use pattern: export const OldName equals NewName and export type OldType equals NewType. This allows gradual migration across codebase without breaking existing imports. Delete old file only after new file is complete with aliases. For opencode-swarm-plugin Bead to Cell hive metaphor migration.","created_at":"2025-12-17T16:39:35.501Z"}
{"id":"e1eb1c68-a71a-4c00-beb6-7310deffc166","information":"Documentation file rename with terminology update pattern: Renamed beads.mdx → hive.mdx in docs, updated all tool names (beads_* → hive_*), changed terminology (bead/beads → cell/cells), updated directory references (.beads/ → .hive/), and added backward compatibility note mentioning beads_* aliases still work but are deprecated. Key insight: When renaming documentation for deprecated APIs, ALWAYS include a migration note at the top explaining the old names still work but show warnings. This helps users transition smoothly without breaking existing code. File path was apps/web/content/docs/packages/opencode-plugin/","created_at":"2025-12-18T18:37:20.197Z","metadata":"{\"context\":\"v0.31 beads→hive rename\"}"}
{"id":"e23e3f30-6e9f-4eb4-858f-2ac50f6e17ad","information":"@badass Multi-Database Testing Pattern (Dec 2024): Adopted from course-builder. Key pattern is PARAMETERIZED TEST SUITES.\n\n**Core Pattern:**\n```typescript\n// Write once in packages/db/test/adapter-tests.ts\nexport function runAdapterTests(options: {\n adapter: Adapter\n db: { connect, disconnect, user, session, ... }\n fixtures: TestFixtures\n}) {\n beforeAll(() => options.db.connect())\n afterAll(() => options.db.disconnect())\n \n test('creates user', async () => {\n const user = await options.adapter.createUser(options.fixtures.user)\n const dbUser = await options.db.user(user.id)\n expect(dbUser).toEqual(user)\n })\n}\n\n// Run against Postgres\nrunAdapterTests({ adapter: postgresAdapter, db: postgresHelpers, fixtures })\n\n// Run against SQLite\nrunAdapterTests({ adapter: sqliteAdapter, db: sqliteHelpers, fixtures })\n```\n\n**Key Files from course-builder:**\n- packages/utils/adapter.ts:84 - runBasicTests() (766 lines)\n- packages/adapter-drizzle/test/fixtures.ts - Shared test data\n- packages/adapter-drizzle/test/mysql/test.sh - Shell script for DB lifecycle\n\n**DRY Patterns:**\n1. Parameterized test suites (write once, run against multiple DBs)\n2. Shared fixtures file (single source of truth for test data)\n3. Shell scripts for database lifecycle (Docker container management)\n4. Shared vitest config via tooling package\n5. Optional test methods pattern (core required, extended optional)\n\n**Gotchas:**\n- Drizzle truncates milliseconds - zero them out in fixtures\n- Cleanup order matters - delete children before parents (FK constraints)\n- Test suite functions use vitest globals (side effects, not pure)","created_at":"2025-12-18T16:36:29.114Z"}
{"id":"e490aaba-d992-4f89-9fca-9855979a86e5","information":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:16:25.895Z\",\"updated_at\":\"2025-12-14T02:16:25.895Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:16:26.095Z","metadata":"{\"id\":\"pattern-1765678585895-6ayv7z\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"e6ed4f0d-5c0a-46c0-871f-dbfd9167e0ba","information":"ADR-008 App Template Architecture decision: CLI scaffolding via bunx create-badass-app using Bun's native file I/O (no degit/giget dependencies). Database-backed email list via adapter pattern (default: SQLite, swappable to ConvertKit/etc). ContentResource pattern from ADR-003 for blog posts + collections. Next.js 16 + Tailwind + @badass/ui default stack. Template lives in packages/create-badass-app/templates/. Coordinated with ADR009Writer on shared docs/adr/README.md - both agents added their entries to avoid conflicts.","created_at":"2025-12-18T23:56:41.130Z","tags":"adr,app-template,scaffolding,cli,email-capture,adapter-pattern,swarm-coordination"}
{"id":"e77b7ee9-ceea-4f0f-8314-30e64330d6c3","information":"DRIZZLE ORM + PGLITE FEASIBILITY ANALYSIS (Dec 2025):\n\nVERDICT: FEASIBLE via hybrid/coexistence approach.\n\nKEY FINDINGS:\n1. Drizzle has first-class PGLite support (drizzle-orm/pglite driver since v0.30.6)\n2. Can wrap existing PGLite instance: drizzle({ client: existingPGlite })\n3. Same API works on full PostgreSQL - future migration is trivial\n4. All PostgreSQL features work: JSONB, SERIAL, indexes, foreign keys, transactions\n\nRECOMMENDED APPROACH:\n- Keep existing migrations.ts for current tables\n- Use Drizzle for new features going forward\n- Implement DrizzleDatabaseAdapter wrapper to satisfy existing DatabaseAdapter interface\n- Gradual migration of high-churn tables over time\n\nEFFORT ESTIMATE: ~87 hours (2-3 weeks) for full migration\n\nWRAPPER PATTERN:\nclass DrizzleDatabaseAdapter implements DatabaseAdapter {\n constructor(private db: PgliteDatabase) {}\n async query<T>(sql, params) { return { rows: (await this.db.execute(sql.raw(sql, ...params))).rows }; }\n async transaction<T>(fn) { return this.db.transaction(tx => fn(new DrizzleDatabaseAdapter(tx))); }\n}\n\nREFERENCE: Course Builder has working adapter-drizzle package at badass-courses/course-builder\n\nGOTCHAS:\n- Drizzle doesn't auto-generate down migrations (rollback support is partial)\n- Drizzle uses template literals not $1,$2 params - wrapper must translate\n- Bundle size adds ~50kb (negligible for Node.js)","created_at":"2025-12-16T20:23:38.983Z"}
{"id":"e7e92b71-82db-4a4f-a9b0-b4b4549c5a0e","information":"Beads validation and operations implementation completed for opencode-swarm-plugin-it2ke.19. Ported validation rules from steveyegge/beads internal/types/types.go: title max 500 chars, priority 0-4, status transition state machine (open->in_progress/blocked/closed, closed->open reopen, tombstone permanent). Operations layer provides high-level CRUD (createBead, getBead, updateBead, closeBead, reopenBead, deleteBead, searchBeads) wrapping BeadsAdapter with validation. All 41 validation tests pass. Operations tests reveal priority=0 handling issue - event stores priority correctly but projection defaults to 2, likely due to event.priority OR 2 treating 0 as falsy. Fix: use nullish coalescing instead for proper undefined handling.","created_at":"2025-12-16T22:19:50.241Z","tags":"beads,validation,operations,event-sourcing,priority-handling,steveyegge-port"}
{"id":"e9133cb2-0d3a-4ab6-8528-3b1f4a2ad306","information":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-13T22:48:36.548Z\",\"updated_at\":\"2025-12-13T22:48:36.548Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-13T22:48:36.768Z","metadata":"{\"id\":\"pattern-1765666116548-wxhlb0\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"e9809d04-44d9-4ecb-9eef-a9a9ad45f4d8","information":"Verbose CLI output pattern for file operations: Created writeFileWithStatus(), mkdirWithStatus(), and rmWithStatus() helpers for swarm setup command. Each helper logs operation status (created/updated/unchanged for files, directory creation, file removal) using @clack/prompts logger. Pattern includes FileStats tracking to show summary at end: \"Setup complete: X files (Y created, Z updated, A unchanged)\". Key insight: Users need visibility into what changes during setup, especially for \"reinstall\" scenarios. Implementation: Check if file exists, compare content if exists, return status, log with appropriate p.log method (success for changes, message/dim for unchanged). This pattern is reusable for any CLI command that manipulates files.","created_at":"2025-12-18T16:52:09.530Z"}
{"id":"ea487488-f609-4deb-b9f3-41282259a99d","information":"{\"id\":\"test-1765770963304-2pbmfn58gpr\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:56:03.304Z\",\"raw_value\":1}","created_at":"2025-12-15T03:56:03.678Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:56:03.304Z\"}"}
{"id":"ed0f1389-a05d-4324-9770-ba00ecaae6b5","information":"@badass Payments Decision (Dec 2024): Creator-scoped payments. Creators sharing a database share a Stripe account. Purchase on epicreact.dev is visible on epicweb.dev. Entitlements sync across sites within a creator's ecosystem. This matches Kent's unified accounts use case.","created_at":"2025-12-18T15:53:59.795Z"}
{"id":"ed97f39e-de6e-4f29-a608-2d13235d57ae","information":"Implemented swarm_plan_interactive tool for Socratic planning phase before decomposition. Tool has 4 modes: (1) socratic - full interactive with one question at a time, alternatives, recommendation, (2) fast - skip questions, go straight to decomposition, (3) auto - auto-select based on keywords, (4) confirm-only - show decomposition then yes/no. Key implementation notes: semantic memory is accessed via OpenCode global tools not direct import, uses formatMemoryQueryForDecomposition from learning module which returns {query, limit, instruction} object, integrates with existing selectStrategy and STRATEGIES from swarm-strategies module. Phase state machine: questioning → alternatives → recommendation → ready. Each phase returns SocraticPlanOutput JSON with phase, mode, ready_to_decompose flag, and next_action instruction.","created_at":"2025-12-16T16:21:02.123Z","tags":"swarm-planning,socratic-questioning,interactive-planning"}
{"id":"edeae6d6-7656-4a0a-9481-7b295b98dcb7","information":"GREMLIN project structure (Dec 2024): Monorepo at /Users/joel/Code/badass-courses/gremlin with 9 ADRs documenting architecture. Three prime directives in AGENTS.md: (1) README Commandment - keep it current, it's marketing (2) ADR Commandment - document decisions BEFORE implementing (3) TDD Commandment - Red→Green→Refactor, no exceptions. Stack: Bun runtime, Turborepo, Vitest+Effect, Playwright, Biome, Next.js 16. Packages: @badass/core (router, schemas), @badass/db (Drizzle adapter). 159 unit tests + 2 E2E tests. CI/CD with intelligent E2E (Playwright sharding, change detection). Legacy course-builder as git submodule for reference patterns.","created_at":"2025-12-19T00:16:10.866Z","tags":"gremlin,project-structure,monorepo,agents-md,prime-directives,stack"}
{"id":"ee586e5a-5aa2-4b71-904a-a4aee468076d","information":"{\"id\":\"pattern-1766074457007-guqdx7\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T16:14:17.007Z\",\"updated_at\":\"2025-12-18T16:14:17.007Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T16:14:17.299Z","metadata":"{\"id\":\"pattern-1766074457007-guqdx7\",\"kind\":\"pattern\",\"is_negative\":false}"}
{"id":"ef25dc27-ef8f-41c9-8f44-4ef31ababa22","information":"Course Builder Drizzle Adapter Pattern for \"bring your own database\" sharing:\n\n1. **Table Function Injection**: Adapter accepts BOTH db instance AND table creator function. `DrizzleAdapter(db, tableFn)` - db is shared, tableFn is consumer-specific for namespacing.\n\n2. **Schema Factory Pattern**: Export `getSchema(tableFn)` factory, NOT concrete tables. Consumer calls factory with their prefixed table creator. Adapter never owns concrete table definitions.\n\n3. **Database Instance Injection**: Adapter stores reference to consumer's db instance, uses it for all queries. Adapter doesn't create db - consumer creates and passes it in.\n\n4. **Multi-Project Schema via Drizzle's tableCreator**: `mysqlTableCreator((name) => 'prefix_${name}')` enables table prefixing. Multiple apps share same database with isolated namespaces (e.g., `zER_users`, `zEW_users` in same db).\n\n5. **Consumer Usage Pattern**: Consumer creates pgTable with prefix, calls schema factory, creates db with merged schemas, passes db+tableFn to adapter.\n\nThis enables extracting packages like swarm-mail as pure libraries that integrate into consumer's database rather than owning their own instance. Key insight: the library is a \"guest\" in the consumer's database, not a \"host\".","created_at":"2025-12-14T23:56:11.298Z"}
{"id":"f24305d6-ce9c-4b32-84f1-cc6fbafa5899","information":"Effect-TS Layer routing pattern for daemon-aware connection fallback in pdf-library project.\n\n**Problem**: Database service needs to support both daemon mode (Unix socket via DatabaseClient) and single-process mode (direct PGlite) transparently.\n\n**Solution**: Use Layer.unwrapEffect to check daemon status at Layer creation time and route to appropriate implementation:\n\n```typescript\nexport const DatabaseLive = Layer.unwrapEffect(\n Effect.gen(function* () {\n const config = LibraryConfig.fromEnv();\n \n const daemonConfig = {\n socketPath: config.libraryPath,\n pidPath: `${config.libraryPath}/daemon.pid`,\n dbPath: config.dbPath,\n };\n\n const running = yield* Effect.promise(() => isDaemonRunning(daemonConfig));\n\n if (running) {\n // Route to DatabaseClient (Unix socket connection)\n return Layer.effect(\n Database,\n DatabaseClient.make(config.libraryPath).pipe(\n Layer.build,\n Effect.flatMap((context) => Effect.succeed(Context.get(context, DatabaseClient)))\n )\n );\n } else {\n // Route to direct PGlite implementation\n return DirectDatabaseLive;\n }\n })\n);\n```\n\n**Key insights**:\n- Layer.unwrapEffect allows decision at runtime (daemon check)\n- Layer.build + Context.get extracts DatabaseClient implementation\n- Compatible interfaces (Database and DatabaseClient) allow transparent routing\n- Tests verify fallback works when daemon not running\n\n**Why Layer.effect + Layer.build**:\nNeed to \"convert\" DatabaseClient layer to provide Database service. Pattern:\n1. Build DatabaseClient layer to get context\n2. Extract DatabaseClient implementation from context via Context.get\n3. Wrap in Layer.effect(Database, ...) to provide Database tag\n\nThis provides multi-process safety via daemon while maintaining single-process simplicity as fallback.","created_at":"2025-12-19T15:15:49.858Z","tags":"effect-ts,layer,routing,daemon,fallback,unix-socket,pglite"}
|
|
214
|
+
{"id":"f7f941bd-2467-49a2-b948-bba33ee263b1","information":"@badass Inngest Decision (Dec 2024): Site-isolated Inngest. Each site has its own Inngest app despite database sharing. Simpler blast radius, no cross-site event coordination complexity. Video processing, email jobs, etc. are site-scoped.","created_at":"2025-12-18T15:54:00.825Z"}
|
|
215
|
+
{"id":"fa0ede27-8993-4b8f-af9e-a1496684107e","information":"{\"id\":\"test-1765664066304-cw34qmxbxjm\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:14:26.304Z\",\"raw_value\":1}","created_at":"2025-12-13T22:14:26.517Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:14:26.304Z\"}"}
|
|
216
|
+
{"id":"fb2f3480-9e10-443c-b9e9-755e83f648d8","information":"@badass Architecture Session Checkpoint (Dec 2024) - Ready to decompose into implementation. LOCKED DECISIONS: (1) CLI: Multi-site PlanetScale/Stripe pattern, ~/.badass/config.json, (2) DB: Creator-level sharing enabled, (3) Auth: Hive+Spoke model - creator designates one site as auth hive, spokes redirect there, (4) Cross-domain SSO: Hive acts as IdP since BetterAuth crossSubDomainCookies only works for subdomains not different TLDs, (5) Local app auth: RFC 8628 device flow (reference impl in course-builder ai-hero), (6) All core framework features in @badass/* packages. OPEN QUESTIONS for next session: (1) Content Model - posts vs courses/modules/lessons schema, (2) Video Pipeline - Mux integration (academy-content reference), (3) Payments - Stripe integration, cross-site purchases, (4) Event System - Inngest patterns. KEY REFERENCES: course-builder apps/ai-hero/src/app/oauth/device/ for device flow, vercel/academy-content for CLI+video pipeline, Kent's unified accounts request as driving use case.","created_at":"2025-12-18T15:42:07.722Z"}
|
|
217
|
+
{"id":"fb7adce8-e2f1-493c-beb6-8d3736a00b17","information":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-14T02:18:30.523Z\",\"updated_at\":\"2025-12-14T02:18:30.523Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-14T02:18:30.785Z","metadata":"{\"id\":\"pattern-1765678710523-4bqqvd\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
218
|
+
{"id":"fbdec046-f92e-47dc-a80a-26e1a6c5fe8f","information":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"content\":\"Test pattern for semantic search\",\"kind\":\"pattern\",\"is_negative\":false,\"success_count\":0,\"failure_count\":0,\"created_at\":\"2025-12-18T17:47:52.119Z\",\"updated_at\":\"2025-12-18T17:47:52.119Z\",\"tags\":[],\"example_beads\":[]}","created_at":"2025-12-18T17:47:52.415Z","metadata":"{\"id\":\"pattern-1766080072119-xmi0cf\",\"kind\":\"pattern\",\"is_negative\":false}"}
|
|
219
|
+
{"id":"fdf514c6-3fba-4361-b5f4-fd7b5d023985","information":"{\"id\":\"test-1765771077694-7w6dasddwz8\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-15T03:57:57.694Z\",\"raw_value\":1}","created_at":"2025-12-15T03:57:58.059Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-15T03:57:57.694Z\"}"}
|
|
220
|
+
{"id":"ffb8e28a-303d-4941-afe7-bf21f69656fb","information":"{\"id\":\"test-1765666114922-71ihlfel1gc\",\"criterion\":\"type_safe\",\"type\":\"helpful\",\"timestamp\":\"2025-12-13T22:48:34.922Z\",\"raw_value\":1}","created_at":"2025-12-13T22:48:35.124Z","metadata":"{\"type\":\"helpful\",\"bead_id\":\"\",\"criterion\":\"type_safe\",\"timestamp\":\"2025-12-13T22:48:34.922Z\"}"}
 {"id":"mem_mjbteazb_g1swqjm","information":"Test memory for tools integration","created_at":"2025-12-18T19:09:38.711Z","tags":"test"}
 {"id":"mem_mjbteb35_o8xwaxn","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-18T19:09:38.849Z"}
 {"id":"mem_mjbteo3a_mnd325l","information":"Test memory for tools integration","created_at":"2025-12-18T19:09:55.702Z","tags":"test"}
@@ -7,4 +227,38 @@
 {"id":"mem_mjc4u7uc_ida1vo0","information":"Test memory for tools integration","created_at":"2025-12-19T00:29:56.916Z","tags":"test"}
 {"id":"mem_mjc4u88t_692zb2b","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-19T00:29:57.437Z"}
 {"id":"mem_mjc9u9iu_0p65p35","information":"Test memory for tools integration","created_at":"2025-12-19T02:49:57.174Z","tags":"test"}
-{"id":"mem_mjc9u9nv_qp4wu75","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-19T02:49:57.355Z"}
+{"id":"mem_mjc9u9nv_qp4wu75","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-19T02:49:57.355Z"}
+{"id":"mem_mjg2bbr7_8dzcx47","information":"Test memory for tools integration","created_at":"2025-12-21T18:30:20.995Z","tags":"test"}
+{"id":"mem_mjg2bflw_cvt2e62","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-21T18:30:25.988Z"}
+{"id":"mem_mjg6xq4w_sqam2z5","information":"Test memory for tools integration","created_at":"2025-12-21T20:39:44.528Z","tags":"test"}
+{"id":"mem_mjg6xqcv_4nbnupf","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-21T20:39:44.815Z"}
+{"id":"mem_mjg7iixj_zdq1enx","information":"Test memory for tools integration","created_at":"2025-12-21T20:55:54.967Z","tags":"test"}
+{"id":"mem_mjg7imsa_7jlc6jq","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-21T20:55:59.962Z"}
+{"id":"mem_mjgg4gau_4u0p1n2","information":"Test memory for adapter wiring verification","created_at":"2025-12-22T00:56:54.918Z","tags":"test,memory"}
+{"id":"mem_mjgg4ghm_dufh5fk","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-22T00:56:55.162Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjgg4gld_jcfjqld","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-22T00:56:55.297Z","tags":"test,verification"}
+{"id":"mem_mjiy25rb_4duhcdj","information":"Test memory for tools integration","created_at":"2025-12-23T18:54:33.383Z","tags":"test"}
+{"id":"mem_mjiy269d_1vulu1z","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-23T18:54:34.033Z"}
+{"id":"mem_mjiy2a4w_lgi763j","information":"Test memory for adapter wiring verification","created_at":"2025-12-23T18:54:39.056Z","tags":"test,memory"}
+{"id":"mem_mjiy2aa6_rzd6t98","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-23T18:54:39.246Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjiy2adk_rgj3nj2","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-23T18:54:39.368Z","tags":"test,verification"}
+{"id":"mem_mjk7zaow_tbfn5xq","information":"Test memory for tools integration","created_at":"2025-12-24T16:20:02.144Z","tags":"test"}
+{"id":"mem_mjk7zaxu_q5dnmh3","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T16:20:02.466Z"}
+{"id":"mem_mjk7zeui_q42ye4o","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T16:20:07.530Z","tags":"test,memory"}
+{"id":"mem_mjk7zeyo_awoip6a","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T16:20:07.680Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjk7zf2v_g2oe1pb","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:20:07.831Z","tags":"test,verification"}
+{"id":"mem_mjk8090f_5tv3s34","information":"Test memory for tools integration","created_at":"2025-12-24T16:20:46.624Z","tags":"test"}
+{"id":"mem_mjk8096p_elpumun","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T16:20:46.849Z"}
+{"id":"mem_mjk80bna_ylpyr22","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T16:20:50.038Z","tags":"test,memory"}
+{"id":"mem_mjk80bpa_97wmmwn","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T16:20:50.110Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjk80br5_ha4y3zt","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:20:50.177Z","tags":"test,verification"}
+{"id":"mem_mjk81a5f_ep969wf","information":"Test memory for tools integration","created_at":"2025-12-24T16:21:34.755Z","tags":"test"}
+{"id":"mem_mjk81ac6_5y5krag","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T16:21:34.998Z"}
+{"id":"mem_mjk81cmm_pxhe6tv","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T16:21:37.966Z","tags":"test,memory"}
+{"id":"mem_mjk81cqo_v3w2110","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T16:21:38.112Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjk81cua_hhxy8qi","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:21:38.242Z","tags":"test,verification"}
+{"id":"mem_mjk91g18_y5s6wkg","information":"Test memory for tools integration","created_at":"2025-12-24T16:49:41.996Z","tags":"test"}
+{"id":"mem_mjk91ge8_39uareg","information":"Findable test memory with unique keyword xyztest123","created_at":"2025-12-24T16:49:42.464Z"}
+{"id":"mem_mjk91k4k_a255x4y","information":"Test memory for adapter wiring verification","created_at":"2025-12-24T16:49:47.300Z","tags":"test,memory"}
+{"id":"mem_mjk91kac_7tn2d8n","information":"OAuth refresh tokens need 5min buffer before expiry","created_at":"2025-12-24T16:49:47.508Z","metadata":"{\"raw\":\"auth,tokens,oauth\"}","tags":"auth,integration-test"}
+{"id":"mem_mjk91knd_unxg7d7","information":"Smoke test verified full tool adapter wiring works end-to-end","created_at":"2025-12-24T16:49:47.977Z","tags":"test,verification"}