neotoma 0.3.1 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,8 +1,8 @@
 # Neotoma: Truth Layer for Persistent Agent Memory

-![Neotoma banner](https://raw.githubusercontent.com/markmhendrickson/neotoma/dev/docs/assets/neotoma_banner.png)
+![Neotoma banner](https://raw.githubusercontent.com/markmhendrickson/neotoma/main/docs/assets/neotoma_banner.png)

-Neotoma is a **truth layer**: an explicit, inspectable, replayable substrate for personal data that AI agents read and write. When agents act, personal data becomes state. Neotoma treats that state the way production systems do: contract-first, deterministic, immutable, and queryable.
+[Neotoma](https://neotoma.io) is a **truth layer**: an explicit, inspectable, replayable substrate for personal data that AI agents read and write. When agents act, personal data becomes state. Neotoma treats that state the way production systems do: contract-first, deterministic, immutable, and queryable.

 **Why it exists:** The thing that keeps breaking in agentic systems is not intelligence but trust. Memory changes implicitly, context drifts, and you cannot see what changed or replay it. Neotoma provides the missing primitive: user-controlled, deterministic, inspectable memory with full provenance, so you can trust agents with real, ongoing state.

@@ -99,7 +99,7 @@ Neotoma stores personal data and requires secure configuration.

 **What's implemented:** Sources-first architecture with content-addressed storage, dual-path storing (file uploads + agent interactions), observations architecture, entity resolution with hash-based IDs, schema registry system, auto-enhancement, timeline generation, optional entity semantic search for `retrieve_entities` and `retrieve_entity_by_identifier` (local embeddings), MCP integration (ChatGPT, Claude, Cursor), full provenance and audit trail, React frontend, CLI. See [Release roadmap](#release-roadmap) and [docs/releases/](docs/releases/) for details.

-**Next steps:** Review uncommitted changes (262 files), apply pending migrations, audit test suite, plan v0.4.0 realistically based on current baseline.
+**Next steps:** Review current uncommitted changes, apply pending migrations, audit the test suite, and plan v0.4.0 realistically based on the current baseline.

 ---

@@ -143,15 +143,15 @@ Breaking changes should be expected.

 ### Completed Releases

-- **v0.2.0** – Minimal storing + correction loop (`completed`). [docs/releases/v0.2.0/](docs/releases/v0.2.0/)
-- **v0.2.1** – Entity resolution enhancement (`completed`). [docs/releases/v0.2.1/](docs/releases/v0.2.1/)
-- **v0.2.2** – Development foundations (`completed`). [docs/releases/v0.2.2/](docs/releases/v0.2.2/)
-- **v0.2.15** – Vocabulary alignment + API simplification (`completed`). [docs/releases/v0.2.15/](docs/releases/v0.2.15/)
+- **v0.2.0** – Minimal ingestion + correction loop (`in_testing`). [docs/releases/v0.2.0/](docs/releases/v0.2.0/)
+- **v0.2.1** – Documentation generation system (`in_progress`). [docs/releases/v0.2.1/](docs/releases/v0.2.1/)
+- **v0.2.2** – `list_capabilities` MCP action (`planning`). [docs/releases/v0.2.2/](docs/releases/v0.2.2/)
+- **v0.2.15** – Complete architecture migration (`implemented`, pending migrations). [docs/releases/v0.2.15/](docs/releases/v0.2.15/)
 - **v0.3.0** – Reconciliation release (`completed`). [docs/releases/v0.3.0/](docs/releases/v0.3.0/)

 ### Future Planning

-Future releases will be planned realistically based on the v0.3.0 baseline. Previous aspirational releases (v0.4.0 through v2.1.0) have been archived to [docs/releases/archived/aspirational/](docs/releases/archived/aspirational/) and can be revisited for future planning.
+Future releases will be planned realistically based on the v0.3.0 baseline. Previous aspirational releases (v0.4.0 through v2.2.0) have been archived to [docs/releases/archived/aspirational/](docs/releases/archived/aspirational/) and can be revisited for future planning.

 Full release index: [docs/releases/](docs/releases/).

@@ -219,7 +219,7 @@ After installation, configure MCP for your AI tool:
 neotoma mcp config
 ```

-Marketing site: **https://neotoma.io** (GitHub Pages, deployed from **dev**; custom domain set in repo Settings → Pages). See [Deployment](docs/infrastructure/deployment.md#marketing-site-neotomaio).
+Marketing site: **https://neotoma.io** (GitHub Pages, deployed from **main**; custom domain set in repo Settings → Pages). See [Deployment](docs/infrastructure/deployment.md#marketing-site-neotomaio).

 ### Option 2: Clone repository (for development)

package/dist/actions.js CHANGED
@@ -1428,7 +1428,7 @@ app.get("/schemas/:entity_type", async (req, res) => {
 // Use validated user ID if not provided in query
 userId = userId || validated.userId;
 }
-catch (authError) {
+catch {
 // Not a valid token - continue without user_id (will try global schema)
 }
 }
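The `catch (authError)` → `catch` change above is ES2019's optional catch binding: when the handler never reads the error object, the parameter can be dropped entirely, which also silences unused-variable lint warnings. A minimal sketch of the same pattern, using a hypothetical `tryParse` helper (not part of the package):

```javascript
// ES2019 optional catch binding: the error parameter may be omitted
// when the handler never inspects it, as in the schema-lookup fallback.
function tryParse(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Not valid JSON: fall back to null instead of rethrowing.
    return null;
  }
}

console.log(tryParse('{"a": 1}').a); // 1
console.log(tryParse("not json"));   // null
```

Behavior is unchanged by the edit; only the unused binding is removed.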
@@ -2088,7 +2088,7 @@ async function storeUnstructuredForApi(params) {
 };
 if (interpret && storageResult.sourceId) {
 const { extractTextFromBuffer, getPdfFirstPageImageDataUrl, getPdfWorkerDebug } = await import("./services/file_text_extraction.js");
-const { extractWithLLM, extractWithLLMFromImage, extractFromCSVWithChunking, isLLMExtractionAvailable, } = await import("./services/llm_extraction.js");
+const { extractWithLLM, extractWithLLMFromImage, extractFromCSVWithChunking: _extractFromCSVWithChunking, isLLMExtractionAvailable, } = await import("./services/llm_extraction.js");
 const rawText = await extractTextFromBuffer(fileBuffer, mimeType, originalFilename || "file");
 const isCsv = mimeType?.toLowerCase() === "text/csv";
 if (!isCsv && !isLLMExtractionAvailable()) {
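The hunk above renames the unused `extractFromCSVWithChunking` binding to `_extractFromCSVWithChunking` while destructuring the dynamic import. A leading underscore is a common convention for keeping a binding around while satisfying linters configured to ignore `^_`-prefixed names (for example, ESLint's `no-unused-vars` with `varsIgnorePattern`). A small sketch of the rename-during-destructuring syntax, with a hypothetical module object standing in for the import:

```javascript
// Destructuring with rename: `unusedHelper` is bound locally as
// `_unusedHelper`, a name that lint rules ignoring "^_" will skip.
const llmExtraction = {
  extractWithLLM: (text) => `extracted: ${text}`,
  unusedHelper: () => 42,
};

const { extractWithLLM, unusedHelper: _unusedHelper } = llmExtraction;

console.log(extractWithLLM("receipt")); // "extracted: receipt"
```

The module's exports are untouched; only the local name changes, so the runtime behavior of `storeUnstructuredForApi` is identical.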