@swarmvaultai/engine 0.1.5 → 0.1.7

package/README.md CHANGED
@@ -25,16 +25,21 @@ import {
  defaultVaultConfig,
  defaultVaultSchema,
  exploreVault,
+ exportGraphHtml,
  importInbox,
  ingestInput,
  initVault,
  installAgent,
  getWebSearchAdapterForTask,
  lintVault,
+ listSchedules,
  loadVaultConfig,
  loadVaultSchema,
+ loadVaultSchemas,
  queryVault,
+ runSchedule,
  searchVault,
+ serveSchedules,
  startGraphServer,
  startMcpServer,
  watchVault,
@@ -46,22 +51,24 @@ The engine also exports the main runtime types for providers, graph artifacts, p
  ## Example
 
  ```ts
- import { compileVault, importInbox, initVault, loadVaultSchema, queryVault, watchVault } from "@swarmvaultai/engine";
+ import { compileVault, exploreVault, exportGraphHtml, importInbox, initVault, loadVaultSchemas, queryVault, watchVault } from "@swarmvaultai/engine";
 
  const rootDir = process.cwd();
 
- await initVault(rootDir);
- const schema = await loadVaultSchema(rootDir);
- console.log(schema.path);
+ await initVault(rootDir, { obsidian: true });
+ const schemas = await loadVaultSchemas(rootDir);
+ console.log(schemas.root.path);
  await importInbox(rootDir);
- await compileVault(rootDir);
+ await compileVault(rootDir, {});
 
- const saved = await queryVault(rootDir, "What changed most recently?", true);
- console.log(saved.savedTo);
+ const saved = await queryVault(rootDir, { question: "What changed most recently?" });
+ console.log(saved.savedPath);
 
- const exploration = await exploreVault(rootDir, "What should I investigate next?", 3);
+ const exploration = await exploreVault(rootDir, { question: "What should I investigate next?", steps: 3, format: "report" });
  console.log(exploration.hubPath);
 
+ await exportGraphHtml(rootDir, "./exports/graph.html");
+
  const watcher = await watchVault(rootDir, { lint: true });
  ```
 
@@ -72,9 +79,13 @@ Each workspace carries a root markdown file named `swarmvault.schema.md`.
  The engine treats that file as vault-specific operating guidance for compile and query work. Currently:
 
  - `initVault()` creates the default schema file
- - `loadVaultSchema()` resolves the canonical file and legacy `schema.md` fallback
+ - `initVault()` also creates a human-only `wiki/insights/` area
+ - `initVault({ obsidian: true })` can also seed a minimal `.obsidian/` workspace
+ - `swarmvault.config.json` can define `projects` with root matching and optional per-project schema files
  - compile and query prompts include the schema content
  - generated pages store `schema_hash`
+ - generated pages also carry lifecycle metadata such as `status`, `created_at`, `updated_at`, `compiled_from`, `managed_by`, and `project_ids`
+ - saved visual outputs also carry `output_assets`
  - `lintVault()` marks generated pages stale when the schema changes
 
  ## Provider Model
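The diff above does not show the shape of the new `projects` config entry; as a purely hypothetical sketch of what such an entry in `swarmvault.config.json` could look like (every field name here is an assumption for illustration, not the package's documented schema):

```json
{
  "projects": [
    {
      "id": "engine",
      "root": "raw/sources/engine",
      "schema": "schemas/engine.schema.md"
    }
  ]
}
```

The bullets suggest only that projects pair a root-matching rule with an optional per-project schema file layered over the root schema.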
@@ -98,6 +109,7 @@ Providers are capability-driven. Each provider declares support for features suc
  - `embeddings`
  - `streaming`
  - `local`
+ - `image_generation`
 
  This matters because many "OpenAI-compatible" backends only implement part of the OpenAI surface.
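To illustrate the capability-gating idea, here is a self-contained TypeScript sketch. The `Capability` names come from the list above, but `ProviderInfo` and `pickProvider` are invented for illustration and are not the engine's API:

```typescript
// Capability names mirror the README list; everything else is hypothetical.
type Capability = "embeddings" | "streaming" | "local" | "image_generation";

interface ProviderInfo {
  name: string;
  capabilities: Capability[];
}

// Return the first provider that declares every required capability.
function pickProvider(
  providers: ProviderInfo[],
  required: Capability[],
): ProviderInfo | undefined {
  return providers.find((p) =>
    required.every((cap) => p.capabilities.includes(cap)),
  );
}

const providers: ProviderInfo[] = [
  // An "OpenAI-compatible" backend that only implements part of the surface.
  { name: "partial-compatible", capabilities: ["streaming"] },
  { name: "full-stack", capabilities: ["embeddings", "streaming", "image_generation"] },
];

// The partial backend is skipped when image_generation is required.
const chosen = pickProvider(providers, ["streaming", "image_generation"]);
```

Declaring capabilities up front lets callers fail over or refuse early instead of discovering a missing endpoint at request time.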
 
@@ -107,18 +119,26 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 
  - `ingestInput(rootDir, input)` ingests a local path or URL
  - `importInbox(rootDir, inputDir?)` recursively imports supported inbox files and browser-clipper style bundles
+ - `.js`, `.jsx`, `.ts`, and `.tsx` inputs are treated as code sources and compiled into both source pages and `wiki/code/` module pages
 
  ### Compile + Query
 
- - `compileVault(rootDir)` writes wiki pages, graph data, and search state using the vault schema as guidance
- - `queryVault(rootDir, question, save)` answers against the compiled vault using the same schema layer and can persist a first-class output page
- - `exploreVault(rootDir, question, steps)` runs a save-first multi-step exploration loop and writes a hub page plus step outputs
+ - `compileVault(rootDir, { approve })` writes wiki pages, graph data, and search state using the vault schema as guidance, or stages a review bundle
+ - `queryVault(rootDir, { question, save, format, review })` answers against the compiled vault using the same schema layer and saves by default
+ - `exploreVault(rootDir, { question, steps, format, review })` runs a save-first multi-step exploration loop and writes a hub page plus step outputs
  - `searchVault(rootDir, query, limit)` searches compiled pages directly
+ - project-aware compile also builds `wiki/projects/index.md` plus `wiki/projects/<project>/index.md` rollups without duplicating page trees
+ - human-authored insight pages in `wiki/insights/` are indexed into search and available to query without being rewritten by compile
+ - `chart` and `image` formats save wrapper markdown pages plus local output assets under `wiki/outputs/assets/<slug>/`
 
  ### Automation
 
  - `watchVault(rootDir, options)` watches the inbox and appends run records to `state/jobs.ndjson`
  - `lintVault(rootDir, options)` runs structural lint, optional deep lint, and optional web-augmented evidence gathering
+ - `listSchedules(rootDir)`, `runSchedule(rootDir, jobId)`, and `serveSchedules(rootDir)` manage recurring local jobs from config
+ - compile, query, explore, lint, and watch also write canonical markdown session artifacts to `state/sessions/`
+ - scheduled `query` and `explore` jobs stage saved outputs through approvals when they write artifacts
+ - optional orchestration roles can enrich `lint`, `explore`, and compile post-pass behavior without bypassing the approval flow
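`state/jobs.ndjson` is newline-delimited JSON, so each watch-mode run record can be read line by line. A self-contained TypeScript sketch of consuming such a log (the `JobRecord` fields are assumptions for illustration, not the engine's actual record schema):

```typescript
// Parse an NDJSON automation log: one JSON object per line, blank lines
// ignored. The record shape below is hypothetical.
interface JobRecord {
  job: string;
  status: string;
}

function parseNdjson(text: string): JobRecord[] {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as JobRecord);
}

// In practice this text would come from reading state/jobs.ndjson.
const sample =
  '{"job":"watch","status":"ok"}\n{"job":"lint","status":"stale"}\n';
const records = parseNdjson(sample);
```

Because NDJSON is append-only, watch mode can log each run with a single write and consumers never need to re-parse the whole file as one document.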
 
  ### Web Search Adapters
 
@@ -129,8 +149,9 @@
 
  - `createMcpServer(rootDir)` creates an MCP server instance
  - `startMcpServer(rootDir)` runs the MCP server over stdio
+ - `exportGraphHtml(rootDir, outputPath)` exports the graph workspace as a standalone HTML file
 
- The MCP surface includes tools for workspace info, page search, page reads, source listing, querying, ingestion, compile, and lint, along with resources for config, graph, manifests, schema, and page content.
+ The MCP surface includes tools for workspace info, page search, page reads, source listing, querying, ingestion, compile, and lint, along with resources for config, graph, manifests, schema, page content, and session artifacts.
 
  ## Artifacts
 
@@ -140,15 +161,23 @@ Running the engine produces a local workspace with these main areas:
  - `inbox/`: capture staging area for markdown bundles and imported files
  - `raw/sources/`: immutable source copies
  - `raw/assets/`: copied attachments referenced by ingested markdown bundles
- - `wiki/`: generated markdown pages, saved query outputs, and exploration hub pages
+ - `wiki/`: generated markdown pages, staged candidates, saved query outputs, exploration hub pages, and a human-only `insights/` area
+ - `wiki/outputs/assets/`: local chart/image artifacts and JSON manifests for saved visual outputs
+ - `wiki/code/`: generated module pages for ingested JS/TS sources
+ - `wiki/projects/`: generated project rollups over canonical pages
+ - `wiki/candidates/`: staged concept and entity pages awaiting confirmation on a later compile
  - `state/manifests/`: source manifests
  - `state/extracts/`: extracted text
  - `state/analyses/`: model analysis output
  - `state/graph.json`: compiled graph
  - `state/search.sqlite`: full-text index
+ - `state/sessions/`: canonical session artifacts
+ - `state/approvals/`: staged review bundles from `compileVault({ approve: true })`
+ - `state/schedules/`: persisted schedule state and leases
  - `state/jobs.ndjson`: watch-mode automation logs
 
- Saved outputs are indexed immediately into the graph page registry and search index, then linked back into compiled source, concept, and entity pages during later compile runs.
+ Saved outputs are indexed immediately into the graph page registry and search index, then linked straight back into compiled source, concept, and entity pages through the lightweight artifact sync path. New concept and entity pages stage into `wiki/candidates/` first and promote to active pages on the next matching compile. Insight pages are indexed into search and page reads, but compile does not mutate them. Project-scoped pages receive `project_ids`, project tags, and layered root-plus-project schema hashes when all contributing sources resolve to the same configured project.
+ JS/TS code sources also emit module and symbol nodes into `state/graph.json`, so local imports, exports, inheritance, and same-module call edges are queryable through the same viewer and search pipeline.
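As a rough illustration of module and symbol nodes joined by call edges, here is a self-contained TypeScript sketch; the node ids, shapes, and `callees` helper are hypothetical and do not reflect the engine's actual `state/graph.json` schema:

```typescript
// Hypothetical graph shapes: modules and symbols as nodes, relationships
// (imports, exports, extends, calls) as typed edges.
type NodeKind = "module" | "symbol";
type EdgeKind = "imports" | "exports" | "extends" | "calls";

interface GraphNode { id: string; kind: NodeKind; }
interface GraphEdge { from: string; to: string; kind: EdgeKind; }

const nodes: GraphNode[] = [
  { id: "src/vault.ts", kind: "module" },
  { id: "src/vault.ts#compileVault", kind: "symbol" },
  { id: "src/vault.ts#loadVaultConfig", kind: "symbol" },
];

const edges: GraphEdge[] = [
  { from: "src/vault.ts", to: "src/vault.ts#compileVault", kind: "exports" },
  // Same-module call edge: compileVault calls loadVaultConfig.
  { from: "src/vault.ts#compileVault", to: "src/vault.ts#loadVaultConfig", kind: "calls" },
];

// Query helper: which symbols does a given symbol call?
function callees(id: string): string[] {
  return edges
    .filter((e) => e.kind === "calls" && e.from === id)
    .map((e) => e.to);
}

const result = callees("src/vault.ts#compileVault");
```

Storing typed edges rather than flattened text is what lets the same viewer and search pipeline answer structural questions like "who calls this symbol" without re-parsing sources.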
 
  ## Notes