@swarmvaultai/engine 0.1.4 → 0.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -21,17 +21,25 @@ If you only want to use SwarmVault as a tool, install `@swarmvaultai/cli` instea
  import {
  compileVault,
  createMcpServer,
+ createWebSearchAdapter,
  defaultVaultConfig,
  defaultVaultSchema,
+ exploreVault,
+ exportGraphHtml,
  importInbox,
  ingestInput,
  initVault,
  installAgent,
+ getWebSearchAdapterForTask,
  lintVault,
+ listSchedules,
  loadVaultConfig,
  loadVaultSchema,
+ loadVaultSchemas,
  queryVault,
+ runSchedule,
  searchVault,
+ serveSchedules,
  startGraphServer,
  startMcpServer,
  watchVault,
@@ -43,18 +51,23 @@ The engine also exports the main runtime types for providers, graph artifacts, p
  ## Example
 
  ```ts
- import { compileVault, importInbox, initVault, loadVaultSchema, queryVault, watchVault } from "@swarmvaultai/engine";
+ import { compileVault, exploreVault, exportGraphHtml, importInbox, initVault, loadVaultSchemas, queryVault, watchVault } from "@swarmvaultai/engine";
 
  const rootDir = process.cwd();
 
- await initVault(rootDir);
- const schema = await loadVaultSchema(rootDir);
- console.log(schema.path);
+ await initVault(rootDir, { obsidian: true });
+ const schemas = await loadVaultSchemas(rootDir);
+ console.log(schemas.root.path);
  await importInbox(rootDir);
- await compileVault(rootDir);
+ await compileVault(rootDir, {});
 
- const result = await queryVault(rootDir, "What changed most recently?", true);
- console.log(result.answer);
+ const saved = await queryVault(rootDir, { question: "What changed most recently?" });
+ console.log(saved.savedPath);
+
+ const exploration = await exploreVault(rootDir, { question: "What should I investigate next?", steps: 3, format: "report" });
+ console.log(exploration.hubPath);
+
+ await exportGraphHtml(rootDir, "./exports/graph.html");
 
  const watcher = await watchVault(rootDir, { lint: true });
  ```
@@ -63,12 +76,16 @@ const watcher = await watchVault(rootDir, { lint: true });
 
  Each workspace carries a root markdown file named `swarmvault.schema.md`.
 
- The engine treats that file as vault-specific operating guidance for compile and query work. In `v0.1.4`:
+ The engine treats that file as vault-specific operating guidance for compile and query work. Currently:
 
  - `initVault()` creates the default schema file
- - `loadVaultSchema()` resolves the canonical file and legacy `schema.md` fallback
+ - `initVault()` also creates a human-only `wiki/insights/` area
+ - `initVault({ obsidian: true })` can also seed a minimal `.obsidian/` workspace
+ - `swarmvault.config.json` can define `projects` with root matching and optional per-project schema files
  - compile and query prompts include the schema content
  - generated pages store `schema_hash`
+ - generated pages also carry lifecycle metadata such as `status`, `created_at`, `updated_at`, `compiled_from`, `managed_by`, and `project_ids`
+ - saved visual outputs also carry `output_assets`
  - `lintVault()` marks generated pages stale when the schema changes
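The `projects` entry described above could take a shape like the following. This is a hypothetical sketch: the only name taken from the diff is the `projects` key of `swarmvault.config.json`; the `id`, `root`, and `schema` field names are invented for illustration and may not match the actual config schema.

```json
{
  "projects": [
    {
      "id": "engine",
      "root": "raw/sources/engine/**",
      "schema": "schemas/engine.schema.md"
    }
  ]
}
```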
 
  ## Provider Model
@@ -92,6 +109,7 @@ Providers are capability-driven. Each provider declares support for features suc
  - `embeddings`
  - `streaming`
  - `local`
+ - `image_generation`
 
  This matters because many "OpenAI-compatible" backends only implement part of the OpenAI surface.
 
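The capability flags above imply a simple feature gate: check a provider's declared capabilities before routing work to it. A minimal self-contained sketch, where the `ProviderInfo` shape and `supports` helper are assumptions for illustration, not the engine's actual provider interface:

```typescript
// Illustrative only: a capability gate in the spirit of the list above.
// ProviderInfo and supports() are invented names, not @swarmvaultai/engine API.
type Capability = "embeddings" | "streaming" | "local" | "image_generation";

interface ProviderInfo {
  id: string;
  capabilities: Capability[];
}

// True only if the provider declares every required capability.
function supports(provider: ProviderInfo, needed: Capability[]): boolean {
  return needed.every((c) => provider.capabilities.includes(c));
}

// Many "OpenAI-compatible" backends implement only part of the surface:
const localBackend: ProviderInfo = { id: "local-llm", capabilities: ["streaming", "local"] };

console.log(supports(localBackend, ["streaming"]));        // true
console.log(supports(localBackend, ["image_generation"])); // false
```

Gating on declared capabilities up front fails fast, instead of discovering mid-run that a backend silently lacks, say, image generation.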
@@ -101,24 +119,39 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 
  - `ingestInput(rootDir, input)` ingests a local path or URL
  - `importInbox(rootDir, inputDir?)` recursively imports supported inbox files and browser-clipper style bundles
+ - `.js`, `.jsx`, `.ts`, and `.tsx` inputs are treated as code sources and compiled into both source pages and `wiki/code/` module pages
 
  ### Compile + Query
 
- - `compileVault(rootDir)` writes wiki pages, graph data, and search state using the vault schema as guidance
- - `queryVault(rootDir, question, save)` answers against the compiled vault using the same schema layer
+ - `compileVault(rootDir, { approve })` writes wiki pages, graph data, and search state using the vault schema as guidance, or stages a review bundle
+ - `queryVault(rootDir, { question, save, format, review })` answers against the compiled vault using the same schema layer and saves by default
+ - `exploreVault(rootDir, { question, steps, format, review })` runs a save-first multi-step exploration loop and writes a hub page plus step outputs
  - `searchVault(rootDir, query, limit)` searches compiled pages directly
+ - project-aware compile also builds `wiki/projects/index.md` plus `wiki/projects/<project>/index.md` rollups without duplicating page trees
+ - human-authored insight pages in `wiki/insights/` are indexed into search and available to query without being rewritten by compile
+ - `chart` and `image` formats save wrapper markdown pages plus local output assets under `wiki/outputs/assets/<slug>/`
 
  ### Automation
 
  - `watchVault(rootDir, options)` watches the inbox and appends run records to `state/jobs.ndjson`
- - `lintVault(rootDir)` runs health and anti-drift checks
+ - `lintVault(rootDir, options)` runs structural lint, optional deep lint, and optional web-augmented evidence gathering
+ - `listSchedules(rootDir)`, `runSchedule(rootDir, jobId)`, and `serveSchedules(rootDir)` manage recurring local jobs from config
+ - compile, query, explore, lint, and watch also write canonical markdown session artifacts to `state/sessions/`
+ - scheduled `query` and `explore` jobs stage saved outputs through approvals when they write artifacts
+ - optional orchestration roles can enrich `lint`, `explore`, and compile post-pass behavior without bypassing the approval flow
+
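The schedule surface above (`listSchedules`, `runSchedule`, `serveSchedules`) follows a common list-then-run pattern over config-defined jobs. A self-contained sketch of that shape, where every type, helper, and the in-memory registry is invented for illustration and nothing here is the engine's actual API:

```typescript
// Illustrative only: models the list/run shape of a schedule API.
// None of these types or helpers come from @swarmvaultai/engine.
interface ScheduleJob {
  id: string;
  task: "query" | "explore" | "lint";
  everyMinutes: number;
  lastRunAt?: number;
}

// Stand-in for jobs a real engine would load from config.
const registry: ScheduleJob[] = [
  { id: "daily-lint", task: "lint", everyMinutes: 1440 },
  { id: "morning-explore", task: "explore", everyMinutes: 720 },
];

function listSchedulesSketch(): ScheduleJob[] {
  return [...registry];
}

function runScheduleSketch(jobId: string, now: number): ScheduleJob {
  const job = registry.find((j) => j.id === jobId);
  if (!job) throw new Error(`unknown schedule: ${jobId}`);
  job.lastRunAt = now; // a real runner would also stage outputs through approvals
  return job;
}

const jobs = listSchedulesSketch();
console.log(jobs.map((j) => j.id)); // logs the two job ids
runScheduleSketch("daily-lint", Date.now());
```

A serve loop in this pattern would simply poll the registry and run any job whose `lastRunAt` is older than its interval.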
+ ### Web Search Adapters
+
+ - `createWebSearchAdapter(rootDir, id, config)` constructs a normalized web search adapter
+ - `getWebSearchAdapterForTask(rootDir, "deepLintProvider")` resolves the configured adapter for `lint --deep --web`
 
  ### MCP
 
  - `createMcpServer(rootDir)` creates an MCP server instance
  - `startMcpServer(rootDir)` runs the MCP server over stdio
+ - `exportGraphHtml(rootDir, outputPath)` exports the graph workspace as a standalone HTML file
 
- The MCP surface includes tools for workspace info, page search, page reads, source listing, querying, ingestion, compile, and lint, along with resources for config, graph, manifests, schema, and page content.
+ The MCP surface includes tools for workspace info, page search, page reads, source listing, querying, ingestion, compile, and lint, along with resources for config, graph, manifests, schema, page content, and session artifacts.
 
  ## Artifacts
 
@@ -128,14 +161,24 @@ Running the engine produces a local workspace with these main areas:
  - `inbox/`: capture staging area for markdown bundles and imported files
  - `raw/sources/`: immutable source copies
  - `raw/assets/`: copied attachments referenced by ingested markdown bundles
- - `wiki/`: generated markdown pages and saved outputs
+ - `wiki/`: generated markdown pages, staged candidates, saved query outputs, exploration hub pages, and a human-only `insights/` area
+ - `wiki/outputs/assets/`: local chart/image artifacts and JSON manifests for saved visual outputs
+ - `wiki/code/`: generated module pages for ingested JS/TS sources
+ - `wiki/projects/`: generated project rollups over canonical pages
+ - `wiki/candidates/`: staged concept and entity pages awaiting confirmation on a later compile
  - `state/manifests/`: source manifests
  - `state/extracts/`: extracted text
  - `state/analyses/`: model analysis output
  - `state/graph.json`: compiled graph
  - `state/search.sqlite`: full-text index
+ - `state/sessions/`: canonical session artifacts
+ - `state/approvals/`: staged review bundles from `compileVault({ approve: true })`
+ - `state/schedules/`: persisted schedule state and leases
  - `state/jobs.ndjson`: watch-mode automation logs
 
+ Saved outputs are indexed immediately into the graph page registry and search index, then linked back into compiled source, concept, and entity pages through the lightweight artifact sync path. New concept and entity pages stage into `wiki/candidates/` first and promote to active pages on the next matching compile. Insight pages are indexed into search and page reads, but compile does not mutate them. Project-scoped pages receive `project_ids`, project tags, and layered root-plus-project schema hashes when all contributing sources resolve to the same configured project.
+ JS/TS code sources also emit module and symbol nodes into `state/graph.json`, so local imports, exports, inheritance, and same-module call edges are queryable through the same viewer and search pipeline.
+
  ## Notes
 
  - The engine expects Node `>=24`