@swarmvaultai/engine 0.1.19 → 0.1.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -19,18 +19,24 @@ If you only want to use SwarmVault as a tool, install `@swarmvaultai/cli` instea
 
 ```ts
 import {
+  addInput,
+  benchmarkVault,
   compileVault,
   createMcpServer,
   createWebSearchAdapter,
   defaultVaultConfig,
   defaultVaultSchema,
   exploreVault,
+  exportGraphFormat,
   exportGraphHtml,
   explainGraphVault,
+  getWatchStatus,
+  getGitHookStatus,
   importInbox,
   ingestInput,
   initVault,
   installAgent,
+  installGitHooks,
   getWebSearchAdapterForTask,
   lintVault,
   listGodNodes,
@@ -41,11 +47,14 @@ import {
   pathGraphVault,
   queryGraphVault,
   queryVault,
+  runWatchCycle,
   runSchedule,
   searchVault,
   serveSchedules,
   startGraphServer,
   startMcpServer,
+  syncTrackedRepos,
+  uninstallGitHooks,
   watchVault,
 } from "@swarmvaultai/engine";
 ```
@@ -55,15 +64,30 @@ The engine also exports the main runtime types for providers, graph artifacts, p
 ## Example
 
 ```ts
-import { compileVault, exploreVault, exportGraphHtml, importInbox, initVault, loadVaultSchemas, queryVault, watchVault } from "@swarmvaultai/engine";
+import {
+  compileVault,
+  exploreVault,
+  exportGraphHtml,
+  importInbox,
+  initVault,
+  installGitHooks,
+  loadVaultSchemas,
+  queryGraphVault,
+  queryVault,
+  runWatchCycle,
+  watchVault
+} from "@swarmvaultai/engine";
 
 const rootDir = process.cwd();
 
 await initVault(rootDir, { obsidian: true });
 const schemas = await loadVaultSchemas(rootDir);
 console.log(schemas.root.path);
+await addInput(rootDir, "https://arxiv.org/abs/2401.12345");
 await importInbox(rootDir);
 await compileVault(rootDir, {});
+const benchmark = await benchmarkVault(rootDir);
+console.log(benchmark.avgQueryTokens);
 
 const saved = await queryVault(rootDir, { question: "What changed most recently?" });
 console.log(saved.savedPath);
@@ -75,8 +99,13 @@ const exploration = await exploreVault(rootDir, { question: "What should I inves
 console.log(exploration.hubPath);
 
 await exportGraphHtml(rootDir, "./exports/graph.html");
+await exportGraphFormat(rootDir, "graphml", "./exports/graph.graphml");
+
+await runWatchCycle(rootDir, { repo: true });
+console.log(await getWatchStatus(rootDir));
+await installGitHooks(rootDir);
 
-const watcher = await watchVault(rootDir, { lint: true });
+const watcher = await watchVault(rootDir, { lint: true, repo: true });
 ```
 
 ## Schema Layer
@@ -130,9 +159,10 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 ### Ingest
 
 - `ingestInput(rootDir, input, { includeAssets, maxAssetSize })` ingests a local file path or URL
+- `addInput(rootDir, input, { author, contributor })` captures supported URLs into normalized markdown before ingesting them, or falls back to generic URL ingest
 - `ingestDirectory(rootDir, inputDir, { repoRoot, include, exclude, maxFiles, gitignore })` recursively ingests a local directory as a repo-aware code/content source tree
 - `importInbox(rootDir, inputDir?)` recursively imports supported inbox files and browser-clipper style bundles
-- JavaScript, TypeScript, Python, Go, Rust, Java, C#, C, C++, and PHP inputs are treated as code sources and compiled into both source pages and `wiki/code/` module pages
+- JavaScript, TypeScript, Python, Go, Rust, Java, C#, C, C++, PHP, Ruby, and PowerShell inputs are treated as code sources and compiled into both source pages and `wiki/code/` module pages
 - code manifests can carry `repoRelativePath`, and compile writes `state/code-index.json` so local imports can resolve across an ingested repo tree
 - HTML and markdown URL ingests localize remote image references into `raw/assets/<sourceId>/` by default and rewrite the stored markdown to local relative paths
 
@@ -140,6 +170,7 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 
 - `compileVault(rootDir, { approve })` writes wiki pages, graph data, and search state using the vault schema as guidance, or stages a review bundle
 - compile also writes graph orientation pages such as `wiki/graph/report.md` and `wiki/graph/communities/<community>.md`
+- `benchmarkVault(rootDir, { questions })` writes `state/benchmark.json` and folds the latest benchmark summary into `wiki/graph/report.md`
 - `queryVault(rootDir, { question, save, format, review })` answers against the compiled vault using the same schema layer and saves by default
 - `exploreVault(rootDir, { question, steps, format, review })` runs a save-first multi-step exploration loop and writes a hub page plus step outputs
 - `searchVault(rootDir, query, limit)` searches compiled pages directly
@@ -153,7 +184,12 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 
 ### Automation
 
-- `watchVault(rootDir, options)` watches the inbox and appends run records to `state/jobs.ndjson`
+- `watchVault(rootDir, options)` watches the inbox and optionally tracked repo roots, then appends run records to `state/jobs.ndjson`
+- `runWatchCycle(rootDir, options)` runs the same inbox/repo refresh logic once without starting a watcher
+- `getWatchStatus(rootDir)` reads the latest watch-status artifact plus pending semantic refresh entries
+- `syncTrackedRepos(rootDir)` refreshes previously ingested repo roots, updates changed manifests, and removes deleted repo manifests
+- `syncTrackedReposForWatch(rootDir)` is the repo-watch sync path that defers non-code semantic refresh into `state/watch/`
+- `installGitHooks(rootDir)`, `uninstallGitHooks(rootDir)`, and `getGitHookStatus(rootDir)` manage local `post-commit` and `post-checkout` hook blocks for the nearest git repository
 - `lintVault(rootDir, options)` runs structural lint, optional deep lint, and optional web-augmented evidence gathering
 - `listSchedules(rootDir)`, `runSchedule(rootDir, jobId)`, and `serveSchedules(rootDir)` manage recurring local jobs from config
 - compile, query, explore, lint, and watch also write canonical markdown session artifacts to `state/sessions/`
@@ -170,6 +206,7 @@ This matters because many "OpenAI-compatible" backends only implement part of th
 - `createMcpServer(rootDir)` creates an MCP server instance
 - `startMcpServer(rootDir)` runs the MCP server over stdio
 - `exportGraphHtml(rootDir, outputPath)` exports the graph workspace as a standalone HTML file
+- `exportGraphFormat(rootDir, "svg" | "graphml" | "cypher", outputPath)` exports the graph into interoperable file formats
 
 The MCP surface includes tools for workspace info, page search, page reads, source listing, querying, ingestion, compile, lint, and graph-native read operations such as graph query, node explain, neighbor lookup, shortest path, and god-node listing, along with resources for config, graph, manifests, schema, page content, and session artifacts.
 
@@ -191,11 +228,13 @@ Running the engine produces a local workspace with these main areas:
 - `state/extracts/`: extracted text
 - `state/analyses/`: model analysis output
 - `state/code-index.json`: repo-aware code module aliases and local resolution data
+- `state/benchmark.json`: latest benchmark/trust summary for the current vault
 - `state/graph.json`: compiled graph
 - `state/search.sqlite`: full-text index
 - `state/sessions/`: canonical session artifacts
 - `state/approvals/`: staged review bundles from `compileVault({ approve: true })`
 - `state/schedules/`: persisted schedule state and leases
+- `state/watch/`: watch-status and pending semantic refresh artifacts for repo automation
 - `state/jobs.ndjson`: watch-mode automation logs
 
 Saved outputs are indexed immediately into the graph page registry and search index, then linked back into compiled source, concept, and entity pages through the lightweight artifact sync path. New concept and entity pages stage into `wiki/candidates/` first and promote to active pages on the next matching compile. Insight pages are indexed into search and page reads, but compile does not mutate them. Project-scoped pages receive `project_ids`, project tags, and layered root-plus-project schema hashes when all contributing sources resolve to the same configured project.
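The `state/jobs.ndjson` automation log mentioned in the diff is newline-delimited JSON, so each watch run record sits on its own line. A minimal reader sketch (the README does not specify the record fields, so any field names appearing in usage are assumptions):

```typescript
import { readFileSync } from "node:fs";

// Parse an NDJSON file into an array of records, skipping blank lines.
// The engine's actual run-record shape is not documented here; records
// are typed as generic objects.
function readJobRecords(path: string): Array<Record<string, unknown>> {
  return readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Record<string, unknown>);
}
```

For example, `readJobRecords("state/jobs.ndjson")` would return one object per appended run record, which can then be filtered or tailed by whatever fields the engine actually writes.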