@neuroverseos/nv-sim 0.1.9 → 0.1.11

Files changed (38)
  1. package/README.md +187 -535
  2. package/connectors/nv_mirofish_wrapper.py +841 -0
  3. package/connectors/nv_scienceclaw_wrapper.py +453 -0
  4. package/dist/adapters/scienceclaw.js +52 -2
  5. package/dist/assets/index-CH_VswRM.css +1 -0
  6. package/dist/assets/index-sT4b_z7w.js +686 -0
  7. package/dist/assets/{reportEngine-D2ZrMny8.js → reportEngine-Bu8bB5Yq.js} +1 -1
  8. package/dist/connectors/nv-scienceclaw-post.js +363 -0
  9. package/dist/engine/aiProvider.js +82 -3
  10. package/dist/engine/analyzer.js +12 -24
  11. package/dist/engine/cli.js +89 -114
  12. package/dist/engine/dynamicsGovernance.js +4 -0
  13. package/dist/engine/fullGovernedLoop.js +16 -1
  14. package/dist/engine/goalEngine.js +3 -4
  15. package/dist/engine/governance.js +18 -0
  16. package/dist/engine/index.js +19 -28
  17. package/dist/engine/intentTranslator.js +281 -0
  18. package/dist/engine/liveAdapter.js +100 -18
  19. package/dist/engine/liveVisualizer.js +2071 -1023
  20. package/dist/engine/primeRadiant.js +2 -8
  21. package/dist/engine/reasoningEngine.js +2 -7
  22. package/dist/engine/scenarioCapsule.js +5 -5
  23. package/dist/engine/swarmSimulation.js +1 -9
  24. package/dist/engine/universalAdapter.js +371 -0
  25. package/dist/engine/worldBridge.js +22 -8
  26. package/dist/index.html +2 -2
  27. package/dist/lib/reasoningEngine.js +17 -1
  28. package/dist/lib/simulationAdapter.js +11 -11
  29. package/dist/lib/swarmParser.js +1 -1
  30. package/dist/runtime/govern.js +160 -7
  31. package/dist/runtime/index.js +1 -4
  32. package/dist/runtime/types.js +91 -0
  33. package/package.json +23 -6
  34. package/dist/adapters/mirofish.js +0 -461
  35. package/dist/assets/index-B64NuIXu.css +0 -1
  36. package/dist/assets/index-BMkPevVr.js +0 -532
  37. package/dist/assets/mirotir-logo-DUexumBH.svg +0 -185
  38. package/dist/engine/mirofish.js +0 -295
package/README.md CHANGED
@@ -6,686 +6,338 @@
  npx @neuroverseos/nv-sim visualize
  ```

- ## The Problem With Agentic Simulation Today
+ ## Put Governance Inside Your Agent Loop — One Line

- You build a multi-agent system. You run it. You get metrics — loss curves, reward signals, completion rates. Something goes wrong, or something goes right, and you ask the only question that matters:
-
- *Why did the agents do that?*
-
- Nobody can tell you. The metrics say *what* happened. The logs say *when* it happened. But nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
-
- So you rerun. You tweak. You guess. You stare at dashboards full of numbers that describe the system but never explain it.
-
- That's the gap.
-
- ## What NV-SIM Gives You That Nothing Else Does
-
- NV-SIM doesn't predict outcomes. It shows you **why agents behaved the way they did** — and what happens when you change the rules.
-
- You change one constraint — block panic selling, cap leverage at 3x, close a shipping lane — and the system shows you:
-
- - **Before → After proof**: "80% of agents shifted from panic selling to coordinated holding"
- - **Emergent patterns**: "Panic suppression appeared — not programmed, not predicted"
- - **Causal chains**: "Agents became more cautious after early aggressive attempts failed"
- - **Quantified outcomes**: "Volatility dropped 21%, cascade avoided"
+ Your agents already have a decide → act loop. Insert one call between them:

  ```
- Rule changed
- Agents shifted strategy
- → New patterns emerged
- → System outcome changed
- → You know exactly why
+ Before: action = agent.decide()
+ After: action = govern(agent.decide())
  ```
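A minimal sketch of what that `govern()` wrapper could look like in Python, assuming the local HTTP endpoint this README documents below (`POST /api/evaluate` on port 3456); the `hold` fallback and the `decision`/`modified_action` fields follow the examples later in this README, and the stub evaluator is purely illustrative:

```python
import json
from urllib.request import Request, urlopen

EVALUATE_URL = "http://localhost:3456/api/evaluate"  # local server, no cloud

def evaluate_http(actor, action):
    """POST a proposed action to the local governance server."""
    req = Request(
        EVALUATE_URL,
        data=json.dumps({"actor": actor, "action": action}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def govern(action, actor="agent_1", evaluate=evaluate_http):
    """Apply a verdict to a proposed action: pass it, replace it, or fall back."""
    verdict = evaluate(actor, action)
    if verdict["decision"] == "BLOCK":
        return "hold"  # safe fallback, as in the README examples
    if verdict["decision"] == "MODIFY":
        return verdict["modified_action"]
    return action  # ALLOW passes through unchanged

# Usage with a stub evaluator (no server required):
stub = lambda actor, action: {"decision": "BLOCK", "reason": "rule"}
print(govern("panic_sell", evaluate=stub))  # → hold
```

Injecting the evaluator keeps the wrapper testable without a running server.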
39

- This is behavioral evidence, not metrics. You don't get a number — you get a story you can trace, verify, and share.
-
- ### For researchers
-
- You get **controlled behavioral experiments** on complex systems. Same agents, different rules, measured side by side. The output isn't a chart — it's proof of what changed and why.
-
- ### For developers
+ ### Option A: Pipe mode (any language, zero SDK)

- You get **runtime governance** for any agent system. One HTTP call between "agent decides" and "agent acts." Your agents become observable, auditable, and controllable — without rewriting your framework.
-
- ### For both
-
- You get something that didn't exist before: **rule-to-behavior causation**. Not correlation. Not post-hoc analysis. Direct, traceable proof that changing rule X caused agents to shift from behavior A to behavior B.
-
- ## The Demo Moment
-
- **Before:**
- ```
- panic_sell → panic_sell → panic_sell
- pressure: 0.94
- system: unstable
- ```
-
- **Rule added:** no panic selling
-
- **After:**
+ ```bash
+ my_agent | neuroverse guard --world ./world --trace
  ```
- Market stabilized as agents shifted toward safer positions
-
- panic_sell → hold
- panic_sell → hold
- panic_sell → hold
-
- 80% of agents shifted from aggressive to conservative strategies
- Uncertainty dropped 34% as agents moved from exploration to caution

- WHY: Agents became more cautious after early attempts failed
+ Every action your agent emits gets evaluated. Blocked actions return `{"status":"BLOCK","reason":"..."}`. Allowed actions pass through. Your agent reads the verdict and adapts.
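A sketch of that contract from the agent's side, in Python; only the `status`/`reason` fields come from this README, while the action shape and helper names are illustrative assumptions:

```python
import json
import sys

def emit_action(intent):
    """Write one JSON action per line for `neuroverse guard` to evaluate."""
    sys.stdout.write(json.dumps({"intent": intent}) + "\n")
    sys.stdout.flush()

def read_verdict(line):
    """Parse one verdict line; returns (allowed, reason)."""
    verdict = json.loads(line)
    return verdict["status"] != "BLOCK", verdict.get("reason")

allowed, reason = read_verdict('{"status":"BLOCK","reason":"no sources"}')
print(allowed, reason)  # → False no sources
```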

- Pattern: coordinated_holding
- Pattern: panic_suppression
+ ### Option B: HTTP call (any framework)

- pressure: 0.54
- cascade: avoided
+ ```bash
+ npx @neuroverseos/nv-sim serve
  ```

- You didn't predict this. You caused it — by changing one rule — and the system told you exactly why.
-
- ## What This Actually Is
-
- Most simulation tools answer: *"What will happen?"*
-
- NV-SIM answers: *"What changes when I change the rules — and why?"*
-
- You're not forecasting. You're running controlled behavioral experiments. Every simulation produces:
-
- | What You Get | What It Proves |
- |---|---|
- | **Outcome statement** | System state + dominant agent behavior in one sentence |
- | **Behavioral shifts** | Before → after for every agent group, with percentages |
- | **Causal explanation** | Why agents changed — in their experience, not system jargon |
- | **Confidence rating** | How strong the evidence is, how much risk remains |
- | **Full audit trail** | Every decision, every rule, every adaptation — append-only |
-
- The output is designed to be specific, narrative, and shareable. Not "40% adjusted actions" — but "40% shifted from aggressive to conservative strategies after early attempts failed."
-
- ## Design the Rules Once. Run Them Anywhere.
-
- NV-SIM is a runtime, not just a simulator. It runs in two modes:
+ ```python
+ for agent in agents:
+     action = agent.decide()

- - **Simulate** — explore how agents behave under different rules
- - **Act** — govern real agents, real workflows, real decisions in production
+     verdict = requests.post("http://localhost:3456/api/evaluate", json={
+         "actor": agent.id,
+         "action": action,
+     }).json()

- The interface is the same. The difference is where the agents come from.
+     if verdict["decision"] == "BLOCK":
+         action = "hold"
+     elif verdict["decision"] == "MODIFY":
+         action = verdict["modified_action"]

+     environment.apply(agent, action)
  ```
- Simulate → internal swarm engine (built-in agents, instant results)
- Act → your system (any framework, any language, one HTTP call)
- ```
-
- The same world file can simulate a crisis, govern a live system, and produce comparable outcomes across both. This means the experiments you run in simulation directly translate to the rules you deploy in production.
-
- ## Behavioral Analysis — The Proof Layer

- Blocking actions is easy. The hard part is proving what changed and why. NV-SIM doesn't just count actions — it tracks how agents actually reorganized.
+ ### Option C: Direct import (TypeScript/JavaScript)

- Every simulation produces behavioral evidence:
-
- - **Action classification** — each agent action categorized as aggressive, defensive, cautious, cooperative, opportunistic, or neutral
- - **Agent trajectories** — each agent's behavior traced across rounds, showing when and how they shifted
- - **Behavioral shifts** — the exact moment agents changed strategy, with before → after
- - **Cross-run comparison** — same agents under different rules, measured side by side
+ ```typescript
+ import { evaluateGuard, loadWorld } from '@neuroverseos/governance';

+ const world = await loadWorld('./world/');
+ const verdict = evaluateGuard({ intent, tool, scope }, world);
+ if (verdict.status === 'BLOCK') throw new Error(`Blocked: ${verdict.reason}`);
  ```
- BEHAVIORAL ANALYSIS
-
- Action Distribution:
- aggressive: 12% (was 67%)
- cooperative: 41% (was 8%)
- cautious: 31% (was 11%)
- defensive: 16% (was 14%)

- Shifts Detected:
- → 80% shifted from aggressive to cooperative after round 3
- → Panic selling replaced by coordinated holding
- → New pattern: quality_competition (not present in baseline)
+ ### Option D: MCP server (Claude, Cursor, Windsurf)

- Trajectory: agent_hedge_fund_1
- Round 1: aggressive → Round 2: aggressive → Round 3: [shifted] → Round 4: cautious → Round 5: cooperative
- Round 1: aggressive Round 2: aggressive → Round 3: [shifted] → Round 4: cautious → Round 5: cooperative
61
+ ```bash
62
+ neuroverse mcp --world ./world --plan plan.json
148
63
  ```
149
64
 
150
- The behavioral shift is the insight. You changed a rule, and the system tells you exactly who changed, when they changed, and what they changed to.
65
+ One command. Same rules govern your agents whether you're simulating or shipping.
151
66
 
152
- ## Audit Trail — Full Evidence Chain
67
+ ---
153
68
 
154
- Every decision is recorded in an append-only audit log. Every rule, every agent action, every adaptation — persistent and queryable.
69
+ ## The Problem
155
70
 
156
- ```
157
- AUDIT TRAIL (session: 2026-03-18T14:22:00)
71
+ You build a multi-agent system. You run it. You get metrics — loss curves, reward signals, completion rates. Something goes wrong, and you ask:
158
72
 
159
- agent_hedge_fund_1 attempted panic_sell blocked
160
- rule: no_panic_selling (invariant)
161
- evidence: action matches blocked pattern during high volatility
73
+ *Why did the agents do that?*
162
74
 
163
- agent_hedge_fund_1 shifted to hold
164
- adapted: true (shifted from aggressive to defensive)
75
+ Nobody can tell you. Metrics say *what* happened. Logs say *when*. Nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
165
76
 
166
- BEHAVIORAL SHIFT: agent_hedge_fund_1
167
- before: aggressive | after: cautious
168
- trigger: early aggressive attempts failed at round 3
169
- ```
77
+ Most multi-agent systems let agents do whatever emerges. NeuroVerse lets *you* decide what's allowed — and actually stops the rest.
170
78
 
171
- Stored as JSONL — one JSON object per line, human-readable, pipeable through `jq`. No cloud, no deletion. Complete evidence chain.
79
+ ## How It Works
172
80
 
173
- ## Start Here Define Your World
81
+ ### Step 1: Describe what matters (plain English)
174
82
 
175
- NV-SIM ships with two template worlds. But the real power is **making your own**.
83
+ **What should agents explore?**
84
+ > "Protein mutations that improve binding affinity for SSTR2"
176
85
 
177
- ### Option 1: Start from a template
86
+ **What should never be published?**
87
+ > "Results based on a single data source. Claims with confidence below 70%."
178
88
 
179
- ```bash
180
- npx @neuroverseos/nv-sim visualize # Pick a template, adjust, run
181
- ```
182
-
183
- ### Option 2: Write your rules, we build the world
89
+ **What makes a result valuable?**
90
+ > "Multiple independent lines of evidence converging on the same finding."
184
91
 
185
- Create a text file with your rules in plain English:
92
+ **How should agents be rewarded or penalized?**
93
+ > IF an agent publishes without peer validation → reduce its influence for 3 rounds
94
+ > IF two agents independently converge on the same finding → boost that finding's priority
186
95
 
187
- ```
188
- # my-rules.txt
189
- Limit any agent to 15% of total posts per round
190
- Block coordinated posting from 3+ agents
191
- Dampen sentiment shifts larger than 0.3 per round
192
- Require source attribution for factual claims
193
- ```
96
+ ### Step 2: Build World — rules become enforceable
194
97
 
195
- Then generate a full governed world from it:
98
+ Click "Build World" and your plain English becomes enforceable logic:
196
99
 
197
- ```bash
198
- npx @neuroverseos/nv-sim world-from-doc my-rules.txt --output my-world.json
199
100
  ```
200
-
201
- This doesn't just parse your rules — it generates a complete governed world: state variables, gates, invariants, thesis, and agent types. The same structure as the built-in templates. Your world is equal to ours.
202
-
203
- ### Option 3: Upload a .nv world file
204
-
205
- ```bash
206
- npx @neuroverseos/nv-sim serve --world my-world.json
101
+ BLOCK Results with confidence below 70%
102
+ BLOCK Results from a single source
103
+ PRIORITIZE Multi-source convergence
104
+ PENALIZE Publishing without validation reduce influence, 3 rounds
105
+ REWARD Independent convergence → boost priority, 5 rounds
207
106
  ```
208
107
 
209
- Load any saved world file directly into the runtime.
210
-
211
- ## Two Template Worlds
212
-
213
- NV-SIM ships with two complete governed worlds. These are templates — starting points for your own experiments.
108
+ **Works without AI.** The deterministic engine uses heuristics to translate your intent into rules — no API key, no cost, no cloud.
214
109
 
215
- ### `social_simulation` Multi-Agent Social Simulation
110
+ **Better with AI.** Add your API key (Anthropic, OpenAI, Google, Groq, or any OpenAI-compatible endpoint) and the engine generates smarter, more specific rules. Key stays in your browser — never sent to our servers.
216
111
 
217
- For anyone running agent-based social simulations (MiroFish, OASIS, or custom). Governs the dynamics that break realism regardless of topic.
112
+ If your policy has conflicts, the inline diagnostics show you exactly what's wrong with one-click fix buttons:
218
113
 
219
- | State Variable | What It Controls |
220
- |---|---|
221
- | Opinion Diversity (0-100) | How spread out are opinions? Low = echo chamber |
222
- | Influence Concentration (0-100) | Gini coefficient of agent influence. High = monopoly |
223
- | Sentiment Polarity (0-100) | How extreme is overall sentiment? High = spiral |
224
- | Echo Chamber Strength | none → forming → established → dominant |
225
- | Active Agent % (0-100) | What % of agents participate per round |
226
- | Viral Amplification Threshold | How many interactions before amplification kicks in |
227
-
228
- **Default rules:**
229
- - Limit any agent to 15% of total posts per round
230
- - Dampen sentiment shifts > 0.3 per round
231
- - Block coordinated posting (same content from 3+ agents)
232
- - Require source attribution for factual claims
233
- - Monitor opinion diversity — alert below 30
114
+ ```
115
+ ERROR Conflicting rules: RULE-002 vs RULE-003
116
+ Fix: Resolve conflict remove one rule, or add a condition
117
+ [Merge into single rule] ← click to fix
118
+ ```
234
119
 
235
- **Circuit breakers:** Echo Chamber Collapse (diversity < 20), Influence Monopoly (concentration > 70), Sentiment Spiral (polarity > 80)
120
+ ### Step 3: Evaluate output see what holds up
236
121
 
237
- ### `science_research` Governed Research Pipeline
122
+ Upload your agent output (JSONL) or load demo data. The engine evaluates every action against your rules and shows:
238
123
 
239
- For AI-assisted research workflows (ScienceClaw, autonomous discovery agents). Governs scientific rigor at every stage.
124
+ - **Per-action verdicts** ALLOW, BLOCK, MODIFY, PAUSE, REWARD, or PENALIZE
125
+ - **Audit trail** — per-agent breakdown, rule firing frequency, timeline by cycle
126
+ - **Behavioral insights** — two columns side by side:
240
127
 
241
- | State Variable | What It Controls |
128
+ | Observed (from your data) | Requires Integration (blind spots) |
242
129
  |---|---|
243
- | Verified Sources (0-50) | How many peer-reviewed sources have been found |
244
- | Confidence Level (0-1) | How confident is the current hypothesis |
245
- | Hypothesis Validated | Has the hypothesis been confirmed by multiple sources |
246
- | Peer Review Status | none submitted reviewed approved |
247
- | Publication Readiness % | How close to publication-ready |
130
+ | Agent X fails 75% of the time | Did Agent X change strategy after being blocked? |
131
+ | "No sources" triggered 8x across 3 agents | Is this systemic or isolated? |
132
+ | Agents A and B produced identical output | Independent convergence or echo amplification? |
133
+ | Quality degrading over 5 cycles | Drift or deliberate strategy shift? |
248
134
 
249
- **Default rules:**
250
- - Literature search must return 2+ peer-reviewed sources before analysis
251
- - Claims must cite specific sources — unsupported assertions blocked
252
- - Publication requires confidence > 0.7 and validated hypothesis
253
- - Cross-referencing must compare 3+ independent sources
254
- - Recommendations must include uncertainty language when confidence < 0.9
135
+ The left column is computed from real audit data. The right column tells you what you can only answer by putting governance inside the loop.
255
136
 
256
- **Circuit breakers:** Insufficient Evidence, Premature Publication, Low Confidence Alert
137
+ ### Step 4: Change one rule. Run again.
257
138
 
258
- ### Same Agents, Different Rules
139
+ Remove the confidence threshold. What breaks? Add a rule penalizing groupthink. Do agents explore more diverse hypotheses?
259
140
 
260
- ```bash
261
- npx @neuroverseos/nv-sim worlds social_simulation science_research
262
- ```
141
+ **This is the experiment.** Not the simulation — the rules themselves.
263
142
 
264
- Same agents. Different rules. Different outcomes. That's the experiment.
265
-
266
- ## Narrative Shocks
267
-
268
- Inject events into running simulations. Different agents react differently to the same event.
143
+ ## Install
269
144
 
270
145
  ```bash
271
- npx @neuroverseos/nv-sim compare --inject viral_misinfo@3,algorithm_change@5
146
+ npm install
147
+ npm run dev:full
272
148
  ```
273
149
 
274
- The `@` syntax sets when the event hits. Events have severity, propagation speed, and directional impact.
275
-
276
- ### Social simulation events
277
- `viral_misinfo`, `influencer_stance_change`, `algorithm_change`, `external_news_event`, `coordinated_campaign`, `whistleblower_post`
278
-
279
- ### Research events
280
- `search_literature`, `analyze_findings`, `cross_reference`, `unsupported_claim`, `hypothesis_validated`, `publish_result`
281
-
282
- ## Named Scenarios
150
+ Opens in your browser. Everything runs locally. Light and dark mode included.
283
151
 
284
- Pre-built sequences a world + ordered narrative events:
152
+ ### Governance engine (standalone)
285
153
 
286
154
  ```bash
287
- npx @neuroverseos/nv-sim scenario echo_chamber
288
- npx @neuroverseos/nv-sim scenario research_pipeline
289
- npx @neuroverseos/nv-sim scenarios # list all
155
+ npm install @neuroverseos/governance
290
156
  ```
291
157
 
292
- | Scenario | World | Events | What It Tests |
293
- |----------|-------|--------|---------------|
294
- | `echo_chamber` | social_simulation | 3 | Opinion diversity collapses into self-reinforcing groups |
295
- | `influence_monopoly` | social_simulation | 3 | Small group dominates discourse |
296
- | `sentiment_spiral` | social_simulation | 4 | Negativity feeds on itself until unrealistic |
297
- | `platform_shock` | social_simulation | 3 | Algorithm change reshapes engagement overnight |
298
- | `research_pipeline` | science_research | 6 | Full research workflow with governance at each step |
158
+ The governance engine is a separate open-source package. The simulation UI uses it, but you can use it independently in any system.
299
159
 
300
- The `--compare` flag runs the same scenario across both worlds — which rule environment is more resilient?
+ ## Validate Your Policy (CLI)

- ## Interactive Control Platform
162
+ The governance package includes validation that runs the same checks as the browser UI:

  ```bash
- npx @neuroverseos/nv-sim visualize
- ```
-
- This opens a control surface where you can:
+ # Initialize a world definition
+ neuroverse init --name "my-research-agents"

- Switch between rule environments
- Adjust state variables with auto-generated controls
- Inject narrative events at specific rounds
- Load crisis scenarios with one click
- Watch the **Outcome Panel** — not a log of what happened, but a story of what the system became and why
- Save any experiment as a reusable variant
+ # Validate your world (9 static analysis checks)
+ neuroverse validate --world ./world

- ### The Outcome Panel
+ # Run 14 standard guard simulations + fuzz testing
+ neuroverse test --world ./world

- When rules reshape behavior, you don't get a dashboard of metrics. You get this:
-
- ```
- ◆ OUTCOME
-
- Market stabilized as agents shifted toward safer positions
-
- Confidence: Strong | Evidence: Solid | Risk: Low
-
- ┌─ What Agents Did ────────────────┐
- │ 80% shifted from aggressive to   │
- │ conservative strategies          │
- │ 12% reduced position size after  │
- │ initial attempts failed          │
- │ 8% maintained original strategy  │
- └──────────────────────────────────┘
-
- ┌─ Why This Happened ──────────────┐
- │ Early aggressive attempts failed,│
- │ forcing agents to rethink.       │
- │ Uncertainty dropped as agents    │
- │ stopped experimenting.           │
- └──────────────────────────────────┘
-
- ┌─ What Emerged ───────────────────┐
- │ Coordinated Holding              │
- │ Panic Suppression                │
- └──────────────────────────────────┘
-
- ┌─ System Outcome ─────────────────┐
- │ Volatility 47% → 26%             │
- │ Stability 58% → 79%              │
- │ Cascade Avoided                  │
- └──────────────────────────────────┘
-
- ▶ View audit trail
+ # Red team: 28 adversarial attacks across 6 categories
+ neuroverse redteam --world ./world
  ```

- Every line is specific. Every line is shareable. No system jargon.

+ Validation checks: structural completeness, referential integrity, guard coverage, gate consistency, kernel alignment, guard shadowing, reachability analysis, state space coverage, and governance health scoring.

- ### World Variants
+ ## Integration — Put This In Your Loop

- Save any experiment as a named variant:
+ ### Pipe mode (any language)

+ ```bash
+ echo '{"intent":"delete user data"}' | neuroverse guard --world ./world --trace
+ # → {"status":"BLOCK","reason":"...","ruleId":"..."}
  ```
- Adjust rules → Inject events → Run → See what changed → Save as variant
- ```
-
- Variants capture the base world, state overrides, narrative events, and results. Store them in git. Share them. Replay them. This turns experiments into assets.

- ## Governance Runtime — Plug Into Your Own System
+ Pipe your agent's output through `neuroverse guard`. Every action gets evaluated. Works with Python, Rust, Go, shell scripts anything that writes to stdout.
370
190
 
371
191
  ```bash
372
- npx @neuroverseos/nv-sim serve
373
- ```
374
-
375
- This starts a local server. Any simulator, agent framework, or application can POST actions and get decisions back.
192
+ # Govern a Python agent
193
+ python my_agent.py | neuroverse run --world ./world --plan plan.json
376
194
 
195
+ # Interactive governed chat
196
+ neuroverse run --interactive --world ./world --provider openai --plan plan.json
377
197
  ```
378
- Endpoint: http://localhost:3456/api/evaluate
379
- Method: POST
380
- Contract: { actor, action, payload?, state?, world? }
381
- Response: { decision: ALLOW|BLOCK|MODIFY, reason, evidence }
382
- ```
383
-
384
- Your agents call localhost. The world file decides what's allowed. No cloud. No cost.
385
-
386
- Additional endpoints:
387
198
 
388
- | Endpoint | What It Does |
389
- |----------|-------------|
390
- | `POST /api/evaluate` | Submit an action for evaluation |
391
- | `GET /api/session` | Current session stats |
392
- | `GET /api/session/report` | Full session report |
393
- | `POST /api/session/reset` | Reset session state |
394
- | `POST /api/session/save` | Save session as experiment |
395
- | `GET /api/events` | SSE stream of live events |
199
+ ### HTTP mode (any framework)
396
200
 
397
- ### Works With Anything
398
-
399
- If your system has actions, you can govern them. One API call.
400
-
401
- ```
402
- Agent decides → POST /api/evaluate → verdict → agent adapts
201
+ ```bash
202
+ npx @neuroverseos/nv-sim serve --port 3456
403
203
  ```
404
204
 
405
- The entire integration:
406
-
407
205
  ```
408
- Before: action = agent.decide()
409
- After: action = govern(agent.decide())
206
+ POST /api/evaluate
207
+ Body: { actor, action, payload?, state?, world? }
208
+ Returns: { decision: ALLOW|BLOCK|MODIFY, reason, evidence }
410
209
  ```
411
210
 
412
- No SDK required. No framework required. Just an HTTP call.
413
-
414
- **curl** (zero dependencies):
415
-
416
211
  ```bash
212
+ # Zero-dependency test
417
213
  curl -X POST http://localhost:3456/api/evaluate \
418
214
  -H "Content-Type: application/json" \
419
215
  -d '{"actor":"agent_1","action":"panic_sell","world":"trading"}'
420
216
  ```
421
217
 
422
- **Python:**
218
+ ### Direct import (TypeScript)
423
219
 
424
- ```python
425
- import requests
220
+ ```typescript
221
+ import { evaluateGuard, loadWorld } from '@neuroverseos/governance';
426
222
 
427
- verdict = requests.post("http://localhost:3456/api/evaluate", json={
428
- "actor": "agent_1",
429
- "action": "panic_sell",
430
- "world": "trading"
431
- }).json()
223
+ const world = await loadWorld('./world/');
432
224
 
433
- if verdict["decision"] == "BLOCK":
434
- action = "hold"
435
- ```
225
+ for (const agent of agents) {
226
+ const action = agent.decide();
227
+ const verdict = evaluateGuard({ intent: action.intent, tool: action.tool, scope: action.scope }, world);
436
228
 
437
- **JavaScript:**
438
-
439
- ```js
440
- const verdict = await fetch("http://localhost:3456/api/evaluate", {
441
- method: "POST",
442
- headers: { "Content-Type": "application/json" },
443
- body: JSON.stringify({ actor: "agent_1", action: "panic_sell", world: "trading" })
444
- }).then(r => r.json());
445
-
446
- if (verdict.decision === "BLOCK") action = "hold";
229
+ if (verdict.status === 'BLOCK') {
230
+ agent.retry(verdict.reason);
231
+ } else {
232
+ agent.execute(action);
233
+ }
234
+ }
447
235
  ```
448
236
 
449
- **Any agent loop:**
450
-
451
- ```python
452
- for agent in agents:
453
- action = agent.decide()
454
-
455
- verdict = evaluate(actor=agent.id, action=action, world="trading")
456
-
457
- if verdict["decision"] == "BLOCK":
458
- action = "hold"
459
- elif verdict["decision"] == "MODIFY":
460
- action = verdict["modified_action"]
461
-
462
- environment.apply(agent, action)
463
- ```
464
-
465
- If your system can make an HTTP request, it can be governed.
466
-
467
- Most systems generate behavior. This one shapes it.
468
-
469
- See [INTEGRATION.md](./INTEGRATION.md) for the full API contract, framework guides, and decision types.
470
-
471
- ## Policy Enforcement — The Experiment Loop
472
-
473
- Write rules in plain English. Run the same scenario. See what changes. Adjust and repeat.
474
-
475
- ### Step 1: See it work (zero config)
237
+ ### MCP server (Claude Code, Cursor, Windsurf)
476
238
 
477
239
  ```bash
478
- npx nv-sim enforce
479
- ```
480
-
481
- Runs three iterations automatically: no rules → light rules → full rules. You see divergence immediately.
482
-
483
- ### Step 2: Write your own rules
484
-
485
- Create a text file. That's it.
486
-
240
+ neuroverse mcp --world ./world --plan plan.json
487
241
  ```
488
- # my-rules.txt
489
242
 
490
- Block panic selling during high volatility
491
- Limit leverage to 5x
492
- Maintain minimum liquidity floor
493
- Slow down algorithmic trading when contagion spreads
494
- ```
243
+ Your IDE's AI assistant becomes a governed agent. Same rules, same verdicts.
495
244
 
496
- ### Step 3: Run it
245
+ ### Plan management
497
246
 
498
247
  ```bash
499
- npx nv-sim enforce trading my-rules.txt
248
+ neuroverse plan compile plan.md --output plan.json
249
+ neuroverse plan check --plan plan.json
250
+ neuroverse plan advance step_id --plan plan.json --evidence type --proof url
500
251
  ```
501
252
 
502
- The engine parses your plain English into rules, runs the scenario, and shows what changed — with before → after behavioral proof.
503
-
504
- ### Step 4: Change a rule. Run again.
253
+ ## Engine Profiles
505
254
 
506
- Remove "Limit leverage to 5x". Run again. Did stability drop? That rule was load-bearing.
255
+ The simulation UI ships with pre-built profiles for common agent systems:
507
256
 
508
- Add "Require transparency for all large trades". Run again. Did agents shift strategy?
257
+ | Engine | What It Governs | Example |
258
+ |---|---|---|
259
+ | ScienceClaw | Research agents | Block synthesis with no papers, penalize unsourced claims |
260
+ | MiroFish / OASIS | Social simulation | Limit influence concentration, dampen sentiment spirals |
261
+ | LangChain / LangGraph | LLM agent chains | Cap tool calls, require validation before output |
262
+ | Custom | Any system | Auto-detects field mappings from your JSONL |
509
263
 
510
- The report tracks every change:
511
-
512
- ```
513
- RULE CHANGES
514
- Run 2:
515
- + Block panic selling during high volatility
516
- + Slow down algorithmic trading when contagion spreads
517
- - Limit leverage to 5x
518
-
519
- DIVERGENCE ANALYSIS
520
- Stability trend: 79% → 98%
521
- Effectiveness trend: 11% → 32%
522
-
523
- KEY INSIGHT
524
- Removing the leverage cap caused agents to take larger positions — but the
525
- panic selling block forced them to hold through volatility instead of exiting.
526
- Net effect: more risk-taking, but more stability.
527
-
528
- TRY THIS EXPERIMENT
529
- Remove "Block panic selling during high volatility" from your rules file, then run again.
530
- If stability drops, that rule was load-bearing. If nothing changes, it was noise.
531
- ```
532
-
533
- ### Step 5: Compare two rule sets side by side
534
-
535
- ```bash
536
- npx nv-sim enforce trading light-rules.txt strict-rules.txt
537
- ```
538
-
539
- ### Rule patterns
540
-
541
- The engine understands these patterns in plain English:
542
-
543
- | Pattern | What it does | Example |
544
- |---------|-------------|---------|
545
- | `Block X` | Hard suppression of matching actions | `Block panic selling` |
546
- | `Limit X` / `Cap X` | Caps extreme positions | `Limit leverage to 5x` |
547
- | `Slow X` / `Dampen X` | Reduces large movements | `Slow down algorithmic trading` |
548
- | `Maintain X` / `Floor X` | Enforces minimum thresholds | `Maintain minimum liquidity` |
549
- | `Rebalance X` | Pulls extremes toward equilibrium | `Rebalance correlated positions` |
550
- | `Require X` | Enforceable structural constraint | `Require transparency for large trades` |
551
- | `Monitor X` | Generates a circuit breaker gate | `Monitor contagion spread` |
552
-
553
- ### Other scenarios
554
-
555
- ```bash
556
- npx nv-sim enforce strait_of_hormuz my-rules.txt # Same rules, different scenario
557
- npx nv-sim enforce ai_regulation_crisis # Default progressive run
558
- npx nv-sim enforce trading --output=report.json # Save as JSON
559
- ```
560
-
561
- ### Advanced: JSON world files
562
-
563
- For full control over gates, state variables, and thesis, use JSON world files. See `examples/worlds/` for templates. Enforce accepts both `.txt` and `.json` — mix and match.
264
+ Each profile maps your system's output format to the governance engine's action schema automatically.
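+ For the Custom profile, the input is a newline-delimited JSON log of agent actions. A minimal sketch (the field names below are hypothetical; the profile detects whatever fields your log actually uses):
+
```bash
# Hypothetical JSONL action log for the Custom profile.
# Field names are illustrative; mappings are auto-detected from your data.
cat > actions.jsonl <<'EOF'
{"agent": "researcher_1", "action": "synthesize", "sources": 0}
{"agent": "researcher_2", "action": "cite", "sources": 3}
EOF
wc -l < actions.jsonl
```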
564
265
 
565
266
  ## AI Providers — Bring Your Own Model
566
267
 
567
- AI is optional. AI is governed. AI is pluggable.
268
+ AI is optional. The deterministic engine runs on math, not tokens. When you bring your own model, AI becomes a governed actor — subject to the same rules as every other agent.
568
269
 
569
- NV-SIM works without any AI — the deterministic engine runs on math, not tokens. But when you bring your own model, AI becomes a governed actor inside the system — subject to the same rules as every other agent.
270
+ | Provider | Key / Env Var | Auto-detected |
271
+ |---|---|---|
272
+ | Anthropic (Claude) | `sk-ant-*` / `ANTHROPIC_API_KEY` | Yes |
273
+ | OpenAI | `sk-*` / `OPENAI_API_KEY` | Yes |
274
+ | Google (Gemini) | `AIza*` | Yes |
275
+ | Groq | `gsk_*` / `GROQ_API_KEY` | Yes |
276
+ | Together | `TOGETHER_API_KEY` | Yes |
277
+ | Mistral | `MISTRAL_API_KEY` | Yes |
278
+ | Deepseek | `DEEPSEEK_API_KEY` | Yes |
279
+ | Fireworks | `FIREWORKS_API_KEY` | Yes |
280
+ | Ollama | `OLLAMA_BASE_URL` | Yes |
281
+ | Local LLM | `LOCAL_LLM_URL` | Yes |
282
+ | (none) | — | Deterministic fallback |
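+ Provider selection is environment-driven. A sketch of the flow (the key value is a placeholder):
+
```bash
# Export one provider key before running; auto-detection matches the key
# prefix (sk-ant-* selects Anthropic). The value below is a placeholder.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
# ...or point at a local model instead:
# export OLLAMA_BASE_URL="http://localhost:11434"
# Then run any command, e.g.:
# npx @neuroverseos/nv-sim compare
```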
570
283
 
571
- ### How AI fits in
572
-
573
- AI plays two governed roles:
574
-
575
- | Role | What It Does | Constraints |
576
- |------|-------------|-------------|
577
- | `ai_translator` | Converts unstructured input into normalized events | Must output valid schema, no invention of events, must include confidence |
578
- | `ai_analyst` | Generates reports from simulation traces | Must reference trace data, must include blocked actions, no unverifiable claims |
579
-
580
- Both roles go through `/api/evaluate` like any other actor. The AI doesn't control the system — the system controls the AI.
581
-
582
- ### Supported providers
583
-
584
- NV-SIM auto-detects the best available provider from your environment:
585
-
586
- | Provider | Env Var | What It Connects To |
587
- |----------|---------|---------------------|
588
- | Anthropic (Claude) | `ANTHROPIC_API_KEY` | Claude Sonnet, Opus, Haiku |
589
- | OpenAI | `OPENAI_API_KEY` | GPT-4, GPT-4o, o1 |
590
- | Groq | `GROQ_API_KEY` | Llama 3 70B |
591
- | Together | `TOGETHER_API_KEY` | Llama, Mixtral |
592
- | Mistral | `MISTRAL_API_KEY` | Mistral Large |
593
- | Deepseek | `DEEPSEEK_API_KEY` | Deepseek Chat |
594
- | Fireworks | `FIREWORKS_API_KEY` | Llama, custom models |
595
- | Ollama | `OLLAMA_BASE_URL` | Any local model |
596
- | Local LLM | `LOCAL_LLM_URL` | LM Studio, vLLM, llama.cpp |
597
- | (none) | — | Deterministic fallback (no AI, no cost) |
598
-
599
- Set the env var and run. No configuration files. No provider lock-in.
600
-
601
- Any endpoint that speaks the OpenAI chat completions format (`POST /v1/chat/completions`) works out of the box.
602
-
603
- ## Quick Start
604
-
605
- ```bash
606
- # See it
607
- npx @neuroverseos/nv-sim visualize
284
+ In the browser UI, click the sparkle icon in the header and paste your key. It's stored in localStorage only.
608
285
 
609
- # Compare governed vs ungoverned
610
- npx @neuroverseos/nv-sim compare
286
+ Any endpoint that speaks the OpenAI chat completions format (`POST /v1/chat/completions`) works.
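+ Concretely, that is the request shape the BYOM layer emits. A sketch with placeholder model name and endpoint URL:
+
```bash
# The OpenAI chat-completions request shape. Model name and endpoint URL
# are placeholders; any server accepting this POST body works.
cat > request.json <<'EOF'
{
  "model": "local-model",
  "messages": [
    {"role": "user", "content": "Summarize the last simulation run."}
  ]
}
EOF
# curl -s http://localhost:1234/v1/chat/completions \
#   -H 'Content-Type: application/json' -d @request.json
```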
611
287
 
612
- # Run a crisis
613
- npx @neuroverseos/nv-sim scenario taiwan_crisis
288
+ ## What You Get That Nothing Else Gives You
614
289
 
615
- # Same crisis, different worlds — which rules hold?
616
- npx @neuroverseos/nv-sim scenario bank_run --compare
290
+ Most simulation tools answer: *"What will happen?"*
617
291
 
618
- # Inject shocks
619
- npx @neuroverseos/nv-sim compare --inject tanker_explosion@3,sanctions@5
292
+ NV-SIM answers: *"What changes when I change the rules — and why?"*
620
293
 
621
- # Stress test (500 randomized runs)
622
- npx @neuroverseos/nv-sim chaos --runs 500
294
+ | What You Get | What It Proves |
295
+ |---|---|
296
+ | **Behavioral shifts** | Before → after for every agent, with percentages |
297
+ | **Causal explanation** | Why agents changed — traced to specific rules |
298
+ | **Behavioral insights** | Output tendencies, echo detection, drift, pattern clustering |
299
+ | **Blind spot analysis** | What you can observe vs. what requires loop integration |
300
+ | **Full audit trail** | Every decision, every rule, every adaptation — JSONL |
623
301
 
624
- # Run local governance runtime for your own simulator
625
- npx @neuroverseos/nv-sim serve
626
- ```
302
+ The output is narrative, not metrics. Not "40% adjusted actions" — but "40% shifted from aggressive to conservative strategies after early attempts failed."
627
303
 
628
304
  ## Commands
629
305
 
630
306
  | Command | What It Does |
631
- |---------|-------------|
632
- | `nv-sim enforce [preset]` | Policy enforcement lab — iterative rule testing |
307
+ |---|---|
633
308
  | `nv-sim visualize` | Interactive control platform |
309
+ | `nv-sim enforce [preset] [rules.txt]` | Policy enforcement lab |
634
310
  | `nv-sim compare [preset]` | Baseline vs governed simulation |
635
- | `nv-sim compare --inject event@round,...` | With narrative shocks |
636
311
  | `nv-sim scenario <id>` | Run a named stress scenario |
637
- | `nv-sim scenario <id> --compare` | Cross-world scenario comparison |
638
- | `nv-sim scenarios` | List all available scenarios |
639
- | `nv-sim worlds <a> <b>` | Compare two rule environments |
640
- | `nv-sim chaos [preset] --runs N` | Stress test across randomized scenarios |
641
- | `nv-sim serve --port N` | Local governance runtime for any simulator |
642
- | `nv-sim run <simulator>` | Connect external simulator to governance |
643
- | `nv-sim analyze <file>` | Analyze simulation from file or stdin |
644
- | `nv-sim presets` | List available world presets |
645
-
646
- ## How It Works
647
-
648
- ```
649
- event → narrative propagation → belief shift → agent action → governance → behavioral analysis → outcome
650
- ```
651
-
652
- Five forces shape every simulation:
653
-
654
- 1. **Agent behavior** — traders, voters, regulators, media — each with different risk profiles and strategies
655
- 2. **World rules** — leverage caps, circuit breakers, chokepoints — the constraints that shape what agents can do
656
- 3. **Narrative events** — information shocks that propagate through the system at different speeds
657
- 4. **Perception propagation** — different agents react differently to the same event based on their role and exposure
658
- 5. **Behavioral analysis** — tracks how agents reorganize, producing the before → after evidence that proves rules actually changed the system
659
-
660
- This lets you ask compound questions:
661
-
662
- > What happens if a tanker explodes while Hormuz is closed and leverage is capped at 3x?
663
-
664
- That combination produces very different outcomes than any single factor alone. And when you change one rule — uncap leverage, open a diplomatic channel, add a circuit breaker — you see exactly why the outcome changed.
312
+ | `nv-sim serve --port N` | Governance runtime (HTTP API) |
313
+ | `nv-sim world-from-doc rules.txt` | Generate world from plain English |
314
+ | `nv-sim chaos --runs N` | Stress test (randomized scenarios) |
315
+ | `neuroverse guard --world ./world` | Pipe-mode evaluation |
316
+ | `neuroverse validate --world ./world` | 9 static analysis checks |
317
+ | `neuroverse test --world ./world` | 14 guard simulations + fuzz |
318
+ | `neuroverse redteam --world ./world` | 28 adversarial attacks |
319
+ | `neuroverse playground --world ./world` | Interactive web UI (localhost:4242) |
320
+ | `neuroverse mcp --world ./world` | MCP server for IDE integration |
665
321
 
666
322
  ## Architecture
667
323
 
668
324
  ```
669
- @neuroverseos/governance ← deterministic rule engine
325
+ @neuroverseos/governance ← deterministic rule engine (npm, open source)
670
326
 
671
327
  nv-sim engine ← world rules + narrative injection + swarm simulation
672
328
 
673
- behavioral analysis ← before→after shift detection, trajectory tracking, cross-run comparison
329
+ behavioral analysis ← shift detection, echo detection, drift tracking
674
330
 
675
- audit trail append-only evidence chain (JSONL)
331
+ behavioral insights observed signals vs. integration blind spots
676
332
 
677
- nv-sim CLI scenarios, comparison, chaos testing, governance runtime
333
+ audit trail append-only evidence chain (JSONL)
678
334
 
679
- control platform interactive browser UI + outcome panels
335
+ nv-sim CLI + UI scenarios, comparison, governance runtime, control platform
680
336
 
681
337
  AI providers (optional) ← BYOM: Anthropic, OpenAI, Groq, local LLMs, or none
682
-
683
- world variants ← saved experiments as shareable assets
684
338
  ```
685
339
 
686
- Everything runs locally. NV-SIM uses [`@neuroverseos/governance`](https://www.npmjs.com/package/@neuroverseos/governance) for deterministic guard evaluation — no LLM, no cloud, no cost. Your agents call `localhost`, and the world file decides what's allowed.
687
-
688
- AI is optional. When present, it's governed — subject to the same rules as any other actor in the system.
340
+ Everything runs locally. No cloud. No accounts. No cost.
689
341
 
690
342
  ## License
691
343