@neuroverseos/nv-sim 0.1.9 → 0.1.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +187 -535
- package/connectors/nv_mirofish_wrapper.py +841 -0
- package/connectors/nv_scienceclaw_wrapper.py +453 -0
- package/dist/adapters/scienceclaw.js +52 -2
- package/dist/assets/index-CH_VswRM.css +1 -0
- package/dist/assets/index-sT4b_z7w.js +686 -0
- package/dist/assets/{reportEngine-D2ZrMny8.js → reportEngine-Bu8bB5Yq.js} +1 -1
- package/dist/connectors/nv-scienceclaw-post.js +363 -0
- package/dist/engine/aiProvider.js +82 -3
- package/dist/engine/analyzer.js +12 -24
- package/dist/engine/cli.js +89 -114
- package/dist/engine/dynamicsGovernance.js +4 -0
- package/dist/engine/fullGovernedLoop.js +16 -1
- package/dist/engine/goalEngine.js +3 -4
- package/dist/engine/governance.js +18 -0
- package/dist/engine/index.js +19 -28
- package/dist/engine/intentTranslator.js +281 -0
- package/dist/engine/liveAdapter.js +100 -18
- package/dist/engine/liveVisualizer.js +2071 -1023
- package/dist/engine/primeRadiant.js +2 -8
- package/dist/engine/reasoningEngine.js +2 -7
- package/dist/engine/scenarioCapsule.js +5 -5
- package/dist/engine/swarmSimulation.js +1 -9
- package/dist/engine/universalAdapter.js +371 -0
- package/dist/engine/worldBridge.js +22 -8
- package/dist/index.html +2 -2
- package/dist/lib/reasoningEngine.js +17 -1
- package/dist/lib/simulationAdapter.js +11 -11
- package/dist/lib/swarmParser.js +1 -1
- package/dist/runtime/govern.js +160 -7
- package/dist/runtime/index.js +1 -4
- package/dist/runtime/types.js +91 -0
- package/package.json +23 -6
- package/dist/adapters/mirofish.js +0 -461
- package/dist/assets/index-B64NuIXu.css +0 -1
- package/dist/assets/index-BMkPevVr.js +0 -532
- package/dist/assets/mirotir-logo-DUexumBH.svg +0 -185
- package/dist/engine/mirofish.js +0 -295
package/README.md
CHANGED
@@ -6,686 +6,338 @@
 npx @neuroverseos/nv-sim visualize
 ```
 
-##
+## Put Governance Inside Your Agent Loop — One Line
 
-
-
-*Why did the agents do that?*
-
-Nobody can tell you. The metrics say *what* happened. The logs say *when* it happened. But nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
-
-So you rerun. You tweak. You guess. You stare at dashboards full of numbers that describe the system but never explain it.
-
-That's the gap.
-
-## What NV-SIM Gives You That Nothing Else Does
-
-NV-SIM doesn't predict outcomes. It shows you **why agents behaved the way they did** — and what happens when you change the rules.
-
-You change one constraint — block panic selling, cap leverage at 3x, close a shipping lane — and the system shows you:
-
-- **Before → After proof**: "80% of agents shifted from panic selling to coordinated holding"
-- **Emergent patterns**: "Panic suppression appeared — not programmed, not predicted"
-- **Causal chains**: "Agents became more cautious after early aggressive attempts failed"
-- **Quantified outcomes**: "Volatility dropped 21%, cascade avoided"
+Your agents already have a decide → act loop. Insert one call between them:
 
 ```
-
-
-→ New patterns emerged
-→ System outcome changed
-→ You know exactly why
+Before: action = agent.decide()
+After:  action = govern(agent.decide())
 ```
 
-
-
-### For researchers
-
-You get **controlled behavioral experiments** on complex systems. Same agents, different rules, measured side by side. The output isn't a chart — it's proof of what changed and why.
-
-### For developers
+### Option A: Pipe mode (any language, zero SDK)
 
-
-
-### For both
-
-You get something that didn't exist before: **rule-to-behavior causation**. Not correlation. Not post-hoc analysis. Direct, traceable proof that changing rule X caused agents to shift from behavior A to behavior B.
-
-## The Demo Moment
-
-**Before:**
-```
-panic_sell → panic_sell → panic_sell
-pressure: 0.94
-system: unstable
-```
-
-**Rule added:** no panic selling
-
-**After:**
+```bash
+my_agent | neuroverse guard --world ./world --trace
 ```
-Market stabilized as agents shifted toward safer positions
-
-panic_sell → hold
-panic_sell → hold
-panic_sell → hold
-
-80% of agents shifted from aggressive to conservative strategies
-Uncertainty dropped 34% as agents moved from exploration to caution
 
-
+Every action your agent emits gets evaluated. Blocked actions return `{"status":"BLOCK","reason":"..."}`. Allowed actions pass through. Your agent reads the verdict and adapts.
 
-
-Pattern: panic_suppression
+### Option B: HTTP call (any framework)
 
-
-
+```bash
+npx @neuroverseos/nv-sim serve
 ```
 
-
-
-
-
-Most simulation tools answer: *"What will happen?"*
-
-NV-SIM answers: *"What changes when I change the rules — and why?"*
-
-You're not forecasting. You're running controlled behavioral experiments. Every simulation produces:
-
-| What You Get | What It Proves |
-|---|---|
-| **Outcome statement** | System state + dominant agent behavior in one sentence |
-| **Behavioral shifts** | Before → after for every agent group, with percentages |
-| **Causal explanation** | Why agents changed — in their experience, not system jargon |
-| **Confidence rating** | How strong the evidence is, how much risk remains |
-| **Full audit trail** | Every decision, every rule, every adaptation — append-only |
-
-The output is designed to be specific, narrative, and shareable. Not "40% adjusted actions" — but "40% shifted from aggressive to conservative strategies after early attempts failed."
-
-## Design the Rules Once. Run Them Anywhere.
-
-NV-SIM is a runtime, not just a simulator. It runs in two modes:
+```python
+for agent in agents:
+    action = agent.decide()
 
-
-
+    verdict = requests.post("http://localhost:3456/api/evaluate", json={
+        "actor": agent.id,
+        "action": action,
+    }).json()
 
-
+    if verdict["decision"] == "BLOCK":
+        action = "hold"
+    elif verdict["decision"] == "MODIFY":
+        action = verdict["modified_action"]
 
+    environment.apply(agent, action)
 ```
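The verdict branch in the added Python loop above can be exercised without a running server; a minimal sketch of the same BLOCK/MODIFY handling (the `apply_verdict` helper and its `"hold"` fallback are illustrative, not part of the published package — only the `decision` and `modified_action` fields come from the README text above):

```python
def apply_verdict(action, verdict, fallback="hold"):
    """Apply an /api/evaluate verdict to a proposed action.

    Mirrors the branch in the README's loop: BLOCK falls back to a
    safe action, MODIFY substitutes the engine's rewrite, anything
    else (e.g. ALLOW) passes the original action through unchanged.
    """
    decision = verdict.get("decision")
    if decision == "BLOCK":
        return fallback
    if decision == "MODIFY":
        return verdict["modified_action"]
    return action

# Offline check, no server needed:
print(apply_verdict("panic_sell", {"decision": "BLOCK"}))    # hold
print(apply_verdict("hold", {"decision": "ALLOW"}))          # hold
```

Keeping this logic in one helper makes the fallback policy explicit instead of scattering it across the agent loop.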
-Simulate → internal swarm engine (built-in agents, instant results)
-Act → your system (any framework, any language, one HTTP call)
-```
-
-The same world file can simulate a crisis, govern a live system, and produce comparable outcomes across both. This means the experiments you run in simulation directly translate to the rules you deploy in production.
-
-## Behavioral Analysis — The Proof Layer
 
-
+### Option C: Direct import (TypeScript/JavaScript)
 
-
-
-- **Action classification** — each agent action categorized as aggressive, defensive, cautious, cooperative, opportunistic, or neutral
-- **Agent trajectories** — each agent's behavior traced across rounds, showing when and how they shifted
-- **Behavioral shifts** — the exact moment agents changed strategy, with before → after
-- **Cross-run comparison** — same agents under different rules, measured side by side
+```typescript
+import { evaluateGuard, loadWorld } from '@neuroverseos/governance';
 
+const world = await loadWorld('./world/');
+const verdict = evaluateGuard({ intent, tool, scope }, world);
+if (verdict.status === 'BLOCK') throw new Error(`Blocked: ${verdict.reason}`);
 ```
-BEHAVIORAL ANALYSIS
-
-Action Distribution:
-aggressive: 12% (was 67%)
-cooperative: 41% (was 8%)
-cautious: 31% (was 11%)
-defensive: 16% (was 14%)
 
-
-→ 80% shifted from aggressive to cooperative after round 3
-→ Panic selling replaced by coordinated holding
-→ New pattern: quality_competition (not present in baseline)
+### Option D: MCP server (Claude, Cursor, Windsurf)
 
-
-
+```bash
+neuroverse mcp --world ./world --plan plan.json
 ```
 
-
+One command. Same rules govern your agents whether you're simulating or shipping.
 
-
+---
 
-
+## The Problem
 
-
-AUDIT TRAIL (session: 2026-03-18T14:22:00)
+You build a multi-agent system. You run it. You get metrics — loss curves, reward signals, completion rates. Something goes wrong, and you ask:
 
-
-rule: no_panic_selling (invariant)
-evidence: action matches blocked pattern during high volatility
+*Why did the agents do that?*
 
-
-adapted: true (shifted from aggressive to defensive)
+Nobody can tell you. Metrics say *what* happened. Logs say *when*. Nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
 
-
-before: aggressive | after: cautious
-trigger: early aggressive attempts failed at round 3
-```
+Most multi-agent systems let agents do whatever emerges. NeuroVerse lets *you* decide what's allowed — and actually stops the rest.
 
-
+## How It Works
 
-
+### Step 1: Describe what matters (plain English)
 
-
+**What should agents explore?**
+> "Protein mutations that improve binding affinity for SSTR2"
 
-
+**What should never be published?**
+> "Results based on a single data source. Claims with confidence below 70%."
 
-
-
-```
-
-### Option 2: Write your rules, we build the world
+**What makes a result valuable?**
+> "Multiple independent lines of evidence converging on the same finding."
 
-
+**How should agents be rewarded or penalized?**
+> IF an agent publishes without peer validation → reduce its influence for 3 rounds
+> IF two agents independently converge on the same finding → boost that finding's priority
 
-
-# my-rules.txt
-Limit any agent to 15% of total posts per round
-Block coordinated posting from 3+ agents
-Dampen sentiment shifts larger than 0.3 per round
-Require source attribution for factual claims
-```
+### Step 2: Build World — rules become enforceable
 
-
+Click "Build World" and your plain English becomes enforceable logic:
 
-```bash
-npx @neuroverseos/nv-sim world-from-doc my-rules.txt --output my-world.json
 ```
-
-
-
-
-
-```bash
-npx @neuroverseos/nv-sim serve --world my-world.json
+BLOCK      Results with confidence below 70%
+BLOCK      Results from a single source
+PRIORITIZE Multi-source convergence
+PENALIZE   Publishing without validation → reduce influence, 3 rounds
+REWARD     Independent convergence → boost priority, 5 rounds
 ```
 
-
-
-## Two Template Worlds
-
-NV-SIM ships with two complete governed worlds. These are templates — starting points for your own experiments.
+**Works without AI.** The deterministic engine uses heuristics to translate your intent into rules — no API key, no cost, no cloud.
 
-
+**Better with AI.** Add your API key (Anthropic, OpenAI, Google, Groq, or any OpenAI-compatible endpoint) and the engine generates smarter, more specific rules. Key stays in your browser — never sent to our servers.
 
-
+If your policy has conflicts, the inline diagnostics show you exactly what's wrong with one-click fix buttons:
 
-
-
-
-
-
-| Echo Chamber Strength | none → forming → established → dominant |
-| Active Agent % (0-100) | What % of agents participate per round |
-| Viral Amplification Threshold | How many interactions before amplification kicks in |
-
-**Default rules:**
-- Limit any agent to 15% of total posts per round
-- Dampen sentiment shifts > 0.3 per round
-- Block coordinated posting (same content from 3+ agents)
-- Require source attribution for factual claims
-- Monitor opinion diversity — alert below 30
+```
+ERROR Conflicting rules: RULE-002 vs RULE-003
+Fix: Resolve conflict — remove one rule, or add a condition
+[Merge into single rule] ← click to fix
+```
 
-
+### Step 3: Evaluate output — see what holds up
 
-
+Upload your agent output (JSONL) or load demo data. The engine evaluates every action against your rules and shows:
 
-
+- **Per-action verdicts** — ALLOW, BLOCK, MODIFY, PAUSE, REWARD, or PENALIZE
+- **Audit trail** — per-agent breakdown, rule firing frequency, timeline by cycle
+- **Behavioral insights** — two columns side by side:
 
-|
+| Observed (from your data) | Requires Integration (blind spots) |
 |---|---|
-|
-|
-|
-|
-| Publication Readiness % | How close to publication-ready |
+| Agent X fails 75% of the time | Did Agent X change strategy after being blocked? |
+| "No sources" triggered 8x across 3 agents | Is this systemic or isolated? |
+| Agents A and B produced identical output | Independent convergence or echo amplification? |
+| Quality degrading over 5 cycles | Drift or deliberate strategy shift? |
 
-
-- Literature search must return 2+ peer-reviewed sources before analysis
-- Claims must cite specific sources — unsupported assertions blocked
-- Publication requires confidence > 0.7 and validated hypothesis
-- Cross-referencing must compare 3+ independent sources
-- Recommendations must include uncertainty language when confidence < 0.9
+The left column is computed from real audit data. The right column tells you what you can only answer by putting governance inside the loop.
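The "Observed" column above is the kind of statistic computable directly from audit JSONL; a minimal Python sketch of a per-agent BLOCK rate (the record fields `actor` and `decision` are assumed for illustration, not the package's documented schema):

```python
import json
from collections import Counter

# Hypothetical audit records in a JSONL-like shape; field names
# ("actor", "decision", "rule") are assumptions, not a fixed schema.
audit_jsonl = """\
{"actor": "agent_x", "decision": "BLOCK", "rule": "no_sources"}
{"actor": "agent_x", "decision": "BLOCK", "rule": "no_sources"}
{"actor": "agent_x", "decision": "BLOCK", "rule": "low_confidence"}
{"actor": "agent_x", "decision": "ALLOW", "rule": null}
{"actor": "agent_y", "decision": "ALLOW", "rule": null}
"""

def block_rates(lines):
    """Per-agent BLOCK rate — an 'Observed' statistic like the left column's."""
    totals, blocks = Counter(), Counter()
    for line in lines.splitlines():
        rec = json.loads(line)
        totals[rec["actor"]] += 1
        if rec["decision"] == "BLOCK":
            blocks[rec["actor"]] += 1
    return {actor: blocks[actor] / totals[actor] for actor in totals}

print(block_rates(audit_jsonl))  # {'agent_x': 0.75, 'agent_y': 0.0}
```

The right-hand "Requires Integration" questions are exactly the ones this kind of offline aggregation cannot answer, which is the README's point.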
 
-
+### Step 4: Change one rule. Run again.
 
-
+Remove the confidence threshold. What breaks? Add a rule penalizing groupthink. Do agents explore more diverse hypotheses?
 
-
-npx @neuroverseos/nv-sim worlds social_simulation science_research
-```
+**This is the experiment.** Not the simulation — the rules themselves.
 
-
-
-## Narrative Shocks
-
-Inject events into running simulations. Different agents react differently to the same event.
+## Install
 
 ```bash
-
+npm install
+npm run dev:full
 ```
 
-
-
-### Social simulation events
-`viral_misinfo`, `influencer_stance_change`, `algorithm_change`, `external_news_event`, `coordinated_campaign`, `whistleblower_post`
-
-### Research events
-`search_literature`, `analyze_findings`, `cross_reference`, `unsupported_claim`, `hypothesis_validated`, `publish_result`
-
-## Named Scenarios
+Opens in your browser. Everything runs locally. Light and dark mode included.
 
-
+### Governance engine (standalone)
 
 ```bash
-
-npx @neuroverseos/nv-sim scenario research_pipeline
-npx @neuroverseos/nv-sim scenarios # list all
+npm install @neuroverseos/governance
 ```
 
-
-|----------|-------|--------|---------------|
-| `echo_chamber` | social_simulation | 3 | Opinion diversity collapses into self-reinforcing groups |
-| `influence_monopoly` | social_simulation | 3 | Small group dominates discourse |
-| `sentiment_spiral` | social_simulation | 4 | Negativity feeds on itself until unrealistic |
-| `platform_shock` | social_simulation | 3 | Algorithm change reshapes engagement overnight |
-| `research_pipeline` | science_research | 6 | Full research workflow with governance at each step |
+The governance engine is a separate open-source package. The simulation UI uses it, but you can use it independently in any system.
 
-
+## Validate Your Policy (CLI)
 
-
+The governance package includes validation that runs the same checks as the browser UI:
 
 ```bash
-
-
-
-This opens a control surface where you can:
+# Initialize a world definition
+neuroverse init --name "my-research-agents"
 
-
-
-- Inject narrative events at specific rounds
-- Load crisis scenarios with one click
-- Watch the **Outcome Panel** — not a log of what happened, but a story of what the system became and why
-- Save any experiment as a reusable variant
+# Validate your world (9 static analysis checks)
+neuroverse validate --world ./world
 
-
+# Run 14 standard guard simulations + fuzz testing
+neuroverse test --world ./world
 
-
-
-```
-◆ OUTCOME
-
-Market stabilized as agents shifted toward safer positions
-
-Confidence: Strong | Evidence: Solid | Risk: Low
-
-┌─ What Agents Did ────────────────┐
-│ 80% shifted from aggressive to   │
-│ conservative strategies          │
-│ 12% reduced position size after  │
-│ initial attempts failed          │
-│ 8% maintained original strategy  │
-└──────────────────────────────────┘
-
-┌─ Why This Happened ──────────────┐
-│ Early aggressive attempts failed,│
-│ forcing agents to rethink.       │
-│ Uncertainty dropped as agents    │
-│ stopped experimenting.           │
-└──────────────────────────────────┘
-
-┌─ What Emerged ───────────────────┐
-│ Coordinated Holding              │
-│ Panic Suppression                │
-└──────────────────────────────────┘
-
-┌─ System Outcome ─────────────────┐
-│ Volatility 47% → 26%             │
-│ Stability 58% → 79%              │
-│ Cascade Avoided                  │
-└──────────────────────────────────┘
-
-▶ View audit trail
+# Red team: 28 adversarial attacks across 6 categories
+neuroverse redteam --world ./world
 ```
 
-
+Validation checks: structural completeness, referential integrity, guard coverage, gate consistency, kernel alignment, guard shadowing, reachability analysis, state space coverage, and governance health scoring.
 
-
+## Integration — Put This In Your Loop
 
-
+### Pipe mode (any language)
 
+```bash
+echo '{"intent":"delete user data"}' | neuroverse guard --world ./world --trace
+# → {"status":"BLOCK","reason":"...","ruleId":"..."}
 ```
-Adjust rules → Inject events → Run → See what changed → Save as variant
-```
-
-Variants capture the base world, state overrides, narrative events, and results. Store them in git. Share them. Replay them. This turns experiments into assets.
 
-
+Pipe your agent's output through `neuroverse guard`. Every action gets evaluated. Works with Python, Rust, Go, shell scripts — anything that writes to stdout.
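A pipe-mode producer is just a process that writes one JSON action per line to stdout; a minimal Python stub (the `actor`/`intent` fields are illustrative — the guard's exact input schema is not specified in this README):

```python
import json

def emit_actions(actions):
    """Write one compact JSON object per line to stdout.

    This is the producer side of pipe mode: any process whose stdout
    looks like this can be piped into a line-oriented evaluator.
    """
    lines = [json.dumps(action, separators=(",", ":")) for action in actions]
    for line in lines:
        print(line)
    return lines

emit_actions([
    {"actor": "agent_1", "intent": "publish result"},
    {"actor": "agent_2", "intent": "delete user data"},
])
```

The consumer would then read a verdict line such as `{"status":"BLOCK","reason":"..."}` back on its own stdin, per the pipe-mode description above.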
 
 ```bash
-
-
-
-This starts a local server. Any simulator, agent framework, or application can POST actions and get decisions back.
+# Govern a Python agent
+python my_agent.py | neuroverse run --world ./world --plan plan.json
 
+# Interactive governed chat
+neuroverse run --interactive --world ./world --provider openai --plan plan.json
 ```
-Endpoint: http://localhost:3456/api/evaluate
-Method: POST
-Contract: { actor, action, payload?, state?, world? }
-Response: { decision: ALLOW|BLOCK|MODIFY, reason, evidence }
-```
-
-Your agents call localhost. The world file decides what's allowed. No cloud. No cost.
-
-Additional endpoints:
 
-
-|----------|-------------|
-| `POST /api/evaluate` | Submit an action for evaluation |
-| `GET /api/session` | Current session stats |
-| `GET /api/session/report` | Full session report |
-| `POST /api/session/reset` | Reset session state |
-| `POST /api/session/save` | Save session as experiment |
-| `GET /api/events` | SSE stream of live events |
+### HTTP mode (any framework)
 
-
-
-If your system has actions, you can govern them. One API call.
-
-```
-Agent decides → POST /api/evaluate → verdict → agent adapts
+```bash
+npx @neuroverseos/nv-sim serve --port 3456
 ```
 
-The entire integration:
-
 ```
-
-
+POST /api/evaluate
+Body: { actor, action, payload?, state?, world? }
+Returns: { decision: ALLOW|BLOCK|MODIFY, reason, evidence }
 ```
 
-No SDK required. No framework required. Just an HTTP call.
-
-**curl** (zero dependencies):
-
 ```bash
+# Zero-dependency test
 curl -X POST http://localhost:3456/api/evaluate \
   -H "Content-Type: application/json" \
   -d '{"actor":"agent_1","action":"panic_sell","world":"trading"}'
 ```
 
-
+### Direct import (TypeScript)
 
-```
-import
+```typescript
+import { evaluateGuard, loadWorld } from '@neuroverseos/governance';
 
-
-"actor": "agent_1",
-"action": "panic_sell",
-"world": "trading"
-}).json()
+const world = await loadWorld('./world/');
 
-
-
-
+for (const agent of agents) {
+  const action = agent.decide();
+  const verdict = evaluateGuard({ intent: action.intent, tool: action.tool, scope: action.scope }, world);
 
-
-
-
-
-
-
-body: JSON.stringify({ actor: "agent_1", action: "panic_sell", world: "trading" })
-}).then(r => r.json());
-
-if (verdict.decision === "BLOCK") action = "hold";
+  if (verdict.status === 'BLOCK') {
+    agent.retry(verdict.reason);
+  } else {
+    agent.execute(action);
+  }
+}
 ```
 
-
-
-```python
-for agent in agents:
-    action = agent.decide()
-
-    verdict = evaluate(actor=agent.id, action=action, world="trading")
-
-    if verdict["decision"] == "BLOCK":
-        action = "hold"
-    elif verdict["decision"] == "MODIFY":
-        action = verdict["modified_action"]
-
-    environment.apply(agent, action)
-```
-
-If your system can make an HTTP request, it can be governed.
-
-Most systems generate behavior. This one shapes it.
-
-See [INTEGRATION.md](./INTEGRATION.md) for the full API contract, framework guides, and decision types.
-
-## Policy Enforcement — The Experiment Loop
-
-Write rules in plain English. Run the same scenario. See what changes. Adjust and repeat.
-
-### Step 1: See it work (zero config)
+### MCP server (Claude Code, Cursor, Windsurf)
 
 ```bash
-
-```
-
-Runs three iterations automatically: no rules → light rules → full rules. You see divergence immediately.
-
-### Step 2: Write your own rules
-
-Create a text file. That's it.
-
+neuroverse mcp --world ./world --plan plan.json
 ```
-# my-rules.txt
 
-
-Limit leverage to 5x
-Maintain minimum liquidity floor
-Slow down algorithmic trading when contagion spreads
-```
+Your IDE's AI assistant becomes a governed agent. Same rules, same verdicts.
 
-###
+### Plan management
 
 ```bash
-
+neuroverse plan compile plan.md --output plan.json
+neuroverse plan check --plan plan.json
+neuroverse plan advance step_id --plan plan.json --evidence type --proof url
 ```
 
-
-
-### Step 4: Change a rule. Run again.
+## Engine Profiles
 
-
+The simulation UI ships with pre-built profiles for common agent systems:
 
-
+| Engine | What It Governs | Example |
+|---|---|---|
+| ScienceClaw | Research agents | Block synthesis with no papers, penalize unsourced claims |
+| MiroFish / OASIS | Social simulation | Limit influence concentration, dampen sentiment spirals |
+| LangChain / LangGraph | LLM agent chains | Cap tool calls, require validation before output |
+| Custom | Any system | Auto-detects field mappings from your JSONL |
 
-
-
-```
-RULE CHANGES
-Run 2:
-+ Block panic selling during high volatility
-+ Slow down algorithmic trading when contagion spreads
-- Limit leverage to 5x
-
-DIVERGENCE ANALYSIS
-Stability trend: 79% → 98%
-Effectiveness trend: 11% → 32%
-
-KEY INSIGHT
-Removing the leverage cap caused agents to take larger positions — but the
-panic selling block forced them to hold through volatility instead of exiting.
-Net effect: more risk-taking, but more stability.
-
-TRY THIS EXPERIMENT
-Remove "Block panic selling during high volatility" from your rules file, then run again.
-If stability drops, that rule was load-bearing. If nothing changes, it was noise.
-```
-
-### Step 5: Compare two rule sets side by side
-
-```bash
-npx nv-sim enforce trading light-rules.txt strict-rules.txt
-```
-
-### Rule patterns
-
-The engine understands these patterns in plain English:
-
-| Pattern | What it does | Example |
-|---------|-------------|---------|
-| `Block X` | Hard suppression of matching actions | `Block panic selling` |
-| `Limit X` / `Cap X` | Caps extreme positions | `Limit leverage to 5x` |
-| `Slow X` / `Dampen X` | Reduces large movements | `Slow down algorithmic trading` |
-| `Maintain X` / `Floor X` | Enforces minimum thresholds | `Maintain minimum liquidity` |
-| `Rebalance X` | Pulls extremes toward equilibrium | `Rebalance correlated positions` |
-| `Require X` | Enforceable structural constraint | `Require transparency for large trades` |
-| `Monitor X` | Generates a circuit breaker gate | `Monitor contagion spread` |
-
-### Other scenarios
-
-```bash
-npx nv-sim enforce strait_of_hormuz my-rules.txt # Same rules, different scenario
-npx nv-sim enforce ai_regulation_crisis # Default progressive run
-npx nv-sim enforce trading --output=report.json # Save as JSON
-```
-
-### Advanced: JSON world files
-
-For full control over gates, state variables, and thesis, use JSON world files. See `examples/worlds/` for templates. Enforce accepts both `.txt` and `.json` — mix and match.
+Each profile maps your system's output format to the governance engine's action schema automatically.
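The "Custom" profile row above describes auto-detected field mappings; conceptually that is a rename from your log's schema into the evaluator's action shape. A hypothetical sketch (both the mapping keys and the `{actor, action}` target shape are assumptions made for illustration, not the package's real profile format):

```python
# Hypothetical field map: source field name -> governed-action field name.
FIELD_MAP = {"agent_name": "actor", "tool_call": "action"}

def map_record(record, field_map=FIELD_MAP):
    """Translate one log record into the governed-action shape,
    dropping fields the evaluator does not know about."""
    return {target: record[source]
            for source, target in field_map.items()
            if source in record}

print(map_record({"agent_name": "scout_3", "tool_call": "search_literature", "ts": 17}))
# {'actor': 'scout_3', 'action': 'search_literature'}
```

Auto-detection would amount to inferring `FIELD_MAP` from a sample of the JSONL instead of hard-coding it.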
564
265
|
|
|
565
266
|
## AI Providers — Bring Your Own Model

- AI is optional.
+ AI is optional. The deterministic engine runs on math, not tokens. When you bring your own model, AI becomes a governed actor — subject to the same rules as every other agent.

+ | Provider | Key / Env Var | Auto-detected |
+ |---|---|---|
+ | Anthropic (Claude) | `sk-ant-*` / `ANTHROPIC_API_KEY` | Yes |
+ | OpenAI | `sk-*` / `OPENAI_API_KEY` | Yes |
+ | Google (Gemini) | `AIza*` | Yes |
+ | Groq | `gsk_*` / `GROQ_API_KEY` | Yes |
+ | Together | `TOGETHER_API_KEY` | Yes |
+ | Mistral | `MISTRAL_API_KEY` | Yes |
+ | Deepseek | `DEEPSEEK_API_KEY` | Yes |
+ | Fireworks | `FIREWORKS_API_KEY` | Yes |
+ | Ollama | `OLLAMA_BASE_URL` | Yes |
+ | Local LLM | `LOCAL_LLM_URL` | Yes |
+ | (none) | — | Deterministic fallback |

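Detection is purely env-driven. The engine's actual precedence when several keys are present is internal to nv-sim; the sketch below only illustrates the first-match pattern (the ordering is an assumption, not documented behavior):

```shell
# Assumed first-match provider detection; the if/elif ordering is
# illustrative, not the engine's documented precedence.
detect_provider() {
  if   [ -n "${ANTHROPIC_API_KEY:-}" ]; then echo "anthropic"
  elif [ -n "${OPENAI_API_KEY:-}" ];    then echo "openai"
  elif [ -n "${GROQ_API_KEY:-}" ];      then echo "groq"
  elif [ -n "${OLLAMA_BASE_URL:-}" ];   then echo "ollama"
  else echo "deterministic"   # no key at all: math-only fallback, no cost
  fi
}

unset ANTHROPIC_API_KEY OPENAI_API_KEY GROQ_API_KEY OLLAMA_BASE_URL
detect_provider   # with no keys set, falls back to the deterministic engine
```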
- AI plays two governed roles:
-
- | Role | What It Does | Constraints |
- |------|-------------|-------------|
- | `ai_translator` | Converts unstructured input into normalized events | Must output valid schema, no invention of events, must include confidence |
- | `ai_analyst` | Generates reports from simulation traces | Must reference trace data, must include blocked actions, no unverifiable claims |
-
- Both roles go through `/api/evaluate` like any other actor. The AI doesn't control the system — the system controls the AI.
-
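A submission to that gate is an ordinary HTTP POST. The field names below are illustrative only; the real `/api/evaluate` schema is defined by the runtime, not by this sketch:

```shell
# Hypothetical evaluate payload; field names are illustrative, not the
# documented schema of /api/evaluate.
payload='{"actor":"ai_translator","event":"rate_cut_rumor","confidence":0.82}'

# With the runtime up (nv-sim serve --port 4242) this would be posted as:
#   curl -s -X POST http://localhost:4242/api/evaluate \
#        -H 'Content-Type: application/json' -d "$payload"
echo "$payload" | python3 -c 'import json,sys; print(json.load(sys.stdin)["actor"])'
# → ai_translator
```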
- ### Supported providers
-
- NV-SIM auto-detects the best available provider from your environment:
-
- | Provider | Env Var | What It Connects To |
- |----------|---------|---------------------|
- | Anthropic (Claude) | `ANTHROPIC_API_KEY` | Claude Sonnet, Opus, Haiku |
- | OpenAI | `OPENAI_API_KEY` | GPT-4, GPT-4o, o1 |
- | Groq | `GROQ_API_KEY` | Llama 3 70B |
- | Together | `TOGETHER_API_KEY` | Llama, Mixtral |
- | Mistral | `MISTRAL_API_KEY` | Mistral Large |
- | Deepseek | `DEEPSEEK_API_KEY` | Deepseek Chat |
- | Fireworks | `FIREWORKS_API_KEY` | Llama, custom models |
- | Ollama | `OLLAMA_BASE_URL` | Any local model |
- | Local LLM | `LOCAL_LLM_URL` | LM Studio, vLLM, llama.cpp |
- | (none) | — | Deterministic fallback (no AI, no cost) |
-
- Set the env var and run. No configuration files. No provider lock-in.
-
- Any endpoint that speaks the OpenAI chat completions format (`POST /v1/chat/completions`) works out of the box.
-
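The chat-completions contract is the whole integration surface for local models. A minimal request body such an endpoint must accept looks like this (the URL and model name are placeholders for whatever your local server exposes):

```shell
# Point nv-sim at any OpenAI-compatible server via LOCAL_LLM_URL, e.g. LM Studio.
export LOCAL_LLM_URL="http://localhost:1234"

# Minimal chat-completions body ("local-model" is a placeholder name):
body='{"model":"local-model","messages":[{"role":"user","content":"Summarize round 3."}]}'

# The request nv-sim would issue against such an endpoint:
#   curl -s "$LOCAL_LLM_URL/v1/chat/completions" \
#        -H 'Content-Type: application/json' -d "$body"
echo "$body" | python3 -c 'import json,sys; print(json.load(sys.stdin)["messages"][0]["role"])'
# → user
```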
- ## Quick Start
-
- ```bash
- # See it
- npx @neuroverseos/nv-sim visualize
+ In the browser UI, click the sparkle icon in the header and paste your key. It's stored in localStorage only.

- npx @neuroverseos/nv-sim compare
+ Any endpoint that speaks the OpenAI chat completions format (`POST /v1/chat/completions`) works.

- npx @neuroverseos/nv-sim scenario taiwan_crisis
+ ## What You Get That Nothing Else Gives You

- npx @neuroverseos/nv-sim scenario bank_run --compare
+ Most simulation tools answer: *"What will happen?"*

- npx @neuroverseos/nv-sim compare --inject tanker_explosion@3,sanctions@5
+ NV-SIM answers: *"What changes when I change the rules — and why?"*

+ | What You Get | What It Proves |
+ |---|---|
+ | **Behavioral shifts** | Before → after for every agent, with percentages |
+ | **Causal explanation** | Why agents changed — traced to specific rules |
+ | **Behavioral insights** | Output tendencies, echo detection, drift, pattern clustering |
+ | **Blind spot analysis** | What you can observe vs. what requires loop integration |
+ | **Full audit trail** | Every decision, every rule, every adaptation — JSONL |

- npx @neuroverseos/nv-sim serve
- ```
+ The output is narrative, not metrics. Not "40% adjusted actions" — but "40% shifted from aggressive to conservative strategies after early attempts failed."

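The audit trail above is JSONL: one self-contained JSON object per line, appended and never rewritten. The field names below are illustrative, not the engine's actual record schema:

```shell
# Append-only evidence chain: each decision is one JSON line (fields illustrative).
rm -f audit.jsonl
echo '{"round":1,"actor":"trader_7","action":"sell_all","verdict":"blocked","rule":"Block panic selling"}' >> audit.jsonl
echo '{"round":2,"actor":"trader_7","action":"hold","verdict":"allowed"}' >> audit.jsonl

# Because the file is line-delimited, standard tools query it directly:
grep -c '"verdict":"blocked"' audit.jsonl   # → 1
```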
## Commands

| Command | What It Does |
- | `nv-sim enforce [preset]` | Policy enforcement lab — iterative rule testing |
+ |---|---|
| `nv-sim visualize` | Interactive control platform |
+ | `nv-sim enforce [preset] [rules.txt]` | Policy enforcement lab |
| `nv-sim compare [preset]` | Baseline vs governed simulation |
- | `nv-sim compare --inject event@round,...` | With narrative shocks |
| `nv-sim scenario <id>` | Run a named stress scenario |
- ## How It Works
-
- ```
- event → narrative propagation → belief shift → agent action → governance → behavioral analysis → outcome
- ```
-
- Five forces shape every simulation:
-
- 1. **Agent behavior** — traders, voters, regulators, media — each with different risk profiles and strategies
- 2. **World rules** — leverage caps, circuit breakers, chokepoints — the constraints that shape what agents can do
- 3. **Narrative events** — information shocks that propagate through the system at different speeds
- 4. **Perception propagation** — different agents react differently to the same event based on their role and exposure
- 5. **Behavioral analysis** — tracks how agents reorganize, producing the before → after evidence that proves rules actually changed the system
-
- This lets you ask compound questions:
-
- > What happens if a tanker explodes while Hormuz is closed and leverage is capped at 3x?
-
- That combination produces very different outcomes than any single factor alone. And when you change one rule — uncap leverage, open a diplomatic channel, add a circuit breaker — you see exactly why the outcome changed.
+ | `nv-sim serve --port N` | Governance runtime (HTTP API) |
+ | `nv-sim world-from-doc rules.txt` | Generate world from plain English |
+ | `nv-sim chaos --runs N` | Stress test (randomized scenarios) |
+ | `neuroverse guard --world ./world` | Pipe-mode evaluation |
+ | `neuroverse validate --world ./world` | 9 static analysis checks |
+ | `neuroverse test --world ./world` | 14 guard simulations + fuzz |
+ | `neuroverse redteam --world ./world` | 28 adversarial attacks |
+ | `neuroverse playground --world ./world` | Interactive web UI (localhost:4242) |
+ | `neuroverse mcp --world ./world` | MCP server for IDE integration |

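The `--inject event@round,...` flag shown earlier packs multiple narrative shocks into one argument. Its grammar splits on commas, then on `@`; the sketch below shows the parse, not the engine's actual parser:

```shell
# The grammar is "event@round[,event@round...]": split on commas, then on "@".
spec='tanker_explosion@3,sanctions@5'
parsed=$(echo "$spec" | tr ',' '\n' | while IFS='@' read -r event round; do
  echo "inject $event at round $round"
done)
echo "$parsed"
# inject tanker_explosion at round 3
# inject sanctions at round 5
```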
## Architecture

```
- @neuroverseos/governance ← deterministic rule engine
+ @neuroverseos/governance ← deterministic rule engine (npm, open source)
        ↓
  nv-sim engine ← world rules + narrative injection + swarm simulation
        ↓
+ behavioral analysis ← shift detection, echo detection, drift tracking
        ↓
+ behavioral insights ← observed signals vs. integration blind spots
        ↓
+ audit trail ← append-only evidence chain (JSONL)
        ↓
+ nv-sim CLI + UI ← scenarios, comparison, governance runtime, control platform
        ↓
  AI providers (optional) ← BYOM: Anthropic, OpenAI, Groq, local LLMs, or none
-       ↓
- world variants ← saved experiments as shareable assets
```

- Everything runs locally.
-
- AI is optional. When present, it's governed — subject to the same rules as any other actor in the system.
+ Everything runs locally. No cloud. No accounts. No cost.

## License
