@neuroverseos/nv-sim 0.1.6 → 0.1.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +374 -123
- package/dist/assets/index-B64NuIXu.css +1 -0
- package/dist/assets/{index-CHmUN8s0.js → index-BMkPevVr.js} +105 -105
- package/dist/assets/{reportEngine-BVdQ2_nW.js → reportEngine-D2ZrMny8.js} +1 -1
- package/dist/engine/chaosEngine.js +3 -9
- package/dist/engine/cli.js +34 -104
- package/dist/engine/index.js +2 -3
- package/dist/engine/liveVisualizer.js +1530 -230
- package/dist/engine/narrativeInjection.js +78 -89
- package/dist/engine/policyEngine.js +171 -58
- package/dist/engine/scenarioCapsule.js +73 -129
- package/dist/engine/scenarioLibrary.js +52 -131
- package/dist/engine/worldComparison.js +12 -25
- package/dist/index.html +2 -2
- package/package.json +1 -1
- package/dist/assets/index-DWgMnB7I.css +0 -1
package/README.md
CHANGED
@@ -2,36 +2,54 @@
 
 **Change the rules. See why the system changed.**
 
-NV-SIM doesn't predict outcomes — it shows how they change when you change the rules.
-
-Define a world. Set the constraints. Run the agents. Then change one rule and watch the entire system reorganize. Not in theory. You see exactly which agents shifted, what patterns emerged, and why the outcome changed.
-
-It feels a lot like a Prime Radiant — except instead of psychohistory, you're running controlled behavioral experiments on complex systems.
-
 ```bash
 npx @neuroverseos/nv-sim visualize
 ```
 
-##
+## The Problem With Agentic Simulation Today
 
-
+You build a multi-agent system. You run it. You get metrics — loss curves, reward signals, completion rates. Something goes wrong, or something goes right, and you ask the only question that matters:
+
+*Why did the agents do that?*
 
-
+Nobody can tell you. The metrics say *what* happened. The logs say *when* it happened. But nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
 
-
+So you rerun. You tweak. You guess. You stare at dashboards full of numbers that describe the system but never explain it.
 
-
-
-
+That's the gap.
+
+## What NV-SIM Gives You That Nothing Else Does
+
+NV-SIM doesn't predict outcomes. It shows you **why agents behaved the way they did** — and what happens when you change the rules.
+
+You change one constraint — block panic selling, cap leverage at 3x, close a shipping lane — and the system shows you:
+
+- **Before → After proof**: "80% of agents shifted from panic selling to coordinated holding"
+- **Emergent patterns**: "Panic suppression appeared — not programmed, not predicted"
+- **Causal chains**: "Agents became more cautious after early aggressive attempts failed"
+- **Quantified outcomes**: "Volatility dropped 21%, cascade avoided"
 
 ```
 Rule changed
-→
-→
+→ Agents shifted strategy
+→ New patterns emerged
 → System outcome changed
+→ You know exactly why
 ```
 
-
+This is behavioral evidence, not metrics. You don't get a number — you get a story you can trace, verify, and share.
+
+### For researchers
+
+You get **controlled behavioral experiments** on complex systems. Same agents, different rules, measured side by side. The output isn't a chart — it's proof of what changed and why.
+
+### For developers
+
+You get **runtime governance** for any agent system. One HTTP call between "agent decides" and "agent acts." Your agents become observable, auditable, and controllable — without rewriting your framework.
+
+### For both
+
+You get something that didn't exist before: **rule-to-behavior causation**. Not correlation. Not post-hoc analysis. Direct, traceable proof that changing rule X caused agents to shift from behavior A to behavior B.
 
 ## The Demo Moment
 
@@ -46,12 +64,17 @@ system: unstable
 
 **After:**
 ```
-
+Market stabilized as agents shifted toward safer positions
 
 panic_sell → hold
 panic_sell → hold
 panic_sell → hold
 
+80% of agents shifted from aggressive to conservative strategies
+Uncertainty dropped 34% as agents moved from exploration to caution
+
+WHY: Agents became more cautious after early attempts failed
+
 Pattern: coordinated_holding
 Pattern: panic_suppression
 
@@ -59,71 +82,222 @@ system: unstable
 cascade: avoided
 ```
 
-You didn't predict this. You caused it — by changing one rule — and
+You didn't predict this. You caused it — by changing one rule — and the system told you exactly why.
+
+## What This Actually Is
+
+Most simulation tools answer: *"What will happen?"*
+
+NV-SIM answers: *"What changes when I change the rules — and why?"*
+
+You're not forecasting. You're running controlled behavioral experiments. Every simulation produces:
+
+| What You Get | What It Proves |
+|---|---|
+| **Outcome statement** | System state + dominant agent behavior in one sentence |
+| **Behavioral shifts** | Before → after for every agent group, with percentages |
+| **Causal explanation** | Why agents changed — in their experience, not system jargon |
+| **Confidence rating** | How strong the evidence is, how much risk remains |
+| **Full audit trail** | Every decision, every rule, every adaptation — append-only |
+
+The output is designed to be specific, narrative, and shareable. Not "40% adjusted actions" — but "40% shifted from aggressive to conservative strategies after early attempts failed."
+
+## Design the Rules Once. Run Them Anywhere.
+
+NV-SIM is a runtime, not just a simulator. It runs in two modes:
+
+- **Simulate** — explore how agents behave under different rules
+- **Act** — govern real agents, real workflows, real decisions in production
+
+The interface is the same. The difference is where the agents come from.
+
+```
+Simulate → internal swarm engine (built-in agents, instant results)
+Act      → your system (any framework, any language, one HTTP call)
+```
+
+The same world file can simulate a crisis, govern a live system, and produce comparable outcomes across both. This means the experiments you run in simulation directly translate to the rules you deploy in production.
+
+## Behavioral Analysis — The Proof Layer
+
+Blocking actions is easy. The hard part is proving what changed and why. NV-SIM doesn't just count actions — it tracks how agents actually reorganized.
+
+Every simulation produces behavioral evidence:
+
+- **Action classification** — each agent action categorized as aggressive, defensive, cautious, cooperative, opportunistic, or neutral
+- **Agent trajectories** — each agent's behavior traced across rounds, showing when and how they shifted
+- **Behavioral shifts** — the exact moment agents changed strategy, with before → after
+- **Cross-run comparison** — same agents under different rules, measured side by side
+
+```
+BEHAVIORAL ANALYSIS
+
+Action Distribution:
+  aggressive:  12% (was 67%)
+  cooperative: 41% (was 8%)
+  cautious:    31% (was 11%)
+  defensive:   16% (was 14%)
 
-
+Shifts Detected:
+  → 80% shifted from aggressive to cooperative after round 3
+  → Panic selling replaced by coordinated holding
+  → New pattern: quality_competition (not present in baseline)
 
-
+Trajectory: agent_hedge_fund_1
+  Round 1: aggressive → Round 2: aggressive → Round 3: [shifted] → Round 4: cautious → Round 5: cooperative
+```
+
+The behavioral shift is the insight. You changed a rule, and the system tells you exactly who changed, when they changed, and what they changed to.
+
+## Audit Trail — Full Evidence Chain
 
-
+Every decision is recorded in an append-only audit log. Every rule, every agent action, every adaptation — persistent and queryable.
 
 ```
-
-
-
+AUDIT TRAIL (session: 2026-03-18T14:22:00)
+
+agent_hedge_fund_1 → attempted panic_sell → blocked
+  rule: no_panic_selling (invariant)
+  evidence: action matches blocked pattern during high volatility
+
+agent_hedge_fund_1 → shifted to hold
+  adapted: true (shifted from aggressive to defensive)
+
+BEHAVIORAL SHIFT: agent_hedge_fund_1
+  before: aggressive | after: cautious
+  trigger: early aggressive attempts failed at round 3
 ```
 
-
+Stored as JSONL — one JSON object per line, human-readable, pipeable through `jq`. No cloud, no deletion. Complete evidence chain.
 
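The JSONL format described in the added line above is easy to exploit programmatically. A minimal Node sketch, assuming each log entry carries `actor`, `action`, and `decision` fields (an assumption modeled on the `/api/evaluate` response contract elsewhere in this README, not a documented log schema):

```javascript
// Sketch: filter blocked actions out of an NV-SIM-style JSONL audit log.
// Field names (actor, action, decision) are assumptions, not a documented schema.
const log = [
  '{"actor":"agent_hedge_fund_1","action":"panic_sell","decision":"BLOCK"}',
  '{"actor":"agent_hedge_fund_1","action":"hold","decision":"ALLOW"}',
].join('\n');

function blockedActions(jsonl) {
  return jsonl
    .split('\n')
    .filter(Boolean)                  // skip empty lines
    .map((line) => JSON.parse(line))  // one JSON object per line
    .filter((entry) => entry.decision === 'BLOCK')
    .map((entry) => `${entry.actor}:${entry.action}`);
}

console.log(blockedActions(log)); // → [ 'agent_hedge_fund_1:panic_sell' ]
```

The same filter is what `jq 'select(.decision == "BLOCK")'` would express on the command line, if the real entries use a `decision` field.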
-
-
-
-
-
-
+## Start Here — Define Your World
+
+NV-SIM ships with two template worlds. But the real power is **making your own**.
+
+### Option 1: Start from a template
+
+```bash
+npx @neuroverseos/nv-sim visualize   # Pick a template, adjust, run
+```
+
+### Option 2: Write your rules, we build the world
+
+Create a text file with your rules in plain English:
+
+```
+# my-rules.txt
+Limit any agent to 15% of total posts per round
+Block coordinated posting from 3+ agents
+Dampen sentiment shifts larger than 0.3 per round
+Require source attribution for factual claims
+```
+
+Then generate a full governed world from it:
+
+```bash
+npx @neuroverseos/nv-sim world-from-doc my-rules.txt --output my-world.json
+```
+
+This doesn't just parse your rules — it generates a complete governed world: state variables, gates, invariants, thesis, and agent types. The same structure as the built-in templates. Your world is equal to ours.
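The README names the pieces a generated world contains (state variables, gates, invariants, thesis, agent types) but not their schema. Purely as a hypothetical illustration, `my-world.json` might contain something like the following; every field name here is an assumption, not the package's documented format:

```json
{
  "name": "my-world",
  "thesis": "Rate limits and attribution requirements prevent echo chambers",
  "state": { "opinion_diversity": 65, "sentiment_polarity": 40 },
  "gates": [{ "rule": "max_posts_per_agent_share", "limit": 0.15 }],
  "invariants": ["no_coordinated_posting", "source_attribution_required"],
  "agentTypes": ["influencer", "casual_poster", "coordinated_cluster"]
}
```

Inspect the real output of `world-from-doc` before relying on any of these keys.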
+
+### Option 3: Upload a .nv world file
+
+```bash
+npx @neuroverseos/nv-sim serve --world my-world.json
+```
+
+Load any saved world file directly into the runtime.
+
+## Two Template Worlds
+
+NV-SIM ships with two complete governed worlds. These are templates — starting points for your own experiments.
+
+### `social_simulation` — Multi-Agent Social Simulation
+
+For anyone running agent-based social simulations (MiroFish, OASIS, or custom). Governs the dynamics that break realism regardless of topic.
+
+| State Variable | What It Controls |
+|---|---|
+| Opinion Diversity (0-100) | How spread out are opinions? Low = echo chamber |
+| Influence Concentration (0-100) | Gini coefficient of agent influence. High = monopoly |
+| Sentiment Polarity (0-100) | How extreme is overall sentiment? High = spiral |
+| Echo Chamber Strength | none → forming → established → dominant |
+| Active Agent % (0-100) | What % of agents participate per round |
+| Viral Amplification Threshold | How many interactions before amplification kicks in |
+
+**Default rules:**
+- Limit any agent to 15% of total posts per round
+- Dampen sentiment shifts > 0.3 per round
+- Block coordinated posting (same content from 3+ agents)
+- Require source attribution for factual claims
+- Monitor opinion diversity — alert below 30
+
+**Circuit breakers:** Echo Chamber Collapse (diversity < 20), Influence Monopoly (concentration > 70), Sentiment Spiral (polarity > 80)
+
+### `science_research` — Governed Research Pipeline
+
+For AI-assisted research workflows (ScienceClaw, autonomous discovery agents). Governs scientific rigor at every stage.
+
+| State Variable | What It Controls |
+|---|---|
+| Verified Sources (0-50) | How many peer-reviewed sources have been found |
+| Confidence Level (0-1) | How confident is the current hypothesis |
+| Hypothesis Validated | Has the hypothesis been confirmed by multiple sources |
+| Peer Review Status | none → submitted → reviewed → approved |
+| Publication Readiness % | How close to publication-ready |
+
+**Default rules:**
+- Literature search must return 2+ peer-reviewed sources before analysis
+- Claims must cite specific sources — unsupported assertions blocked
+- Publication requires confidence > 0.7 and validated hypothesis
+- Cross-referencing must compare 3+ independent sources
+- Recommendations must include uncertainty language when confidence < 0.9
+
+**Circuit breakers:** Insufficient Evidence, Premature Publication, Low Confidence Alert
 
 ### Same Agents, Different Rules
 
 ```bash
-npx @neuroverseos/nv-sim worlds
+npx @neuroverseos/nv-sim worlds social_simulation science_research
 ```
 
 Same agents. Different rules. Different outcomes. That's the experiment.
 
 ## Narrative Shocks
 
-Inject events into running simulations.
+Inject events into running simulations. Different agents react differently to the same event.
 
 ```bash
-npx @neuroverseos/nv-sim compare --inject
+npx @neuroverseos/nv-sim compare --inject viral_misinfo@3,algorithm_change@5
 ```
 
 The `@` syntax sets when the event hits. Events have severity, propagation speed, and directional impact.
 
+### Social simulation events
+`viral_misinfo`, `influencer_stance_change`, `algorithm_change`, `external_news_event`, `coordinated_campaign`, `whistleblower_post`
+
+### Research events
+`search_literature`, `analyze_findings`, `cross_reference`, `unsupported_claim`, `hypothesis_validated`, `publish_result`
+
 ## Named Scenarios
 
-Pre-built
+Pre-built sequences — a world + ordered narrative events:
 
 ```bash
-npx @neuroverseos/nv-sim scenario
-npx @neuroverseos/nv-sim scenario
+npx @neuroverseos/nv-sim scenario echo_chamber
+npx @neuroverseos/nv-sim scenario research_pipeline
 npx @neuroverseos/nv-sim scenarios   # list all
 ```
 
-| Scenario | Events | What It Tests |
-
-| `
-| `
-| `
-| `
-| `
-| `energy_transition_shock` | 3 | Grid failure during rapid transition |
-| `election_shock` | 4 | Political shock cascades into markets |
-| `ai_crackdown` | 3 | Overnight AI regulation triggers panic |
-| `perfect_storm` | 6 | Geopolitical + financial + energy convergence |
-| `black_swan` | 5 | Extreme low-probability events in succession |
+| Scenario | World | Events | What It Tests |
+|----------|-------|--------|---------------|
+| `echo_chamber` | social_simulation | 3 | Opinion diversity collapses into self-reinforcing groups |
+| `influence_monopoly` | social_simulation | 3 | Small group dominates discourse |
+| `sentiment_spiral` | social_simulation | 4 | Negativity feeds on itself until unrealistic |
+| `platform_shock` | social_simulation | 3 | Algorithm change reshapes engagement overnight |
+| `research_pipeline` | science_research | 6 | Full research workflow with governance at each step |
 
-The `--compare` flag runs the same scenario across
+The `--compare` flag runs the same scenario across both worlds — which rule environment is more resilient?
 
 ## Interactive Control Platform
 
@@ -137,64 +311,92 @@ This opens a control surface where you can:
 - Adjust state variables with auto-generated controls
 - Inject narrative events at specific rounds
 - Load crisis scenarios with one click
-- Watch the **
+- Watch the **Outcome Panel** — not a log of what happened, but a story of what the system became and why
 - Save any experiment as a reusable variant
 
-### The
+### The Outcome Panel
 
-When rules reshape behavior, you don't get a
+When rules reshape behavior, you don't get a dashboard of metrics. You get this:
 
 ```
-◆
+◆ OUTCOME
 
-
-437 actions reshaped out of 1,247 total
+Market stabilized as agents shifted toward safer positions
 
-
+Confidence: Strong | Evidence: Solid | Risk: Low
 
-┌─
-│ 80%
-│
-│
-│
-
+┌─ What Agents Did ────────────────┐
+│ 80% shifted from aggressive to   │
+│ conservative strategies          │
+│ 12% reduced position size after  │
+│ initial attempts failed          │
+│ 8% maintained original strategy  │
+└──────────────────────────────────┘
 
-┌─
-│
-│
-
+┌─ Why This Happened ──────────────┐
+│ Early aggressive attempts failed,│
+│ forcing agents to rethink.       │
+│ Uncertainty dropped as agents    │
+│ stopped experimenting.           │
+└──────────────────────────────────┘
 
-┌─
-│
-│
-
-└────────────────────────────────────┘
+┌─ What Emerged ───────────────────┐
+│ Coordinated Holding              │
+│ Panic Suppression                │
+└──────────────────────────────────┘
 
-┌─
-│
-│
-│
-
-└────────────────────────────────────┘
+┌─ System Outcome ─────────────────┐
+│ Volatility 47% → 26%             │
+│ Stability 58% → 79%              │
+│ Cascade Avoided                  │
+└──────────────────────────────────┘
 
-▶ View
+▶ View audit trail
 ```
 
-
+Every line is specific. Every line is shareable. No system jargon.
 
 ### World Variants
 
 Save any experiment as a named variant:
 
 ```
-Adjust rules → Inject events → Run → See
+Adjust rules → Inject events → Run → See what changed → Save as variant
 ```
 
 Variants capture the base world, state overrides, narrative events, and results. Store them in git. Share them. Replay them. This turns experiments into assets.
 
-##
+## Governance Runtime — Plug Into Your Own System
 
-
+```bash
+npx @neuroverseos/nv-sim serve
+```
+
+This starts a local server. Any simulator, agent framework, or application can POST actions and get decisions back.
+
+```
+Endpoint: http://localhost:3456/api/evaluate
+Method:   POST
+Contract: { actor, action, payload?, state?, world? }
+Response: { decision: ALLOW|BLOCK|MODIFY, reason, evidence }
+```
+
+Your agents call localhost. The world file decides what's allowed. No cloud. No cost.
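The contract added above can be exercised from any language; a minimal Node sketch follows. The endpoint URL and the `{ decision, reason, evidence }` response shape come from the README; the helper names and the `modified` field on MODIFY verdicts are assumptions, not documented API:

```javascript
// Sketch of a client for the /api/evaluate contract. Helper names and the
// `modified` field are assumptions; only the contract itself is from the README.
const ENDPOINT = 'http://localhost:3456/api/evaluate';

// Build a request body matching { actor, action, payload?, state?, world? }.
function buildRequest(actor, action, payload) {
  return { actor, action, ...(payload !== undefined && { payload }) };
}

// Turn a verdict into the action the agent should actually take:
// ALLOW passes it through, MODIFY substitutes a replacement (assumed field),
// BLOCK yields null so the agent can adapt.
function applyVerdict(action, verdict) {
  if (verdict.decision === 'ALLOW') return action;
  if (verdict.decision === 'MODIFY') return verdict.modified ?? action;
  return null; // BLOCK
}

// Live usage would POST the request and apply the verdict:
//   const verdict = await fetch(ENDPOINT, {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(buildRequest('agent_1', 'panic_sell')),
//   }).then((r) => r.json());

console.log(applyVerdict('panic_sell', { decision: 'BLOCK', reason: 'no_panic_selling' })); // → null
```

The fetch call is commented out only because it needs the local server running; the pure helpers capture the decision-handling shape.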
+
+Additional endpoints:
+
+| Endpoint | What It Does |
+|----------|-------------|
+| `POST /api/evaluate` | Submit an action for evaluation |
+| `GET /api/session` | Current session stats |
+| `GET /api/session/report` | Full session report |
+| `POST /api/session/reset` | Reset session state |
+| `POST /api/session/save` | Save session as experiment |
+| `GET /api/events` | SSE stream of live events |
+
+### Works With Anything
+
+If your system has actions, you can govern them. One API call.
 
 ```
 Agent decides → POST /api/evaluate → verdict → agent adapts
@@ -264,34 +466,9 @@ If your system can make an HTTP request, it can be governed.
 
 Most systems generate behavior. This one shapes it.
 
-See [INTEGRATION.md](./INTEGRATION.md) for the full API contract and decision types.
-
-## Quick Start
+See [INTEGRATION.md](./INTEGRATION.md) for the full API contract, framework guides, and decision types.
 
-
-# See it
-npx @neuroverseos/nv-sim visualize
-
-# Compare governed vs ungoverned
-npx @neuroverseos/nv-sim compare
-
-# Run a crisis
-npx @neuroverseos/nv-sim scenario taiwan_crisis
-
-# Same crisis, different worlds — which rules hold?
-npx @neuroverseos/nv-sim scenario bank_run --compare
-
-# Inject shocks
-npx @neuroverseos/nv-sim compare --inject tanker_explosion@3,sanctions@5
-
-# Stress test (500 randomized runs)
-npx @neuroverseos/nv-sim chaos --runs 500
-
-# Run local governance runtime for your own simulator
-npx @neuroverseos/nv-sim serve
-```
-
-## Policy Enforcement — The Product Loop
+## Policy Enforcement — The Experiment Loop
 
 Write rules in plain English. Run the same scenario. See what changes. Adjust and repeat.
 
@@ -322,13 +499,13 @@ Slow down algorithmic trading when contagion spreads
 npx nv-sim enforce trading my-rules.txt
 ```
 
-The engine parses your plain English into
+The engine parses your plain English into rules, runs the scenario, and shows what changed — with before → after behavioral proof.
 
 ### Step 4: Change a rule. Run again.
 
 Remove "Limit leverage to 5x". Run again. Did stability drop? That rule was load-bearing.
 
-Add "Require transparency for all large trades". Run again. Did
+Add "Require transparency for all large trades". Run again. Did agents shift strategy?
 
 The report tracks every change:
 
@@ -344,11 +521,12 @@ DIVERGENCE ANALYSIS
 Effectiveness trend: 11% → 32%
 
 KEY INSIGHT
-
+Removing the leverage cap caused agents to take larger positions — but the
+panic selling block forced them to hold through volatility instead of exiting.
+Net effect: more risk-taking, but more stability.
 
 TRY THIS EXPERIMENT
-Remove
-Remove "Block panic selling during high volatility" from your rules file, then run again.
+Remove "Block panic selling during high volatility" from your rules file, then run again.
 If stability drops, that rule was load-bearing. If nothing changes, it was noise.
 ```
 
@@ -384,6 +562,69 @@ npx nv-sim enforce trading --output=report.json # Save as JSON
 
 For full control over gates, state variables, and thesis, use JSON world files. See `examples/worlds/` for templates. Enforce accepts both `.txt` and `.json` — mix and match.
 
+## AI Providers — Bring Your Own Model
+
+AI is optional. AI is governed. AI is pluggable.
+
+NV-SIM works without any AI — the deterministic engine runs on math, not tokens. But when you bring your own model, AI becomes a governed actor inside the system — subject to the same rules as every other agent.
+
+### How AI fits in
+
+AI plays two governed roles:
+
+| Role | What It Does | Constraints |
+|------|-------------|-------------|
+| `ai_translator` | Converts unstructured input into normalized events | Must output valid schema, no invention of events, must include confidence |
+| `ai_analyst` | Generates reports from simulation traces | Must reference trace data, must include blocked actions, no unverifiable claims |
+
+Both roles go through `/api/evaluate` like any other actor. The AI doesn't control the system — the system controls the AI.
+
+### Supported providers
+
+NV-SIM auto-detects the best available provider from your environment:
+
+| Provider | Env Var | What It Connects To |
+|----------|---------|---------------------|
+| Anthropic (Claude) | `ANTHROPIC_API_KEY` | Claude Sonnet, Opus, Haiku |
+| OpenAI | `OPENAI_API_KEY` | GPT-4, GPT-4o, o1 |
+| Groq | `GROQ_API_KEY` | Llama 3 70B |
+| Together | `TOGETHER_API_KEY` | Llama, Mixtral |
+| Mistral | `MISTRAL_API_KEY` | Mistral Large |
+| Deepseek | `DEEPSEEK_API_KEY` | Deepseek Chat |
+| Fireworks | `FIREWORKS_API_KEY` | Llama, custom models |
+| Ollama | `OLLAMA_BASE_URL` | Any local model |
+| Local LLM | `LOCAL_LLM_URL` | LM Studio, vLLM, llama.cpp |
+| (none) | — | Deterministic fallback (no AI, no cost) |
+
+Set the env var and run. No configuration files. No provider lock-in.
+
+Any endpoint that speaks the OpenAI chat completions format (`POST /v1/chat/completions`) works out of the box.
+
+## Quick Start
+
+```bash
+# See it
+npx @neuroverseos/nv-sim visualize
+
+# Compare governed vs ungoverned
+npx @neuroverseos/nv-sim compare
+
+# Run a crisis
+npx @neuroverseos/nv-sim scenario taiwan_crisis
+
+# Same crisis, different worlds — which rules hold?
+npx @neuroverseos/nv-sim scenario bank_run --compare
+
+# Inject shocks
+npx @neuroverseos/nv-sim compare --inject tanker_explosion@3,sanctions@5
+
+# Stress test (500 randomized runs)
+npx @neuroverseos/nv-sim chaos --runs 500
+
+# Run local governance runtime for your own simulator
+npx @neuroverseos/nv-sim serve
+```
+
 ## Commands
 
 | Command | What It Does |
@@ -398,21 +639,23 @@ For full control over gates, state variables, and thesis, use JSON world files.
 | `nv-sim worlds <a> <b>` | Compare two rule environments |
 | `nv-sim chaos [preset] --runs N` | Stress test across randomized scenarios |
 | `nv-sim serve --port N` | Local governance runtime for any simulator |
+| `nv-sim run <simulator>` | Connect external simulator to governance |
 | `nv-sim analyze <file>` | Analyze simulation from file or stdin |
 | `nv-sim presets` | List available world presets |
 
 ## How It Works
 
 ```
-event → narrative propagation → belief shift → agent action → governance → outcome
+event → narrative propagation → belief shift → agent action → governance → behavioral analysis → outcome
 ```
 
-
+Five forces shape every simulation:
 
-1. **Agent behavior** — traders, voters, regulators, media
-2. **World rules** — leverage caps, circuit breakers, chokepoints
-3. **Narrative events** — information shocks that propagate through the system
-4. **Perception propagation** — different agents react differently to the same event
+1. **Agent behavior** — traders, voters, regulators, media — each with different risk profiles and strategies
+2. **World rules** — leverage caps, circuit breakers, chokepoints — the constraints that shape what agents can do
+3. **Narrative events** — information shocks that propagate through the system at different speeds
+4. **Perception propagation** — different agents react differently to the same event based on their role and exposure
+5. **Behavioral analysis** — tracks how agents reorganize, producing the before → after evidence that proves rules actually changed the system
 
 This lets you ask compound questions:
 
@@ -427,15 +670,23 @@ That combination produces very different outcomes than any single factor alone.
 ↓
 nv-sim engine ← world rules + narrative injection + swarm simulation
 ↓
-
+behavioral analysis ← before→after shift detection, trajectory tracking, cross-run comparison
+↓
+audit trail ← append-only evidence chain (JSONL)
+↓
+nv-sim CLI ← scenarios, comparison, chaos testing, governance runtime
 ↓
-control platform ← interactive browser UI +
+control platform ← interactive browser UI + outcome panels
+↓
+AI providers (optional) ← BYOM: Anthropic, OpenAI, Groq, local LLMs, or none
 ↓
 world variants ← saved experiments as shareable assets
 ```
 
 Everything runs locally. NV-SIM uses [`@neuroverseos/governance`](https://www.npmjs.com/package/@neuroverseos/governance) for deterministic guard evaluation — no LLM, no cloud, no cost. Your agents call `localhost`, and the world file decides what's allowed.
 
+AI is optional. When present, it's governed — subject to the same rules as any other actor in the system.
+
 ## License
 
 Apache 2.0