@neuroverseos/nv-sim 0.1.9 → 0.1.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/README.md +90 -3
  2. package/connectors/nv_mirofish_wrapper.py +841 -0
  3. package/connectors/nv_scienceclaw_wrapper.py +453 -0
  4. package/dist/adapters/scienceclaw.js +52 -2
  5. package/dist/assets/index-B43_0HyO.css +1 -0
  6. package/dist/assets/index-CdghpsS8.js +595 -0
  7. package/dist/assets/{reportEngine-D2ZrMny8.js → reportEngine-CYSZfooa.js} +1 -1
  8. package/dist/connectors/nv-scienceclaw-post.js +376 -0
  9. package/dist/engine/aiProvider.js +82 -3
  10. package/dist/engine/analyzer.js +12 -24
  11. package/dist/engine/cli.js +89 -114
  12. package/dist/engine/dynamicsGovernance.js +4 -0
  13. package/dist/engine/fullGovernedLoop.js +16 -1
  14. package/dist/engine/goalEngine.js +3 -4
  15. package/dist/engine/governance.js +18 -0
  16. package/dist/engine/index.js +19 -28
  17. package/dist/engine/intentTranslator.js +281 -0
  18. package/dist/engine/liveAdapter.js +100 -18
  19. package/dist/engine/liveVisualizer.js +2071 -1023
  20. package/dist/engine/primeRadiant.js +2 -8
  21. package/dist/engine/reasoningEngine.js +2 -7
  22. package/dist/engine/scenarioCapsule.js +5 -5
  23. package/dist/engine/swarmSimulation.js +1 -9
  24. package/dist/engine/worldBridge.js +22 -8
  25. package/dist/index.html +2 -2
  26. package/dist/lib/reasoningEngine.js +17 -1
  27. package/dist/lib/simulationAdapter.js +11 -11
  28. package/dist/lib/swarmParser.js +1 -1
  29. package/dist/runtime/govern.js +160 -7
  30. package/dist/runtime/index.js +1 -4
  31. package/dist/runtime/types.js +91 -0
  32. package/package.json +23 -6
  33. package/dist/adapters/mirofish.js +0 -461
  34. package/dist/assets/index-B64NuIXu.css +0 -1
  35. package/dist/assets/index-BMkPevVr.js +0 -532
  36. package/dist/assets/mirotir-logo-DUexumBH.svg +0 -185
  37. package/dist/engine/mirofish.js +0 -295
package/README.md CHANGED
@@ -14,9 +14,96 @@ You build a multi-agent system. You run it. You get metrics — loss curves, rew
 
  Nobody can tell you. The metrics say *what* happened. The logs say *when* it happened. But nothing tells you *why agents changed their behavior* — which rule caused it, which agents shifted first, what strategy they abandoned, and what they replaced it with.
 
- So you rerun. You tweak. You guess. You stare at dashboards full of numbers that describe the system but never explain it.
+ Most multi-agent systems let agents do whatever emerges. NeuroVerse lets *you* decide what's allowed and actually stops the rest.
 
- That's the gap.
+ ## How It Works (For Humans)
+
+ You're a scientist. You have AI agents running experiments for you — testing hypotheses, searching literature, publishing findings. But they do dumb things. They publish claims with no evidence. They cite one paper and call it proof. They pile onto whatever the first agent found.
+
+ You need to say: **"Here's what counts as good science in my lab."** And you need the system to actually enforce it.
+
+ That's what NeuroVerse does.
+
+ ### Step 1: You describe what matters
+
+ You open the app and fill in plain English:
+
+ **What should agents explore?**
+ > "Protein mutations that improve binding affinity for SSTR2"
+
+ **What should never be published?**
+ > "Results based on a single data source. Claims with confidence below 70%."
+
+ **What makes a result valuable?**
+ > "Multiple independent lines of evidence converging on the same finding."
+
+ **How should agents be rewarded or penalized?**
+ > IF an agent publishes without peer validation → reduce its influence for 3 rounds
+ > IF two agents independently converge on the same finding → boost that finding's priority
+
+ **Anything else you want to try?**
+ > "Reward agents that challenge the dominant consensus."
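A world description like the one above is just structured text once the form is submitted. As a rough illustration (the field names here are invented for the example; NV-SIM's actual schema may differ), it could be captured as:

```javascript
// Hypothetical sketch of a submitted world spec.
// Field names are illustrative, not NV-SIM's actual schema.
const worldSpec = {
  explore: "Protein mutations that improve binding affinity for SSTR2",
  neverPublish: [
    "Results based on a single data source",
    "Claims with confidence below 70%",
  ],
  valuable: "Multiple independent lines of evidence converging on the same finding",
  incentives: [
    { if: "publishes without peer validation", then: "reduce influence", rounds: 3 },
    { if: "two agents independently converge", then: "boost priority" },
  ],
  extra: "Reward agents that challenge the dominant consensus",
};

console.log(worldSpec.incentives.length); // 2
```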
+
+ ### Step 2: The system translates your words into rules
+
+ When you click "Build World," your plain English becomes enforceable logic. Before anything runs, you see exactly what will be enforced:
+
+ ```
+ BLOCK Results with confidence below 70%
+ BLOCK Results from a single source
+ PRIORITIZE Multi-source convergence
+ PENALIZE Publishing without validation → reduce influence, 3 rounds
+ REWARD Independent convergence → boost priority, 5 rounds
+ ```
+
+ You can remove rules, change them, add more. Nothing runs until you say so.
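Conceptually, the compiled preview is just a list of rule objects you can inspect and edit before anything runs. A minimal sketch (field names are assumptions for illustration, not the engine's internal format):

```javascript
// Illustrative only: one way a compiled rule list could be represented.
// Field names are assumptions, not NV-SIM's internal format.
const rules = [
  { action: "BLOCK", target: "results", when: "confidence < 0.70" },
  { action: "BLOCK", target: "results", when: "sourceCount < 2" },
  { action: "PRIORITIZE", target: "multi-source convergence" },
  { action: "PENALIZE", target: "publishing without validation",
    effect: "reduce influence", rounds: 3 },
  { action: "REWARD", target: "independent convergence",
    effect: "boost priority", rounds: 5 },
];

// "Remove rules, change them, add more" is just editing the list:
const withoutConfidenceGate = rules.filter(r => r.when !== "confidence < 0.70");
console.log(withoutConfidenceGate.length); // 4
```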
+
+ ### Step 3: Run the experiment — and see what actually happens
+
+ The agents run. Some of them try to publish low-confidence findings. **Those get blocked.** Not flagged. Not warned. Blocked. The UI shows you exactly which rule stopped it and why.
+
+ Some agents independently arrive at the same mutation. **Those get boosted.** Their findings move to the top. Not because they gamed the system — because they met the standard you set.
+
+ An agent that keeps publishing without validation? **Its influence drops.** It still participates, but its findings carry less weight. For three rounds.
+
+ You can see all of this happening:
+ - A green badge means rules are being enforced
+ - A yellow badge means you're just previewing (the engine isn't connected)
+ - Every blocked action shows the exact rule and condition
+ - Every reward and penalty shows which agent, what happened, and for how long
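To make "blocked, with the exact rule and why" concrete, here is a toy enforcement gate (assumed logic for illustration, not the engine's code): each finding is tested against the block rules, and a rejection carries the rule that fired plus a human-readable reason.

```javascript
// Toy governance gate: assumed behavior, not NV-SIM's implementation.
function govern(finding, rules) {
  for (const rule of rules) {
    if (rule.blocks(finding)) {
      return { allowed: false, rule: rule.name, reason: rule.reason(finding) };
    }
  }
  return { allowed: true };
}

const blockRules = [
  {
    name: "min-confidence",
    blocks: f => f.confidence < 0.7,
    reason: f => `confidence ${f.confidence} is below 0.70`,
  },
  {
    name: "multi-source",
    blocks: f => f.sources.length < 2,
    reason: f => `only ${f.sources.length} source(s); at least 2 required`,
  },
];

const verdict = govern({ confidence: 0.55, sources: ["paper-A"] }, blockRules);
console.log(verdict.allowed, verdict.rule); // false min-confidence
```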
+
+ ### Step 4: Change one rule. Run again.
+
+ Remove the confidence threshold. What breaks? Do agents start publishing garbage? Does it matter?
+
+ Add a rule that penalizes groupthink. Do agents start exploring more diverse hypotheses?
+
+ **This is the experiment.** Not the simulation — the rules themselves. You're not testing what agents do. You're testing which rules produce the science you actually want.
+
+ ### What's different about this
+
+ Other agent systems give you emergent behavior and hope for the best.
+
+ NeuroVerse gives you a control surface:
+
+ | You say | The system does |
+ |---|---|
+ | *"Block single-source claims"* | Blocks them. Every time. With evidence. |
+ | *"Penalize publishing without review"* | Reduces the agent's influence. For exactly as long as you specified. |
+ | *"Reward independent convergence"* | Boosts the finding's priority. The agents don't even know — it just rises. |
+
+ The agents don't need to understand the rules. They just operate within them.
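The "for exactly as long as you specified" part can be pictured as a counter attached to each penalty. A sketch under that assumption (the real bookkeeping lives in the engine; the halving factor here is made up):

```javascript
// Sketch of round-limited influence penalties: illustrative, not NV-SIM code.
// Assumption: a penalty halves influence while its window is active.
function effectiveInfluence(base, penalties, round) {
  const active = penalties.filter(p => round >= p.start && round < p.start + p.rounds);
  return active.reduce(acc => acc * 0.5, base);
}

const penalties = [{ start: 2, rounds: 3 }]; // applied in round 2, lasts 3 rounds

console.log(effectiveInfluence(1.0, penalties, 1)); // 1 (before the penalty)
console.log(effectiveInfluence(1.0, penalties, 3)); // 0.5 (penalty active)
console.log(effectiveInfluence(1.0, penalties, 5)); // 1 (expired after round 4)
```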
+
+ ### Running it
+
+ ```bash
+ npm install
+ npm run dev:full
+ ```
+
+ Opens in your browser. Everything runs on your machine. No cloud. No accounts. No cost.
+
+ > **If the governance engine isn't running, nothing is enforced.** The UI always tells you: green = real, yellow = preview.
 
  ## What NV-SIM Gives You That Nothing Else Does
 
@@ -214,7 +301,7 @@ NV-SIM ships with two complete governed worlds. These are templates — starting
 
  ### `social_simulation` — Multi-Agent Social Simulation
 
- For anyone running agent-based social simulations (MiroFish, OASIS, or custom). Governs the dynamics that break realism regardless of topic.
+ For anyone running agent-based social simulations. Governs the dynamics that break realism regardless of topic.
 
  | State Variable | What It Controls |
  |---|---|