@svrnsec/pulse 0.7.0 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (48)
  1. package/LICENSE +21 -21
  2. package/README.md +883 -782
  3. package/SECURITY.md +86 -86
  4. package/bin/svrnsec-pulse.js +7 -7
  5. package/dist/{pulse.cjs.js → pulse.cjs} +6378 -6419
  6. package/dist/pulse.cjs.map +1 -0
  7. package/dist/pulse.esm.js +6379 -6420
  8. package/dist/pulse.esm.js.map +1 -1
  9. package/index.d.ts +895 -846
  10. package/package.json +185 -184
  11. package/pkg/pulse_core.js +174 -173
  12. package/src/analysis/audio.js +213 -213
  13. package/src/analysis/authenticityAudit.js +408 -393
  14. package/src/analysis/coherence.js +502 -502
  15. package/src/analysis/coordinatedBehavior.js +825 -804
  16. package/src/analysis/heuristic.js +428 -428
  17. package/src/analysis/jitter.js +446 -446
  18. package/src/analysis/llm.js +473 -472
  19. package/src/analysis/populationEntropy.js +404 -403
  20. package/src/analysis/provider.js +248 -248
  21. package/src/analysis/refraction.js +392 -391
  22. package/src/analysis/trustScore.js +356 -356
  23. package/src/cli/args.js +36 -36
  24. package/src/cli/commands/scan.js +192 -192
  25. package/src/cli/runner.js +157 -157
  26. package/src/collector/adaptive.js +200 -200
  27. package/src/collector/bio.js +297 -287
  28. package/src/collector/canvas.js +247 -239
  29. package/src/collector/dram.js +203 -203
  30. package/src/collector/enf.js +311 -311
  31. package/src/collector/entropy.js +195 -195
  32. package/src/collector/gpu.js +248 -245
  33. package/src/collector/idleAttestation.js +480 -480
  34. package/src/collector/sabTimer.js +189 -191
  35. package/src/fingerprint.js +475 -475
  36. package/src/index.js +342 -342
  37. package/src/integrations/react-native.js +462 -459
  38. package/src/integrations/react.js +184 -185
  39. package/src/middleware/express.js +155 -155
  40. package/src/middleware/next.js +174 -175
  41. package/src/proof/challenge.js +249 -249
  42. package/src/proof/engagementToken.js +426 -394
  43. package/src/proof/fingerprint.js +268 -268
  44. package/src/proof/validator.js +82 -142
  45. package/src/registry/serializer.js +349 -349
  46. package/src/terminal.js +263 -263
  47. package/src/update-notifier.js +259 -264
  48. package/dist/pulse.cjs.js.map +0 -1
package/README.md CHANGED
@@ -1,782 +1,883 @@
1
- # @svrnsec/pulse
2
-
3
- [![CI](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml/badge.svg)](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml)
4
- [![npm version](https://img.shields.io/npm/v/@svrnsec/pulse.svg?style=flat)](https://www.npmjs.com/package/@svrnsec/pulse)
5
- [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)
6
- [![Security Policy](https://img.shields.io/badge/security-policy-orange.svg)](./SECURITY.md)
7
-
8
- A hardware-physics probe that distinguishes real consumer silicon from sanitised cloud VMs and AI inference endpoints.
9
-
10
- It does not maintain a database of known bad actors. It measures thermodynamic constants.
11
-
12
- ---
13
-
14
- ## 30-Second Quickstart
15
-
16
- ```bash
17
- npm install @svrnsec/pulse
18
- ```
19
-
20
- ```js
21
- // Express — drop-in server-side verification
22
- import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
23
-
24
- app.use('/api', createPulseMiddleware({ minScore: 0.6 }));
25
- ```
26
-
27
- ```jsx
28
- // React — live probe with real-time signal meters
29
- import { usePulse } from '@svrnsec/pulse/react';
30
-
31
- function TrustGate() {
32
- const { run, pct, vmConf, hwConf, earlyVerdict, result } = usePulse();
33
-
34
- return (
35
- <button onClick={run}>
36
- {pct < 100 ? `Probing… ${pct}%` : earlyVerdict}
37
- </button>
38
- );
39
- }
40
- ```
41
-
42
- ```js
43
- // Node.js — raw proof commitment
44
- import { pulse } from '@svrnsec/pulse';
45
-
46
- const { payload, hash } = await pulse({ nonce: crypto.randomUUID() });
47
- // payload.classification.jitterScore → 0.798 (real hw) | 0.45 (VM)
48
- // payload.classification.flags → [] (clean) | ['CV_TOO_HIGH_...'] (VM)
49
- // hash → SHA-256 commitment you send to your server for validation
50
- ```
51
-
52
- No API key. No account. No data leaves the client. Runs entirely in your infrastructure.
53
-
54
- ---
55
-
56
- ## The Problem With Every Other Approach
57
-
58
- Every bot detection system is, at its core, a database. Known bad IP ranges. Known headless browser fingerprints. Known datacenter ASNs. Known CAPTCHA-solving services.
59
-
60
- The attacker's job is simple: don't be in the database. The moment a new cloud region launches, a new headless runtime ships, or a new residential proxy network comes online, the database is stale.
61
-
62
- Pulse doesn't work that way.
63
-
64
- A VM's hypervisor clock is mathematically perfect — it cannot produce thermal noise because there is no thermal feedback loop in a virtual timer. Real silicon running under sustained load gets measurably noisier as electrons move through gates that are physically getting hotter. That relationship is a law of physics. It does not change when AWS launches a new instance type in 2027. It does not change when a new hypervisor ships. It cannot be patched.
65
-
66
- ---
67
-
68
- ## The Two Layers
69
-
70
- **Detection** answers: *Is this a VM?*
71
- Handled entirely by the heuristic engine. No signatures, no database. Five physical relationships, measured and cross-checked. If they're mutually coherent with what thermodynamics predicts, it's real hardware. If any of them contradict each other in ways physics wouldn't allow, something is being faked.
72
-
73
- **Classification** answers: *Which VM is it?*
74
- Handled by the provider fingerprinter. Matches the timing autocorrelation profile against known hypervisor scheduler rhythms (KVM's 250ms quantum, Xen's 750ms credit scheduler, Hyper-V's 15.6ms quantum). This is the part that improves with more data — but it's not needed for detection. A brand-new hypervisor from a company that doesn't exist yet will still fail detection the moment it tries to present a mathematically flat clock.
75
-
76
- ---
77
-
78
- ## The Five Physical Signals
79
-
80
- ### 1. Entropy-Jitter Ratio
81
-
82
- The key signal. When a real CPU runs sustained compute, thermal throttling kicks in and timing jitter *increases* — the die gets hotter, the transistors switch slightly slower, and you can measure it.
83
-
84
- ```
85
- hotQE / coldQE ≥ 1.08 → thermal feedback confirmed (real silicon)
86
- hotQE / coldQE ≈ 1.00 → clock is insensitive to guest thermal state (VM)
87
- ```
88
-
89
- A KVM hypervisor maintains a synthetic clock that ticks at a constant rate regardless of what the guest OS is doing. Its entropy ratio across cold/load/hot phases is flat. On 192.222.57.254, a 12 vCPU / 480GB RAM / GH200 Grace Hopper machine, it measured 1.01. On the local GTX 1650 Super machine it measured 1.24.
90
-
91
- A software implementation cannot fake this without generating actual heat.
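The check itself is a histogram-entropy ratio. A minimal sketch (the bin count and the helper names are illustrative; the package's internals may differ):

```javascript
// Sketch: quantization entropy (Shannon entropy over timing histogram bins)
// and the hot/cold ratio from the rule above. 16 bins is an assumption.
function quantizationEntropy(samples, bins = 16) {
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const width = (max - min) / bins || 1; // guard against zero spread
  const counts = new Array(bins).fill(0);
  for (const s of samples) {
    counts[Math.min(bins - 1, Math.floor((s - min) / width))]++;
  }
  return counts.reduce((h, c) => {
    if (c === 0) return h;
    const p = c / samples.length;
    return h - p * Math.log2(p); // bits
  }, 0);
}

// EJR ≥ 1.08 → thermal feedback present (real silicon); ≈ 1.00 → flat VM clock
function entropyJitterRatio(coldSamples, hotSamples) {
  return quantizationEntropy(hotSamples) / quantizationEntropy(coldSamples);
}
```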
92
-
93
- ### 2. Hurst-Autocorrelation Coherence
94
-
95
- Genuine Brownian noise (what real hardware timing looks like) has a Hurst exponent near 0.5 and near-zero autocorrelation at all lags. These two are physically linked by the relationship `expected_AC = |2H - 1|`.
96
-
97
- If you measure H=0.5 but find high autocorrelation — or low H but low autocorrelation — the data was generated, not measured. A VM that tries to fake the Hurst Exponent without adjusting the autocorrelation profile, or vice versa, fails this check immediately.
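The coherence test reduces to comparing measured lag-1 autocorrelation against `|2H - 1|`. A sketch (the 0.25 tolerance is an assumed value, not the package's):

```javascript
// Lag-1 autocorrelation of a timing series
function lag1Autocorr(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    den += (xs[i] - mean) ** 2;
    if (i > 0) num += (xs[i] - mean) * (xs[i - 1] - mean);
  }
  return num / den;
}

// Physically linked pair: expected_AC = |2H - 1|. A large gap means the
// series was generated, not measured. Tolerance is illustrative.
function hurstAutocorrCoherent(hurst, measuredAC, tolerance = 0.25) {
  const expectedAC = Math.abs(2 * hurst - 1);
  return Math.abs(Math.abs(measuredAC) - expectedAC) <= tolerance;
}
```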
98
-
99
- ### 3. CV-Entropy Coherence
100
-
101
- High coefficient of variation (timing spread) must come from a genuinely spread-out distribution, which means high quantization entropy. A VM that inflates CV by adding synthetic outliers at fixed offsets — say, every 50th iteration triggers a steal-time burst — produces high CV but low entropy because 93% of samples still fall in two bins.
102
-
103
- From 192.222.57.254 (GH200): CV=0.0829 (seems variable) but QE=1.27 bits (extreme clustering). Incoherent. On real hardware, CV=0.1494 → QE=3.59 bits. Coherent.
104
-
105
- ### 4. The Picket Fence Detector
106
-
107
- Hypervisor scheduler quanta create periodic steal-time bursts. A KVM host running at ~5ms/iteration with a 250ms quantum will pause the guest every ~50 iterations. This shows up as elevated autocorrelation at lag-50 relative to lag-5. The autocorrelation profile looks like fence posts at regular intervals — hence the name.
108
-
109
- ```
110
- Real hardware: lag-1 AC=0.07 lag-50 AC=0.03 (flat, no rhythm)
111
- KVM VM: lag-1 AC=0.67 lag-50 AC=0.71 (periodic steal-time)
112
- ```
113
-
114
- The dominant lag also lets the classifier estimate the scheduler quantum: `quantum ≈ dominant lag × 5 ms/iter`. This is how it identifies KVM (250ms), Xen (750ms), and Hyper-V (15.6ms) without any prior knowledge of the host.
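A sketch of that estimate: scan autocorrelation across lags, take the dominant one, and multiply by the per-iteration time (the 0.3 significance cutoff is an assumption, not the package's):

```javascript
// Autocorrelation at an arbitrary lag
function autocorrAtLag(xs, lag) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    den += (xs[i] - mean) ** 2;
    if (i >= lag) num += (xs[i] - mean) * (xs[i - lag] - mean);
  }
  return num / den;
}

// Estimate the scheduler quantum from the dominant autocorrelation lag.
// Returns null when no lag shows a significant rhythm (real hardware).
function estimateQuantumMs(xs, msPerIteration, maxLag = 100) {
  let bestLag = 0;
  let bestAC = 0;
  for (let lag = 2; lag <= maxLag; lag++) {
    const ac = autocorrAtLag(xs, lag);
    if (ac > bestAC) { bestAC = ac; bestLag = lag; }
  }
  return bestAC > 0.3 ? bestLag * msPerIteration : null;
}
```

On the KVM profile described here (a steal-time burst every ~50 iterations at ~5 ms each) this lands on 50 × 5 ms = 250 ms.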
115
-
116
- ### 5. Skewness-Kurtosis Coherence
117
-
118
- Real hardware timing is right-skewed with positive kurtosis. OS preemptions create occasional large delays on the right tail, while the body of the distribution stays compact. A VM that adds synthetic spikes at fixed offsets tends to produce the wrong skew direction or an implausibly symmetric distribution.
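The moment computations behind this signal, as a sketch (the coherence thresholds the engine applies are not shown here):

```javascript
// Sample skewness and excess kurtosis via central moments. Right-skewed,
// leptokurtic timing (skewness > 0, excess kurtosis > 0) is the
// real-hardware expectation described above.
function shapeMoments(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  const central = (k) => xs.reduce((a, x) => a + (x - mean) ** k, 0) / n;
  const sd = Math.sqrt(central(2));
  return {
    skewness: central(3) / sd ** 3,
    kurtosisExcess: central(4) / sd ** 4 - 3,
  };
}
```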
119
-
120
- ---
121
-
122
- ## Benchmark Results
123
-
124
- *12 trials × 200 iterations. Two real environments.*
125
-
126
- ### Local Machine — GTX 1650 Super · i5-10400 · Win11 · 16GB DDR4
127
-
128
- ```
129
- Pulse Score [████████████████████████████████░░░░░░░░] 79.8%
130
- ```
131
-
132
- | Metric | Value | Physical interpretation |
133
- |---|---|---|
134
- | Coefficient of Variation | 0.1494 | Spread from thermal noise + OS interrupts |
135
- | Hurst Exponent | 0.5505 | Near-Brownian — i.i.d. noise from independent sources |
136
- | Quantization Entropy | 3.59 bits | Timings genuinely spread across distribution |
137
- | Autocorr lag-1 | 0.0698 | Near-zero — no periodic forcing |
138
- | Autocorr lag-50 | 0.0312 | Flat at distance — no scheduler rhythm |
139
- | Entropy-Jitter Ratio | 1.24 | Entropy grew 24% from cold to hot — thermal feedback confirmed |
140
- | Thermal Pattern | sawtooth | Fan cycling, not hypervisor |
141
- | Outlier Rate | 2.25% | OS context switches unpredictable, not periodic |
142
-
143
- **Distribution:**
144
- ```
145
- 3.60ms │██████ 8
146
- 3.88ms │█████ 7
147
- 4.16ms │██████████████ 19
148
- 4.44ms │██████████████████████ 30
149
- 4.73ms │████████████████████████████████████ 50 ← peak
150
- 5.01ms │██████████████████████ 30
151
- 5.29ms │████████████████ 22
152
- 5.57ms │█████████████ 18
153
- 5.85ms │██████ 8
154
- 6.13ms │█ 2
155
- 7.53ms │█ 1 ← OS preemption
156
- 8.94ms │█ 1
157
- ```
158
-
159
- Normal bell curve, right-tailed from OS preemptions. Exactly what Brownian timing noise looks like.
160
-
161
- ---
162
-
163
- ### Remote VM — 192.222.57.254 — KVM · 12 vCPU · 480GB RAM · NVIDIA GH200 Grace Hopper · Ubuntu 22.04
164
-
165
- ```
166
- Pulse Score [██████████████████░░░░░░░░░░░░░░░░░░░░░░] 45.0%
167
- ```
168
-
169
- | Metric | Value | Physical interpretation |
170
- |---|---|---|
171
- | Coefficient of Variation | 0.0829 | Artificially consistent — hypervisor flattens variance |
172
- | Hurst Exponent | 0.0271 | Anti-persistent — caused by timer quantization artifacts |
173
- | Quantization Entropy | 1.27 bits | 93% of samples on two values — not a distribution |
174
- | Autocorr lag-1 | 0.666 | Periodic forcing — steal-time burst every ~50 samples |
175
- | Autocorr lag-50 | 0.710 | Still elevated at lag-50 — confirms periodic scheduler |
176
- | Entropy-Jitter Ratio | 1.01 | Flat hypervisor clock has no thermal feedback |
177
- | Thermal Pattern | sawtooth (synthetic) | Produced by scheduler bursts, not temperature |
178
- | Outlier Rate | 6.00% | Exactly 6%: the steal-time bursts are deterministic |
179
-
180
- **Distribution:**
181
- ```
182
- 5.00ms │████████████████████████████████████ 123 ← 61% of all samples
183
- 5.11ms │███████████████████ 65 ← 32% of all samples
184
- 5.22ms │ 0
185
- ... │ 0 ← impossible values
186
- 6.72ms │█ 2
187
- 6.83ms │█ 4 ← steal-time bursts
188
- 7.05ms │█ 3
189
- ```
190
-
191
- This is the "Picket Fence" — 93% of samples at exactly two values. Nothing in between. A continuous physical process cannot produce this. A synthetic clock rounding to its host tick resolution can.
192
-
193
- **Heuristic Engine Output:**
194
- ```
195
- ENTROPY_FLAT_UNDER_LOAD EJR=1.01 (expected ≥1.08 for real hardware) penalty -0.10
196
- PICKET_FENCE_DETECTED lag-50 AC=0.71 > baseline 0.08 penalty -0.08
197
- HURST_AUTOCORR_INCOHERENT H=0.027 vs expected AC=|2H-1|=0.946 penalty -0.12
198
- CV_ENTROPY_INCOHERENT CV=0.083 expected QE≈2.83, actual QE=1.27 penalty -0.10
199
- ```
200
-
201
- Each of those four flags is a different physical law being violated. Spoofing one is straightforward. Spoofing all four simultaneously while keeping them mutually consistent with each other is not.
202
-
203
- ---
204
-
205
- ## Adaptive Early Exit
206
-
207
- The probe doesn't always need 200 iterations. It checks signal confidence every 25 and exits when the verdict is already decisive:
208
-
209
- ```
210
- Environment Iters used Wall time Speedup
211
- ────────────────────────────────────────────────────
212
- KVM (obvious) 50 ~0.9s 75%
213
- VMware ESXi 75 ~1.4s 60%
214
- Physical desktop ~120 ~2.1s 40%
215
- Ambiguous 200 ~3.5s —
216
- ```
217
-
218
- The 192.222.57.254 GH200 VM hit the exit condition at iteration 50. 480GB of RAM and a Grace Hopper Superchip cannot change the fact that the hypervisor clock is mathematically perfect. The signal was conclusive within the first batch.
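The control flow is a simple confidence-gated loop. A sketch (the batch size and exit threshold mirror the text; `runBatch` and `classify` stand in for the package's internals):

```javascript
// Confidence-gated probe loop: measure in batches of 25, re-classify, and
// stop as soon as either verdict is decisive. Threshold is illustrative.
async function adaptiveProbe(runBatch, classify, opts = {}) {
  const { batchSize = 25, maxIters = 200, decisive = 0.9 } = opts;
  const samples = [];
  while (samples.length < maxIters) {
    samples.push(...(await runBatch(batchSize)));
    const { vmConf, hwConf } = classify(samples);
    if (Math.max(vmConf, hwConf) >= decisive) break; // early exit
  }
  return samples;
}
```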
219
-
220
- ---
221
-
222
- ## Installation
223
-
224
- ```bash
225
- npm install @svrnsec/pulse
226
- ```
227
-
228
- Node.js ≥ 18. The WASM binary is compiled from Rust and bundled — no separate `.wasm` file to host.
229
-
230
- The package is self-contained. It does not phone home. It does not contact any external service. Everything runs inside your infrastructure.
231
-
232
- To build from source (requires [Rust](https://rustup.rs) and [wasm-pack](https://rustwasm.github.io/wasm-pack/)):
233
-
234
- ```bash
235
- git clone https://github.com/ayronny14-alt/Svrn-Pulse-Security
236
- cd Svrn-Pulse-Security
237
- npm install
238
- npm run build
239
- ```
240
-
241
- ---
242
-
243
- ## Usage
244
-
245
- ### Client side
246
-
247
- ```js
248
- import { pulse } from '@svrnsec/pulse';
249
-
250
- // Get a nonce from your server (prevents replay attacks)
251
- const { nonce } = await fetch('/api/pulse/challenge').then(r => r.json());
252
-
253
- // Run the probe — adaptive, exits early when signal is decisive
254
- const { payload, hash } = await pulse({
255
- nonce,
256
- onProgress: (stage, meta) => {
257
- if (stage === 'entropy_batch') {
258
- // Live signal during the probe: stream to a progress bar
259
- // meta: { pct, vmConf, hwConf, earlyVerdict, etaMs }
260
- console.log(`${meta.pct}% ${meta.earlyVerdict ?? 'measuring...'}`);
261
- }
262
- },
263
- });
264
-
265
- // Send commitment to your server
266
- const result = await fetch('/api/pulse/verify', {
267
- method: 'POST',
268
- headers: { 'Content-Type': 'application/json' },
269
- body: JSON.stringify({ payload, hash }),
270
- }).then(r => r.json());
271
- ```
272
-
273
- ### High-level `Fingerprint` class
274
-
275
- ```js
276
- import { Fingerprint } from '@svrnsec/pulse';
277
-
278
- const fp = await Fingerprint.collect({ nonce });
279
-
280
- fp.isSynthetic // true / false
281
- fp.score // 0.0–1.0
282
- fp.confidence // 0–100
283
- fp.tier // 'high' | 'medium' | 'low' | 'uncertain'
284
- fp.profile // 'analog-fog' | 'picket-fence' | 'burst-scheduler' | ...
285
- fp.providerId // 'kvm-digitalocean' | 'nitro-aws' | 'physical' | ...
286
- fp.providerLabel // 'DigitalOcean Droplet (KVM)'
287
- fp.schedulerQuantumMs // 250, estimated from autocorrelation peak lag
288
- fp.entropyJitterRatio // 1.24 — hotQE / coldQE
289
- fp.topFlag // 'PICKET_FENCE_DETECTED'
290
- fp.findings // full heuristic engine report
291
- fp.physicalEvidence // confirmed physical properties (bonuses)
292
-
293
- fp.hardwareId() // stable 16-char hex ID — BLAKE3(GPU + audio signals)
294
- fp.metrics() // flat object of all numeric metrics for logging
295
- fp.toCommitment() // { payload, hash }; send to server
296
- ```
297
-
298
- ### Server side
299
-
300
- ```js
301
- import { validateProof, generateNonce } from '@svrnsec/pulse/validator';
302
-
303
- // Challenge endpoint — runs on your server, not ours
304
- app.get('/api/pulse/challenge', async (req, res) => {
305
- const nonce = generateNonce();
306
- await redis.set(`pulse:${nonce}`, '1', 'EX', 300);
307
- res.json({ nonce });
308
- });
309
-
310
- // Verify endpoint
311
- app.post('/api/pulse/verify', async (req, res) => {
312
- const result = await validateProof(req.body.payload, req.body.hash, {
313
- minJitterScore: 0.55,
314
- requireBio: false,
315
- checkNonce: async (n) => redis.del(`pulse:${n}`).then(d => d === 1),
316
- });
317
- res.json(result);
318
- });
319
- ```
320
-
321
- ### Express middleware
322
-
323
- ```js
324
- import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
325
-
326
- const pulse = createPulseMiddleware({
327
- threshold: 0.6,
328
- store: {
329
- set: (k, ttl) => redis.set(k, '1', 'EX', ttl),
330
- consume: (k) => redis.del(k).then(n => n === 1),
331
- },
332
- });
333
-
334
- app.get('/api/pulse/challenge', pulse.challenge);
335
- app.post('/checkout', pulse.verify, handler); // req.pulse injected
336
- ```
337
-
338
- ### Next.js App Router
339
-
340
- ```js
341
- // app/api/pulse/challenge/route.js
342
- import { pulseChallenge } from '@svrnsec/pulse/middleware/next';
343
- export const GET = pulseChallenge();
344
-
345
- // app/api/checkout/route.js
346
- import { withPulse } from '@svrnsec/pulse/middleware/next';
347
- export const POST = withPulse({ threshold: 0.6 })(async (req) => {
348
- const { score, provider } = req.pulse;
349
- return Response.json({ ok: true, score });
350
- });
351
- ```
352
-
353
- ### React hook
354
-
355
- ```jsx
356
- import { usePulse } from '@svrnsec/pulse/react';
357
-
358
- function Checkout() {
359
- const { run, stage, pct, vmConf, hwConf, result, isReady } = usePulse({
360
- challengeUrl: '/api/pulse/challenge',
361
- verifyUrl: '/api/pulse/verify',
362
- });
363
-
364
- return (
365
- <button onClick={run} disabled={!isReady && stage !== null}>
366
- {stage === 'entropy_batch'
367
- ? `Measuring... ${pct}% (VM: ${vmConf.toFixed(2)} / HW: ${hwConf.toFixed(2)})`
368
- : 'Verify Device'}
369
- </button>
370
- );
371
- }
372
- ```
373
-
374
- ### TypeScript
375
-
376
- Full declarations shipped in `index.d.ts`. Every interface, every callback, every return type:
377
-
378
- ```ts
379
- import { pulse, Fingerprint } from '@svrnsec/pulse';
380
- import type {
381
- PulseOptions, PulseCommitment,
382
- ProgressMeta, PulseStage,
383
- ValidationResult, FingerprintReport,
384
- } from '@svrnsec/pulse';
385
-
386
- const fp = await Fingerprint.collect({ nonce });
387
- // fp is fully typed — all properties, methods, and nested objects
388
- ```
389
-
390
- ---
391
-
392
- ## Validation result
393
-
394
- ```js
395
- {
396
- valid: true,
397
- score: 0.8215, // heuristic-adjusted score
398
- confidence: 'high', // 'high' | 'medium' | 'low' | 'rejected'
399
- reasons: [], // populated when valid: false
400
- riskFlags: [], // non-blocking signals worth logging
401
- meta: {
402
- receivedAt: 1742686350535,
403
- proofAge: 2841, // ms since probe ran
404
- jitterScore: 0.7983,
405
- canvasRenderer: 'NVIDIA GeForce GTX 1650 Super/PCIe/SSE2',
406
- bioActivity: true,
407
- }
408
- }
409
- ```
410
-
411
- **Score thresholds:**
412
-
413
- | Score | Confidence | Meaning |
414
- |---|---|---|
415
- | ≥ 0.75 | high | Real consumer hardware |
416
- | 0.55 – 0.75 | medium | Likely real, some signals ambiguous |
417
- | 0.35 – 0.55 | low | Borderline — VM, Chromebook, virtual display |
418
- | < 0.35 | rejected | Strong VM/AI indicators |
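The table maps directly onto a threshold function; a sketch (the function name is illustrative, not an export of the package):

```javascript
// Score → confidence tier, per the thresholds in the table above
function confidenceTier(score) {
  if (score >= 0.75) return 'high';   // real consumer hardware
  if (score >= 0.55) return 'medium'; // likely real, some signals ambiguous
  if (score >= 0.35) return 'low';    // borderline: VM, Chromebook, virtual display
  return 'rejected';                  // strong VM/AI indicators
}
```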
419
-
420
- ---
421
-
422
- ## Detection capabilities
423
-
424
- | Scenario | Result | Primary signal |
425
- |---|---|---|
426
- | Cloud VM (AWS, GCP, Azure, DO) | Blocked | EJR flat + quantized ticks + picket fence |
427
- | Headless Chrome / Puppeteer | Blocked | SwiftShader renderer + no bio activity |
428
- | AI inference endpoint | Blocked | VM timing profile + zero bio signals |
429
- | Proof replay attack | Blocked | Nonce consumed atomically on first use |
430
- | Payload tampering | Blocked | BLAKE3 hash fails immediately |
431
- | Metric spoofing (one signal) | Blocked | Cross-metric coherence check |
432
- | Metric spoofing (all signals) | Very hard | 5 physically-linked relationships must be jointly coherent |
433
- | Hardware you've never seen before | Blocked | Physics is the check, not a database |
434
- | GPU passthrough VMs | Partial | Canvas check varies; timing is primary |
435
- | Remote desktop (real machine) | Pass | Timing is real; bio may be weak |
436
-
437
- ---
438
-
439
- ## The Registry: Classification, Not Detection
440
-
441
- The `src/registry/serializer.js` module stores signatures for known provider environments. It is used for the **label**, not the **verdict**.
442
-
443
- If the heuristic engine says "this is a VM," the registry says "specifically, this is a DigitalOcean Droplet running KVM with a 5ms scheduler quantum." If the registry has never seen this particular hypervisor before, it returns `profile: 'generic-vm'` — but the heuristic engine already caught it.
444
-
445
- You can extend the registry with a signature collected from any new environment:
446
-
447
- ```js
448
- import { serializeSignature, KNOWN_PROFILES } from '@svrnsec/pulse/registry';
449
-
450
- // After collecting a Fingerprint on the target machine:
451
- const sig = serializeSignature(fp, { name: 'AWS r7g.xlarge (Graviton3)', date: '2025-01' });
452
- // sig.id → deterministic 'sig_abc123...'
453
- // Buckets continuous metrics for privacy — not reversible to raw values
454
- ```
455
-
456
- The detection engine doesn't need updates when new hardware ships. The registry benefits from them for labelling accuracy.
457
-
458
- ---
459
-
461
-
462
- ## TrustScore — Unified 0–100 Human Score
463
-
464
- The TrustScore engine converts all physical signals into a single integer that security teams can threshold, dashboard, and alert on.
465
-
466
- ```js
467
- import { computeTrustScore, formatTrustScore } from '@svrnsec/pulse/trust';
468
-
469
- const ts = computeTrustScore(payload, { enf, gpu, dram, llm, idle });
470
- // → { score: 87, grade: 'B', label: 'Verified', hardCap: null, breakdown: {...} }
471
-
472
- console.log(formatTrustScore(ts));
473
- // → "TrustScore 87/100 B · Verified [physics:91% enf:80% gpu:100% dram:87% bio:70%]"
474
- ```
475
-
476
- **Signal weights:** Physics layer 40pts · ENF 20pts · GPU 15pts · DRAM 15pts · Bio/LLM 10pts
477
-
478
- **Hard floors** that bonus points cannot override:
479
-
480
- | Condition | Cap | Why |
481
- |---|---|---|
482
- | EJR forgery detected | 20 | Physics law violated |
483
- | Software GPU renderer | 45 | Likely VM/container |
484
- | LLM agent conf > 0.85 | 30 | AI-driven session |
485
- | No bio + no ENF | 55 | Cannot confirm human on real device |
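Caps compose with the weighted sum by a minimum, not a subtraction; a sketch (the flag identifiers are invented for illustration; the cap values follow the table):

```javascript
// Hard caps: bonus points can never lift a score past the lowest
// triggered cap. Flag names here are illustrative, not the package's.
const HARD_CAPS = {
  EJR_FORGERY: 20,   // physics law violated
  SOFTWARE_GPU: 45,  // likely VM/container
  LLM_AGENT: 30,     // AI-driven session
  NO_BIO_NO_ENF: 55, // cannot confirm human on real device
};

function applyHardCaps(rawScore, flags) {
  const cap = flags.reduce(
    (lowest, f) => Math.min(lowest, HARD_CAPS[f] ?? 100),
    100,
  );
  return Math.min(rawScore, cap);
}
```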
486
-
487
- ---
488
-
489
- ## Proof-of-Idle: Defeating Click Farms at the Physics Layer
490
-
491
- Click farms run 1,000 real phones at sustained maximum throughput. Browser fingerprinting cannot catch them — they ARE real devices.
492
-
493
- The physics: a real device between interactions cools via Newton's Law of Cooling — a smooth exponential variance decay. A farm script pausing to fake idle drops CPU load from 100% to 0% instantly, producing a step function in the timing variance. You cannot fake a cooling curve faster than real time.
494
-
495
- ```js
496
- import { createIdleMonitor } from '@svrnsec/pulse/idle';
497
-
498
- // Browser — hooks visibilitychange and blur/focus automatically
499
- const monitor = createIdleMonitor();
500
- monitor.start();
501
-
502
- // When user triggers an engagement action:
503
- const idleProof = monitor.getProof(); // null if device never genuinely rested
504
-
505
- // Node.js / React Native manual control
506
- monitor.declareIdle();
507
- monitor.declareActive();
508
- ```
509
-
510
- **Thermal transition taxonomy:**
511
-
512
- | Label | Meaning | Farm? |
513
- |---|---|---|
514
- | `hot_to_cold` | Smooth exponential variance decay | No — genuine cooling |
515
- | `cold` | Device already at rest temperature | No — genuine idle |
516
- | `cooling` | Mild ongoing decay | No |
517
- | `step_function` | >75% variance drop in first interval | Yes — script paused |
518
- | `sustained_hot` | No cooling at all during idle period | Yes — constant load |
519
-
520
- **TrustScore impact:** `hot_to_cold` → +8pts bonus. `step_function` → hard cap 65. `sustained_hot` → hard cap 60.
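A minimal sketch of the taxonomy over a window of timing-variance samples (the 75% first-interval rule comes from the table; the other cutoffs are assumptions, and only three of the five labels are shown):

```javascript
// Classify the thermal transition from variance samples taken across the
// idle window, oldest first. Thresholds other than 75% are illustrative.
function classifyThermalTransition(variances) {
  const first = variances[0];
  const last = variances[variances.length - 1];
  if ((first - variances[1]) / first > 0.75) {
    return 'step_function'; // script dropped load instantly (>75% first-interval drop)
  }
  if (last > first * 0.9) {
    return 'sustained_hot'; // no cooling at all during the idle period
  }
  return 'hot_to_cold';     // smooth decay consistent with Newtonian cooling
}
```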
521
-
522
- The hash chain (`SHA-256(prevHash ‖ ts ‖ meanMs ‖ variance)`) proves samples were taken in sequence at real intervals: N nodes at 30-second spacing imply at least (N−1)×30s of elapsed time, and the chain cannot be back-filled faster than real time.
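Re-deriving and checking such a chain can be sketched as follows (the field serialization, genesis value, and 30 s spacing parameter are illustrative; the package's real format may differ):

```javascript
import { createHash } from 'node:crypto';

// Hash for one idle-attestation node: SHA-256(prevHash ‖ ts ‖ meanMs ‖ variance)
function idleNodeHash(prevHash, node) {
  return createHash('sha256')
    .update(`${prevHash}|${node.ts}|${node.meanMs}|${node.variance}`)
    .digest('hex');
}

// Re-derive the chain and enforce minimum spacing: N nodes at 30 s spacing
// prove at least (N-1) * 30 s of real elapsed time.
function verifyIdleChain(nodes, minSpacingMs = 30_000) {
  let prev = '0'.repeat(64); // genesis value (an assumption)
  for (let i = 0; i < nodes.length; i++) {
    if (nodes[i].hash !== idleNodeHash(prev, nodes[i])) return false; // tampered
    if (i > 0 && nodes[i].ts - nodes[i - 1].ts < minSpacingMs) return false; // back-filled
    prev = nodes[i].hash;
  }
  return true;
}
```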
523
-
524
- ---
525
-
526
- ## Population Entropy — Sybil Detection at Cohort Level
527
-
528
- One fake account is hard to detect. A warehouse of 1,000 phones running the same script is statistically impossible to hide.
529
-
530
- ```js
531
- import { analysePopulation } from '@svrnsec/pulse/population';
532
-
533
- const verdict = analysePopulation(tokenCohort);
534
- // → { authentic: false, sybilScore: 84, flags: ['TIMESTAMP_RHYTHM', 'THERMAL_HOMOGENEOUS'], ... }
535
- ```
536
-
537
- Five independent statistical tests on a cohort of engagement tokens:
538
-
539
- | Test | What it catches | Farm signal |
540
- |---|---|---|
541
- | Timestamp rhythm | Lag-1/lag-2 autocorrelation of arrival times | Farms dispatch in clock-timed batches |
542
- | Entropy dispersion | CV of physics scores across cohort | Cloned VMs are too similar (CV < 0.04) |
543
- | Thermal diversity | Shannon entropy of transition labels | 1,000 phones → same thermal state |
544
- | Idle plausibility | Clustering of idle durations | Scripts always pause for the same duration |
545
- | ENF phase coherence | Variance of grid frequency deviations | Co-located devices share the same circuit |
546
-
547
- `sybilScore < 40` → authentic cohort. Coordinated farms score 80+.
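One of the five tests, entropy dispersion, as a sketch (the 0.04 cutoff comes from the table; the function shape is illustrative):

```javascript
// Entropy-dispersion test: cloned VMs produce physics scores that are too
// similar across a cohort. CV below 0.04 is the farm signal.
function entropyDispersion(physicsScores) {
  const n = physicsScores.length;
  const mean = physicsScores.reduce((a, b) => a + b, 0) / n;
  const sd = Math.sqrt(
    physicsScores.reduce((a, s) => a + (s - mean) ** 2, 0) / n,
  );
  const cv = sd / mean;
  return { cv, tooUniform: cv < 0.04 };
}
```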
548
-
549
- ---
550
-
551
- ## Engagement Tokens: 30-Second Physics-Backed Proof
552
-
553
- A short-lived cryptographic token that proves a specific engagement event originated from a real human on real hardware that had genuinely rested between interactions.
554
-
555
- ```js
556
- import { createEngagementToken, verifyEngagementToken } from '@svrnsec/pulse/engage';
557
-
558
- // Client: after the interaction
559
- const { compact } = createEngagementToken({
560
- pulseResult,
561
- idleProof: monitor.getProof(),
562
- interaction: { type: 'click', ts: Date.now(), motorConsistency: 0.82 },
563
- secret: process.env.PULSE_SECRET,
564
- });
565
- // Attach to API call: X-Pulse-Token: <compact>
566
-
567
- // Server — before crediting any engagement metric
568
- const result = await verifyEngagementToken(compact, process.env.PULSE_SECRET, {
569
- checkNonce: (n) => redis.del(`pulse:nonce:${n}`).then(d => d === 1),
570
- });
571
- // result.valid, result.riskSignals, result.idleWarnings
572
- ```
573
-
574
- **What the token proves:**
575
-
576
- 1. Real hardware — DRAM refresh present, ENF grid signal detected
577
- 2. Genuine idle — Hash-chained thermal measurements spanning ≥ 45s
578
- 3. Physical cooling — Variance decay was smooth, not a step function
579
- 4. Fresh interaction — 30-second TTL eliminates token brokers
580
- 5. Tamper-evident — HMAC-SHA256 over all fraud-relevant fields
581
-
582
- HMAC signs: `v|n|iat|exp|idle.chain|idle.dMs|hw.ent|evt.t|evt.ts`
583
-
584
- Advisory fields (thermal label, cooling monotonicity) are in the token body for risk scoring but deliberately excluded from the HMAC: changing them can't be used to gain credit, and changing any signed field breaks the signature.
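The signing step can be sketched as follows (the field layout follows the `v|n|…` list above; extraction from the token object is illustrative):

```javascript
import { createHmac } from 'node:crypto';

// HMAC-SHA256 over the pipe-delimited fraud-relevant fields. Advisory
// fields (thermal label, cooling monotonicity) are deliberately absent.
function signEngagementToken(t, secret) {
  const signedFields = [
    t.v, t.n, t.iat, t.exp,
    t.idle.chain, t.idle.dMs,
    t.hw.ent,
    t.evt.t, t.evt.ts,
  ].join('|');
  return createHmac('sha256', secret).update(signedFields).digest('base64url');
}
```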
585
-
586
- ---
587
-
588
- ## Authenticity Audit — The $44 Billion Question
589
-
590
- Elon paid $44 billion arguing about what percentage of Twitter's users were real humans. Nobody had a physics-layer tool to measure it. This is that tool.
591
-
592
- ```js
593
- import { authenticityAudit } from '@svrnsec/pulse/audit';
594
-
595
- const report = authenticityAudit(tokenCohort, { confidenceLevel: 0.95 });
596
- ```
597
-
598
- ```js
599
- {
600
- cohortSize: 10000,
601
- estimatedHumanPct: 73.4,
602
- confidenceInterval: [69.1, 77.8], // 95% bootstrap CI
603
- grade: 'HIGH_FRAUD',
604
- botClusterCount: 5,
605
- botClusters: [
606
- {
607
- id: 'farm_a3f20c81',
608
- size: 847,
609
- sybilScore: 94,
610
- signature: {
611
- enfRegion: 'americas',
612
- dramVerdict: 'dram',
613
- thermalLabel: 'sustained_hot',
614
- meanEnfDev: 0.0231, // Hz — localizes to substation/building
615
- meanIdleMs: 57200, // script sleeps for exactly 57s
616
- },
617
- topSignals: ['timestamp_rhythm', 'thermal_diversity'],
618
- },
619
- ],
620
- recommendation: 'CRITICAL: 5 bot farm clusters account for a majority of traffic...',
621
- }
622
- ```
623
-
624
- **Method:** Tokens are clustered by hardware signature (ENF deviation bucket × DRAM verdict × thermal label × 10-minute time bucket). Organic users scatter across all dimensions. A farm in one building, running the same script, on the same hardware generation collapses into one tight cluster. Each cluster is scored with Population Entropy. A non-parametric bootstrap produces the confidence interval.
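The cluster key from the method description can be sketched as follows (the bucket widths and field names are assumptions):

```javascript
// Build the hardware-signature cluster key: organic users scatter across
// these dimensions; a co-located farm collapses into one key.
function clusterKey(token) {
  const enfBucket = Math.round(token.enfDeviationHz / 0.01); // 0.01 Hz buckets (assumed width)
  const timeBucket = Math.floor(token.ts / 600_000);         // 10-minute buckets
  return `${enfBucket}|${token.dramVerdict}|${token.thermalLabel}|${timeBucket}`;
}
```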
625
-
626
- **Typical values:**
627
-
628
- | Scenario | estimatedHumanPct |
629
- |---|---|
630
- | Organic product feed | 92–97% |
631
- | Incentivised engagement campaign | 55–75% |
632
- | Coordinated click farm attack | 8–35% |
633
-
634
- ---
635
-
636
- ## Tests
637
-
638
- ```bash
639
- npm test
640
- ```
641
-
642
- ```
643
- integration.test.js 43 tests — core engine, provider classifier, commitment, registry
644
- stress.test.js 92 tests — adversarial: KVM, VMware, Docker, LLM agents,
645
- Gaussian noise injection, synthetic thermal drift,
646
- score separation (real min vs VM max)
647
- engagement.test.js 45 tests — IdleAttestation state machine, thermal classification,
648
- Population Entropy (all 5 tests), Engagement Token
649
- creation/verification/replay/tamper, risk signals
650
- audit.test.js 18 tests — Authenticity Audit: organic vs farm cohorts, CI
651
- properties, multi-farm fingerprinting, grade thresholds
652
-
653
- Test Suites: 4 passed
654
- Tests: 158 passed, 0 failed
655
- Time: ~1.0s
656
- ```
657
-
658
- ---
659
-
660
- ## Demo
661
-
662
- ```bash
663
- node demo/node-demo.js
664
- ```
665
-
666
- Simulates real hardware (Box-Muller Gaussian noise — no periodic components, no artificial autocorrelation) and VM timing profiles (0.1ms quantization grid + steal-time bursts every 50 iterations). Runs both through the full analysis and commitment pipeline. No WASM needed.
667
-
668
- Open `demo/web/index.html` in a browser to see the animated probe running on your actual machine.
669
-
670
- ---
671
-
672
- ## Project structure
-
- ```
- sovereign-pulse/
- ├── src/
- │   ├── index.js                 pulse() — main entry point
- │   ├── fingerprint.js           Fingerprint class (high-level API)
- │   ├── collector/
- │   │   ├── entropy.js           WASM bridge + phased/adaptive routing
- │   │   ├── adaptive.js          Adaptive early-exit engine
- │   │   ├── bio.js               Mouse/keyboard interference coefficient
- │   │   ├── canvas.js            WebGL/2D canvas fingerprint
- │   │   ├── gpu.js               WebGPU thermal growth probe
- │   │   ├── dram.js              DRAM refresh cycle detector
- │   │   ├── enf.js               Electrical Network Frequency probe
- │   │   ├── sabTimer.js          Sub-millisecond SAB timer
- │   │   └── idleAttestation.js   Proof-of-Idle — thermal hash chain (v0.5.0)
- │   ├── analysis/
- │   │   ├── jitter.js            Statistical classifier (6 components)
- │   │   ├── heuristic.js         Cross-metric physics coherence engine
- │   │   ├── provider.js          Hypervisor/cloud provider classifier
- │   │   ├── audio.js             AudioContext callback jitter
- │   │   ├── llm.js               LLM agent behavioural detector
- │   │   ├── trustScore.js        Unified 0–100 TrustScore engine (v0.4.0)
- │   │   ├── populationEntropy.js Sybil detection — 5 cohort-level tests (v0.5.0)
- │   │   └── authenticityAudit.js $44B question — humanPct + CI (v0.6.0)
- │   ├── middleware/
- │   │   ├── express.js           Express/Fastify/Hono drop-in
- │   │   └── next.js              Next.js App Router HOC
- │   ├── integrations/
- │   │   ├── react.js             usePulse() hook
- │   │   └── react-native.js      Expo accelerometer + thermal bridge
- │   ├── proof/
- │   │   ├── fingerprint.js       BLAKE3 commitment builder
- │   │   ├── validator.js         Server-side proof verifier
- │   │   ├── challenge.js         HMAC challenge/response
- │   │   └── engagementToken.js   30s physics-backed engagement token (v0.5.0)
- │   └── registry/
- │       └── serializer.js        Provider signature serializer + matcher
- ├── crates/pulse-core/           Rust/WASM entropy probe
- ├── index.d.ts                   Full TypeScript declarations
- ├── demo/
- │   ├── web/index.html           Standalone browser demo
- │   ├── node-demo.js             CLI demo (no WASM required)
- │   ├── benchmark.js             Generates numbers in this README
- │   └── perf.js                  Pipeline overhead benchmarks
- └── test/
-     ├── integration.test.js      43 tests — core engine
-     ├── stress.test.js           92 tests — adversarial attack suite
-     ├── engagement.test.js       45 tests — idle / population / tokens
-     └── audit.test.js            18 tests — authenticity audit
- ```
-
- ---
-
- ## Privacy
-
- Nothing leaves the browser except a ~1.6KB statistical summary:
-
- Timing arrays → BLAKE3 hashed, only hash transmitted
- GPU pixel buffers → BLAKE3 hashed, only hash transmitted
- Mouse coordinates never stored, only timing deltas used
- Keystrokes → only dwell/flight times, key labels discarded immediately
-
- The server receives enough to verify the proof. Not enough to reconstruct any original signal. Not enough to re-identify a user across sessions.
-
- `hardwareId()` is a BLAKE3 hash of GPU renderer string + audio sample rate. Stable per physical device, not reversible, not cross-origin linkable.
-
- ---
-
- ## Limitations
-
- The probe runs for 0.9–3.5 seconds. Best suited for deliberate actions (login, checkout, form submit), not page load.
- Mobile browsers cap `performance.now()` to 1ms resolution. Signal quality is reduced; the classifier adjusts but scores trend lower.
- GPU passthrough VMs pass the canvas check. Timing is the primary discriminator in that case.
- This is one signal among many. High-stakes applications should layer it with behavioral and network signals.
- The heuristic engine catches unknown VMs via physics. The provider classifier labels them by scheduler signature. If a new hypervisor ships with an unusual quantum, it will be detected and flagged as `generic-vm` until the registry is updated.
-
- ---
-
- ## FAQ
-
- **Does it work with browser extensions installed (uBlock, Privacy Badger, 1Password)?**
-
- Yes. Extensions don't touch the physics layer. The core probe is thermal — it measures entropy growth via WASM matrix multiply timing across cold/load/hot CPU phases. Extensions cannot fake DRAM refresh variance or thermal noise on real silicon. Canvas signals (which some extensions do affect) are weighted inputs, not gates. The heuristic engine cross-validates across 5 independent signals, so no single channel can cause a false flag.
-
- **What about Brave's timer clamping?**
-
- Brave reduces `performance.now()` resolution to 100µs to prevent fingerprinting. We detect this via `timerGranularityMs` and adjust thresholds accordingly. A clamped timer on real hardware still shows thermal variance across phases. A VM with a clamped timer is still flat. The EJR check survives timer clamping — it's a ratio, not an absolute threshold.
-
- **Can a VM spoof this?**
-
- Spoofing one signal is straightforward. Spoofing all five simultaneously while keeping them mutually coherent with each other is a different problem. The Hurst-AC coherence check specifically catches data that was *generated* to look right rather than *measured* from real hardware — the two signals are physically linked and have to match each other, not just hit individual thresholds. See the [KVM example above](#the-picket-fence-detector) where four physical laws are violated simultaneously.
-
- **Does it collect or transmit any personal data?**
-
- No. Nothing leaves the browser except a ~1.6KB statistical summary with all raw signals BLAKE3-hashed. The server receives enough to verify the proof. Not enough to reconstruct any original signal or re-identify a user across sessions.
-
- **What's the performance overhead?**
-
- The probe takes 0.9–3.5 seconds depending on how quickly the signal converges. For obvious VMs it exits at 50 iterations (~0.9s). For real hardware it typically exits around 100–120 iterations (~2s). JavaScript overhead outside the probe itself is under 2ms. Best used on deliberate user actions (login, checkout), not page load.
-
- **Mobile support?**
-
- Mobile browsers cap `performance.now()` to 1ms resolution, which reduces signal quality. The classifier adjusts thresholds and scores trend lower, but the directional verdict (VM vs. physical) remains accurate. The bio layer (touch timing, accelerometer jitter on supported devices) compensates partially.
-
- ---
-
- ## License
-
- MIT
+ # @svrnsec/pulse
+
+ [![CI](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml/badge.svg)](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml)
+ [![npm version](https://img.shields.io/npm/v/@svrnsec/pulse.svg?style=flat)](https://www.npmjs.com/package/@svrnsec/pulse)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)
+ [![Security Policy](https://img.shields.io/badge/security-policy-orange.svg)](./SECURITY.md)
+
+ A hardware-physics probe that distinguishes real consumer silicon from sanitised cloud VMs and AI inference endpoints.
+
+ It does not maintain a database of known bad actors. It measures thermodynamic constants.
+
+ ---
+
+ ## 30-Second Quickstart
+
+ ```bash
+ npm install @svrnsec/pulse
+ ```
+
+ ```js
+ // Express — drop-in server-side verification
+ import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
+
+ app.use('/api', createPulseMiddleware({ minScore: 0.6 }));
+ ```
+
+ ```jsx
+ // React — live probe with real-time signal meters
+ import { usePulse } from '@svrnsec/pulse/react';
+
+ function TrustGate() {
+   const { run, pct, vmConf, hwConf, earlyVerdict, result } = usePulse();
+
+   return (
+     <button onClick={run}>
+       {pct < 100 ? `Probing… ${pct}%` : earlyVerdict}
+     </button>
+   );
+ }
+ ```
+
+ ```js
+ // Node.js — raw proof commitment
+ import { pulse } from '@svrnsec/pulse';
+
+ const { payload, hash } = await pulse({ nonce: crypto.randomUUID() });
+ // payload.classification.jitterScore → 0.798 (real hw) | 0.45 (VM)
+ // payload.classification.flags → [] (clean) | ['CV_TOO_HIGH_...'] (VM)
+ // hash → BLAKE3 commitment you send to your server for validation
+ ```
+
+ **Self-hosted mode:** No API key. No account. No data leaves the client. Runs entirely in your infrastructure.
+
+ **Hosted API mode:** Zero server setup — pass an `apiKey` and the SDK handles challenge/verify automatically:
+
+ ```js
+ const result = await pulse({ apiKey: 'sk_live_...' });
+ // result.result.valid, result.result.score, result.result.confidence
+ ```
+
+ ---
+
+ ## The Problem With Every Other Approach
+
+ Every bot detection system is, at its core, a database. Known bad IP ranges. Known headless browser fingerprints. Known datacenter ASNs. Known CAPTCHA-solving services.
+
+ The attacker's job is simple: don't be in the database. The moment a new cloud region launches, a new headless runtime ships, or a new residential proxy network comes online, the database is stale.
+
+ Pulse doesn't work that way.
+
+ A VM's hypervisor clock is mathematically perfect — it cannot produce thermal noise because there is no thermal feedback loop in a virtual timer. Real silicon running under sustained load gets measurably noisier as electrons move through gates that are physically getting hotter. That relationship is a law of physics. It does not change when AWS launches a new instance type in 2027. It does not change when a new hypervisor ships. It cannot be patched.
+
+ ---
+
+ ## The Two Layers
+
+ **Detection** answers: *Is this a VM?*
+ Handled entirely by the heuristic engine. No signatures, no database. Five physical relationships, measured and cross-checked. If they're mutually coherent with what thermodynamics predicts, it's real hardware. If any of them contradict each other in ways physics wouldn't allow, something is being faked.
+
+ **Classification** answers: *Which VM is it?*
+ Handled by the provider fingerprinter. Matches the timing autocorrelation profile against known hypervisor scheduler rhythms (KVM's 250ms quantum, Xen's 750ms credit scheduler, Hyper-V's 15.6ms quantum). This is the part that improves with more data — but it's not needed for detection. A brand-new hypervisor from a company that doesn't exist yet will still fail detection the moment it tries to present a mathematically flat clock.
+
+ ---
+
+ ## The Five Physical Signals
+
+ ### 1. Entropy-Jitter Ratio
+
+ The key signal. When a real CPU runs sustained compute, thermal throttling kicks in and timing jitter *increases* — the die gets hotter, the transistors switch slightly slower, and you can measure it.
+
+ ```
+ hotQE / coldQE ≥ 1.08 → thermal feedback confirmed (real silicon)
+ hotQE / coldQE ≈ 1.00 → clock is insensitive to guest thermal state (VM)
+ ```
+
+ A KVM hypervisor maintains a synthetic clock that ticks at a constant rate regardless of what the guest OS is doing. Its entropy ratio across cold/load/hot phases is flat. On a KVM VM (12 vCPU / 480GB RAM / GH200 Grace Hopper) it measured 1.01. On a local GTX 1650 Super machine it measured 1.24.
+
+ A software implementation cannot fake this without generating actual heat.
+
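The ratio check above can be sketched in a few lines. This is an illustrative sketch, not the package's internal code: `quantizationEntropy` and `entropyJitterRatio` are hypothetical helper names, and the 0.1ms bin width is an assumption.

```js
// Shannon entropy of timing samples bucketed into fixed-width bins
// (helper names and the 0.1ms bin width are illustrative assumptions).
function quantizationEntropy(samples, binWidthMs = 0.1) {
  const counts = new Map();
  for (const s of samples) {
    const bin = Math.round(s / binWidthMs);
    counts.set(bin, (counts.get(bin) ?? 0) + 1);
  }
  let bits = 0;
  for (const c of counts.values()) {
    const p = c / samples.length;
    bits -= p * Math.log2(p);
  }
  return bits;
}

const entropyJitterRatio = (cold, hot) =>
  quantizationEntropy(hot) / quantizationEntropy(cold);

// Flat hypervisor clock: the same two tick values cold and hot → EJR = 1.00
const vmCold = Array.from({ length: 200 }, (_, i) => (i % 2 ? 5.0 : 5.11));
const vmEjr = entropyJitterRatio(vmCold, vmCold);

// Real silicon: the hot phase spreads across more bins → EJR clears 1.08
const hwCold = Array.from({ length: 200 }, (_, i) => 4.5 + (i % 4) * 0.1);
const hwHot = Array.from({ length: 200 }, (_, i) => 4.5 + (i % 8) * 0.1);
const hwEjr = entropyJitterRatio(hwCold, hwHot); // 3 bits / 2 bits = 1.5
```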
+ ### 2. Hurst-Autocorrelation Coherence
+
+ Genuine Brownian noise (what real hardware timing looks like) has a Hurst exponent near 0.5 and near-zero autocorrelation at all lags. These two are physically linked by the relationship `expected_AC = |2H - 1|`.
+
+ If you measure H=0.5 but find high autocorrelation — or low H but low autocorrelation — the data was generated, not measured. A VM that tries to fake the Hurst Exponent without adjusting the autocorrelation profile, or vice versa, fails this check immediately.
+
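The coherence rule reduces to a one-line predicate. A minimal sketch, assuming a tolerance of 0.25 (the library's actual tolerance is not documented here):

```js
// H and lag-1 autocorrelation must agree with expected_AC = |2H - 1|.
// The 0.25 tolerance is an illustrative assumption.
function hurstAcCoherent(hurst, lag1Ac, tolerance = 0.25) {
  const expectedAc = Math.abs(2 * hurst - 1);
  return Math.abs(Math.abs(lag1Ac) - expectedAc) <= tolerance;
}

const realHw = hurstAcCoherent(0.5505, 0.0698); // benchmark values below → coherent
const kvmVm = hurstAcCoherent(0.0271, 0.666);   // KVM GH200 values → incoherent
```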
+ ### 3. CV-Entropy Coherence
+
+ High coefficient of variation (timing spread) must come from a genuinely spread-out distribution, which means high quantization entropy. A VM that inflates CV by adding synthetic outliers at fixed offsets — say, every 50th iteration triggers a steal-time burst — produces high CV but low entropy because 93% of samples still fall in two bins.
+
+ From a KVM GH200 VM: CV=0.0829 (seems variable) but QE=1.27 bits (extreme clustering). Incoherent. On real hardware, CV=0.1494 → QE=3.59 bits. Coherent.
+
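The incoherence is easy to reproduce on synthetic data. A sketch with illustrative helper names (not the package's API), showing that rare fixed bursts inflate CV while entropy stays near one bit:

```js
function coefficientOfVariation(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, b) => a + (b - mean) ** 2, 0) / xs.length;
  return Math.sqrt(variance) / mean;
}

function quantizationEntropy(xs, binWidthMs = 0.1) {
  const counts = new Map();
  for (const x of xs) {
    const bin = Math.round(x / binWidthMs);
    counts.set(bin, (counts.get(bin) ?? 0) + 1);
  }
  let bits = 0;
  for (const c of counts.values()) bits -= (c / xs.length) * Math.log2(c / xs.length);
  return bits;
}

// Synthetic picket fence: two tick values plus a steal-time burst every
// 50th sample. The bursts inflate CV, but the samples occupy only 3 bins.
const picket = Array.from({ length: 200 }, (_, i) =>
  i % 50 === 0 ? 6.9 : i % 2 ? 5.0 : 5.11);
const cv = coefficientOfVariation(picket); // ~5x higher than without bursts
const qe = quantizationEntropy(picket);    // ~1.1 bits — still clustered
// High CV paired with low QE is exactly the incoherence the engine flags.
```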
+ ### 4. The Picket Fence Detector
+
+ Hypervisor scheduler quanta create periodic steal-time bursts. A KVM host running at ~5ms/iteration with a 250ms quantum will pause the guest every ~50 iterations. This shows up as elevated autocorrelation at lag-50 relative to lag-5. The autocorrelation profile looks like fence posts at regular intervals — hence the name.
+
+ ```
+ Real hardware:  lag-1 AC=0.07   lag-50 AC=0.03   (flat, no rhythm)
+ KVM VM:         lag-1 AC=0.67   lag-50 AC=0.71   (periodic steal-time)
+ ```
+
+ The dominant lag also lets the classifier estimate the scheduler quantum: `lag × 5ms/iter ≈ quantum`. This is how it identifies KVM (250ms), Xen (750ms), and Hyper-V (15.6ms) without any prior knowledge of the host.
+
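The lag-comparison and quantum estimate can be sketched directly. Illustrative only — `autocorr` is a standard sample-autocorrelation helper, not the library's exported API, and the synthetic series is an idealised burst pattern:

```js
// Sample autocorrelation at a given lag.
function autocorr(xs, lag) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) den += (xs[i] - mean) ** 2;
  for (let i = 0; i < n - lag; i++) num += (xs[i] - mean) * (xs[i + lag] - mean);
  return num / den;
}

// Idealised steal-time burst every 50 iterations on a ~5ms/iter baseline.
const series = Array.from({ length: 400 }, (_, i) => (i % 50 === 0 ? 7.0 : 5.0));
const lag50 = autocorr(series, 50); // elevated — bursts line up with themselves
const lag5 = autocorr(series, 5);   // near zero — bursts don't line up at lag 5

// Dominant lag × ms/iteration estimates the scheduler quantum.
const quantumMs = 50 * 5; // 250ms → KVM's quantum
```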
+ ### 5. Skewness-Kurtosis Coherence
+
+ Real hardware timing is right-skewed with positive kurtosis. OS preemptions create occasional large delays on the right tail, while the body of the distribution stays compact. A VM that adds synthetic spikes at fixed offsets tends to produce the wrong skew direction or an implausibly symmetric distribution.
+
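The shape test on a right-tailed series can be sketched as follows (a minimal moments helper, not the library's implementation; the synthetic series assumes rare +3ms preemption delays):

```js
// Sample skewness and excess kurtosis from central moments.
function moments(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  const m = (k) => xs.reduce((a, b) => a + (b - mean) ** k, 0) / n;
  const sd = Math.sqrt(m(2));
  return { skewness: m(3) / sd ** 3, kurtosis: m(4) / sd ** 4 - 3 };
}

// Compact body of timings plus occasional large right-tail preemptions.
const timings = Array.from({ length: 200 }, (_, i) =>
  4.5 + (i % 10) * 0.05 + (i % 40 === 0 ? 3 : 0));
const { skewness, kurtosis } = moments(timings);
// Right-skewed (skewness > 0) with a heavy tail (excess kurtosis > 0) —
// the signature the engine expects from real hardware.
```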
+ ---
+
+ ## Benchmark Results
+
+ *12 trials × 200 iterations. Two real environments.*
+
+ ### Local Machine — GTX 1650 Super · i5-10400 · Win11 · 16GB DDR4
+
+ ```
+ Pulse Score [████████████████████████████████░░░░░░░░] 79.8%
+ ```
+
+ | Metric | Value | Physical interpretation |
+ |---|---|---|
+ | Coefficient of Variation | 0.1494 | Spread from thermal noise + OS interrupts |
+ | Hurst Exponent | 0.5505 | Near-Brownian — i.i.d. noise from independent sources |
+ | Quantization Entropy | 3.59 bits | Timings genuinely spread across distribution |
+ | Autocorr lag-1 | 0.0698 | Near-zero — no periodic forcing |
+ | Autocorr lag-50 | 0.0312 | Flat at distance — no scheduler rhythm |
+ | Entropy-Jitter Ratio | 1.24 | Entropy grew 24% from cold to hot — thermal feedback confirmed |
+ | Thermal Pattern | sawtooth | Fan cycling, not hypervisor |
+ | Outlier Rate | 2.25% | OS context switches — unpredictable, not periodic |
+
+ **Distribution:**
+ ```
+ 3.60ms │██████ 8
+ 3.88ms │█████ 7
+ 4.16ms │██████████████ 19
+ 4.44ms │██████████████████████ 30
+ 4.73ms │████████████████████████████████████ 50 ← peak
+ 5.01ms │██████████████████████ 30
+ 5.29ms │████████████████ 22
+ 5.57ms │█████████████ 18
+ 5.85ms │██████ 8
+ 6.13ms │█ 2
+ 7.53ms │█ 1 ← OS preemption
+ 8.94ms │█ 1
+ ```
+
+ Normal bell curve, right-tailed from OS preemptions. Exactly what Brownian timing noise looks like.
+
+ ---
+
+ ### Remote VM — KVM · 12 vCPU · 480GB RAM · NVIDIA GH200 Grace Hopper · Ubuntu 22.04
+
+ ```
+ Pulse Score [██████████████████░░░░░░░░░░░░░░░░░░░░░░] 45.0%
+ ```
+
+ | Metric | Value | Physical interpretation |
+ |---|---|---|
+ | Coefficient of Variation | 0.0829 | Artificially consistent — hypervisor flattens variance |
+ | Hurst Exponent | 0.0271 | Anti-persistent — caused by timer quantization artifacts |
+ | Quantization Entropy | 1.27 bits | 93% of samples on two values — not a distribution |
+ | Autocorr lag-1 | 0.666 | Periodic forcing — steal-time burst every ~50 samples |
+ | Autocorr lag-50 | 0.710 | Still elevated at lag-50 — confirms periodic scheduler |
+ | Entropy-Jitter Ratio | 1.01 | Flat — hypervisor clock has no thermal feedback |
+ | Thermal Pattern | sawtooth (synthetic) | Produced by scheduler bursts, not temperature |
+ | Outlier Rate | 6.00% | Exactly 6% — the steal-time bursts are deterministic |
+
+ **Distribution:**
+ ```
+ 5.00ms │████████████████████████████████████ 123 ← 61% of all samples
+ 5.11ms │███████████████████ 65 ← 32% of all samples
+ 5.22ms │ 0
+ ...    │ 0 ← impossible values
+ 6.72ms │█ 2
+ 6.83ms │█ 4 ← steal-time bursts
+ 7.05ms │█ 3
+ ```
+
+ This is the "Picket Fence" — 93% of samples at exactly two values. Nothing in between. A continuous physical process cannot produce this. A synthetic clock rounding to its host tick resolution can.
+
+ **Heuristic Engine Output:**
+ ```
+ ENTROPY_FLAT_UNDER_LOAD    EJR=1.01 (expected ≥1.08 for real hardware)   penalty -0.10
+ PICKET_FENCE_DETECTED      lag-50 AC=0.71 > baseline 0.08                penalty -0.08
+ HURST_AUTOCORR_INCOHERENT  H=0.027 vs expected AC=|2H-1|=0.946           penalty -0.12
+ CV_ENTROPY_INCOHERENT      CV=0.083 expected QE≈2.83, actual QE=1.27     penalty -0.10
+ ```
+
+ Each of those four flags is a different physical law being violated. Spoofing one is straightforward. Spoofing all four simultaneously while keeping them mutually consistent with each other is not.
+
+ ---
+
+ ## Adaptive Early Exit
+
+ The probe doesn't always need 200 iterations. It checks signal confidence every 25 and exits when the verdict is already decisive:
+
+ ```
+ Environment         Iters used   Wall time   Speedup
+ ────────────────────────────────────────────────────
+ KVM (obvious)       50           ~0.9s       75%
+ VMware ESXi         75           ~1.4s       60%
+ Physical desktop    ~120         ~2.1s       40%
+ Ambiguous           200          ~3.5s       —
+ ```
+
+ The GH200 VM hit the exit condition at iteration 50. 480GB of RAM and a Grace Hopper Superchip cannot change the fact that the hypervisor clock is mathematically perfect. The signal was conclusive within the first batch.
+
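The loop shape described above can be sketched as follows. This is an illustrative model of the batch-and-check pattern, not `src/collector/adaptive.js` itself; the 0.9 confidence cutoff and the minimum of 50 iterations are assumptions inferred from the table:

```js
// Run in batches of 25 and stop once either confidence is decisive.
// (Assumed: a 0.9 confidence cutoff and a 50-iteration minimum.)
function runAdaptiveProbe(measureBatch, classify, maxIters = 200) {
  const samples = [];
  for (let done = 0; done < maxIters; done += 25) {
    samples.push(...measureBatch(25));
    const { vmConf, hwConf } = classify(samples);
    if (done + 25 >= 50 && (vmConf >= 0.9 || hwConf >= 0.9)) {
      return { iters: done + 25, vmConf, hwConf };
    }
  }
  return { iters: maxIters, ...classify(samples) };
}

// Stub classifier: becomes decisive once 50 samples are in — the probe
// exits at iteration 50, mirroring the obvious-KVM row in the table.
const result = runAdaptiveProbe(
  (n) => Array(n).fill(5),
  (s) => ({ vmConf: s.length >= 50 ? 0.95 : 0.2, hwConf: 0.1 }),
);
```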
+ ---
+
+ ## Installation
+
+ ```bash
+ npm install @svrnsec/pulse
+ ```
+
+ Node.js ≥ 18. The WASM binary is compiled from Rust and bundled — no separate `.wasm` file to host.
+
+ The package is self-contained. It does not phone home. It does not contact any external service. Everything runs inside your infrastructure.
+
+ To build from source (requires [Rust](https://rustup.rs) and [wasm-pack](https://rustwasm.github.io/wasm-pack/)):
+
+ ```bash
+ git clone https://github.com/ayronny14-alt/Svrn-Pulse-Security
+ cd Svrn-Pulse-Security
+ npm install
+ npm run build
+ ```
+
+ ---
+
+ ## Usage
+
+ ### Client side
+
+ ```js
+ import { pulse } from '@svrnsec/pulse';
+
+ // Get a nonce from your server (prevents replay attacks)
+ const { nonce } = await fetch('/api/pulse/challenge').then(r => r.json());
+
+ // Run the probe — adaptive, exits early when the signal is decisive
+ const { payload, hash } = await pulse({
+   nonce,
+   onProgress: (stage, meta) => {
+     if (stage === 'entropy_batch') {
+       // Live signal during probe — stream to a progress bar
+       // meta: { pct, vmConf, hwConf, earlyVerdict, etaMs }
+       console.log(`${meta.pct}% — ${meta.earlyVerdict ?? 'measuring...'}`);
+     }
+   },
+ });
+
+ // Send commitment to your server
+ const result = await fetch('/api/pulse/verify', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({ payload, hash }),
+ }).then(r => r.json());
+ ```
+
+ ### High-level `Fingerprint` class
+
+ ```js
+ import { Fingerprint } from '@svrnsec/pulse';
+
+ const fp = await Fingerprint.collect({ nonce });
+
+ fp.isSynthetic         // true / false
+ fp.score               // 0.0–1.0
+ fp.confidence          // 0–100
+ fp.tier                // 'high' | 'medium' | 'low' | 'uncertain'
+ fp.profile             // 'analog-fog' | 'picket-fence' | 'burst-scheduler' | ...
+ fp.providerId          // 'kvm-digitalocean' | 'nitro-aws' | 'physical' | ...
+ fp.providerLabel       // 'DigitalOcean Droplet (KVM)'
+ fp.schedulerQuantumMs  // 250 — estimated from autocorrelation peak lag
+ fp.entropyJitterRatio  // 1.24 — hotQE / coldQE
+ fp.topFlag             // 'PICKET_FENCE_DETECTED'
+ fp.findings            // full heuristic engine report
+ fp.physicalEvidence    // confirmed physical properties (bonuses)
+
+ fp.hardwareId()        // stable 32-char hex ID — BLAKE3(GPU + audio signals), 128-bit collision resistance
+ fp.metrics()           // flat object of all numeric metrics for logging
+ fp.toCommitment()      // { payload, hash } — send to server
+ ```
+
+ ### Server side
+
+ ```js
+ import { validateProof, generateNonce } from '@svrnsec/pulse/validator';
+
+ // Challenge endpoint — runs on your server, not ours
+ app.get('/api/pulse/challenge', async (req, res) => {
+   const nonce = generateNonce();
+   await redis.set(`pulse:${nonce}`, '1', 'EX', 300);
+   res.json({ nonce });
+ });
+
+ // Verify endpoint
+ app.post('/api/pulse/verify', async (req, res) => {
+   const result = await validateProof(req.body.payload, req.body.hash, {
+     minJitterScore: 0.55,
+     requireBio: false,
+     checkNonce: async (n) => redis.del(`pulse:${n}`).then(d => d === 1),
+   });
+   res.json(result);
+ });
+ ```
+
+ ### Express middleware
+
+ ```js
+ import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
+
+ const pulse = createPulseMiddleware({
+   threshold: 0.6,
+   store: {
+     set: (k, ttl) => redis.set(k, '1', 'EX', ttl),
+     consume: (k) => redis.del(k).then(n => n === 1),
+   },
+ });
+
+ app.get('/api/pulse/challenge', pulse.challenge);
+ app.post('/checkout', pulse.verify, handler); // req.pulse injected
+ ```
+
+ ### Next.js App Router
+
+ ```js
+ // app/api/pulse/challenge/route.js
+ import { pulseChallenge } from '@svrnsec/pulse/middleware/next';
+ export const GET = pulseChallenge();
+
+ // app/api/checkout/route.js
+ import { withPulse } from '@svrnsec/pulse/middleware/next';
+ export const POST = withPulse({ threshold: 0.6 })(async (req) => {
+   const { score, provider } = req.pulse;
+   return Response.json({ ok: true, score });
+ });
+ ```
+
+ ### React hook
+
+ ```jsx
+ import { usePulse } from '@svrnsec/pulse/react';
+
+ function Checkout() {
+   const { run, stage, pct, vmConf, hwConf, result, isReady } = usePulse({
+     challengeUrl: '/api/pulse/challenge',
+     verifyUrl: '/api/pulse/verify',
+   });
+
+   return (
+     <button onClick={run} disabled={!isReady && stage !== null}>
+       {stage === 'entropy_batch'
+         ? `Measuring... ${pct}% (VM: ${vmConf.toFixed(2)} / HW: ${hwConf.toFixed(2)})`
+         : 'Verify Device'}
+     </button>
+   );
+ }
+ ```
+
+ ### TypeScript
+
+ Full declarations shipped in `index.d.ts`. Every interface, every callback, every return type:
+
+ ```ts
+ import { pulse, Fingerprint } from '@svrnsec/pulse';
+ import type {
+   PulseOptions, PulseCommitment,
+   ProgressMeta, PulseStage,
+   ValidationResult, FingerprintReport,
+ } from '@svrnsec/pulse';
+
+ const fp = await Fingerprint.collect({ nonce });
+ // fp is fully typed — all properties, methods, and nested objects
+ ```
+
+ ---
+
+ ## Validation result
+
+ ```js
+ {
+   valid: true,
+   score: 0.8215,          // heuristic-adjusted score
+   confidence: 'high',     // 'high' | 'medium' | 'low' | 'rejected'
+   reasons: [],            // populated when valid: false
+   riskFlags: [],          // non-blocking signals worth logging
+   meta: {
+     receivedAt: 1742686350535,
+     proofAge: 2841,       // ms since probe ran
+     jitterScore: 0.7983,
+     canvasRenderer: 'NVIDIA GeForce GTX 1650 Super/PCIe/SSE2',
+     bioActivity: true,
+   }
+ }
+ ```
+
+ **Score thresholds:**
+
+ | Score | Confidence | Meaning |
+ |---|---|---|
+ | ≥ 0.75 | high | Real consumer hardware |
+ | 0.55 – 0.75 | medium | Likely real, some signals ambiguous |
+ | 0.35 – 0.55 | low | Borderline VM, Chromebook, virtual display |
+ | < 0.35 | rejected | Strong VM/AI indicators |
+
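The threshold table maps directly to a helper function. A convenience sketch, not an exported API:

```js
// Score → confidence tier, per the thresholds table above.
function confidenceTier(score) {
  if (score >= 0.75) return 'high';
  if (score >= 0.55) return 'medium';
  if (score >= 0.35) return 'low';
  return 'rejected';
}

const local = confidenceTier(0.8215); // the GTX 1650 Super benchmark score
const vm = confidenceTier(0.45);      // the KVM GH200 benchmark score
```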
+ ---
+
+ ## Detection capabilities
+
+ | Scenario | Result | Primary signal |
+ |---|---|---|
+ | Cloud VM (AWS, GCP, Azure, DO) | Blocked | EJR flat + quantized ticks + picket fence |
+ | Headless Chrome / Puppeteer | Blocked | SwiftShader renderer + no bio activity |
+ | AI inference endpoint | Blocked | VM timing profile + zero bio signals |
+ | Proof replay attack | Blocked | Nonce consumed atomically on first use |
+ | Payload tampering | Blocked | BLAKE3 hash fails immediately |
+ | Metric spoofing (one signal) | Blocked | Cross-metric coherence check |
+ | Metric spoofing (all signals) | Very hard | 5 physically-linked relationships must be jointly coherent |
+ | Hardware you've never seen before | Blocked | Physics is the check, not a database |
+ | GPU passthrough VMs | Partial | Canvas check varies; timing is primary |
+ | Remote desktop (real machine) | Pass | Timing is real; bio may be weak |
+
+ ---
+
+ ## The Registry — Classification, Not Detection
+
+ The `src/registry/serializer.js` module stores signatures for known provider environments. It is used for the **label**, not the **verdict**.
+
+ If the heuristic engine says "this is a VM," the registry says "specifically, this is a DigitalOcean Droplet running KVM with a 5ms scheduler quantum." If the registry has never seen this particular hypervisor before, it returns `profile: 'generic-vm'` — but the heuristic engine already caught it.
+
+ You can extend the registry with a signature collected from any new environment:
+
+ ```js
+ import { serializeSignature, KNOWN_PROFILES } from '@svrnsec/pulse/registry';
+
+ // After collecting a Fingerprint on the target machine:
+ const sig = serializeSignature(fp, { name: 'AWS r7g.xlarge (Graviton3)', date: '2025-01' });
+ // sig.id → deterministic 'sig_abc123...'
+ // Buckets continuous metrics for privacy — not reversible to raw values
+ ```
+
+ The detection engine doesn't need updates when new hardware ships. The registry benefits from them for labelling accuracy.
+
+ ---
+
+ ## TrustScore — Unified 0–100 Human Score
+
+ The TrustScore engine converts all physical signals into a single integer that security teams can threshold, dashboard, and alert on.
+
+ ```js
+ import { computeTrustScore, formatTrustScore } from '@svrnsec/pulse/trust';
+
+ const ts = computeTrustScore(payload, { enf, gpu, dram, llm, idle });
+ // → { score: 87, grade: 'B', label: 'Verified', hardCap: null, breakdown: {...} }
+
+ console.log(formatTrustScore(ts));
+ // "TrustScore 87/100 B · Verified [physics:91% enf:80% gpu:100% dram:87% bio:70%]"
+ ```
+
+ **Signal weights:** Physics layer 40pts · ENF 20pts · GPU 15pts · DRAM 15pts · Bio/LLM 10pts
+
+ **Hard floors** that bonus points cannot override:
+
+ | Condition | Cap | Why |
+ |---|---|---|
+ | EJR forgery detected | 20 | Physics law violated |
+ | Software GPU renderer | 45 | Likely VM/container |
+ | LLM agent conf > 0.85 | 30 | AI-driven session |
+ | No bio + no ENF | 55 | Cannot confirm human on real device |
+
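How the caps compose with the weighted sum can be sketched as a clamp. This is an illustrative model of the table above — the field names (`ejrForged`, `softwareGpu`, etc.) are hypothetical, not the package's actual signal shape:

```js
// Caps clamp the weighted score; bonus points can never push past a cap.
// (Field names are illustrative assumptions.)
function applyHardCaps(rawScore, signals) {
  const caps = [];
  if (signals.ejrForged) caps.push(20);
  if (signals.softwareGpu) caps.push(45);
  if (signals.llmAgentConf > 0.85) caps.push(30);
  if (!signals.bio && !signals.enf) caps.push(55);
  return caps.length ? Math.min(rawScore, ...caps) : rawScore;
}

const clean = applyHardCaps(87, { bio: true, enf: true });
const forged = applyHardCaps(87, { ejrForged: true, bio: true, enf: true });
```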
+ ---
+
+ ## Proof-of-Idle — Defeating Click Farms at the Physics Layer
+
+ Click farms run 1,000 real phones at sustained maximum throughput. Browser fingerprinting cannot catch them — they ARE real devices.
+
+ The physics: a real device between interactions cools via Newton's Law of Cooling — a smooth exponential variance decay. A farm script pausing to fake idle drops CPU load from 100% to 0% instantly, producing a step function in the timing variance. You cannot fake a cooling curve faster than real time.
+
+ ```js
+ import { createIdleMonitor } from '@svrnsec/pulse/idle';
+
+ // Browser — hooks visibilitychange and blur/focus automatically
+ const monitor = createIdleMonitor();
+ monitor.start();
+
+ // When user triggers an engagement action:
+ const idleProof = monitor.getProof(); // null if device never genuinely rested
+
+ // Node.js / React Native — manual control
+ monitor.declareIdle();
+ monitor.declareActive();
+ ```
+
+ **Thermal transition taxonomy:**
+
+ | Label | Meaning | Farm? |
+ |---|---|---|
+ | `hot_to_cold` | Smooth exponential variance decay | No — genuine cooling |
+ | `cold` | Device already at rest temperature | No — genuine idle |
+ | `cooling` | Mild ongoing decay | No |
+ | `step_function` | >75% variance drop in first interval | Yes — script paused |
+ | `sustained_hot` | No cooling at all during idle period | Yes — constant load |
+
+ **TrustScore impact:** `hot_to_cold` → +8pts bonus. `step_function` → hard cap 65. `sustained_hot` → hard cap 60.
+
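The step-function rule from the taxonomy table can be sketched on a variance time series. A minimal classifier, assuming the ">75% drop in the first interval" threshold and a simple "still hot at the end" test — the library's actual rules are richer:

```js
// Classify an idle-period variance series (illustrative thresholds).
function classifyCooling(varianceSeries) {
  const [first, second] = varianceSeries;
  const last = varianceSeries[varianceSeries.length - 1];
  if (second < first * 0.25) return 'step_function'; // >75% drop — script paused
  if (last > first * 0.9) return 'sustained_hot';    // never cooled — constant load
  return 'hot_to_cold';                              // smooth decay — genuine cooling
}

classifyCooling([8, 5.4, 3.6, 2.4, 1.6]); // smooth geometric decay
classifyCooling([8, 0.4, 0.4, 0.4, 0.4]); // instant drop to idle
classifyCooling([8, 7.9, 8.1, 8.0, 7.9]); // no cooling at all
```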
+ The hash chain (`SHA-256(prevHash ‖ ts ‖ meanMs ‖ variance)`) proves samples were taken in sequence at real intervals. N nodes at 30-second spacing = (N−1)×30s minimum elapsed time — cannot be back-filled faster than real time.
+
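The chain construction can be sketched with `node:crypto`. The field order follows the formula above; the `|` byte encoding between fields is an assumption (the library's actual serialization is not documented here):

```js
import { createHash } from 'node:crypto';

// One chain node: SHA-256 over prevHash ‖ ts ‖ meanMs ‖ variance.
function chainNode(prevHash, ts, meanMs, variance) {
  return createHash('sha256')
    .update(`${prevHash}|${ts}|${meanMs}|${variance}`)
    .digest('hex');
}

let hash = '0'.repeat(64); // genesis
const t0 = 1742686000000;
for (let i = 0; i < 4; i++) hash = chainNode(hash, t0 + i * 30_000, 5.1, 0.02);

// 4 nodes at 30s spacing attest to at least (4 − 1) × 30s of elapsed time;
// the chain cannot be back-filled faster than real time.
const minElapsedMs = (4 - 1) * 30_000;
```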
+ ---
+
+ ## Population Entropy — Sybil Detection at Cohort Level
+
+ One fake account is hard to detect. A warehouse of 1,000 phones running the same script is statistically impossible to hide.
+
+ ```js
+ import { analysePopulation } from '@svrnsec/pulse/population';
+
+ const verdict = analysePopulation(tokenCohort);
+ // { authentic: false, sybilScore: 84, flags: ['TIMESTAMP_RHYTHM', 'THERMAL_HOMOGENEOUS'], ... }
+ ```
+
+ Five independent statistical tests on a cohort of engagement tokens:
+
+ | Test | What it catches | Farm signal |
+ |---|---|---|
+ | Timestamp rhythm | Lag-1/lag-2 autocorrelation of arrival times | Farms dispatch in clock-timed batches |
+ | Entropy dispersion | CV of physics scores across cohort | Cloned VMs are too similar (CV < 0.04) |
+ | Thermal diversity | Shannon entropy of transition labels | 1,000 phones → same thermal state |
+ | Idle plausibility | Clustering of idle durations | Scripts always pause for the same duration |
+ | ENF phase coherence | Variance of grid frequency deviations | Co-located devices share the same circuit |
+
+ `sybilScore < 40` → authentic cohort. Coordinated farms score 80+.
+
+ ---
+
+ ## Engagement Tokens 30-Second Physics-Backed Proof
559
+
560
+ A short-lived cryptographic token that proves a specific engagement event originated from a real human on real hardware that had genuinely rested between interactions.
561
+
562
+ ```js
563
+ import { createEngagementToken, verifyEngagementToken } from '@svrnsec/pulse/engage';
564
+
565
+ // Client after the interaction
566
+ const { compact } = createEngagementToken({
567
+ pulseResult,
568
+ idleProof: monitor.getProof(),
569
+ interaction: { type: 'click', ts: Date.now(), motorConsistency: 0.82 },
570
+ secret: process.env.PULSE_SECRET,
571
+ });
572
+ // Attach to API call: X-Pulse-Token: <compact>
573
+
574
+ // Server before crediting any engagement metric
575
+ const result = await verifyEngagementToken(compact, process.env.PULSE_SECRET, {
576
+ checkNonce: (n) => redis.del(`pulse:nonce:${n}`).then(d => d === 1),
577
+ });
578
+ // result.valid, result.riskSignals, result.idleWarnings
579
+ ```
580
+
581
+ **What the token proves:**
582
+
583
+ 1. Real hardware — DRAM refresh present, ENF grid signal detected
584
+ 2. Genuine idleHash-chained thermal measurements spanning 45s
585
+ 3. Physical cooling — Variance decay was smooth, not a step function
586
+ 4. Fresh interaction — 30-second TTL eliminates token brokers
587
+ 5. Tamper-evident — HMAC-SHA256 over all fraud-relevant fields
588
+
589
+ HMAC signs: `v|n|iat|exp|idle.chain|idle.dMs|hw.ent|evt.t|evt.ts`
590
+
591
+ Advisory fields (thermal label, cooling monotonicity) are carried in the token body for risk scoring but deliberately excluded from the HMAC: altering them earns no extra credit, and the signed fraud-relevant fields can't be changed without breaking the signature.
592
+
593
+ ---
594
+
595
+ ## Authenticity Audit: The $44 Billion Question
596
+
597
+ Elon paid $44 billion arguing about what percentage of Twitter's users were real humans. Nobody had a physics-layer tool to measure it. This is that tool.
598
+
599
+ ```js
600
+ import { authenticityAudit } from '@svrnsec/pulse/audit';
601
+
602
+ const report = authenticityAudit(tokenCohort, { confidenceLevel: 0.95 });
603
+ ```
604
+
605
+ ```js
606
+ {
607
+ cohortSize: 10000,
608
+ estimatedHumanPct: 73.4,
609
+ confidenceInterval: [69.1, 77.8], // 95% bootstrap CI
610
+ grade: 'HIGH_FRAUD',
611
+ botClusterCount: 5,
612
+ botClusters: [
613
+ {
614
+ id: 'farm_a3f20c81',
615
+ size: 847,
616
+ sybilScore: 94,
617
+ signature: {
618
+ enfRegion: 'americas',
619
+ dramVerdict: 'dram',
620
+ thermalLabel: 'sustained_hot',
621
+ meanEnfDev: 0.0231, // Hz — localizes to substation/building
622
+ meanIdleMs: 57200, // script sleeps for exactly 57s
623
+ },
624
+ topSignals: ['timestamp_rhythm', 'thermal_diversity'],
625
+ },
626
+ ],
627
+ recommendation: 'CRITICAL: 5 bot farm clusters account for a majority of traffic...',
628
+ }
629
+ ```
630
+
631
+ **Method:** Tokens are clustered by hardware signature (ENF deviation bucket × DRAM verdict × thermal label × 10-minute time bucket). Organic users scatter across all dimensions. A farm in one building, running the same script, on the same hardware generation collapses into one tight cluster. Each cluster is scored with Population Entropy. A non-parametric bootstrap produces the confidence interval.
632
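Sketched concretely (the bucket widths below are illustrative assumptions, not the shipped constants):

```javascript
// Composite clustering key: ENF deviation bucket × DRAM verdict × thermal label × time bucket.
function clusterKey(sig, ts) {
  const enfBucket = Math.round(sig.meanEnfDev / 0.005); // ~5 mHz ENF deviation buckets (assumed width)
  const timeBucket = Math.floor(ts / 600000);           // 10-minute time buckets
  return `${enfBucket}|${sig.dramVerdict}|${sig.thermalLabel}|${timeBucket}`;
}

// Two farm devices on the same circuit, same hardware, same script window:
const a = clusterKey({ meanEnfDev: 0.0231, dramVerdict: 'dram', thermalLabel: 'sustained_hot' }, 1700000100000);
const b = clusterKey({ meanEnfDev: 0.0244, dramVerdict: 'dram', thermalLabel: 'sustained_hot' }, 1700000150000);
// a === b — both collapse into one cluster; organic users scatter across all four dimensions
```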
+
633
+ **Typical values:**
634
+
635
+ | Scenario | estimatedHumanPct |
636
+ |---|---|
637
+ | Organic product feed | 92–97% |
638
+ | Incentivised engagement campaign | 55–75% |
639
+ | Coordinated click farm attack | 8–35% |
640
+
641
+ ---
642
+
643
+ ## HMAC-Signed Challenge Protocol
644
+
645
+ Plain random nonces prevent replay attacks but not forged challenges. The challenge module adds server-signed HMAC authentication:
646
+
647
+ ```js
648
+ import { createChallenge, verifyChallenge, generateSecret } from '@svrnsec/pulse/challenge';
649
+
650
+ // One-time setup: generate a 256-bit secret
651
+ const secret = generateSecret(); // store in env vars
652
+
653
+ // Challenge endpoint
654
+ app.get('/api/challenge', async (req, res) => {
655
+ const challenge = createChallenge(secret);
656
+ await redis.set(`pulse:${challenge.nonce}`, '1', 'EX', 300);
657
+ res.json(challenge);
658
+ });
659
+
660
+ // Verify endpoint — validates HMAC before processing the proof
661
+ app.post('/api/verify', async (req, res) => {
662
+ const { valid, reason } = await verifyChallenge(req.body.challenge, secret, {
663
+ checkNonce: async (n) => (await redis.del(`pulse:${n}`)) === 1,
664
+ });
665
+ if (!valid) return res.status(400).json({ error: reason });
666
+ // ... proceed with validateProof
667
+ });
668
+ ```
669
+
670
+ The HMAC covers `nonce|issuedAt|expiresAt` — altering any field breaks the signature. Timing-safe comparison prevents side-channel attacks on the signature verification.
671
+
672
+ ---
673
+
674
+ ## Coordinated Behavior Detection
675
+
676
+ Detects coordinated inauthentic behavior across a cohort of engagement tokens using five independent analysis layers:
677
+
678
+ ```js
679
+ import { detectCoordinatedBehavior } from '@svrnsec/pulse/coordination';
680
+
681
+ const result = detectCoordinatedBehavior(tokenCohort);
682
+ // result.clusters — detected bot farm clusters with similarity scores
683
+ // result.coordinationScore — 0-100, higher = more coordinated
684
+ ```
685
+
686
+ **Analysis layers:**
687
+ - **Temporal clustering** — Poisson test on arrival time distributions
688
+ - **Signal fingerprint collision** — Entropy band / thermal label / motor band hash matching
689
+ - **Drift fingerprinting** — Clock drift rate convergence across devices
690
+ - **Mutual information matrix** — Louvain-lite community detection
691
+ - **Entropy velocity** — Shannon entropy growth rate vs traffic growth
692
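The temporal layer can be sketched as an index-of-dispersion test on inter-arrival gaps (a simplified stand-in for the module's Poisson test):

```javascript
// For exponentially distributed gaps (a Poisson process), variance / mean² ≈ 1.
// Clock-timed batch dispatch drives the ratio toward 0.
function arrivalDispersion(timestamps) {
  const gaps = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length;
  return variance / (mean * mean);
}

const scripted = [0, 1000, 2000, 3000, 4000];            // clock-timed batch → dispersion 0
const organic = [0, 120, 950, 1100, 2400, 2500, 4100];   // human-like scatter → well above 0
```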
+
693
+ ---
694
+
695
+ ## LLM Agent Detection
696
+
697
+ Detects AI-controlled headless browsers (AutoGPT, Playwright+LLM, browser agents) through behavioral biometrics:
698
+
699
+ ```js
700
+ import { detectLlmAgent } from '@svrnsec/pulse/llm';
701
+
702
+ const result = detectLlmAgent(bioSnapshot);
703
+ // result.aiConf — 0-1 confidence this is an AI agent
704
+ // result.humanConf — 0-1 confidence this is a human
705
+ // result.verdict — 'human' | 'ai_agent' | 'ambiguous'
706
+ ```
707
+
708
+ **Six behavioral signals:**
709
+ 1. Think-time pattern — LLMs produce characteristic pause distributions
710
+ 2. Mouse path smoothness — Bezier-interpolated paths vs. natural micro-tremor
711
+ 3. Keystroke correction rate — Humans make typos; LLMs don't backspace
712
+ 4. Physiological tremor — 8-12 Hz micro-oscillation present in all human motor control
713
+ 5. Inter-event gap distribution — LLM response latencies cluster differently than human reaction times
714
+ 6. Motor consistency — AI agents maintain unnaturally consistent click precision
715
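Signal 2 can be sketched as turn-angle variance along the pointer path (an illustrative reduction, not the shipped detector): Bezier-interpolated agent paths turn smoothly, while human paths carry micro-tremor.

```javascript
// Standard deviation of turn angles between consecutive path segments.
function pathTremor(points) {
  const angles = [];
  for (let i = 2; i < points.length; i++) {
    const a1 = Math.atan2(points[i - 1].y - points[i - 2].y, points[i - 1].x - points[i - 2].x);
    const a2 = Math.atan2(points[i].y - points[i - 1].y, points[i].x - points[i - 1].x);
    angles.push(a2 - a1); // turn angle at point i-1
  }
  const mean = angles.reduce((a, b) => a + b, 0) / angles.length;
  return Math.sqrt(angles.reduce((a, b) => a + (b - mean) ** 2, 0) / angles.length);
}

const botPath = [{ x: 0, y: 0 }, { x: 1, y: 0 }, { x: 2, y: 0 }, { x: 3, y: 0 }];      // perfectly straight
const humanPath = [{ x: 0, y: 0 }, { x: 1, y: 0.3 }, { x: 2, y: -0.2 }, { x: 3, y: 0.4 }]; // micro-tremor
// pathTremor(botPath) === 0; pathTremor(humanPath) > 0
```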
+
716
+ TrustScore hard cap: AI agent confidence > 0.85 caps the score at 30.
717
+
718
+ ---
719
+
720
+ ## Refraction: Cross-Environment Timer Calibration
721
+
722
+ Different environments have different timer resolutions. Refraction automatically calibrates scoring thresholds:
723
+
724
+ ```js
725
+ import { calibrate, getProfile } from '@svrnsec/pulse/refraction';
726
+
727
+ const profile = await calibrate();
728
+ // profile.env — 'browser' | 'node' | 'deno' | 'worker'
729
+ // profile.grain — timer resolution in ms
730
+ // profile.thresholds — adjusted scoring thresholds for this environment
731
+ ```
732
+
733
+ The calibration profile adjusts all downstream analysis so a 100µs-clamped Brave browser and a nanosecond-precision Node.js process are scored against appropriate baselines.
734
+
735
+ ---
736
+
737
+ ## Tests
738
+
739
+ ```bash
740
+ npm test
741
+ ```
742
+
743
+ ```
744
+ integration.test.js 43 tests — core engine, provider classifier, commitment, registry
745
+ stress.test.js 92 tests — adversarial: KVM, VMware, Docker, LLM agents,
746
+ Gaussian noise injection, synthetic thermal drift,
747
+ score separation (real min vs VM max)
748
+ engagement.test.js 45 tests — IdleAttestation state machine, thermal classification,
749
+ Population Entropy (all 5 tests), Engagement Token
750
+ creation/verification/replay/tamper, risk signals
751
+ audit.test.js 18 tests — Authenticity Audit: organic vs farm cohorts, CI
752
+ properties, multi-farm fingerprinting, grade thresholds
753
+
754
+ Test Suites: 4 passed
755
+ Tests: 158 passed, 0 failed
756
+ Time: ~1.0s
757
+ ```
758
+
759
+ ---
760
+
761
+ ## Demo
762
+
763
+ ```bash
764
+ node demo/node-demo.js
765
+ ```
766
+
767
+ Simulates real hardware (Box-Muller Gaussian noise — no periodic components, no artificial autocorrelation) and VM timing profiles (0.1ms quantization grid + steal-time bursts every 50 iterations). Runs both through the full analysis and commitment pipeline. No WASM needed.
768
+
769
+ Open `demo/web/index.html` in a browser to see the animated probe running on your actual machine.
770
+
771
+ ---
772
+
773
+ ## Project structure
774
+
775
+ ```
776
+ sovereign-pulse/
777
+ ├── src/
778
+ │ ├── index.js pulse() — main entry point
779
+ │ ├── fingerprint.js Fingerprint class (high-level API)
780
+ │ ├── collector/
781
+ │ │ ├── entropy.js WASM bridge + phased/adaptive routing
782
+ │ │ ├── adaptive.js Adaptive early-exit engine
783
+ │ │ ├── bio.js Mouse/keyboard interference coefficient
784
+ │ │ ├── canvas.js WebGL/2D canvas fingerprint
785
+ │ │ ├── gpu.js WebGPU thermal growth probe
786
+ │ │ ├── dram.js DRAM refresh cycle detector
787
+ │ │ ├── enf.js Electrical Network Frequency probe
788
+ │ │ ├── sabTimer.js Sub-millisecond SAB timer
789
+ │ │ └── idleAttestation.js Proof-of-Idle — thermal hash chain (v0.5.0)
790
+ │ ├── analysis/
791
+ │ │ ├── jitter.js Statistical classifier (6 components)
792
+ │ │ ├── heuristic.js Cross-metric physics coherence engine
793
+ │ │ ├── provider.js Hypervisor/cloud provider classifier
794
+ │ │ ├── audio.js AudioContext callback jitter
795
+ │ │ ├── llm.js LLM agent behavioural detector
796
+ │ │ ├── trustScore.js Unified 0–100 TrustScore engine (v0.4.0)
797
+ │ │ ├── populationEntropy.js Sybil detection — 5 cohort-level tests (v0.5.0)
798
+ │ │ └── authenticityAudit.js $44B question — humanPct + CI (v0.6.0)
799
+ │ ├── middleware/
800
+ │ │ ├── express.js Express/Fastify/Hono drop-in
801
+ │ │ └── next.js Next.js App Router HOC
802
+ │ ├── integrations/
803
+ │ │ ├── react.js usePulse() hook
804
+ │ │ └── react-native.js Expo accelerometer + thermal bridge
805
+ │ ├── proof/
806
+ │ │ ├── fingerprint.js BLAKE3 commitment builder
807
+ │ │ ├── validator.js Server-side proof verifier
808
+ │ │ ├── challenge.js HMAC challenge/response
809
+ │ │ └── engagementToken.js 30s physics-backed engagement token (v0.5.0)
810
+ │ └── registry/
811
+ │ └── serializer.js Provider signature serializer + matcher
812
+ ├── crates/pulse-core/ Rust/WASM entropy probe
813
+ ├── index.d.ts Full TypeScript declarations
814
+ ├── demo/
815
+ │ ├── web/index.html Standalone browser demo
816
+ │ ├── node-demo.js CLI demo (no WASM required)
817
+ │ ├── benchmark.js Generates numbers in this README
818
+ │ └── perf.js Pipeline overhead benchmarks
819
+ └── test/
820
+ ├── integration.test.js 43 tests — core engine
821
+ ├── stress.test.js 92 tests — adversarial attack suite
822
+ ├── engagement.test.js 45 tests — idle / population / tokens
823
+ └── audit.test.js 18 tests — authenticity audit
824
+ ```
825
+
826
+ ---
827
+
828
+ ## Privacy
829
+
830
+ Nothing leaves the browser except a ~1.6KB statistical summary:
831
+
832
+ - Timing arrays → BLAKE3 hashed, only hash transmitted
833
+ - GPU pixel buffers → BLAKE3 hashed, only hash transmitted
834
+ - Mouse coordinates → never stored, only timing deltas used
835
+ - Keystrokes → only dwell/flight times, key labels discarded immediately
836
+
837
+ The server receives enough to verify the proof. Not enough to reconstruct any original signal. Not enough to re-identify a user across sessions.
838
+
839
+ `hardwareId()` is a 128-bit BLAKE3 hash of GPU renderer string + audio sample rate. Stable per physical device, not reversible, not cross-origin linkable.
840
+
841
+ ---
842
+
843
+ ## Limitations
844
+
845
+ - The probe runs for 0.9–3.5 seconds. Best suited for deliberate actions (login, checkout, form submit) not page load.
846
+ - Mobile browsers cap `performance.now()` to 1ms resolution. Signal quality is reduced; the classifier adjusts but scores trend lower.
847
+ - GPU passthrough VMs pass the canvas check. Timing is the primary discriminator in that case.
848
+ - This is one signal among many. High-stakes applications should layer it with behavioral and network signals.
849
+ - The heuristic engine catches unknown VMs via physics. The provider classifier labels them by scheduler signature. If a new hypervisor ships with an unusual quantum, it will be detected and flagged as `generic-vm` until the registry is updated.
850
+
851
+ ---
852
+
853
+ ## FAQ
854
+
855
+ **Does it work with browser extensions installed (uBlock, Privacy Badger, 1Password)?**
856
+
857
+ Yes. Extensions don't touch the physics layer. The core probe is thermal — it measures entropy growth via WASM matrix multiply timing across cold/load/hot CPU phases. Extensions cannot fake DRAM refresh variance or thermal noise on real silicon. Canvas signals (which some extensions do affect) are weighted inputs, not gates. The heuristic engine cross-validates across 5 independent signals, so no single channel can cause a false flag.
858
+
859
+ **What about Brave's timer clamping?**
860
+
861
+ Brave reduces `performance.now()` resolution to 100µs to prevent fingerprinting. We detect this via `timerGranularityMs` and adjust thresholds accordingly. A clamped timer on real hardware still shows thermal variance across phases. A VM with a clamped timer is still flat. The EJR check survives timer clamping — it's a ratio, not an absolute threshold.
862
+
863
+ **Can a VM spoof this?**
864
+
865
+ Spoofing one signal is straightforward. Spoofing all five simultaneously while keeping them mutually coherent with each other is a different problem. The Hurst-AC coherence check specifically catches data that was *generated* to look right rather than *measured* from real hardware — the two signals are physically linked and have to match each other, not just hit individual thresholds. See the [KVM example above](#the-picket-fence-detector) where four physical laws are violated simultaneously.
866
+
867
+ **Does it collect or transmit any personal data?**
868
+
869
+ No. Nothing leaves the browser except a ~1.6KB statistical summary with all raw signals BLAKE3-hashed. The server receives enough to verify the proof. Not enough to reconstruct any original signal or re-identify a user across sessions.
870
+
871
+ **What's the performance overhead?**
872
+
873
+ The probe takes 0.9–3.5 seconds depending on how quickly the signal converges. For obvious VMs it exits at 50 iterations (~0.9s). For real hardware it typically exits around 100–120 iterations (~2s). JavaScript overhead outside the probe itself is under 2ms. Best used on deliberate user actions (login, checkout) not page load.
874
+
875
+ **Mobile support?**
876
+
877
+ Mobile browsers cap `performance.now()` to 1ms resolution which reduces signal quality. The classifier adjusts thresholds and scores trend lower, but the directional verdict (VM vs. physical) remains accurate. The bio layer (touch timing, accelerometer jitter on supported devices) compensates partially.
878
+
879
+ ---
880
+
881
+ ## License
882
+
883
+ MIT