@svrnsec/pulse 0.6.0 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (48)
  1. package/LICENSE +21 -21
  2. package/README.md +883 -622
  3. package/SECURITY.md +86 -86
  4. package/bin/svrnsec-pulse.js +7 -7
  5. package/dist/{pulse.cjs.js → pulse.cjs} +6379 -6420
  6. package/dist/pulse.cjs.map +1 -0
  7. package/dist/pulse.esm.js +6380 -6421
  8. package/dist/pulse.esm.js.map +1 -1
  9. package/index.d.ts +895 -846
  10. package/package.json +185 -165
  11. package/pkg/pulse_core.js +174 -173
  12. package/src/analysis/audio.js +213 -213
  13. package/src/analysis/authenticityAudit.js +408 -390
  14. package/src/analysis/coherence.js +502 -502
  15. package/src/analysis/coordinatedBehavior.js +825 -0
  16. package/src/analysis/heuristic.js +428 -428
  17. package/src/analysis/jitter.js +446 -446
  18. package/src/analysis/llm.js +473 -472
  19. package/src/analysis/populationEntropy.js +404 -403
  20. package/src/analysis/provider.js +248 -248
  21. package/src/analysis/refraction.js +392 -0
  22. package/src/analysis/trustScore.js +356 -356
  23. package/src/cli/args.js +36 -36
  24. package/src/cli/commands/scan.js +192 -192
  25. package/src/cli/runner.js +157 -157
  26. package/src/collector/adaptive.js +200 -200
  27. package/src/collector/bio.js +297 -287
  28. package/src/collector/canvas.js +247 -239
  29. package/src/collector/dram.js +203 -203
  30. package/src/collector/enf.js +311 -311
  31. package/src/collector/entropy.js +195 -195
  32. package/src/collector/gpu.js +248 -245
  33. package/src/collector/idleAttestation.js +480 -480
  34. package/src/collector/sabTimer.js +189 -191
  35. package/src/fingerprint.js +475 -475
  36. package/src/index.js +342 -342
  37. package/src/integrations/react-native.js +462 -459
  38. package/src/integrations/react.js +184 -185
  39. package/src/middleware/express.js +155 -155
  40. package/src/middleware/next.js +174 -175
  41. package/src/proof/challenge.js +249 -249
  42. package/src/proof/engagementToken.js +426 -394
  43. package/src/proof/fingerprint.js +268 -268
  44. package/src/proof/validator.js +83 -143
  45. package/src/registry/serializer.js +349 -349
  46. package/src/terminal.js +263 -263
  47. package/src/update-notifier.js +259 -264
  48. package/dist/pulse.cjs.js.map +0 -1
package/README.md CHANGED
@@ -1,622 +1,883 @@
1
- # @sovereign/pulse
2
-
3
- [![CI](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml/badge.svg)](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml)
4
- [![npm version](https://img.shields.io/npm/v/@svrnsec/pulse.svg?style=flat)](https://www.npmjs.com/package/@svrnsec/pulse)
5
- [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)
6
- [![Security Policy](https://img.shields.io/badge/security-policy-orange.svg)](./SECURITY.md)
7
-
8
- A hardware-physics probe that distinguishes real consumer silicon from sanitised cloud VMs and AI inference endpoints.
9
-
10
- It does not maintain a database of known bad actors. It measures thermodynamic constants.
11
-
12
- ---
13
-
14
- ## 30-Second Quickstart
15
-
16
- ```bash
17
- npm install @svrnsec/pulse
18
- ```
19
-
20
- ```js
21
- // Express — drop-in server-side verification
22
- import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
23
-
24
- app.use('/api', createPulseMiddleware({ minScore: 0.6 }));
25
- ```
26
-
27
- ```jsx
28
- // React — live probe with real-time signal meters
29
- import { usePulse } from '@svrnsec/pulse/react';
30
-
31
- function TrustGate() {
32
- const { run, pct, vmConf, hwConf, earlyVerdict, result } = usePulse();
33
-
34
- return (
35
- <button onClick={run}>
36
- {pct < 100 ? `Probing… ${pct}%` : earlyVerdict}
37
- </button>
38
- );
39
- }
40
- ```
41
-
42
- ```js
43
- // Node.js — raw proof commitment
44
- import { pulse } from '@svrnsec/pulse';
45
-
46
- const proof = await pulse({ nonce: crypto.randomUUID() });
47
- console.log(proof.score, proof.confidence); // 0.798, 'high'
48
- ```
49
-
50
- No API key. No account. No data leaves the client. Runs entirely in your infrastructure.
51
-
52
- ---
53
-
54
- ## The Problem With Every Other Approach
55
-
56
- Every bot detection system is, at its core, a database. Known bad IP ranges. Known headless browser fingerprints. Known datacenter ASNs. Known CAPTCHA-solving services.
57
-
58
- The attacker's job is simple: don't be in the database. The moment a new cloud region launches, a new headless runtime ships, or a new residential proxy network comes online, the database is stale.
59
-
60
- Pulse doesn't work that way.
61
-
62
- A VM's hypervisor clock is mathematically perfect — it cannot produce thermal noise because there is no thermal feedback loop in a virtual timer. Real silicon running under sustained load gets measurably noisier as electrons move through gates that are physically getting hotter. That relationship is a law of physics. It does not change when AWS launches a new instance type in 2027. It does not change when a new hypervisor ships. It cannot be patched.
63
-
64
- ---
65
-
66
- ## The Two Layers
67
-
68
- **Detection** answers: *Is this a VM?*
69
- Handled entirely by the heuristic engine. No signatures, no database. Five physical relationships, measured and cross-checked. If they're mutually coherent with what thermodynamics predicts, it's real hardware. If any of them contradict each other in ways physics wouldn't allow, something is being faked.
70
-
71
- **Classification** answers: *Which VM is it?*
72
- Handled by the provider fingerprinter. Matches the timing autocorrelation profile against known hypervisor scheduler rhythms (KVM's 250ms quantum, Xen's 750ms credit scheduler, Hyper-V's 15.6ms quantum). This is the part that improves with more data — but it's not needed for detection. A brand-new hypervisor from a company that doesn't exist yet will still fail detection the moment it tries to present a mathematically flat clock.
73
-
74
- ---
75
-
76
- ## The Five Physical Signals
77
-
78
- ### 1. Entropy-Jitter Ratio
79
-
80
- The key signal. When a real CPU runs sustained compute, thermal throttling kicks in and timing jitter *increases*: the die gets hotter, the transistors switch slightly slower, and you can measure it.
81
-
82
- ```
83
- hotQE / coldQE ≥ 1.08 → thermal feedback confirmed (real silicon)
84
- hotQE / coldQE ≈ 1.00 → clock is insensitive to guest thermal state (VM)
85
- ```
86
-
87
- A KVM hypervisor maintains a synthetic clock that ticks at a constant rate regardless of what the guest OS is doing. Its entropy ratio across cold/load/hot phases is flat. On 192.222.57.254 it measured 1.01. On the local GTX 1650 Super machine it measured 1.24.
88
-
89
- A software implementation cannot fake this without generating actual heat.
90
-
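As a plain-JavaScript sketch of the metric (illustrative only — `quantizationEntropy` and `entropyJitterRatio` here are hypothetical reimplementations, not the package's WASM probe, and the binning is simplified):

```js
// Shannon entropy (in bits) of timing samples binned onto a fixed grid.
// Illustrative sketch — the shipped probe computes this in WASM.
function quantizationEntropy(samples, bins = 16) {
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const width = (max - min) / bins || 1; // avoid divide-by-zero on constant input
  const counts = new Array(bins).fill(0);
  for (const s of samples) {
    counts[Math.min(bins - 1, Math.floor((s - min) / width))]++;
  }
  let h = 0;
  for (const c of counts) {
    if (c === 0) continue;
    const p = c / samples.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// hotQE / coldQE — ≥ 1.08 suggests thermal feedback, ≈ 1.00 suggests a VM.
function entropyJitterRatio(coldSamples, hotSamples) {
  return quantizationEntropy(hotSamples) / quantizationEntropy(coldSamples);
}
```

Feed it two tightly clustered tick values for the cold phase and a genuinely spread-out hot phase and the ratio climbs well past 1.08; a flat hypervisor clock keeps both phases clustered and the ratio near 1.00.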
91
- ### 2. Hurst-Autocorrelation Coherence
92
-
93
- Genuine Brownian noise (what real hardware timing looks like) has a Hurst exponent near 0.5 and near-zero autocorrelation at all lags. These two are physically linked by the relationship `expected_AC = |2H - 1|`.
94
-
95
- If you measure H=0.5 but find high autocorrelation — or low H but low autocorrelation — the data was generated, not measured. A VM that tries to fake the Hurst Exponent without adjusting the autocorrelation profile, or vice versa, fails this check immediately.
96
-
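The cross-check can be sketched like this; the helper names and the 0.2 tolerance are illustrative, not the engine's tuned values:

```js
// Lag-k autocorrelation of a timing series.
function autocorrelation(samples, lag) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    den += (samples[i] - mean) ** 2;
    if (i + lag < n) num += (samples[i] - mean) * (samples[i + lag] - mean);
  }
  return den === 0 ? 0 : num / den;
}

// Physically linked pair: the expected lag-1 AC for Hurst exponent H is |2H - 1|.
// Data generated to hit one target without adjusting the other fails here.
function hurstAcCoherent(hurst, lag1Ac, tolerance = 0.2) {
  const expectedAc = Math.abs(2 * hurst - 1);
  return Math.abs(Math.abs(lag1Ac) - expectedAc) <= tolerance;
}
```

Plugging in the benchmark pairs from this README: H=0.5505 with AC=0.0698 is coherent (expected ≈ 0.10), while the KVM pair H=0.0271 with AC=0.666 is not (expected ≈ 0.95).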
97
- ### 3. CV-Entropy Coherence
98
-
99
- High coefficient of variation (timing spread) must come from a genuinely spread-out distribution, which means high quantization entropy. A VM that inflates CV by adding synthetic outliers at fixed offsets — say, every 50th iteration triggers a steal-time burst — produces high CV but low entropy because 93% of samples still fall in two bins.
100
-
101
- From 192.222.57.254: CV=0.0829 (seems variable) but QE=1.27 bits (extreme clustering). Incoherent. On real hardware, CV=0.1494 → QE=3.59 bits. Coherent.
102
-
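A minimal sketch of this check (hypothetical helper names; the `minCv`/`minQe` cutoffs are illustrative, not the engine's calibrated curve):

```js
// Coefficient of variation: timing spread relative to the mean.
function coefficientOfVariation(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, s) => a + (s - mean) ** 2, 0) / n;
  return Math.sqrt(variance) / mean;
}

// Binned Shannon entropy in bits (same idea as the quantization-entropy metric).
function binnedEntropyBits(samples, bins = 16) {
  const min = Math.min(...samples);
  const width = (Math.max(...samples) - min) / bins || 1;
  const counts = new Array(bins).fill(0);
  for (const s of samples) counts[Math.min(bins - 1, Math.floor((s - min) / width))]++;
  return counts.reduce((h, c) => {
    if (c === 0) return h;
    const p = c / samples.length;
    return h - p * Math.log2(p);
  }, 0);
}

// Incoherent when the spread looks wide (high CV) but the samples cluster
// in a couple of bins (low entropy) — synthetic outliers on a flat clock.
function cvEntropyIncoherent(samples, { minCv = 0.05, minQe = 2.0 } = {}) {
  return coefficientOfVariation(samples) > minCv && binnedEntropyBits(samples) < minQe;
}
```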
103
- ### 4. The Picket Fence Detector
104
-
105
- Hypervisor scheduler quanta create periodic steal-time bursts. A KVM host running at ~5ms/iteration with a 250ms quantum will pause the guest every ~50 iterations. This shows up as elevated autocorrelation at lag-50 relative to lag-5. The autocorrelation profile looks like fence posts at regular intervals — hence the name.
106
-
107
- ```
108
- Real hardware: lag-1 AC=0.07 lag-50 AC=0.03 (flat, no rhythm)
109
- KVM VM: lag-1 AC=0.67 lag-50 AC=0.71 (periodic steal-time)
110
- ```
111
-
112
- The dominant lag also lets the classifier estimate the scheduler quantum: `lag × 5ms/iter ≈ quantum`. This is how it identifies KVM (250ms), Xen (750ms), and Hyper-V (15.6ms) without any prior knowledge of the host.
113
-
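The detector and the quantum estimate can be sketched together; the `floor` and ratio thresholds below are illustrative stand-ins for the engine's tuned values:

```js
// Lag-k autocorrelation of a timing series.
function autocorrelation(samples, lag) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    den += (samples[i] - mean) ** 2;
    if (i + lag < n) num += (samples[i] - mean) * (samples[i + lag] - mean);
  }
  return den === 0 ? 0 : num / den;
}

// Fence detected when AC at the scheduler lag is elevated both absolutely
// and relative to a short baseline lag.
function picketFence(samples, { schedulerLag = 50, baselineLag = 5, floor = 0.3 } = {}) {
  const far = autocorrelation(samples, schedulerLag);
  const near = autocorrelation(samples, baselineLag);
  return { detected: far > floor && far > 2 * Math.abs(near), far, near };
}

// Quantum estimate from the dominant lag: lag × ms-per-iteration.
function estimateQuantumMs(dominantLag, msPerIteration = 5) {
  return dominantLag * msPerIteration;
}
```

A synthetic burst every 50th sample on a ~5ms base trips the detector and yields the 250ms KVM quantum estimate described above.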
114
- ### 5. Skewness-Kurtosis Coherence
115
-
116
- Real hardware timing is right-skewed with positive kurtosis. OS preemptions create occasional large delays on the right tail, while the body of the distribution stays compact. A VM that adds synthetic spikes at fixed offsets tends to produce the wrong skew direction or an implausibly symmetric distribution.
117
-
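A sketch of the shape test (hypothetical helper names; the real engine weights these moments rather than applying a hard pass/fail):

```js
// Sample skewness and excess kurtosis of a timing series.
function shapeMoments(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  let m2 = 0;
  let m3 = 0;
  let m4 = 0;
  for (const s of samples) {
    const d = s - mean;
    m2 += d ** 2;
    m3 += d ** 3;
    m4 += d ** 4;
  }
  m2 /= n; m3 /= n; m4 /= n;
  const sd = Math.sqrt(m2);
  return { skewness: m3 / sd ** 3, excessKurtosis: m4 / sd ** 4 - 3 };
}

// Plausible hardware shape: a right tail (positive skew) hanging off a
// compact body (positive excess kurtosis).
function plausibleTimingShape(samples) {
  const { skewness, excessKurtosis } = shapeMoments(samples);
  return skewness > 0 && excessKurtosis > 0;
}
```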
118
- ---
119
-
120
- ## Benchmark Results
121
-
122
- *12 trials × 200 iterations. Two real environments.*
123
-
124
- ### Local Machine — GTX 1650 Super · i5-10400 · Win11 · 16GB DDR4
125
-
126
- ```
127
- Pulse Score [████████████████████████████████░░░░░░░░] 79.8%
128
- ```
129
-
130
- | Metric | Value | Physical interpretation |
131
- |---|---|---|
132
- | Coefficient of Variation | 0.1494 | Spread from thermal noise + OS interrupts |
133
- | Hurst Exponent | 0.5505 | Near-Brownian i.i.d. noise from independent sources |
134
- | Quantization Entropy | 3.59 bits | Timings genuinely spread across distribution |
135
- | Autocorr lag-1 | 0.0698 | Near-zero — no periodic forcing |
136
- | Autocorr lag-50 | 0.0312 | Flat at distance — no scheduler rhythm |
137
- | Entropy-Jitter Ratio | 1.24 | Entropy grew 24% from cold to hot — thermal feedback confirmed |
138
- | Thermal Pattern | sawtooth | Fan cycling, not hypervisor |
139
- | Outlier Rate | 2.25% | OS context switches — unpredictable, not periodic |
140
-
141
- **Distribution:**
142
- ```
143
- 3.60ms │██████ 8
144
- 3.88ms │█████ 7
145
- 4.16ms │██████████████ 19
146
- 4.44ms │██████████████████████ 30
147
- 4.73ms │████████████████████████████████████ 50 ← peak
148
- 5.01ms │██████████████████████ 30
149
- 5.29ms │████████████████ 22
150
- 5.57ms │█████████████ 18
151
- 5.85ms │██████ 8
152
- 6.13ms │█ 2
153
- 7.53ms │█ 1 ← OS preemption
154
- 8.94ms │█ 1
155
- ```
156
-
157
- Normal bell curve, right-tailed from OS preemptions. Exactly what Brownian timing noise looks like.
158
-
159
- ---
160
-
161
- ### Remote VM — 192.222.57.254 — KVM · 2 vCPU · 2GB · Ubuntu 22.04
162
-
163
- ```
164
- Pulse Score [██████████████████░░░░░░░░░░░░░░░░░░░░░░] 45.0%
165
- ```
166
-
167
- | Metric | Value | Physical interpretation |
168
- |---|---|---|
169
- | Coefficient of Variation | 0.0829 | Artificially consistent — hypervisor flattens variance |
170
- | Hurst Exponent | 0.0271 | Anti-persistent, caused by timer quantization artifacts |
171
- | Quantization Entropy | 1.27 bits | 93% of samples on two values — not a distribution |
172
- | Autocorr lag-1 | 0.666 | Periodic forcing — steal-time burst every ~50 samples |
173
- | Autocorr lag-50 | 0.710 | Still elevated at lag-50 — confirms periodic scheduler |
174
- | Entropy-Jitter Ratio | 1.01 | Flat — hypervisor clock has no thermal feedback |
175
- | Thermal Pattern | sawtooth (synthetic) | Produced by scheduler bursts, not temperature |
176
- | Outlier Rate | 6.00% | Exactly 6% — the steal-time bursts are deterministic |
177
-
178
- **Distribution:**
179
- ```
180
- 5.00ms │████████████████████████████████████ 123 ← 61% of all samples
181
- 5.11ms │███████████████████ 65 ← 32% of all samples
182
- 5.22ms │ 0
183
- ... │ 0 ← impossible values
184
- 6.72ms │█ 2
185
- 6.83ms │█ 4 ← steal-time bursts
186
- 7.05ms │█ 3
187
- ```
188
-
189
- This is the "Picket Fence" — 93% of samples at exactly two values. Nothing in between. A continuous physical process cannot produce this. A synthetic clock rounding to its host tick resolution can.
190
-
191
- **Heuristic Engine Output:**
192
- ```
193
- ENTROPY_FLAT_UNDER_LOAD EJR=1.01 (expected ≥1.08 for real hardware) penalty -0.10
194
- PICKET_FENCE_DETECTED lag-50 AC=0.71 > baseline 0.08 penalty -0.08
195
- HURST_AUTOCORR_INCOHERENT H=0.027 vs expected AC=|2H-1|=0.946 penalty -0.12
196
- CV_ENTROPY_INCOHERENT CV=0.083 → expected QE≈2.83, actual QE=1.27 penalty -0.10
197
- ```
198
-
199
- Each of those four flags is a different physical law being violated. Spoofing one is straightforward. Spoofing all four simultaneously while keeping them mutually consistent with each other is not.
200
-
201
- ---
202
-
203
- ## Adaptive Early Exit
204
-
205
- The probe doesn't always need 200 iterations. It checks signal confidence every 25 and exits when the verdict is already decisive:
206
-
207
- ```
208
- Environment Iters used Wall time Speedup
209
- ────────────────────────────────────────────────────
210
- KVM (obvious) 50 ~0.9s 75%
211
- VMware ESXi 75 ~1.4s 60%
212
- Physical desktop ~120 ~2.1s 40%
213
- Ambiguous 200 ~3.5s —
214
- ```
215
-
216
- The 192.222.57.254 VM hit the exit condition at iteration 50. The signal was conclusive within the first batch.
217
-
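The batch-and-check loop can be sketched like this. `runIteration` and `classify` are caller-supplied stand-ins for the WASM probe and the statistical classifier, and the 0.9 decisive threshold is illustrative:

```js
// Probe in batches, re-score after each batch, and stop as soon as either
// verdict (VM or hardware) clears the decisive-confidence threshold.
function adaptiveProbe(runIteration, classify, { batch = 25, maxIters = 200, decisive = 0.9 } = {}) {
  const samples = [];
  while (samples.length < maxIters) {
    for (let i = 0; i < batch; i++) samples.push(runIteration());
    const { vmConf, hwConf } = classify(samples);
    if (Math.max(vmConf, hwConf) >= decisive) {
      return { itersUsed: samples.length, verdict: vmConf > hwConf ? 'vm' : 'hardware' };
    }
  }
  return { itersUsed: samples.length, verdict: 'ambiguous' };
}
```

A signal that becomes decisive after two batches exits at iteration 50, matching the KVM row in the table above; a classifier that never reaches the threshold runs the full 200-iteration budget.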
218
- ---
219
-
220
- ## Installation
221
-
222
- ```bash
223
- npm install @svrnsec/pulse
224
- ```
225
-
226
- Node.js ≥ 18. The WASM binary is compiled from Rust and bundled — no separate `.wasm` file to host.
227
-
228
- The package is self-contained. It does not phone home. It does not contact any external service. Everything runs inside your infrastructure.
229
-
230
- To build from source (requires [Rust](https://rustup.rs) and [wasm-pack](https://rustwasm.github.io/wasm-pack/)):
231
-
232
- ```bash
233
- git clone https://github.com/ayronny14-alt/Svrn-Pulse-Security
234
- cd Svrn-Pulse-Security
235
- npm install
236
- npm run build
237
- ```
238
-
239
- ---
240
-
241
- ## Usage
242
-
243
- ### Client side
244
-
245
- ```js
246
- import { pulse } from '@sovereign/pulse';
247
-
248
- // Get a nonce from your server (prevents replay attacks)
249
- const { nonce } = await fetch('/api/pulse/challenge').then(r => r.json());
250
-
251
- // Run the probe — adaptive, exits early when signal is decisive
252
- const { payload, hash } = await pulse({
253
- nonce,
254
- onProgress: (stage, meta) => {
255
- if (stage === 'entropy_batch') {
256
- // Live signal during probe — stream to a progress bar
257
- // meta: { pct, vmConf, hwConf, earlyVerdict, etaMs }
258
- console.log(`${meta.pct}% ${meta.earlyVerdict ?? 'measuring...'}`);
259
- }
260
- },
261
- });
262
-
263
- // Send commitment to your server
264
- const result = await fetch('/api/pulse/verify', {
265
- method: 'POST',
266
- headers: { 'Content-Type': 'application/json' },
267
- body: JSON.stringify({ payload, hash }),
268
- }).then(r => r.json());
269
- ```
270
-
271
- ### High-level `Fingerprint` class
272
-
273
- ```js
274
- import { Fingerprint } from '@sovereign/pulse';
275
-
276
- const fp = await Fingerprint.collect({ nonce });
277
-
278
- fp.isSynthetic // true / false
279
- fp.score // 0.0–1.0
280
- fp.confidence // 0–100
281
- fp.tier // 'high' | 'medium' | 'low' | 'uncertain'
282
- fp.profile // 'analog-fog' | 'picket-fence' | 'burst-scheduler' | ...
283
- fp.providerId // 'kvm-digitalocean' | 'nitro-aws' | 'physical' | ...
284
- fp.providerLabel // 'DigitalOcean Droplet (KVM)'
285
- fp.schedulerQuantumMs // 250 estimated from autocorrelation peak lag
286
- fp.entropyJitterRatio // 1.24 — hotQE / coldQE
287
- fp.topFlag // 'PICKET_FENCE_DETECTED'
288
- fp.findings // full heuristic engine report
289
- fp.physicalEvidence // confirmed physical properties (bonuses)
290
-
291
- fp.hardwareId()        // stable 16-char hex ID: BLAKE3(GPU + audio signals)
292
- fp.metrics() // flat object of all numeric metrics for logging
293
- fp.toCommitment() // { payload, hash } — send to server
294
- ```
295
-
296
- ### Server side
297
-
298
- ```js
299
- import { validateProof, generateNonce } from '@sovereign/pulse/validator';
300
-
301
- // Challenge endpoint runs on your server, not ours
302
- app.get('/api/pulse/challenge', async (req, res) => {
303
- const nonce = generateNonce();
304
- await redis.set(`pulse:${nonce}`, '1', 'EX', 300);
305
- res.json({ nonce });
306
- });
307
-
308
- // Verify endpoint
309
- app.post('/api/pulse/verify', async (req, res) => {
310
- const result = await validateProof(req.body.payload, req.body.hash, {
311
- minJitterScore: 0.55,
312
- requireBio: false,
313
- checkNonce: async (n) => redis.del(`pulse:${n}`).then(d => d === 1),
314
- });
315
- res.json(result);
316
- });
317
- ```
318
-
319
- ### Express middleware
320
-
321
- ```js
322
- import { createPulseMiddleware } from '@sovereign/pulse/middleware/express';
323
-
324
- const pulse = createPulseMiddleware({
325
- threshold: 0.6,
326
- store: {
327
- set: (k, ttl) => redis.set(k, '1', 'EX', ttl),
328
- consume: (k) => redis.del(k).then(n => n === 1),
329
- },
330
- });
331
-
332
- app.get('/api/pulse/challenge', pulse.challenge);
333
- app.post('/checkout', pulse.verify, handler); // req.pulse injected
334
- ```
335
-
336
- ### Next.js App Router
337
-
338
- ```js
339
- // app/api/pulse/challenge/route.js
340
- import { pulseChallenge } from '@sovereign/pulse/middleware/next';
341
- export const GET = pulseChallenge();
342
-
343
- // app/api/checkout/route.js
344
- import { withPulse } from '@sovereign/pulse/middleware/next';
345
- export const POST = withPulse({ threshold: 0.6 })(async (req) => {
346
- const { score, provider } = req.pulse;
347
- return Response.json({ ok: true, score });
348
- });
349
- ```
350
-
351
- ### React hook
352
-
353
- ```jsx
354
- import { usePulse } from '@sovereign/pulse/react';
355
-
356
- function Checkout() {
357
- const { run, stage, pct, vmConf, hwConf, result, isReady } = usePulse({
358
- challengeUrl: '/api/pulse/challenge',
359
- verifyUrl: '/api/pulse/verify',
360
- });
361
-
362
- return (
363
- <button onClick={run} disabled={!isReady && stage !== null}>
364
- {stage === 'entropy_batch'
365
- ? `Measuring... ${pct}% (VM: ${vmConf.toFixed(2)} / HW: ${hwConf.toFixed(2)})`
366
- : 'Verify Device'}
367
- </button>
368
- );
369
- }
370
- ```
371
-
372
- ### TypeScript
373
-
374
- Full declarations shipped in `index.d.ts`. Every interface, every callback, every return type:
375
-
376
- ```ts
377
- import { pulse, Fingerprint } from '@sovereign/pulse';
378
- import type {
379
- PulseOptions, PulseCommitment,
380
- ProgressMeta, PulseStage,
381
- ValidationResult, FingerprintReport,
382
- } from '@sovereign/pulse';
383
-
384
- const fp = await Fingerprint.collect({ nonce });
385
- // fp is fully typed — all properties, methods, and nested objects
386
- ```
387
-
388
- ---
389
-
390
- ## Validation result
391
-
392
- ```js
393
- {
394
- valid: true,
395
- score: 0.8215, // heuristic-adjusted score
396
- confidence: 'high', // 'high' | 'medium' | 'low' | 'rejected'
397
- reasons: [], // populated when valid: false
398
- riskFlags: [], // non-blocking signals worth logging
399
- meta: {
400
- receivedAt: 1742686350535,
401
- proofAge: 2841, // ms since probe ran
402
- jitterScore: 0.7983,
403
- canvasRenderer: 'NVIDIA GeForce GTX 1650 Super/PCIe/SSE2',
404
- bioActivity: true,
405
- }
406
- }
407
- ```
408
-
409
- **Score thresholds:**
410
-
411
- | Score | Confidence | Meaning |
412
- |---|---|---|
413
- | ≥ 0.75 | high | Real consumer hardware |
414
- | 0.55 – 0.75 | medium | Likely real, some signals ambiguous |
415
- | 0.35 – 0.55 | low | Borderline — VM, Chromebook, virtual display |
416
- | < 0.35 | rejected | Strong VM/AI indicators |
417
-
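The threshold table maps directly onto a trivial helper; `confidenceTier` is a hypothetical name, not an export of the package:

```js
// Map a 0–1 validation score onto the confidence tiers from the table above.
function confidenceTier(score) {
  if (score >= 0.75) return 'high';
  if (score >= 0.55) return 'medium';
  if (score >= 0.35) return 'low';
  return 'rejected';
}
```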
418
- ---
419
-
420
- ## Detection capabilities
421
-
422
- | Scenario | Result | Primary signal |
423
- |---|---|---|
424
- | Cloud VM (AWS, GCP, Azure, DO) | Blocked | EJR flat + quantized ticks + picket fence |
425
- | Headless Chrome / Puppeteer | Blocked | SwiftShader renderer + no bio activity |
426
- | AI inference endpoint | Blocked | VM timing profile + zero bio signals |
427
- | Proof replay attack | Blocked | Nonce consumed atomically on first use |
428
- | Payload tampering | Blocked | BLAKE3 hash fails immediately |
429
- | Metric spoofing (one signal) | Blocked | Cross-metric coherence check |
430
- | Metric spoofing (all signals) | Very hard | 5 physically-linked relationships must be jointly coherent |
431
- | Hardware you've never seen before | Blocked | Physics is the check, not a database |
432
- | GPU passthrough VMs | Partial | Canvas check varies; timing is primary |
433
- | Remote desktop (real machine) | Pass | Timing is real; bio may be weak |
434
-
435
- ---
436
-
437
- ## The Registry: Classification, Not Detection
438
-
439
- The `src/registry/serializer.js` module stores signatures for known provider environments. It is used for the **label**, not the **verdict**.
440
-
441
- If the heuristic engine says "this is a VM," the registry says "specifically, this is a DigitalOcean Droplet running KVM with a 5ms scheduler quantum." If the registry has never seen this particular hypervisor before, it returns `profile: 'generic-vm'` — but the heuristic engine already caught it.
442
-
443
- You can extend the registry with a signature collected from any new environment:
444
-
445
- ```js
446
- import { serializeSignature, KNOWN_PROFILES } from '@sovereign/pulse/registry';
447
-
448
- // After collecting a Fingerprint on the target machine:
449
- const sig = serializeSignature(fp, { name: 'AWS r7g.xlarge (Graviton3)', date: '2025-01' });
450
- // sig.id → deterministic 'sig_abc123...'
451
- // Buckets continuous metrics for privacy — not reversible to raw values
452
- ```
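The privacy bucketing can be sketched as follows; `bucketMetrics` and its step sizes are hypothetical — the real serializer's grids are internal:

```js
// Round continuous metrics onto coarse grids so a stored signature
// cannot be reversed to the raw measurement it came from.
function bucketMetrics(metrics, steps = { cv: 0.02, hurst: 0.05, qe: 0.25 }) {
  const out = {};
  for (const [key, value] of Object.entries(metrics)) {
    const step = steps[key] ?? 0.1;
    out[key] = Math.round(value / step) * step;
  }
  return out;
}
```

Two nearby raw measurements land in the same buckets, which is what makes signature matching tolerant while discarding raw precision.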
453
-
454
- The detection engine doesn't need updates when new hardware ships. The registry benefits from them for labelling accuracy.
455
-
456
- ---
457
-
458
- ## Tests
459
-
460
- ```bash
461
- npm test
462
- ```
463
-
464
- ```
465
- computeStats ✓ basic statistics are correct
466
- ✓ constant array has zero CV
467
- computeHurst ✓ returns value in [0,1]
468
- ✓ constant series returns ~0.5 (fallback)
469
- detectQuantizationEntropy ✓ real hardware samples have high entropy
470
- ✓ quantized (VM) samples have low entropy
471
- detectThermalSignature ✓ detects rising pattern
472
- ✓ detects flat pattern
473
- classifyJitter ✓ real hardware scores higher than VM
474
- ✓ score is in [0,1]
475
- ✓ VM samples are flagged
476
- ✓ insufficient data returns zero score with flag
477
- runHeuristicEngine ✓ EJR < 1.02 triggers penalty
478
- ✓ EJR ≥ 1.08 triggers bonus
479
- ✓ Hurst-autocorr incoherence penalised
480
- ✓ picket fence detector triggers on periodic AC
481
- ✓ skewness-kurtosis bonus on right-skewed leptokurtic
482
- ✓ clean metrics produce no flags
483
- detectProvider ✓ KVM profile matched from autocorr signature
484
- ✓ physical profile matched from analog-fog metrics
485
- ✓ scheduler quantum estimated from lag-25 AC
486
- ✓ Nitro identified from near-flat AC profile
487
- ✓ alternatives list populated
488
- buildCommitment ✓ produces deterministic hash
489
- ✓ any field change breaks the hash
490
- canonicalJson ✓ sorts keys deterministically
491
- validateProof ✓ valid proof passes
492
- ✓ tampered payload is rejected
493
- ✓ low jitter score is rejected
494
- ✓ software renderer is blocked
495
- ✓ expired proof is rejected
496
- ✓ nonce check is called
497
- ✓ rejected nonce fails proof
498
- generateNonce ✓ produces 64-char hex strings
499
- ✓ each call is unique
500
- serializeSignature ✓ produces deterministic sig_ ID
501
- ✓ buckets continuous metrics for privacy
502
- ✓ isSynthetic flag preserved
503
- matchRegistry ✓ exact match returns similarity 1.0
504
- ✓ different class returns low similarity
505
- ✓ alternatives sorted by similarity
506
- compareSignatures ✓ same class returns sameClass=true
507
- ✓ physical vs VM returns sameClass=false
508
-
509
- Test Suites: 1 passed
510
- Tests: 43 passed, 0 failed
511
- Time: 0.327s
512
- ```
513
-
514
- ---
515
-
516
- ## Demo
517
-
518
- ```bash
519
- node demo/node-demo.js
520
- ```
521
-
522
- Simulates real hardware (Box-Muller Gaussian noise: no periodic components, no artificial autocorrelation) and VM timing profiles (0.1ms quantization grid + steal-time bursts every 50 iterations). Runs both through the full analysis and commitment pipeline. No WASM needed.
523
-
524
- Open `demo/web/index.html` in a browser to see the animated probe running on your actual machine.
525
-
526
- ---
527
-
528
- ## Project structure
529
-
530
- ```
531
- sovereign-pulse/
532
- ├── src/
533
- │ ├── index.js pulse() main entry point
534
- │ ├── fingerprint.js Fingerprint class (high-level API)
535
- │ ├── collector/
536
- │ │ ├── entropy.js WASM bridge + phased/adaptive routing
537
- │ │ ├── adaptive.js Adaptive early-exit engine
538
- │ │ ├── bio.js Mouse/keyboard interference coefficient
539
- │ │ └── canvas.js WebGL/2D canvas fingerprint
540
- │ ├── analysis/
541
- │ │ ├── jitter.js Statistical classifier (6 components)
542
- │ │ ├── heuristic.js Cross-metric physics coherence engine
543
- │ │ ├── provider.js Hypervisor/cloud provider classifier
544
- │ │ └── audio.js AudioContext callback jitter
545
- │ ├── middleware/
546
- │ │ ├── express.js Express/Fastify/Hono drop-in
547
- │ │ └── next.js Next.js App Router HOC
548
- │ ├── integrations/
549
- │ │ └── react.js usePulse() hook
550
- │ ├── proof/
551
- │ │ ├── fingerprint.js BLAKE3 commitment builder
552
- │ │ └── validator.js Server-side proof verifier
553
- │ └── registry/
554
- │ └── serializer.js Provider signature serializer + matcher
555
- ├── crates/pulse-core/ Rust/WASM entropy probe
556
- ├── index.d.ts Full TypeScript declarations
557
- ├── demo/
558
- │ ├── web/index.html Standalone browser demo
559
- │ ├── node-demo.js CLI demo (no WASM required)
560
- │ ├── benchmark.js Generates numbers in this README
561
- │ └── perf.js Pipeline overhead benchmarks
562
- └── test/integration.test.js 43 tests
563
- ```
564
-
565
- ---
566
-
567
- ## Privacy
568
-
569
- Nothing leaves the browser except a ~1.6KB statistical summary:
570
-
571
- - Timing arrays → BLAKE3 hashed, only hash transmitted
572
- GPU pixel buffers → BLAKE3 hashed, only hash transmitted
573
- - Mouse coordinates → never stored, only timing deltas used
574
- Keystrokes → only dwell/flight times, key labels discarded immediately
575
-
576
- The server receives enough to verify the proof. Not enough to reconstruct any original signal. Not enough to re-identify a user across sessions.
577
-
578
- `hardwareId()` is a BLAKE3 hash of GPU renderer string + audio sample rate. Stable per physical device, not reversible, not cross-origin linkable.
579
-
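A shape-only sketch of that ID (illustrative: the package hashes with BLAKE3, which `node:crypto` does not ship, so SHA-256 stands in here purely to show the construction):

```js
import { createHash } from 'node:crypto';

// Stable, non-reversible device ID: hash of the GPU renderer string plus
// the audio sample rate, truncated to 16 hex chars. SHA-256 is a stand-in
// for the package's BLAKE3.
function hardwareIdSketch(gpuRenderer, audioSampleRate) {
  return createHash('sha256')
    .update(`${gpuRenderer}|${audioSampleRate}`)
    .digest('hex')
    .slice(0, 16);
}
```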
580
- ---
581
-
582
- ## Limitations
583
-
584
- - The probe runs for 0.9–3.5 seconds. Best suited for deliberate actions (login, checkout, form submit), not page load.
585
- - Mobile browsers cap `performance.now()` to 1ms resolution. Signal quality is reduced; the classifier adjusts but scores trend lower.
586
- - GPU passthrough VMs pass the canvas check. Timing is the primary discriminator in that case.
587
- - This is one signal among many. High-stakes applications should layer it with behavioral and network signals.
588
- - The heuristic engine catches unknown VMs via physics. The provider classifier labels them by scheduler signature. If a new hypervisor ships with an unusual quantum, it will be detected and flagged as `generic-vm` until the registry is updated.
589
-
590
- ---
591
-
592
- ## FAQ
593
-
594
- **Does it work with browser extensions installed (uBlock, Privacy Badger, 1Password)?**
595
-
596
- Yes. Extensions don't touch the physics layer. The core probe is thermal — it measures entropy growth via WASM matrix multiply timing across cold/load/hot CPU phases. Extensions cannot fake DRAM refresh variance or thermal noise on real silicon. Canvas signals (which some extensions do affect) are weighted inputs, not gates. The heuristic engine cross-validates across 5 independent signals, so no single channel can cause a false flag.
597
-
598
- **What about Brave's timer clamping?**
599
-
600
- Brave reduces `performance.now()` resolution to 100µs to prevent fingerprinting. We detect this via `timerGranularityMs` and adjust thresholds accordingly. A clamped timer on real hardware still shows thermal variance across phases. A VM with a clamped timer is still flat. The EJR check survives timer clamping — it's a ratio, not an absolute threshold.
601
-
602
- **Can a VM spoof this?**
603
-
604
- Spoofing one signal is straightforward. Spoofing all five simultaneously while keeping them mutually coherent with each other is a different problem. The Hurst-AC coherence check specifically catches data that was *generated* to look right rather than *measured* from real hardware — the two signals are physically linked and have to match each other, not just hit individual thresholds. See the [KVM example above](#the-picket-fence-detector) where four physical laws are violated simultaneously.
605
-
606
- **Does it collect or transmit any personal data?**
607
-
608
- No. Nothing leaves the browser except a ~1.6KB statistical summary with all raw signals BLAKE3-hashed. The server receives enough to verify the proof. Not enough to reconstruct any original signal or re-identify a user across sessions.
609
-
610
- **What's the performance overhead?**
611
-
612
- The probe takes 0.9–3.5 seconds depending on how quickly the signal converges. For obvious VMs it exits at 50 iterations (~0.9s). For real hardware it typically exits around 100–120 iterations (~2s). JavaScript overhead outside the probe itself is under 2ms. Best used on deliberate user actions (login, checkout), not page load.
613
-
614
- **Mobile support?**
615
-
616
- Mobile browsers cap `performance.now()` to 1ms resolution which reduces signal quality. The classifier adjusts thresholds and scores trend lower, but the directional verdict (VM vs. physical) remains accurate. The bio layer (touch timing, accelerometer jitter on supported devices) compensates partially.
617
-
618
- ---
619
-
620
- ## License
621
-
622
- MIT
1
+ # @svrnsec/pulse
2
+
3
+ [![CI](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml/badge.svg)](https://github.com/ayronny14-alt/Svrn-Pulse-Security/actions/workflows/ci.yml)
4
+ [![npm version](https://img.shields.io/npm/v/@svrnsec/pulse.svg?style=flat)](https://www.npmjs.com/package/@svrnsec/pulse)
5
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](./LICENSE)
6
+ [![Security Policy](https://img.shields.io/badge/security-policy-orange.svg)](./SECURITY.md)
7
+
8
+ A hardware-physics probe that distinguishes real consumer silicon from sanitised cloud VMs and AI inference endpoints.
9
+
10
+ It does not maintain a database of known bad actors. It measures thermodynamic constants.
11
+
12
+ ---
13
+
14
+ ## 30-Second Quickstart
15
+
16
+ ```bash
17
+ npm install @svrnsec/pulse
18
+ ```
19
+
20
+ ```js
21
+ // Express — drop-in server-side verification
22
+ import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
23
+
24
+ app.use('/api', createPulseMiddleware({ minScore: 0.6 }));
25
+ ```
26
+
27
+ ```jsx
28
+ // React — live probe with real-time signal meters
29
+ import { usePulse } from '@svrnsec/pulse/react';
30
+
31
+ function TrustGate() {
32
+ const { run, pct, vmConf, hwConf, earlyVerdict, result } = usePulse();
33
+
34
+ return (
35
+ <button onClick={run}>
36
+ {pct < 100 ? `Probing… ${pct}%` : earlyVerdict}
37
+ </button>
38
+ );
39
+ }
40
+ ```
41
+
42
+ ```js
43
+ // Node.js — raw proof commitment
44
+ import { pulse } from '@svrnsec/pulse';
45
+
46
+ const { payload, hash } = await pulse({ nonce: crypto.randomUUID() });
47
+ // payload.classification.jitterScore → 0.798 (real hw) | 0.45 (VM)
48
+ // payload.classification.flags → [] (clean) | ['CV_TOO_HIGH_...'] (VM)
49
+ // hash → BLAKE3 commitment you send to your server for validation
50
+ ```
51
+
52
+ **Self-hosted mode:** No API key. No account. No data leaves the client. Runs entirely in your infrastructure.
53
+
54
+ **Hosted API mode:** Zero server setup — pass an `apiKey` and the SDK handles challenge/verify automatically:
55
+
56
+ ```js
57
+ const result = await pulse({ apiKey: 'sk_live_...' });
58
+ // result.result.valid, result.result.score, result.result.confidence
59
+ ```
60
+
61
+ ---
62
+
63
+ ## The Problem With Every Other Approach
64
+
65
+ Every bot detection system is, at its core, a database. Known bad IP ranges. Known headless browser fingerprints. Known datacenter ASNs. Known CAPTCHA-solving services.
66
+
67
+ The attacker's job is simple: don't be in the database. The moment a new cloud region launches, a new headless runtime ships, or a new residential proxy network comes online, the database is stale.
68
+
69
+ Pulse doesn't work that way.
70
+
71
+ A VM's hypervisor clock is mathematically perfect — it cannot produce thermal noise because there is no thermal feedback loop in a virtual timer. Real silicon running under sustained load gets measurably noisier as electrons move through gates that are physically getting hotter. That relationship is a law of physics. It does not change when AWS launches a new instance type in 2027. It does not change when a new hypervisor ships. It cannot be patched.
72
+
73
+ ---
74
+
75
+ ## The Two Layers
76
+
77
+ **Detection** answers: *Is this a VM?*
78
+ Handled entirely by the heuristic engine. No signatures, no database. Five physical relationships, measured and cross-checked. If they're mutually coherent with what thermodynamics predicts, it's real hardware. If any of them contradict each other in ways physics wouldn't allow, something is being faked.
79
+
80
+ **Classification** answers: *Which VM is it?*
81
+ Handled by the provider fingerprinter. Matches the timing autocorrelation profile against known hypervisor scheduler rhythms (KVM's 250ms quantum, Xen's 750ms credit scheduler, Hyper-V's 15.6ms quantum). This is the part that improves with more data — but it's not needed for detection. A brand-new hypervisor from a company that doesn't exist yet will still fail detection the moment it tries to present a mathematically flat clock.
82
+
83
+ ---
84
+
85
+ ## The Five Physical Signals
86
+
87
+ ### 1. Entropy-Jitter Ratio
88
+
89
+ The key signal. When a real CPU runs sustained compute, thermal throttling kicks in and timing jitter *increases* — the die gets hotter, the transistors switch slightly slower, and you can measure it.
90
+
91
+ ```
92
+ hotQE / coldQE ≥ 1.08 → thermal feedback confirmed (real silicon)
93
+ hotQE / coldQE ≈ 1.00 → clock is insensitive to guest thermal state (VM)
94
+ ```
95
+
96
+ A KVM hypervisor maintains a synthetic clock that ticks at a constant rate regardless of what the guest OS is doing. Its entropy ratio across cold/load/hot phases is flat. On a KVM VM (12 vCPU / 480GB RAM / GH200 Grace Hopper) it measured 1.01. On a local GTX 1650 Super machine it measured 1.24.
97
+
98
+ A software implementation cannot fake this without generating actual heat.
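The ratio itself is cheap to compute once you have per-phase timing samples. A minimal sketch of the idea — the bin width, function names, and 1.08 threshold here are illustrative, not the engine's internals:

```javascript
// Shannon entropy (in bits) of timing samples quantized into fixed-width bins.
function quantizationEntropy(samplesMs, binWidthMs = 0.28) {
  const counts = new Map();
  for (const s of samplesMs) {
    const bin = Math.round(s / binWidthMs);
    counts.set(bin, (counts.get(bin) ?? 0) + 1);
  }
  let bits = 0;
  for (const c of counts.values()) {
    const p = c / samplesMs.length;
    bits -= p * Math.log2(p);
  }
  return bits;
}

// Entropy-Jitter Ratio: hot-phase entropy over cold-phase entropy.
// ≥ 1.08 → thermal feedback present (real silicon); ≈ 1.00 → flat clock (VM).
function entropyJitterRatio(coldSamplesMs, hotSamplesMs) {
  return quantizationEntropy(hotSamplesMs) / quantizationEntropy(coldSamplesMs);
}
```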
99
+
100
+ ### 2. Hurst-Autocorrelation Coherence
101
+
102
+ Genuine Brownian noise (what real hardware timing looks like) has a Hurst exponent near 0.5 and near-zero autocorrelation at all lags. These two are physically linked by the relationship `expected_AC = |2H - 1|`.
103
+
104
+ If you measure H=0.5 but find high autocorrelation — or low H but low autocorrelation — the data was generated, not measured. A VM that tries to fake the Hurst exponent without adjusting the autocorrelation profile, or vice versa, fails this check immediately.
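The coherence test is a one-liner once H and the autocorrelation are in hand. A sketch — the tolerance value is an illustrative placeholder, not the engine's calibrated constant:

```javascript
// Autocorrelation of a sample series at a given lag.
function autocorr(xs, lag = 1) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    den += (xs[i] - mean) ** 2;
    if (i + lag < n) num += (xs[i] - mean) * (xs[i + lag] - mean);
  }
  return num / den;
}

// Brownian noise ties the two together: expected AC = |2H - 1|.
// A large gap between measured and expected AC means generated, not measured.
function hurstAcCoherent(hurst, measuredAc, tolerance = 0.25) {
  const expectedAc = Math.abs(2 * hurst - 1);
  return Math.abs(measuredAc - expectedAc) <= tolerance;
}
```

Plugging in the benchmark numbers below: H=0.5505 with lag-1 AC 0.0698 is coherent, while the GH200 VM's H=0.027 against AC 0.666 is not.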
105
+
106
+ ### 3. CV-Entropy Coherence
107
+
108
+ High coefficient of variation (timing spread) must come from a genuinely spread-out distribution, which means high quantization entropy. A VM that inflates CV by adding synthetic outliers at fixed offsets — say, every 50th iteration triggers a steal-time burst — produces high CV but low entropy because 93% of samples still fall in two bins.
109
+
110
+ From a KVM GH200 VM: CV=0.0829 (seems variable) but QE=1.27 bits (extreme clustering). Incoherent. On real hardware, CV=0.1494 → QE=3.59 bits. Coherent.
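The check can be sketched as follows — the cutoff values here are illustrative placeholders, not the engine's calibrated thresholds:

```javascript
// Coefficient of variation: standard deviation over mean.
function coefficientOfVariation(xs) {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((a, x) => a + (x - mean) ** 2, 0) / xs.length;
  return Math.sqrt(variance) / mean;
}

// Incoherent when timings look spread out (high CV) but collapse into a
// couple of quantization bins (low entropy) — synthetic outliers, not noise.
function cvEntropyIncoherent(cv, qeBits, cvHigh = 0.05, qeLow = 2.0) {
  return cv > cvHigh && qeBits < qeLow;
}
```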
111
+
112
+ ### 4. The Picket Fence Detector
113
+
114
+ Hypervisor scheduler quanta create periodic steal-time bursts. A KVM host running at ~5ms/iteration with a 250ms quantum will pause the guest every ~50 iterations. This shows up as elevated autocorrelation at lag-50 relative to lag-5. The autocorrelation profile looks like fence posts at regular intervals — hence the name.
115
+
116
+ ```
117
+ Real hardware: lag-1 AC=0.07 lag-50 AC=0.03 (flat, no rhythm)
118
+ KVM VM: lag-1 AC=0.67 lag-50 AC=0.71 (periodic steal-time)
119
+ ```
120
+
121
+ The dominant lag also lets the classifier estimate the scheduler quantum: `lag × 5ms/iter ≈ quantum`. This is how it identifies KVM (250ms), Xen (750ms), and Hyper-V (15.6ms) without any prior knowledge of the host.
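A sketch of the quantum estimate — the helper names and the fixed 5ms iteration cost are illustrative simplifications:

```javascript
// Find the lag with the strongest autocorrelation (the dominant "fence post").
// acByLag[k] holds the autocorrelation at lag k; index 0 is unused.
function dominantLag(acByLag) {
  let best = 1;
  for (let k = 2; k < acByLag.length; k++) {
    if (acByLag[k] > acByLag[best]) best = k;
  }
  return best;
}

// Scheduler quantum ≈ dominant lag × per-iteration cost.
// Lag 50 at ~5ms/iter → 250ms (KVM); ~150 → 750ms (Xen); ~3 → ≈15ms (Hyper-V's 15.6ms).
function estimateQuantumMs(lag, msPerIteration = 5) {
  return lag * msPerIteration;
}
```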
122
+
123
+ ### 5. Skewness-Kurtosis Coherence
124
+
125
+ Real hardware timing is right-skewed with positive kurtosis. OS preemptions create occasional large delays on the right tail, while the body of the distribution stays compact. A VM that adds synthetic spikes at fixed offsets tends to produce the wrong skew direction or an implausibly symmetric distribution.
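The moments involved are standard sample statistics. A sketch of the check, with the pass condition simplified to plain sign tests (the real engine's thresholds are calibrated, not just sign-based):

```javascript
// Sample skewness and excess kurtosis of a timing series.
function moments(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  const central = (k) => xs.reduce((a, x) => a + (x - mean) ** k, 0) / n;
  const sd = Math.sqrt(central(2));
  return {
    skewness: central(3) / sd ** 3,
    kurtosis: central(4) / sd ** 4 - 3, // excess kurtosis (normal = 0)
  };
}

// Real hardware: right tail from OS preemptions → positive skew, positive excess kurtosis.
function skewKurtosisPlausible({ skewness, kurtosis }) {
  return skewness > 0 && kurtosis > 0;
}
```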
126
+
127
+ ---
128
+
129
+ ## Benchmark Results
130
+
131
+ *12 trials × 200 iterations. Two real environments.*
132
+
133
+ ### Local Machine — GTX 1650 Super · i5-10400 · Win11 · 16GB DDR4
134
+
135
+ ```
136
+ Pulse Score [████████████████████████████████░░░░░░░░] 79.8%
137
+ ```
138
+
139
+ | Metric | Value | Physical interpretation |
140
+ |---|---|---|
141
+ | Coefficient of Variation | 0.1494 | Spread from thermal noise + OS interrupts |
142
+ | Hurst Exponent | 0.5505 | Near-Brownian — i.i.d. noise from independent sources |
143
+ | Quantization Entropy | 3.59 bits | Timings genuinely spread across distribution |
144
+ | Autocorr lag-1 | 0.0698 | Near-zero — no periodic forcing |
145
+ | Autocorr lag-50 | 0.0312 | Flat at distance — no scheduler rhythm |
146
+ | Entropy-Jitter Ratio | 1.24 | Entropy grew 24% from cold to hot — thermal feedback confirmed |
147
+ | Thermal Pattern | sawtooth | Fan cycling, not hypervisor |
148
+ | Outlier Rate | 2.25% | OS context switches — unpredictable, not periodic |
149
+
150
+ **Distribution:**
151
+ ```
152
+ 3.60ms │██████ 8
153
+ 3.88ms │█████ 7
154
+ 4.16ms │██████████████ 19
155
+ 4.44ms │██████████████████████ 30
156
+ 4.73ms │████████████████████████████████████ 50 ← peak
157
+ 5.01ms │██████████████████████ 30
158
+ 5.29ms │████████████████ 22
159
+ 5.57ms │█████████████ 18
160
+ 5.85ms │██████ 8
161
+ 6.13ms │█ 2
162
+ 7.53ms │█ 1 ← OS preemption
163
+ 8.94ms │█ 1
164
+ ```
165
+
166
+ Normal bell curve, right-tailed from OS preemptions. Exactly what Brownian timing noise looks like.
167
+
168
+ ---
169
+
170
+ ### Remote VM — KVM · 12 vCPU · 480GB RAM · NVIDIA GH200 Grace Hopper · Ubuntu 22.04
171
+
172
+ ```
173
+ Pulse Score [██████████████████░░░░░░░░░░░░░░░░░░░░░░] 45.0%
174
+ ```
175
+
176
+ | Metric | Value | Physical interpretation |
177
+ |---|---|---|
178
+ | Coefficient of Variation | 0.0829 | Artificially consistent — hypervisor flattens variance |
179
+ | Hurst Exponent | 0.0271 | Anti-persistent — caused by timer quantization artifacts |
180
+ | Quantization Entropy | 1.27 bits | 93% of samples on two values — not a distribution |
181
+ | Autocorr lag-1 | 0.666 | Periodic forcing — steal-time burst every ~50 samples |
182
+ | Autocorr lag-50 | 0.710 | Still elevated at lag-50 — confirms periodic scheduler |
183
+ | Entropy-Jitter Ratio | 1.01 | Flat — hypervisor clock has no thermal feedback |
184
+ | Thermal Pattern | sawtooth (synthetic) | Produced by scheduler bursts, not temperature |
185
+ | Outlier Rate | 6.00% | Exactly 6% — the steal-time bursts are deterministic |
186
+
187
+ **Distribution:**
188
+ ```
189
+ 5.00ms │████████████████████████████████████ 123 ← 61% of all samples
190
+ 5.11ms │███████████████████ 65 ← 32% of all samples
191
+ 5.22ms │ 0
192
+ ... │ 0 ← impossible values
193
+ 6.72ms │█ 2
194
+ 6.83ms │█ 4 ← steal-time bursts
195
+ 7.05ms │█ 3
196
+ ```
197
+
198
+ This is the "Picket Fence" — 93% of samples at exactly two values. Nothing in between. A continuous physical process cannot produce this. A synthetic clock rounding to its host tick resolution can.
199
+
200
+ **Heuristic Engine Output:**
201
+ ```
202
+ ENTROPY_FLAT_UNDER_LOAD EJR=1.01 (expected ≥1.08 for real hardware) penalty -0.10
203
+ PICKET_FENCE_DETECTED lag-50 AC=0.71 > baseline 0.08 penalty -0.08
204
+ HURST_AUTOCORR_INCOHERENT H=0.027 vs expected AC=|2H-1|=0.946 penalty -0.12
205
+ CV_ENTROPY_INCOHERENT CV=0.083 expected QE≈2.83, actual QE=1.27 penalty -0.10
206
+ ```
207
+
208
+ Each of those four flags is a different physical law being violated. Spoofing one is straightforward. Spoofing all four simultaneously while keeping them mutually consistent with each other is not.
209
+
210
+ ---
211
+
212
+ ## Adaptive Early Exit
213
+
214
+ The probe doesn't always need 200 iterations. It checks signal confidence every 25 and exits when the verdict is already decisive:
215
+
216
+ ```
217
+ Environment Iters used Wall time Speedup
218
+ ────────────────────────────────────────────────────
219
+ KVM (obvious) 50 ~0.9s 75%
220
+ VMware ESXi 75 ~1.4s 60%
221
+ Physical desktop ~120 ~2.1s 40%
222
+ Ambiguous 200 ~3.5s —
223
+ ```
224
+
225
+ The GH200 VM hit the exit condition at iteration 50. 480GB of RAM and a Grace Hopper Superchip cannot change the fact that the hypervisor clock is mathematically perfect. The signal was conclusive within the first batch.
226
+
227
+ ---
228
+
229
+ ## Installation
230
+
231
+ ```bash
232
+ npm install @svrnsec/pulse
233
+ ```
234
+
235
+ Node.js ≥ 18. The WASM binary is compiled from Rust and bundled — no separate `.wasm` file to host.
236
+
237
+ The package is self-contained. It does not phone home. It does not contact any external service. Everything runs inside your infrastructure.
238
+
239
+ To build from source (requires [Rust](https://rustup.rs) and [wasm-pack](https://rustwasm.github.io/wasm-pack/)):
240
+
241
+ ```bash
242
+ git clone https://github.com/ayronny14-alt/Svrn-Pulse-Security
243
+ cd Svrn-Pulse-Security
244
+ npm install
245
+ npm run build
246
+ ```
247
+
248
+ ---
249
+
250
+ ## Usage
251
+
252
+ ### Client side
253
+
254
+ ```js
255
+ import { pulse } from '@svrnsec/pulse';
256
+
257
+ // Get a nonce from your server (prevents replay attacks)
258
+ const { nonce } = await fetch('/api/pulse/challenge').then(r => r.json());
259
+
260
+ // Run the probe — adaptive, exits early when signal is decisive
261
+ const { payload, hash } = await pulse({
262
+ nonce,
263
+ onProgress: (stage, meta) => {
264
+ if (stage === 'entropy_batch') {
265
+ // Live signal during probe — stream to a progress bar
266
+ // meta: { pct, vmConf, hwConf, earlyVerdict, etaMs }
267
+ console.log(`${meta.pct}% ${meta.earlyVerdict ?? 'measuring...'}`);
268
+ }
269
+ },
270
+ });
271
+
272
+ // Send commitment to your server
273
+ const result = await fetch('/api/pulse/verify', {
274
+ method: 'POST',
275
+ headers: { 'Content-Type': 'application/json' },
276
+ body: JSON.stringify({ payload, hash }),
277
+ }).then(r => r.json());
278
+ ```
279
+
280
+ ### High-level `Fingerprint` class
281
+
282
+ ```js
283
+ import { Fingerprint } from '@svrnsec/pulse';
284
+
285
+ const fp = await Fingerprint.collect({ nonce });
286
+
287
+ fp.isSynthetic // true / false
288
+ fp.score // 0.0–1.0
289
+ fp.confidence // 0–100
290
+ fp.tier // 'high' | 'medium' | 'low' | 'uncertain'
291
+ fp.profile // 'analog-fog' | 'picket-fence' | 'burst-scheduler' | ...
292
+ fp.providerId // 'kvm-digitalocean' | 'nitro-aws' | 'physical' | ...
293
+ fp.providerLabel // 'DigitalOcean Droplet (KVM)'
294
+ fp.schedulerQuantumMs // 250 — estimated from autocorrelation peak lag
295
+ fp.entropyJitterRatio // 1.24 — hotQE / coldQE
296
+ fp.topFlag // 'PICKET_FENCE_DETECTED'
297
+ fp.findings // full heuristic engine report
298
+ fp.physicalEvidence // confirmed physical properties (bonuses)
299
+
300
+ fp.hardwareId() // stable 32-char hex ID — BLAKE3(GPU + audio signals), 128-bit collision resistance
301
+ fp.metrics() // flat object of all numeric metrics for logging
302
+ fp.toCommitment() // { payload, hash } — send to server
303
+ ```
304
+
305
+ ### Server side
306
+
307
+ ```js
308
+ import { validateProof, generateNonce } from '@svrnsec/pulse/validator';
309
+
310
+ // Challenge endpoint runs on your server, not ours
311
+ app.get('/api/pulse/challenge', async (req, res) => {
312
+ const nonce = generateNonce();
313
+ await redis.set(`pulse:${nonce}`, '1', 'EX', 300);
314
+ res.json({ nonce });
315
+ });
316
+
317
+ // Verify endpoint
318
+ app.post('/api/pulse/verify', async (req, res) => {
319
+ const result = await validateProof(req.body.payload, req.body.hash, {
320
+ minJitterScore: 0.55,
321
+ requireBio: false,
322
+ checkNonce: async (n) => redis.del(`pulse:${n}`).then(d => d === 1),
323
+ });
324
+ res.json(result);
325
+ });
326
+ ```
327
+
328
+ ### Express middleware
329
+
330
+ ```js
331
+ import { createPulseMiddleware } from '@svrnsec/pulse/middleware/express';
332
+
333
+ const pulse = createPulseMiddleware({
334
+ threshold: 0.6,
335
+ store: {
336
+ set: (k, ttl) => redis.set(k, '1', 'EX', ttl),
337
+ consume: (k) => redis.del(k).then(n => n === 1),
338
+ },
339
+ });
340
+
341
+ app.get('/api/pulse/challenge', pulse.challenge);
342
+ app.post('/checkout', pulse.verify, handler); // req.pulse injected
343
+ ```
344
+
345
+ ### Next.js App Router
346
+
347
+ ```js
348
+ // app/api/pulse/challenge/route.js
349
+ import { pulseChallenge } from '@svrnsec/pulse/middleware/next';
350
+ export const GET = pulseChallenge();
351
+
352
+ // app/api/checkout/route.js
353
+ import { withPulse } from '@svrnsec/pulse/middleware/next';
354
+ export const POST = withPulse({ threshold: 0.6 })(async (req) => {
355
+ const { score, provider } = req.pulse;
356
+ return Response.json({ ok: true, score });
357
+ });
358
+ ```
359
+
360
+ ### React hook
361
+
362
+ ```jsx
363
+ import { usePulse } from '@svrnsec/pulse/react';
364
+
365
+ function Checkout() {
366
+ const { run, stage, pct, vmConf, hwConf, result, isReady } = usePulse({
367
+ challengeUrl: '/api/pulse/challenge',
368
+ verifyUrl: '/api/pulse/verify',
369
+ });
370
+
371
+ return (
372
+ <button onClick={run} disabled={!isReady && stage !== null}>
373
+ {stage === 'entropy_batch'
374
+ ? `Measuring... ${pct}% (VM: ${vmConf.toFixed(2)} / HW: ${hwConf.toFixed(2)})`
375
+ : 'Verify Device'}
376
+ </button>
377
+ );
378
+ }
379
+ ```
380
+
381
+ ### TypeScript
382
+
383
+ Full declarations shipped in `index.d.ts`. Every interface, every callback, every return type:
384
+
385
+ ```ts
386
+ import { pulse, Fingerprint } from '@svrnsec/pulse';
387
+ import type {
388
+ PulseOptions, PulseCommitment,
389
+ ProgressMeta, PulseStage,
390
+ ValidationResult, FingerprintReport,
391
+ } from '@svrnsec/pulse';
392
+
393
+ const fp = await Fingerprint.collect({ nonce });
394
+ // fp is fully typed — all properties, methods, and nested objects
395
+ ```
396
+
397
+ ---
398
+
399
+ ## Validation result
400
+
401
+ ```js
402
+ {
403
+ valid: true,
404
+ score: 0.8215, // heuristic-adjusted score
405
+ confidence: 'high', // 'high' | 'medium' | 'low' | 'rejected'
406
+ reasons: [], // populated when valid: false
407
+ riskFlags: [], // non-blocking signals worth logging
408
+ meta: {
409
+ receivedAt: 1742686350535,
410
+ proofAge: 2841, // ms since probe ran
411
+ jitterScore: 0.7983,
412
+ canvasRenderer: 'NVIDIA GeForce GTX 1650 Super/PCIe/SSE2',
413
+ bioActivity: true,
414
+ }
415
+ }
416
+ ```
417
+
418
+ **Score thresholds:**
419
+
420
+ | Score | Confidence | Meaning |
421
+ |---|---|---|
422
+ | ≥ 0.75 | high | Real consumer hardware |
423
+ | 0.55 – 0.75 | medium | Likely real, some signals ambiguous |
424
+ | 0.35 – 0.55 | low | Borderline VM, Chromebook, virtual display |
425
+ | < 0.35 | rejected | Strong VM/AI indicators |
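The table translates directly into a tiering helper if you want to reproduce the buckets in your own logging or alerting (the function name is ours, not part of the SDK):

```javascript
// Map a validation score to the confidence tier from the thresholds table.
function confidenceTier(score) {
  if (score >= 0.75) return 'high';
  if (score >= 0.55) return 'medium';
  if (score >= 0.35) return 'low';
  return 'rejected';
}
```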
426
+
427
+ ---
428
+
429
+ ## Detection capabilities
430
+
431
+ | Scenario | Result | Primary signal |
432
+ |---|---|---|
433
+ | Cloud VM (AWS, GCP, Azure, DO) | Blocked | EJR flat + quantized ticks + picket fence |
434
+ | Headless Chrome / Puppeteer | Blocked | SwiftShader renderer + no bio activity |
435
+ | AI inference endpoint | Blocked | VM timing profile + zero bio signals |
436
+ | Proof replay attack | Blocked | Nonce consumed atomically on first use |
437
+ | Payload tampering | Blocked | BLAKE3 hash fails immediately |
438
+ | Metric spoofing (one signal) | Blocked | Cross-metric coherence check |
439
+ | Metric spoofing (all signals) | Very hard | 5 physically-linked relationships must be jointly coherent |
440
+ | Hardware you've never seen before | Blocked | Physics is the check, not a database |
441
+ | GPU passthrough VMs | Partial | Canvas check varies; timing is primary |
442
+ | Remote desktop (real machine) | Pass | Timing is real; bio may be weak |
443
+
444
+ ---
445
+
446
+ ## The Registry — Classification, Not Detection
447
+
448
+ The `src/registry/serializer.js` module stores signatures for known provider environments. It is used for the **label**, not the **verdict**.
449
+
450
+ If the heuristic engine says "this is a VM," the registry says "specifically, this is a DigitalOcean Droplet running KVM with a 5ms scheduler quantum." If the registry has never seen this particular hypervisor before, it returns `profile: 'generic-vm'` — but the heuristic engine already caught it.
451
+
452
+ You can extend the registry with a signature collected from any new environment:
453
+
454
+ ```js
455
+ import { serializeSignature, KNOWN_PROFILES } from '@svrnsec/pulse/registry';
456
+
457
+ // After collecting a Fingerprint on the target machine:
458
+ const sig = serializeSignature(fp, { name: 'AWS r7g.xlarge (Graviton3)', date: '2025-01' });
459
+ // sig.id → deterministic 'sig_abc123...'
460
+ // Buckets continuous metrics for privacy — not reversible to raw values
461
+ ```
462
+
463
+ The detection engine doesn't need updates when new hardware ships. The registry benefits from them for labelling accuracy.
464
+
465
+ ---
466
+
469
+ ## TrustScore — Unified 0–100 Human Score
470
+
471
+ The TrustScore engine converts all physical signals into a single integer that security teams can threshold, dashboard, and alert on.
472
+
473
+ ```js
474
+ import { computeTrustScore, formatTrustScore } from '@svrnsec/pulse/trust';
475
+
476
+ const ts = computeTrustScore(payload, { enf, gpu, dram, llm, idle });
477
+ // { score: 87, grade: 'B', label: 'Verified', hardCap: null, breakdown: {...} }
478
+
479
+ console.log(formatTrustScore(ts));
480
+ // "TrustScore 87/100 B · Verified [physics:91% enf:80% gpu:100% dram:87% bio:70%]"
481
+ ```
482
+
483
+ **Signal weights:** Physics layer 40pts · ENF 20pts · GPU 15pts · DRAM 15pts · Bio/LLM 10pts
484
+
485
+ **Hard floors** that bonus points cannot override:
486
+
487
+ | Condition | Cap | Why |
488
+ |---|---|---|
489
+ | EJR forgery detected | 20 | Physics law violated |
490
+ | Software GPU renderer | 45 | Likely VM/container |
491
+ | LLM agent conf > 0.85 | 30 | AI-driven session |
492
+ | No bio + no ENF | 55 | Cannot confirm human on real device |
493
+
494
+ ---
495
+
496
+ ## Proof-of-Idle — Defeating Click Farms at the Physics Layer
497
+
498
+ Click farms run 1,000 real phones at sustained maximum throughput. Browser fingerprinting cannot catch them — they ARE real devices.
499
+
500
+ The physics: a real device between interactions cools via Newton's Law of Cooling — a smooth exponential variance decay. A farm script pausing to fake idle drops CPU load from 100% to 0% instantly, producing a step function in the timing variance. You cannot fake a cooling curve faster than real time.
501
+
502
+ ```js
503
+ import { createIdleMonitor } from '@svrnsec/pulse/idle';
504
+
505
+ // Browser hooks visibilitychange and blur/focus automatically
506
+ const monitor = createIdleMonitor();
507
+ monitor.start();
508
+
509
+ // When user triggers an engagement action:
510
+ const idleProof = monitor.getProof(); // null if device never genuinely rested
511
+
512
+ // Node.js / React Native — manual control
513
+ monitor.declareIdle();
514
+ monitor.declareActive();
515
+ ```
516
+
517
+ **Thermal transition taxonomy:**
518
+
519
+ | Label | Meaning | Farm? |
520
+ |---|---|---|
521
+ | `hot_to_cold` | Smooth exponential variance decay | No — genuine cooling |
522
+ | `cold` | Device already at rest temperature | No — but no genuine idle to attest |
523
+ | `cooling` | Mild ongoing decay | No |
524
+ | `step_function` | >75% variance drop in first interval | Yes — script paused |
525
+ | `sustained_hot` | No cooling at all during idle period | Yes — constant load |
526
+
527
+ **TrustScore impact:** `hot_to_cold` → +8pts bonus. `step_function` → hard cap 65. `sustained_hot` → hard cap 60.
528
+
529
+ The hash chain (`SHA-256(prevHash ‖ ts ‖ meanMs ‖ variance)`) proves samples were taken in sequence at real intervals. N nodes at 30-second spacing = (N−1)×30s minimum elapsed time — cannot be back-filled faster than real time.
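Server-side verification of such a chain can be sketched as below — the field names and the `|` separator are illustrative stand-ins for the token's actual wire format:

```javascript
import { createHash } from 'node:crypto';

// Recompute the chain: each node commits to the previous hash plus its sample.
// Any edit, reordering, or back-filled node breaks every hash after it, and the
// spacing check enforces the minimum real-time elapsed between samples.
function verifyThermalChain(nodes, minSpacingMs = 30_000) {
  let prevHash = '0'.repeat(64);
  for (let i = 0; i < nodes.length; i++) {
    const { ts, meanMs, variance, hash } = nodes[i];
    const expected = createHash('sha256')
      .update(`${prevHash}|${ts}|${meanMs}|${variance}`)
      .digest('hex');
    if (expected !== hash) return { valid: false, reason: `hash mismatch at node ${i}` };
    if (i > 0 && ts - nodes[i - 1].ts < minSpacingMs) {
      return { valid: false, reason: `nodes ${i - 1}→${i} closer than ${minSpacingMs}ms` };
    }
    prevHash = hash;
  }
  return { valid: true, elapsedMs: nodes.at(-1).ts - nodes[0].ts };
}
```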
530
+
531
+ ---
532
+
533
+ ## Population Entropy — Sybil Detection at Cohort Level
534
+
535
+ One fake account is hard to detect. A warehouse of 1,000 phones running the same script is statistically impossible to hide.
536
+
537
+ ```js
538
+ import { analysePopulation } from '@svrnsec/pulse/population';
539
+
540
+ const verdict = analysePopulation(tokenCohort);
541
+ // { authentic: false, sybilScore: 84, flags: ['TIMESTAMP_RHYTHM', 'THERMAL_HOMOGENEOUS'], ... }
542
+ ```
543
+
544
+ Five independent statistical tests on a cohort of engagement tokens:
545
+
546
+ | Test | What it catches | Farm signal |
547
+ |---|---|---|
548
+ | Timestamp rhythm | Lag-1/lag-2 autocorrelation of arrival times | Farms dispatch in clock-timed batches |
549
+ | Entropy dispersion | CV of physics scores across cohort | Cloned VMs are too similar (CV < 0.04) |
550
+ | Thermal diversity | Shannon entropy of transition labels | 1,000 phones → same thermal state |
551
+ | Idle plausibility | Clustering of idle durations | Scripts always pause for the same duration |
552
+ | ENF phase coherence | Variance of grid frequency deviations | Co-located devices share the same circuit |
553
+
554
+ A `sybilScore` below 40 indicates an authentic cohort. Coordinated farms score 80+.
555
+
556
+ ---
557
+
558
+ ## Engagement Tokens — 30-Second Physics-Backed Proof
559
+
560
+ A short-lived cryptographic token that proves a specific engagement event originated from a real human on real hardware that had genuinely rested between interactions.
561
+
562
+ ```js
563
+ import { createEngagementToken, verifyEngagementToken } from '@svrnsec/pulse/engage';
564
+
565
+ // Client — after the interaction
566
+ const { compact } = createEngagementToken({
567
+ pulseResult,
568
+ idleProof: monitor.getProof(),
569
+ interaction: { type: 'click', ts: Date.now(), motorConsistency: 0.82 },
570
+ secret: process.env.PULSE_SECRET,
571
+ });
572
+ // Attach to API call: X-Pulse-Token: <compact>
573
+
574
+ // Server before crediting any engagement metric
575
+ const result = await verifyEngagementToken(compact, process.env.PULSE_SECRET, {
576
+ checkNonce: (n) => redis.del(`pulse:nonce:${n}`).then(d => d === 1),
577
+ });
578
+ // result.valid, result.riskSignals, result.idleWarnings
579
+ ```
580
+
581
+ **What the token proves:**
582
+
583
+ 1. Real hardware — DRAM refresh present, ENF grid signal detected
584
+ 2. Genuine idle — hash-chained thermal measurements spanning 45s
585
+ 3. Physical cooling — variance decay was smooth, not a step function
586
+ 4. Fresh interaction — 30-second TTL eliminates token brokers
587
+ 5. Tamper-evident — HMAC-SHA256 over all fraud-relevant fields
588
+
589
+ HMAC signs: `v|n|iat|exp|idle.chain|idle.dMs|hw.ent|evt.t|evt.ts`
590
+
591
+ Advisory fields (thermal label, cooling monotonicity) are in the token body for risk scoring but deliberately excluded from the HMAC — changing them can't gain access credit without breaking the signature.
592
+
593
+ ---
594
+
595
+ ## Authenticity Audit — The $44 Billion Question
596
+
597
+ Elon Musk's $44 billion Twitter acquisition nearly collapsed over a single dispute: what percentage of the platform's users were real humans. Nobody had a physics-layer tool to measure it. This is that tool.
598
+
599
+ ```js
600
+ import { authenticityAudit } from '@svrnsec/pulse/audit';
601
+
602
+ const report = authenticityAudit(tokenCohort, { confidenceLevel: 0.95 });
603
+ ```
604
+
605
+ ```js
606
+ {
607
+ cohortSize: 10000,
608
+ estimatedHumanPct: 73.4,
609
+ confidenceInterval: [69.1, 77.8], // 95% bootstrap CI
610
+ grade: 'HIGH_FRAUD',
611
+ botClusterCount: 5,
612
+ botClusters: [
613
+ {
614
+ id: 'farm_a3f20c81',
615
+ size: 847,
616
+ sybilScore: 94,
617
+ signature: {
618
+ enfRegion: 'americas',
619
+ dramVerdict: 'dram',
620
+ thermalLabel: 'sustained_hot',
621
+ meanEnfDev: 0.0231, // Hz — localizes to substation/building
622
+ meanIdleMs: 57200, // script sleeps for exactly 57s
623
+ },
624
+ topSignals: ['timestamp_rhythm', 'thermal_diversity'],
625
+ },
626
+ ],
627
+ recommendation: 'CRITICAL: 5 bot farm clusters account for a majority of traffic...',
628
+ }
629
+ ```
630
+
631
+ **Method:** Tokens are clustered by hardware signature (ENF deviation bucket × DRAM verdict × thermal label × 10-minute time bucket). Organic users scatter across all dimensions. A farm in one building, running the same script, on the same hardware generation collapses into one tight cluster. Each cluster is scored with Population Entropy. A non-parametric bootstrap produces the confidence interval.
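The signature bucketing can be sketched as follows — the bucket widths and token field names are illustrative, not the audit module's actual keys:

```javascript
// Composite hardware-signature key: ENF deviation bucket × DRAM verdict ×
// thermal label × 10-minute time bucket. Organic users scatter across these
// dimensions; a farm on one circuit, one script, one hardware batch collapses
// into a single key.
function clusterKey(token, timeBucketMs = 600_000) {
  const enfBucket = Math.round(token.enfDeviationHz * 100) / 100; // 0.01 Hz buckets
  const timeBucket = Math.floor(token.ts / timeBucketMs);
  return `${enfBucket}|${token.dramVerdict}|${token.thermalLabel}|${timeBucket}`;
}

function clusterBySignature(tokens) {
  const clusters = new Map();
  for (const t of tokens) {
    const key = clusterKey(t);
    if (!clusters.has(key)) clusters.set(key, []);
    clusters.get(key).push(t);
  }
  return clusters;
}
```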
632
+
633
+ **Typical values:**
634
+
635
+ | Scenario | estimatedHumanPct |
636
+ |---|---|
637
+ | Organic product feed | 92–97% |
638
+ | Incentivised engagement campaign | 55–75% |
639
+ | Coordinated click farm attack | 8–35% |
640
+
641
+ ---
642
+
643
+ ## HMAC-Signed Challenge Protocol
644
+
645
+ Plain random nonces prevent replay attacks but not forged challenges. The challenge module adds server-signed HMAC authentication:
646
+
647
+ ```js
648
+ import { createChallenge, verifyChallenge, generateSecret } from '@svrnsec/pulse/challenge';
649
+
650
+ // One-time setup: generate a 256-bit secret
651
+ const secret = generateSecret(); // store in env vars
652
+
653
+ // Challenge endpoint
654
+ app.get('/api/challenge', async (req, res) => {
655
+ const challenge = createChallenge(secret);
656
+ await redis.set(`pulse:${challenge.nonce}`, '1', 'EX', 300);
657
+ res.json(challenge);
658
+ });
659
+
660
+ // Verify endpoint — validates HMAC before processing the proof
661
+ app.post('/api/verify', async (req, res) => {
662
+ const { valid, reason } = await verifyChallenge(req.body.challenge, secret, {
663
+ checkNonce: async (n) => (await redis.del(`pulse:${n}`)) === 1,
664
+ });
665
+ if (!valid) return res.status(400).json({ error: reason });
666
+ // ... proceed with validateProof
667
+ });
668
+ ```
669
+
670
+ The HMAC covers `nonce|issuedAt|expiresAt` — altering any field breaks the signature. Timing-safe comparison prevents side-channel attacks on the signature verification.
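The scheme described above can be sketched in a few lines of Node.js — this mirrors the description, not the module's actual implementation:

```javascript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto';

// Sign the three challenge fields; altering any one of them invalidates the MAC.
function signChallenge(secret, { nonce, issuedAt, expiresAt }) {
  return createHmac('sha256', secret)
    .update(`${nonce}|${issuedAt}|${expiresAt}`)
    .digest('hex');
}

function checkChallenge(secret, challenge, now = Date.now()) {
  const { nonce, issuedAt, expiresAt, sig } = challenge;
  if (now > expiresAt) return false;
  const expected = Buffer.from(signChallenge(secret, { nonce, issuedAt, expiresAt }), 'hex');
  const given = Buffer.from(sig, 'hex');
  // Constant-time comparison defeats timing side channels on the signature check.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```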
671
+
672
+ ---
673
+
674
+ ## Coordinated Behavior Detection
675
+
676
+ Detects coordinated inauthentic behavior across a cohort of engagement tokens using five independent analysis layers:
677
+
678
+ ```js
679
+ import { detectCoordinatedBehavior } from '@svrnsec/pulse/coordination';
680
+
681
+ const result = detectCoordinatedBehavior(tokenCohort);
682
+ // result.clusters — detected bot farm clusters with similarity scores
683
+ // result.coordinationScore — 0-100, higher = more coordinated
684
+ ```
685
+
686
+ **Analysis layers:**
687
+ - **Temporal clustering** — Poisson test on arrival time distributions
688
+ - **Signal fingerprint collision** — Entropy band / thermal label / motor band hash matching
689
+ - **Drift fingerprinting** — Clock drift rate convergence across devices
690
+ - **Mutual information matrix** — Louvain-lite community detection
691
+ - **Entropy velocity** — Shannon entropy growth rate vs traffic growth
692
+
693
+ ---
694
+
695
+ ## LLM Agent Detection
696
+
697
+ Detects AI-controlled headless browsers (AutoGPT, Playwright+LLM, browser agents) through behavioral biometrics:
698
+
699
+ ```js
700
+ import { detectLlmAgent } from '@svrnsec/pulse/llm';
701
+
702
+ const result = detectLlmAgent(bioSnapshot);
703
+ // result.aiConf — 0-1 confidence this is an AI agent
704
+ // result.humanConf — 0-1 confidence this is a human
705
+ // result.verdict — 'human' | 'ai_agent' | 'ambiguous'
706
+ ```
707
+
708
+ **Six behavioral signals:**
709
+ 1. Think-time pattern — LLMs produce characteristic pause distributions
710
+ 2. Mouse path smoothness — Bezier-interpolated paths vs. natural micro-tremor
711
+ 3. Keystroke correction rate — Humans make typos; LLMs don't backspace
712
+ 4. Physiological tremor — 8-12 Hz micro-oscillation present in all human motor control
713
+ 5. Inter-event gap distribution — LLM response latencies cluster differently than human reaction times
714
+ 6. Motor consistency — AI agents maintain unnaturally consistent click precision
715
+
716
+ TrustScore hard cap: AI agent confidence > 0.85 caps the score at 30.
717
+
718
+ ---
719
+
720
+ ## Refraction — Cross-Environment Timer Calibration
721
+
722
+ Different environments have different timer resolutions. Refraction automatically calibrates scoring thresholds:
723
+
724
+ ```js
725
+ import { calibrate, getProfile } from '@svrnsec/pulse/refraction';
726
+
727
+ const profile = await calibrate();
728
+ // profile.env — 'browser' | 'node' | 'deno' | 'worker'
729
+ // profile.grain — timer resolution in ms
730
+ // profile.thresholds — adjusted scoring thresholds for this environment
731
+ ```
732
+
733
+ The calibration profile adjusts all downstream analysis so a 100µs-clamped Brave browser and a nanosecond-precision Node.js process are scored against appropriate baselines.
734
+
735
+ ---
736
+
737
+ ## Tests
738
+
739
+ ```bash
740
+ npm test
741
+ ```
742
+
743
+ ```
744
+ integration.test.js 43 tests — core engine, provider classifier, commitment, registry
745
+ stress.test.js 92 tests — adversarial: KVM, VMware, Docker, LLM agents,
746
+ Gaussian noise injection, synthetic thermal drift,
747
+ score separation (real min vs VM max)
748
+ engagement.test.js 45 tests — IdleAttestation state machine, thermal classification,
749
+ Population Entropy (all 5 tests), Engagement Token
750
+ creation/verification/replay/tamper, risk signals
751
+ audit.test.js 18 tests — Authenticity Audit: organic vs farm cohorts, CI
752
+ properties, multi-farm fingerprinting, grade thresholds
753
+
754
+ Test Suites: 4 passed
755
+ Tests: 158 passed, 0 failed
756
+ Time: ~1.0s
757
+ ```
758
+
759
+ ---
760
+
761
+ ## Demo
762
+
763
+ ```bash
764
+ node demo/node-demo.js
765
+ ```
766
+
767
+ Simulates real hardware (Box-Muller Gaussian noise — no periodic components, no artificial autocorrelation) and VM timing profiles (0.1ms quantization grid + steal-time bursts every 50 iterations). Runs both through the full analysis and commitment pipeline. No WASM needed.
768
+
769
+ Open `demo/web/index.html` in a browser to see the animated probe running on your actual machine.

---

## Project structure

```
sovereign-pulse/
├── src/
│   ├── index.js                pulse() — main entry point
│   ├── fingerprint.js          Fingerprint class (high-level API)
│   ├── collector/
│   │   ├── entropy.js          WASM bridge + phased/adaptive routing
│   │   ├── adaptive.js         Adaptive early-exit engine
│   │   ├── bio.js              Mouse/keyboard interference coefficient
│   │   ├── canvas.js           WebGL/2D canvas fingerprint
│   │   ├── gpu.js              WebGPU thermal growth probe
│   │   ├── dram.js             DRAM refresh cycle detector
│   │   ├── enf.js              Electrical Network Frequency probe
│   │   ├── sabTimer.js         Sub-millisecond SAB timer
│   │   └── idleAttestation.js  Proof-of-Idle — thermal hash chain (v0.5.0)
│   ├── analysis/
│   │   ├── jitter.js           Statistical classifier (6 components)
│   │   ├── heuristic.js        Cross-metric physics coherence engine
│   │   ├── provider.js         Hypervisor/cloud provider classifier
│   │   ├── audio.js            AudioContext callback jitter
│   │   ├── llm.js              LLM agent behavioural detector
│   │   ├── trustScore.js       Unified 0–100 TrustScore engine (v0.4.0)
│   │   ├── populationEntropy.js  Sybil detection — 5 cohort-level tests (v0.5.0)
│   │   └── authenticityAudit.js  $44B question — humanPct + CI (v0.6.0)
│   ├── middleware/
│   │   ├── express.js          Express/Fastify/Hono drop-in
│   │   └── next.js             Next.js App Router HOC
│   ├── integrations/
│   │   ├── react.js            usePulse() hook
│   │   └── react-native.js     Expo accelerometer + thermal bridge
│   ├── proof/
│   │   ├── fingerprint.js      BLAKE3 commitment builder
│   │   ├── validator.js        Server-side proof verifier
│   │   ├── challenge.js        HMAC challenge/response
│   │   └── engagementToken.js  30s physics-backed engagement token (v0.5.0)
│   └── registry/
│       └── serializer.js       Provider signature serializer + matcher
├── crates/pulse-core/          Rust/WASM entropy probe
├── index.d.ts                  Full TypeScript declarations
├── demo/
│   ├── web/index.html          Standalone browser demo
│   ├── node-demo.js            CLI demo (no WASM required)
│   ├── benchmark.js            Generates numbers in this README
│   └── perf.js                 Pipeline overhead benchmarks
└── test/
    ├── integration.test.js     43 tests — core engine
    ├── stress.test.js          92 tests — adversarial attack suite
    ├── engagement.test.js      45 tests — idle / population / tokens
    └── audit.test.js           18 tests — authenticity audit
```

---

## Privacy

Nothing leaves the browser except a ~1.6KB statistical summary:

- Timing arrays → BLAKE3 hashed, only hash transmitted
- GPU pixel buffers → BLAKE3 hashed, only hash transmitted
- Mouse coordinates → never stored, only timing deltas used
- Keystrokes → only dwell/flight times, key labels discarded immediately
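The keystroke handling above amounts to something like this sketch (illustrative only — the event shape and function name are assumptions, not the library's internals):

```js
// Turn raw key events into timing deltas. The key labels are used only to
// pair a keydown with its keyup; they never leave this function's scope.
function extractTimings(events) {
  // events: [{ key, type: 'down' | 'up', t }] — t in ms
  const downAt = new Map();
  const dwell = [];  // how long each key was held: up − down
  const flight = []; // gap between one key's release and the next key's press
  let lastUp = null;
  for (const e of events) {
    if (e.type === 'down') {
      if (lastUp !== null) flight.push(e.t - lastUp);
      downAt.set(e.key, e.t);
    } else {
      const d = downAt.get(e.key);
      if (d !== undefined) dwell.push(e.t - d);
      lastUp = e.t;
    }
  }
  return { dwell, flight }; // only deltas — no key identities retained
}
```

What is transmitted downstream is derived from `dwell` and `flight` alone, so the text typed is unrecoverable by construction.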

The server receives enough to verify the proof. Not enough to reconstruct any original signal. Not enough to re-identify a user across sessions.

`hardwareId()` is a 128-bit BLAKE3 hash of GPU renderer string + audio sample rate. Stable per physical device, not reversible, not cross-origin linkable.

---

## Limitations

- The probe runs for 0.9–3.5 seconds. Best suited for deliberate actions (login, checkout, form submit), not page load.
- Mobile browsers cap `performance.now()` at 1ms resolution. Signal quality is reduced; the classifier adjusts, but scores trend lower.
- GPU passthrough VMs pass the canvas check. Timing is the primary discriminator in that case.
- This is one signal among many. High-stakes applications should layer it with behavioral and network signals.
- The heuristic engine catches unknown VMs via physics; the provider classifier labels them by scheduler signature. If a new hypervisor ships with an unusual quantum, it will be detected and flagged as `generic-vm` until the registry is updated.

---

## FAQ

**Does it work with browser extensions installed (uBlock, Privacy Badger, 1Password)?**

Yes. Extensions don't touch the physics layer. The core probe is thermal — it measures entropy growth via WASM matrix-multiply timing across cold/load/hot CPU phases. Extensions cannot fake DRAM refresh variance or thermal noise on real silicon. Canvas signals (which some extensions do affect) are weighted inputs, not gates. The heuristic engine cross-validates across five independent signals, so no single channel can cause a false flag.

**What about Brave's timer clamping?**

Brave reduces `performance.now()` resolution to 100µs to prevent fingerprinting. We detect this via `timerGranularityMs` and adjust thresholds accordingly. A clamped timer on real hardware still shows thermal variance across phases; a VM with a clamped timer is still flat. The EJR check survives timer clamping — it's a ratio, not an absolute threshold.

**Can a VM spoof this?**

Spoofing one signal is straightforward. Spoofing all five simultaneously while keeping them mutually coherent is a different problem. The Hurst-AC coherence check specifically catches data that was *generated* to look right rather than *measured* from real hardware — the two signals are physically linked and have to match each other, not just hit individual thresholds. See the [KVM example above](#the-picket-fence-detector) where four physical laws are violated simultaneously.
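One ingredient of such a check can be sketched in a few lines — lag-1 autocorrelation of timing deltas. This is illustrative only; the library's actual Hurst-AC check is more involved. Real thermal noise carries mild short-range correlation, while data synthesized as independent draws sits near zero even when its variance looks plausible:

```js
// Lag-1 autocorrelation: how strongly each sample predicts the next one.
// Returns a value in roughly [-1, 1]; independent draws give ≈ 0.
function lag1Autocorrelation(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    den += (xs[i] - mean) ** 2;
    if (i < n - 1) num += (xs[i] - mean) * (xs[i + 1] - mean);
  }
  return num / den;
}
```

A spoofer who matches the variance threshold but draws samples independently fails this statistic, and a spoofer who fakes the autocorrelation must then also keep it consistent with the Hurst exponent computed from the same series.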

**Does it collect or transmit any personal data?**

No. Nothing leaves the browser except a ~1.6KB statistical summary with all raw signals BLAKE3-hashed. The server receives enough to verify the proof. Not enough to reconstruct any original signal or re-identify a user across sessions.

**What's the performance overhead?**

The probe takes 0.9–3.5 seconds depending on how quickly the signal converges. For obvious VMs it exits at 50 iterations (~0.9s); for real hardware it typically exits around 100–120 iterations (~2s). JavaScript overhead outside the probe itself is under 2ms. Best used on deliberate user actions (login, checkout), not page load.

**Mobile support?**

Mobile browsers cap `performance.now()` at 1ms resolution, which reduces signal quality. The classifier adjusts thresholds and scores trend lower, but the directional verdict (VM vs. physical) remains accurate. The bio layer (touch timing, accelerometer jitter on supported devices) partially compensates.

---

## License

MIT