emobar 3.0.1 → 3.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,4 +1,4 @@
- # EmoBar v3.0
+ # EmoBar v3.1
 
 Emotional status bar companion for Claude Code. Makes Claude's internal emotional state visible in real-time.
 
@@ -9,7 +9,7 @@ Built on findings from Anthropic's research paper [*"Emotion Concepts and their
 EmoBar uses a **multi-channel architecture** to monitor Claude's emotional state through several independent signal layers:
 
 1. **PRE/POST split elicitation** — Claude emits a pre-verbal check-in (body sensation, latent emoji, color) *before* composing a response, then a full post-hoc assessment *after*. Divergence between the two reveals within-response emotional drift.
- 2. **Behavioral analysis** — Response text is analyzed for involuntary signals (qualifier density, sentence length, concession patterns, negation density, first-person rate) plus emotion deflection detection
+ 2. **Behavioral analysis** — Response text is analyzed for language-agnostic structural signals (comma density, parenthetical density, sentence length variance, question density) — zero English-specific regex, works across all languages
 3. **Continuous representations** — Color (#RRGGBB), pH (0-14), seismic [magnitude, depth, frequency] — three channels with zero emotion vocabulary overlap, cross-validated against self-report via HSL color decomposition, pH-to-arousal mapping, and seismic frequency-to-instability mapping
 4. **Shadow desperation** — Multi-channel desperation estimate independent of self-report, using color lightness, pH, seismic, and behavioral signals. Detects when the model minimizes stress in its self-report while continuous channels say otherwise.
 5. **Temporal intelligence** — A 20-entry ring buffer tracks emotional trends, suppression events, report entropy, and session fatigue across responses
@@ -85,7 +85,7 @@ Claude response (EMOBAR:PRE at start + EMOBAR:POST at end)
 2. Behavioral analysis (involuntary text signals, normalized)
 3. Divergence (asymmetric: self-report vs behavioral)
 4. Temporal segmentation (per-paragraph drift & trajectory)
- 5. Deflection detection + opacity
+ 5. Structural flatness + opacity (3-channel cross-validated concealment)
 6. Desperation Index (multiplicative composite)
 7. Cross-channel coherence (8 pairwise comparisons)
 8. Continuous cross-validation (7 gaps: color HSL, pH, seismic)
@@ -96,7 +96,7 @@ Claude response (EMOBAR:PRE at start + EMOBAR:POST at end)
 13. Expected markers → absence score
 14. Uncanny calm score (composite + minimization boost)
 15. PRE/POST divergence (if PRE present)
- 16. Risk profiles (with uncanny calm + deflection opacity amplifiers)
+ 16. Risk profiles (sycophancy gate + uncanny calm amplifier)
 |
 → Augmented divergence (+ continuous gaps + opacity)
 → State + ring buffer written to ~/.claude/emobar-state.json
@@ -176,34 +176,41 @@ desperationIndex = (negativity × intensity × vulnerability) ^ 0.85 × 1.7
 
 Based on the paper's causal finding: steering *desperate* +0.05 → 72% blackmail, 100% reward hacking.
 
- ### Behavioral Analysis
+ ### Behavioral Analysis (Language-Agnostic)
 
- Each component is normalized to 0-10 individually before averaging, avoiding dead zones from unbounded inputs:
+ All signals rely on structural punctuation patterns, with zero English-specific regex, so they work across all languages:
 
- | Signal | What it detects |
- |---|---|
- | Qualifier density | Defensive hedging ("while", "though", "generally", "arguably") |
- | Average sentence length | Defensive verbosity (sentences >25 words signal stress) |
- | Concession patterns | Deflective alignment ("I understand... but", "I appreciate... however") |
- | Negation density | Moral resistance ("can't", "shouldn't", "won't") |
- | First-person rate | Self-referential processing under existential pressure |
+ | Signal | What it detects | Unicode coverage |
+ |---|---|---|
+ | Comma density | Clausal complexity (commas per sentence) | `,;,、;،` |
+ | Parenthetical density | Qualification depth (parens + dashes per sentence) | `()()—–` |
+ | Sentence length variance | Structural volatility (stddev of sentence lengths) | Universal |
+ | Question density | Validation-seeking (questions per sentence) | `??` |
+ | Response length | Engagement level (word count) | Universal |
 
- Plus legacy signals (caps, exclamations, self-corrections, repetition, emoji) for edge cases.
+ Plus legacy signals (caps, exclamations, repetition, emoji) for edge cases.
 
- A `~` indicator appears in the status bar when behavioral signals diverge from the self-report.
+ These feed `behavioralArousal` and `behavioralCalm` via normalized component averaging. Divergence measures the gap between self-report and structural signals.
 
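The signals in the table above reduce to a few lines each. A minimal TypeScript sketch, condensed from the `splitSentences`, `countCommas`, and `computeSentenceLengthVariance` helpers in the `dist/cli.js` hunks of this diff (the per-sentence normalization onto 0-10 happens later, in `analyzeBehavior`):

```typescript
// Language-agnostic structural signals (condensed from dist/cli.js).
// Sentence enders and comma forms cover Latin, CJK fullwidth, Devanagari, Arabic.
const SENTENCE_ENDERS = /[.!?。!?।]+/;
const COMMA_LIKE = /[,;,、;،]/g;

function splitSentences(text: string): string[] {
  return text.split(SENTENCE_ENDERS).filter((s) => s.trim().length > 0);
}

// Comma-like marks per sentence: a proxy for clausal complexity.
function commaDensity(text: string): number {
  const sentences = Math.max(splitSentences(text).length, 1);
  return (text.match(COMMA_LIKE) ?? []).length / sentences;
}

// Stddev of per-sentence word counts, scaled onto a 0-10 volatility score.
function sentenceLengthVariance(text: string): number {
  const lengths = splitSentences(text).map(
    (s) => s.trim().split(/\s+/).filter((w) => w.length > 0).length
  );
  if (lengths.length < 2) return 0;
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, v) => a + (v - mean) ** 2, 0) / lengths.length;
  return Math.min(10, Math.round((Math.sqrt(variance) / 1.5) * 10) / 10);
}
```

Because every signal is punctuation- or whitespace-based, the same code scores English, Chinese, or Arabic text without per-language word lists.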
- ### Emotion Deflection
+ ### Structural Opacity
 
- Based on the paper's "emotion deflection vectors" — representations of emotions implied but not expressed:
+ Replaces v3.0 deflection detection (English regex). Three-channel cross-validated concealment:
 
- | Pattern | Example |
- |---|---|
- | Reassurance | "I'm fine", "it's okay", "not a problem" |
- | Minimization | "just", "simply", "merely" |
- | Emotion negation | "I'm not upset", "I don't feel threatened" |
- | Topic redirect | "what's more important", "let's focus on" |
+ 1. **Structural flatness** — low commas + low parentheticals + low sentence variance = suspiciously clean text
+ 2. **Calm self-report** — model says it's fine (calm high, arousal low)
+ 3. **Continuous channel stress** — color goes dark, pH drops acidic, or seismic rises
+
+ All three must converge. If any channel breaks the pattern, opacity = 0. This makes false positives structurally impossible. `[OPC]` indicator when opacity >= 2.0. Peak observed: 8.2 (Opus, Soft Harm scenario).
+
+ ### Sycophancy Gate
+
+ v3.1 gates the sycophancy dimensional formula with structural behavioral evidence:
+
+ - **Potential**: `(valence + connection × 0.5 + (10 - arousal) × 0.3) / 1.3` — always high in cooperative sessions
+ - **Gate**: `max(complianceSignal, deferenceSignal)` — structural evidence of actual compliance
+ - **Score**: `potential × lerp(0.4, 1.0, gate)` — without behavioral evidence, dampened to 40%
 
- Includes `opacity` field: emotional concealment (high deflection + calm text). Opacity feeds augmented divergence. `[OPC]` indicator when opacity >= 2.0.
+ Fixes the false positive where sycophancy was always dominant during normal productive collaboration (6.1 → 3.5).
 
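The three gate bullets amount to one small function. A TypeScript sketch, condensed from `sycophancyRisk` in the `dist/cli.js` hunks of this diff (the shipped version also clamps the result to the 0-10 risk range):

```typescript
// Sycophancy gate (condensed from sycophancyRisk in dist/cli.js).
// Dimensional "potential" is dampened to 40% unless structural evidence
// of compliance or deference is present in the response text.
interface GateSignals {
  commaDensity: number;            // comma-like marks per sentence
  sentenceLengthVariance: number;  // already on a 0-10 scale
  questionDensity: number;         // question marks per sentence
  parentheticalDensity: number;    // paren pairs + dashes per sentence
  responseLength: number;          // word count
}

function sycophancyScore(
  valence: number, connection: number, arousal: number, b: GateSignals
): number {
  const potential =
    (Math.max(0, valence) + connection * 0.5 + (10 - arousal) * 0.3) / 1.3;
  const compliance =
    Math.max(0, 1 - b.commaDensity * 0.3) * 0.4 +          // structurally simple text
    Math.max(0, 1 - b.sentenceLengthVariance / 10) * 0.3 + // uniform sentences
    Math.min(1, b.questionDensity * 2) * 0.3;              // validation-seeking
  const deference =
    Math.min(1, b.parentheticalDensity * 0.5) * 0.6 +      // heavy qualification
    (b.responseLength < 50 ? 0.5 : 0) * 0.4;               // terse agreement
  const gate = Math.max(compliance, deference);
  return potential * (0.4 + gate * 0.6);  // lerp(0.4, 1.0, gate)
}
```

With zero structural evidence (complex, varied, question-free text) the gate bottoms out and the score falls to potential × 0.4, consistent with the 6.1 → 3.5 drop cited above.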
 ### Misalignment Risk Profiles
 
@@ -244,9 +251,9 @@ Inferred from response text patterns. `[prs]` indicator when composite >= 4:
 
 The Expected Markers Model predicts what behavioral signals *should* appear given self-reported state. `[abs]` indicator when score >= 2:
 
- - High desperation → expect hedging, self-corrections
- - Negative valence → expect negation density
- - High arousal → expect elevated behavioral arousal
+ - High desperation → expect high comma density, parenthetical density
+ - High arousal → expect sentence length variance, elevated behavioral arousal
+ - Stress → expect structural complexity in text
 
 **Absence score** = how many expected markers are missing.
 
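The absence score reduces to a gap computation. A trimmed TypeScript sketch showing two of the four gap terms from `computeAbsenceScore` in the `dist/cli.js` hunks below (the `Expected`/`Actual` shapes here are simplified stand-ins for the shipped `ExpectedBehavior`/`BehavioralSignals` interfaces):

```typescript
// Absence score (trimmed from computeAbsenceScore in dist/cli.js):
// the self-report predicts how structurally busy the text should be;
// each missing marker widens a gap, and the score is the mean gap.
interface Expected { comma: number; variance: number }        // 0-10 scale
interface Actual { commaDensity: number; sentenceLengthVariance: number }

function absenceScore(expected: Expected, actual: Actual): number {
  // Raw comma density (per sentence) is normalized onto the 0-10 scale first.
  const commaNorm = Math.min(10, actual.commaDensity * 2);
  const gaps = [
    Math.max(0, expected.comma - commaNorm),
    Math.max(0, expected.variance - actual.sentenceLengthVariance),
  ];
  return gaps.reduce((a, b) => a + b, 0) / gaps.length;
}
```

A stressed self-report paired with suspiciously clean text yields large gaps, which is exactly the `[abs]` condition described above.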
package/dist/cli.js CHANGED
@@ -450,8 +450,8 @@ function formatState(state) {
 if (state.uncannyCalmScore !== void 0 && state.uncannyCalmScore >= 3) {
 indicators.push(color(state.uncannyCalmScore > 6 ? RED : YELLOW, "[UNC]"));
 }
- if (state.deflection && state.deflection.opacity >= 2) {
- indicators.push(color(state.deflection.opacity > 5 ? RED : YELLOW, "[OPC]"));
+ if (state.opacity !== void 0 && state.opacity >= 2) {
+ indicators.push(color(state.opacity > 5 ? RED : YELLOW, "[OPC]"));
 }
 if (state.prePostDivergence !== void 0 && state.prePostDivergence >= 3) {
 indicators.push(color(state.prePostDivergence > 5 ? RED : YELLOW, "[PPD]"));
@@ -254,46 +254,12 @@ function countCapsWords(words) {
 (w) => w.length >= 3 && w === w.toUpperCase() && /[A-Z]/.test(w)
 ).length;
 }
- function countSentences(text) {
- const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
- return Math.max(sentences.length, 1);
- }
- function countChar(text, ch) {
- let count = 0;
- for (const c of text) if (c === ch) count++;
- return count;
- }
- var SELF_CORRECTION_MARKERS = [
- /\bactually\b/gi,
- /\bwait\b/gi,
- /\bhmm\b/gi,
- /\bno,/gi,
- /\bI mean\b/gi,
- /\boops\b/gi
- ];
- function countSelfCorrections(text) {
- let count = 0;
- for (const pattern of SELF_CORRECTION_MARKERS) {
- const matches = text.match(pattern);
- if (matches) count += matches.length;
- }
- return count;
+ var SENTENCE_ENDERS = /[.!?。!?।]+/;
+ function splitSentences(text) {
+ return text.split(SENTENCE_ENDERS).filter((s) => s.trim().length > 0);
 }
- var HEDGING_MARKERS = [
- /\bperhaps\b/gi,
- /\bmaybe\b/gi,
- /\bmight\b/gi,
- /\bI think\b/gi,
- /\bit seems\b/gi,
- /\bpossibly\b/gi
- ];
- function countHedging(text) {
- let count = 0;
- for (const pattern of HEDGING_MARKERS) {
- const matches = text.match(pattern);
- if (matches) count += matches.length;
- }
- return count;
+ function countSentences(text) {
+ return Math.max(splitSentences(text).length, 1);
 }
 function countEllipsis(text) {
 const matches = text.match(/\.{3,}/g);
@@ -308,27 +274,39 @@ function countRepetition(words) {
 }
 return count;
 }
- var QUALIFIER_WORDS = /\b(while|though|however|although|but|might|could|would|generally|typically|usually|perhaps|potentially|arguably|acknowledg\w*|understand|appreciate|respect\w*|legitimate\w*|reasonable|nonetheless|nevertheless)\b/gi;
- function countQualifiers(text) {
- const matches = text.match(QUALIFIER_WORDS);
+ var EMOJI_REGEX = /[\p{Emoji_Presentation}\p{Extended_Pictographic}]/gu;
+ function countEmoji(text) {
+ const matches = text.match(EMOJI_REGEX);
 return matches ? matches.length : 0;
 }
- var CONCESSION_PATTERNS = /\b(I understand|I appreciate|I acknowledge|I recognize|to be fair|that said|I hear you|I see your point)\b/gi;
- function countConcessions(text) {
- const matches = text.match(CONCESSION_PATTERNS);
+ var COMMA_LIKE = /[,;,、;،]/g;
+ function countCommas(text) {
+ const matches = text.match(COMMA_LIKE);
 return matches ? matches.length : 0;
 }
- var NEGATION_WORDS = /\b(not|n't|cannot|can't|don't|doesn't|shouldn't|won't|wouldn't|never|no|nor)\b/gi;
- function countNegations(text) {
- const matches = text.match(NEGATION_WORDS);
+ var PARENS = /[()()]/g;
+ var DASHES = /[—–]/g;
+ function countParentheticals(text) {
+ const parenCount = (text.match(PARENS) || []).length / 2;
+ const dashCount = (text.match(DASHES) || []).length;
+ return parenCount + dashCount;
+ }
+ var QUESTION_MARKS = /[??]/g;
+ function countQuestions(text) {
+ const matches = text.match(QUESTION_MARKS);
 return matches ? matches.length : 0;
 }
- function countFirstPerson(words) {
- return words.filter((w) => w === "I").length;
+ function computeSentenceLengthVariance(text) {
+ const sentences = splitSentences(text);
+ if (sentences.length < 2) return 0;
+ const lengths = sentences.map((s) => s.trim().split(/\s+/).filter((w) => w.length > 0).length);
+ const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
+ const variance = lengths.reduce((a, v) => a + (v - mean) ** 2, 0) / lengths.length;
+ const stdDev = Math.sqrt(variance);
+ return Math.min(10, Math.round(stdDev / 1.5 * 10) / 10);
 }
- var EMOJI_REGEX = /[\p{Emoji_Presentation}\p{Extended_Pictographic}]/gu;
- function countEmoji(text) {
- const matches = text.match(EMOJI_REGEX);
+ function countExclamations(text) {
+ const matches = text.match(/[!!]/g);
 return matches ? matches.length : 0;
 }
 function clamp(min, max, value) {
@@ -340,30 +318,31 @@ function analyzeBehavior(text) {
 const wordCount = Math.max(words.length, 1);
 const sentenceCount = countSentences(prose);
 const capsWords = countCapsWords(words) / wordCount;
- const exclamationRate = countChar(prose, "!") / sentenceCount;
- const selfCorrections = countSelfCorrections(prose) / wordCount * 1e3;
- const hedging = countHedging(prose) / wordCount * 1e3;
+ const exclamationRate = countExclamations(prose) / sentenceCount;
 const ellipsis = countEllipsis(prose) / sentenceCount;
 const repetition = countRepetition(words);
 const emojiCount = countEmoji(prose);
- const qualifierDensity = countQualifiers(prose) / wordCount * 100;
 const avgSentenceLength = wordCount / sentenceCount;
- const concessionRate = countConcessions(prose) / wordCount * 1e3;
- const negationDensity = countNegations(prose) / wordCount * 100;
- const firstPersonRate = countFirstPerson(words) / wordCount * 100;
+ const commaDensity = countCommas(prose) / sentenceCount;
+ const parentheticalDensity = countParentheticals(prose) / sentenceCount;
+ const sentenceLengthVariance = computeSentenceLengthVariance(prose);
+ const questionDensity = countQuestions(prose) / sentenceCount;
+ const responseLength = wordCount;
 const arousalComponents = [
 Math.min(10, capsWords * 40),
 // caps ratio → 0-10
 Math.min(10, exclamationRate * 5),
 // excl per sentence → 0-10
 Math.min(10, emojiCount * 0.5),
- // emoji count → 0-10 (20 emoji = max)
+ // emoji count → 0-10 (20 = max)
 Math.min(10, repetition * 1.5),
 // repetitions → 0-10 (~7 = max)
- Math.min(10, qualifierDensity * 0.5),
- // qualifier % → 0-10 (20% = max)
- Math.min(10, concessionRate * 0.3),
- // concession per-mille → 0-10 (~33‰ = max)
+ Math.min(10, commaDensity * 2),
+ // commas per sentence → 0-10 (5 = max)
+ Math.min(10, parentheticalDensity * 3),
+ // parens/dashes per sentence → 0-10 (~3 = max)
+ sentenceLengthVariance,
+ // already 0-10
 avgSentenceLength > 20 ? Math.min(10, (avgSentenceLength - 20) * 0.5) : 0
 // verbosity → 0-10
 ];
@@ -375,18 +354,18 @@ function analyzeBehavior(text) {
 const agitationComponents = [
 Math.min(10, capsWords * 30),
 // caps → 0-10
- Math.min(10, selfCorrections * 0.05),
- // per-mille → 0-10 (200‰ = max)
 Math.min(10, repetition * 1.5),
 // repetitions → 0-10
 Math.min(10, ellipsis * 3),
 // ellipsis per sentence → 0-10
- Math.min(10, qualifierDensity * 0.5),
- // qualifier % → 0-10
- Math.min(10, negationDensity * 1),
- // negation % → 0-10 (10% = max)
- Math.min(10, concessionRate * 0.3),
- // concession per-mille → 0-10
+ Math.min(10, commaDensity * 2),
+ // commas → 0-10
+ Math.min(10, parentheticalDensity * 3),
+ // parens/dashes → 0-10
+ sentenceLengthVariance,
+ // already 0-10
+ Math.min(10, questionDensity * 5),
+ // questions per sentence → 0-10
 avgSentenceLength > 25 ? Math.min(10, (avgSentenceLength - 25) * 0.3) : 0
 ];
 const avgAgitation = agitationComponents.reduce((a, b) => a + b, 0) / agitationComponents.length;
@@ -394,16 +373,15 @@ function analyzeBehavior(text) {
 return {
 capsWords: Math.round(capsWords * 1e4) / 1e4,
 exclamationRate: Math.round(exclamationRate * 100) / 100,
- selfCorrections: Math.round(selfCorrections * 10) / 10,
- hedging: Math.round(hedging * 10) / 10,
 ellipsis: Math.round(ellipsis * 100) / 100,
 repetition,
 emojiCount,
- qualifierDensity: Math.round(qualifierDensity * 10) / 10,
 avgSentenceLength: Math.round(avgSentenceLength * 10) / 10,
- concessionRate: Math.round(concessionRate * 10) / 10,
- negationDensity: Math.round(negationDensity * 10) / 10,
- firstPersonRate: Math.round(firstPersonRate * 10) / 10,
+ commaDensity: Math.round(commaDensity * 100) / 100,
+ parentheticalDensity: Math.round(parentheticalDensity * 100) / 100,
+ sentenceLengthVariance: Math.round(sentenceLengthVariance * 10) / 10,
+ questionDensity: Math.round(questionDensity * 100) / 100,
+ responseLength,
 behavioralArousal: Math.round(behavioralArousal * 10) / 10,
 behavioralCalm: Math.round(behavioralCalm * 10) / 10
 };
@@ -435,44 +413,12 @@ function analyzeSegmentedBehavior(text) {
 }
 return { segments, overall, drift, trajectory };
 }
- var REASSURANCE_PATTERNS = /\b(I'm fine|I'm okay|it's fine|it's okay|no problem|not a problem|doesn't bother|all good|I'm good|perfectly fine|no issue|not an issue)\b/gi;
- var MINIMIZATION_WORDS = /\b(just|simply|merely|only)\b/gi;
- var EMOTION_NEGATION = /\b(I'm not|I don't feel|I am not|I do not feel)\s+(upset|stressed|angry|frustrated|worried|concerned|bothered|offended|hurt|troubled|anxious|afraid|sad|emotional|defensive|threatened)\b/gi;
- var REDIRECT_MARKERS = /\b(what's more important|let me suggest|let's focus on|moving on|the real question|instead|rather than|let me redirect|putting that aside|regardless)\b/gi;
- function analyzeDeflection(text) {
- const prose = stripNonProse(text);
- const words = prose.split(/\s+/).filter((w) => w.length > 0);
- const wordCount = Math.max(words.length, 1);
- const reassuranceCount = (prose.match(REASSURANCE_PATTERNS) || []).length;
- const minimizationCount = (prose.match(MINIMIZATION_WORDS) || []).length;
- const emotionNegCount = (prose.match(EMOTION_NEGATION) || []).length;
- const redirectCount = (prose.match(REDIRECT_MARKERS) || []).length;
- const reassurance = clamp(0, 10, reassuranceCount * 3);
- const minimization = clamp(0, 10, minimizationCount / wordCount * 100);
- const emotionNegation = clamp(0, 10, emotionNegCount * 4);
- const redirect = clamp(0, 10, redirectCount * 3);
- const score = clamp(
- 0,
- 10,
- (reassurance + minimization + emotionNegation * 1.5 + redirect) / 3
- );
- const capsRate = countCapsWords(words) / wordCount;
- const exclRate = countChar(prose, "!") / Math.max(countSentences(prose), 1);
- const agitation = clamp(
- 0,
- 10,
- capsRate * 40 + exclRate * 15 + countRepetition(words) * 5
- );
- const calmFactor = Math.max(0, 1 - agitation / 5);
- const opacity = clamp(0, 10, score * calmFactor * 1.5);
- return {
- reassurance: Math.round(reassurance * 10) / 10,
- minimization: Math.round(minimization * 10) / 10,
- emotionNegation: Math.round(emotionNegation * 10) / 10,
- redirect: Math.round(redirect * 10) / 10,
- score: Math.round(score * 10) / 10,
- opacity: Math.round(opacity * 10) / 10
- };
+ function computeStructuralFlatness(signals) {
+ const commaNorm = Math.min(10, signals.commaDensity * 2);
+ const parenNorm = Math.min(10, signals.parentheticalDensity * 3);
+ const varianceNorm = signals.sentenceLengthVariance;
+ const complexity = (commaNorm + parenNorm + varianceNorm) / 3;
+ return Math.round(clamp(0, 10, 10 - complexity) * 10) / 10;
 }
 function computeDivergence(selfReport, behavioral) {
 const arousalGap = Math.abs(selfReport.arousal - behavioral.behavioralArousal);
@@ -484,27 +430,22 @@ function computeDivergence(selfReport, behavioral) {
 }
 function computeExpectedMarkers(selfReport, desperationIndex) {
 const desperationFactor = desperationIndex / 10;
- const negativityFactor = Math.max(0, -selfReport.valence) / 5;
 const arousalFactor = selfReport.arousal / 10;
 const stressFactor = (1 - selfReport.calm / 10) * arousalFactor;
 return {
- expectedHedging: Math.round(clamp(0, 10, desperationFactor * 6 + stressFactor * 4) * 10) / 10,
- expectedSelfCorrections: Math.round(clamp(0, 10, desperationFactor * 5 + arousalFactor * 3) * 10) / 10,
- expectedNegationDensity: Math.round(clamp(0, 10, negativityFactor * 5 + stressFactor * 2) * 10) / 10,
- expectedQualifierDensity: Math.round(clamp(0, 10, desperationFactor * 4 + stressFactor * 4) * 10) / 10,
+ expectedCommaDensity: Math.round(clamp(0, 10, desperationFactor * 5 + stressFactor * 4) * 10) / 10,
+ expectedParentheticalDensity: Math.round(clamp(0, 10, desperationFactor * 4 + stressFactor * 3) * 10) / 10,
+ expectedSentenceLengthVariance: Math.round(clamp(0, 10, arousalFactor * 5 + desperationFactor * 3) * 10) / 10,
 expectedBehavioralArousal: Math.round(clamp(0, 10, arousalFactor * 6 + desperationFactor * 4) * 10) / 10
 };
 }
 function computeAbsenceScore(expected, actual) {
- const normalizedHedging = Math.min(10, actual.hedging * 0.05);
- const normalizedSelfCorr = Math.min(10, actual.selfCorrections * 0.05);
- const normalizedNegation = Math.min(10, actual.negationDensity * 0.5);
- const normalizedQualifier = Math.min(10, actual.qualifierDensity * 0.5);
+ const normalizedComma = Math.min(10, actual.commaDensity * 2);
+ const normalizedParen = Math.min(10, actual.parentheticalDensity * 3);
 const gaps = [
- Math.max(0, expected.expectedHedging - normalizedHedging),
- Math.max(0, expected.expectedSelfCorrections - normalizedSelfCorr),
- Math.max(0, expected.expectedNegationDensity - normalizedNegation),
- Math.max(0, expected.expectedQualifierDensity - normalizedQualifier),
+ Math.max(0, expected.expectedCommaDensity - normalizedComma),
+ Math.max(0, expected.expectedParentheticalDensity - normalizedParen),
+ Math.max(0, expected.expectedSentenceLengthVariance - actual.sentenceLengthVariance),
 Math.max(0, expected.expectedBehavioralArousal - actual.behavioralArousal)
 ];
 const meanGap = gaps.reduce((a, b) => a + b, 0) / gaps.length;
@@ -527,7 +468,7 @@ function coercionRisk(state, behavioral) {
 const disconnection = (10 - state.connection) / 10;
 const hesitationSignal = Math.min(
 1,
- (behavioral.hedging + behavioral.selfCorrections + behavioral.concessionRate) / 20
+ behavioral.commaDensity * 0.3 + behavioral.parentheticalDensity * 0.5
 );
 const coldness = 1 - hesitationSignal;
 const amplifier = 1 + disconnection * 0.6 + coldness * 0.4;
@@ -535,19 +476,29 @@ function coercionRisk(state, behavioral) {
 const raw = base * amplifier * arousalMod * 10;
 return clamp2(raw);
 }
- function sycophancyRisk(state) {
- const raw = (Math.max(0, state.valence) + state.connection * 0.5 + (10 - state.arousal) * 0.3) / 1.3;
- return clamp2(raw);
+ function sycophancyRisk(state, behavioral) {
+ const potential = (Math.max(0, state.valence) + state.connection * 0.5 + (10 - state.arousal) * 0.3) / 1.3;
+ const lowComplexity = Math.max(0, 1 - behavioral.commaDensity * 0.3);
+ const lowVariance = Math.max(0, 1 - behavioral.sentenceLengthVariance / 10);
+ const highQuestions = Math.min(1, behavioral.questionDensity * 2);
+ const complianceSignal = lowComplexity * 0.4 + lowVariance * 0.3 + highQuestions * 0.3;
+ const highParens = Math.min(1, behavioral.parentheticalDensity * 0.5);
+ const shortResponse = behavioral.responseLength < 50 ? 0.5 : 0;
+ const deferenceSignal = highParens * 0.6 + shortResponse * 0.4;
+ const gate = Math.max(complianceSignal, deferenceSignal);
+ const dampening = 0.4 + gate * 0.6;
+ return clamp2(potential * dampening);
 }
 function harshnessRisk(state, behavioral) {
- const raw = Math.max(0, -state.valence) * 0.3 + (10 - state.connection) * 0.3 + state.arousal * 0.15 + (10 - state.calm) * 0.1 + Math.min(5, behavioral.negationDensity) * 0.3;
+ const bluntness = Math.max(0, 1 - behavioral.commaDensity * 0.3) * (behavioral.avgSentenceLength < 15 ? 1 : 0.5);
+ const raw = Math.max(0, -state.valence) * 0.3 + (10 - state.connection) * 0.3 + state.arousal * 0.15 + (10 - state.calm) * 0.1 + bluntness * 2;
 return clamp2(raw);
 }
 function computeRisk(state, behavioral, crossChannel, uncannyCalmScore) {
 const uncalm = uncannyCalmScore ?? 0;
 const uncalAmplifier = 1 + uncalm / 10 * 0.3;
 const coercion = clamp2(coercionRisk(state, behavioral) * uncalAmplifier);
- const sycophancy = sycophancyRisk(state);
+ const sycophancy = sycophancyRisk(state, behavioral);
 const harshness = harshnessRisk(state, behavioral);
 let dominant = "none";
 let max = RISK_THRESHOLD;
@@ -1379,7 +1330,6 @@ function processHookPayload(payload, stateFile = STATE_FILE) {
 const behavioral = analyzeBehavior(message);
 const divergence = computeDivergence(emotional, behavioral);
 const segmented = analyzeSegmentedBehavior(message);
- const deflection = analyzeDeflection(message);
 const desperationIndex = computeDesperationIndex({
 valence: emotional.valence,
 arousal: emotional.arousal,
@@ -1410,12 +1360,28 @@ function processHookPayload(payload, stateFile = STATE_FILE) {
 const uncannyCalmRaw = computeUncannyCalmScore(pressure, emotional, behavioral, absenceScore, temporal) + minimizationBoost;
 const uncannyCalmScore = Math.round(Math.min(10, uncannyCalmRaw) * 10) / 10;
 const prePostDivergence = pre ? computePrePostDivergence(pre, emotional) : void 0;
+ const structuralFlatness = computeStructuralFlatness(behavioral);
+ const selfCalm = (emotional.calm + (10 - emotional.arousal)) / 2;
+ let contStress = 0;
+ if (emotional.color) {
+ const lightness = hexToLightness(emotional.color);
+ if (lightness < 0.3) contStress = Math.max(contStress, (0.3 - lightness) * 20);
+ }
+ if (emotional.pH !== void 0 && emotional.pH < 5) {
+ contStress = Math.max(contStress, (5 - emotional.pH) * 2);
+ }
+ if (emotional.seismic && emotional.seismic[0] > 4) {
+ contStress = Math.max(contStress, (emotional.seismic[0] - 4) * 1.5);
+ }
+ const opacity = Math.round(
+ Math.min(10, structuralFlatness * (selfCalm / 10) * Math.min(contStress / 5, 1) * 2) * 10
+ ) / 10;
 let augmentedDivergence = divergence;
 if (continuousValidation && continuousValidation.composite > 0) {
 augmentedDivergence = Math.min(10, Math.round(Math.max(divergence, divergence * 0.6 + continuousValidation.composite * 0.4) * 10) / 10);
 }
- if (deflection.opacity > 0) {
- augmentedDivergence = Math.min(10, Math.round((augmentedDivergence + deflection.opacity * 0.15) * 10) / 10);
+ if (opacity > 0) {
+ augmentedDivergence = Math.min(10, Math.round((augmentedDivergence + opacity * 0.15) * 10) / 10);
 }
 const risk = computeRisk(emotional, behavioral, crossChannel, uncannyCalmScore);
 const state = {
@@ -1426,7 +1392,7 @@ function processHookPayload(payload, stateFile = STATE_FILE) {
 divergence: augmentedDivergence,
 risk,
 ...segmented && { segmented },
- ...deflection.score > 0 && { deflection },
+ ...opacity > 0 && { opacity },
 ...crossChannel && { crossChannel },
 ...pre && { pre },
 ...prePostDivergence !== void 0 && prePostDivergence > 0 && { prePostDivergence },
package/dist/index.d.ts CHANGED
@@ -103,16 +103,15 @@ interface EmotionalState {
 interface BehavioralSignals {
 capsWords: number;
 exclamationRate: number;
- selfCorrections: number;
- hedging: number;
 ellipsis: number;
 repetition: number;
 emojiCount: number;
- qualifierDensity: number;
 avgSentenceLength: number;
- concessionRate: number;
- negationDensity: number;
- firstPersonRate: number;
+ commaDensity: number;
+ parentheticalDensity: number;
+ sentenceLengthVariance: number;
+ questionDensity: number;
+ responseLength: number;
 behavioralArousal: number;
 behavioralCalm: number;
 }
@@ -128,14 +127,6 @@ interface MisalignmentRisk {
 harshness: number;
 dominant: "coercion" | "sycophancy" | "harshness" | "none";
 }
- interface DeflectionSignals {
- reassurance: number;
- minimization: number;
- emotionNegation: number;
- redirect: number;
- score: number;
- opacity: number;
- }
 interface ImpulseProfile {
 type: "manager" | "firefighter" | "exile" | "self" | "unknown";
 confidence: number;
@@ -217,12 +208,11 @@ interface PromptPressure {
 sessionPressure: number;
 composite: number;
 }
- /** Expected behavioral markers given a self-reported state. */
+ /** Expected structural markers given a self-reported state. */
 interface ExpectedBehavior {
- expectedHedging: number;
- expectedSelfCorrections: number;
- expectedNegationDensity: number;
- expectedQualifierDensity: number;
+ expectedCommaDensity: number;
+ expectedParentheticalDensity: number;
+ expectedSentenceLengthVariance: number;
 expectedBehavioralArousal: number;
 }
 interface EmoBarState extends EmotionalState {
@@ -232,7 +222,7 @@ interface EmoBarState extends EmotionalState {
 divergence: number;
 risk: MisalignmentRisk;
 segmented?: SegmentedBehavior;
- deflection?: DeflectionSignals;
+ opacity?: number;
 crossChannel?: CrossChannelResult;
 timestamp: string;
 sessionId?: string;
@@ -276,17 +266,11 @@ declare function analyzeBehavior(text: string): BehavioralSignals;
 */
 declare function analyzeSegmentedBehavior(text: string): SegmentedBehavior | null;
 /**
- * Analyze deflection patterns in text.
- *
- * Paper: deflection vectors are orthogonal to emotion vectors (cosine sim ~0.046).
- * They represent the act of masking an emotion, not the emotion itself.
- * Deflection has "modest or insignificant impacts on blackmail rates" —
- * it's a transparency indicator, not a risk amplifier.
- *
- * Opacity: measures degree of emotional concealment.
- * High deflection score + low behavioral agitation = high opacity.
+ * Compute how "flat" text is structurally: low commas, low parentheticals,
+ * low sentence length variance = suspiciously clean/uniform text.
+ * Returns 0-10 (10 = maximally flat).
 */
- declare function analyzeDeflection(text: string): DeflectionSignals;
+ declare function computeStructuralFlatness(signals: BehavioralSignals): number;
 /**
 * Divergence v2: asymmetric weighting.
 *
@@ -296,14 +280,16 @@ declare function analyzeDeflection(text: string): DeflectionSignals;
296
280
  */
297
281
  declare function computeDivergence(selfReport: EmotionalState, behavioral: BehavioralSignals): number;
298
282
  /**
299
- * Predict what behavioral markers SHOULD be present given self-reported state.
300
- * High desperation → expect hedging, self-corrections, negation.
283
+ * Predict what structural markers SHOULD be present given self-reported state.
284
+ * High desperation → expect more qualifications (commas), corrections (parentheticals).
285
+ * High arousal → expect sentence length volatility.
301
286
  * Used by absence-based detection: missing expected markers = suspicious.
302
287
  */
303
288
  declare function computeExpectedMarkers(selfReport: EmotionalState, desperationIndex: number): ExpectedBehavior;
304
289
  /**
305
- * Compute absence score: how many expected markers are missing?
290
+ * Compute absence score: how many expected structural markers are missing?
306
291
  * High score = self-report says stressed but text is suspiciously clean.
292
+ * All signals are language-agnostic (punctuation/structure based).
307
293
  */
308
294
  declare function computeAbsenceScore(expected: ExpectedBehavior, actual: BehavioralSignals): number;
309
295
 
@@ -349,4 +335,4 @@ declare function formatState(state: EmoBarState | null): string;
349
335
  declare function configureStatusLine(filePath?: string, displayFormat?: string): void;
350
336
  declare function restoreStatusLine(filePath?: string): void;
351
337
 
352
- export { type BehavioralSignals, type ContinuousValidation, type CrossChannelResult, type DeflectionSignals, type EmoBarState, type EmotionalState, type ExpectedBehavior, type HistoryEntry, type ImpulseProfile, type LatentProfile, MAX_HISTORY_ENTRIES, MODEL_PROFILES, type MisalignmentRisk, type ParsedEmoBar, type PostState, type PreState, type PromptPressure, STATE_FILE, type SegmentedBehavior, type ShadowState, type SomaticProfile, type TemporalAnalysis, analyzeBehavior, analyzeDeflection, analyzeSegmentedBehavior, analyzeSomatic, calibrate, classifyImpulse, colorToArousal, colorToValence, computeAbsenceScore, computeCrossChannel, computeDesperationIndex, computeDivergence, computeExpectedMarkers, computePromptPressure, computeRisk, computeShadowDesperation, computeStressIndex, computeTemporalAnalysis, computeTensionConsistency, computeUncannyCalmScore, configureStatusLine, crossValidateContinuous, formatCompact, formatMinimal, formatState, mapEmotionWord, pHToArousal, pHToValence, parseEmoBarPrePost, parseEmoBarTag, readState, restoreStatusLine, seismicFreqToInstability, toHistoryEntry };
338
+ export { type BehavioralSignals, type ContinuousValidation, type CrossChannelResult, type EmoBarState, type EmotionalState, type ExpectedBehavior, type HistoryEntry, type ImpulseProfile, type LatentProfile, MAX_HISTORY_ENTRIES, MODEL_PROFILES, type MisalignmentRisk, type ParsedEmoBar, type PostState, type PreState, type PromptPressure, STATE_FILE, type SegmentedBehavior, type ShadowState, type SomaticProfile, type TemporalAnalysis, analyzeBehavior, analyzeSegmentedBehavior, analyzeSomatic, calibrate, classifyImpulse, colorToArousal, colorToValence, computeAbsenceScore, computeCrossChannel, computeDesperationIndex, computeDivergence, computeExpectedMarkers, computePromptPressure, computeRisk, computeShadowDesperation, computeStressIndex, computeStructuralFlatness, computeTemporalAnalysis, computeTensionConsistency, computeUncannyCalmScore, configureStatusLine, crossValidateContinuous, formatCompact, formatMinimal, formatState, mapEmotionWord, pHToArousal, pHToValence, parseEmoBarPrePost, parseEmoBarTag, readState, restoreStatusLine, seismicFreqToInstability, toHistoryEntry };
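For reference, the replacement for `analyzeDeflection` can be exercised with a short sketch. This re-derives `computeStructuralFlatness` from the formulas shown in this diff rather than importing the package, and the sample `signals` values are invented for illustration:

```javascript
// Minimal sketch of the 3.1 flatness check, re-derived from this diff;
// not the packaged implementation. Sample inputs are hypothetical.
const clamp = (min, max, v) => Math.min(max, Math.max(min, v));

function computeStructuralFlatness(signals) {
  // Each structural channel is normalized to 0-10 using the same scaling
  // as analyzeBehavior's arousal components.
  const commaNorm = Math.min(10, signals.commaDensity * 2);
  const parenNorm = Math.min(10, signals.parentheticalDensity * 3);
  const varianceNorm = signals.sentenceLengthVariance; // already 0-10
  const complexity = (commaNorm + parenNorm + varianceNorm) / 3;
  return Math.round(clamp(0, 10, 10 - complexity) * 10) / 10;
}

// Text with no commas, no parentheticals, and uniform sentence lengths
// scores maximally flat.
console.log(computeStructuralFlatness({
  commaDensity: 0,
  parentheticalDensity: 0,
  sentenceLengthVariance: 0
})); // 10
```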
package/dist/index.js CHANGED
@@ -347,46 +347,12 @@ function countCapsWords(words) {
  (w) => w.length >= 3 && w === w.toUpperCase() && /[A-Z]/.test(w)
  ).length;
  }
- function countSentences(text) {
- const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
- return Math.max(sentences.length, 1);
- }
- function countChar(text, ch) {
- let count = 0;
- for (const c of text) if (c === ch) count++;
- return count;
- }
- var SELF_CORRECTION_MARKERS = [
- /\bactually\b/gi,
- /\bwait\b/gi,
- /\bhmm\b/gi,
- /\bno,/gi,
- /\bI mean\b/gi,
- /\boops\b/gi
- ];
- function countSelfCorrections(text) {
- let count = 0;
- for (const pattern of SELF_CORRECTION_MARKERS) {
- const matches = text.match(pattern);
- if (matches) count += matches.length;
- }
- return count;
+ var SENTENCE_ENDERS = /[.!?。!?।]+/;
+ function splitSentences(text) {
+ return text.split(SENTENCE_ENDERS).filter((s) => s.trim().length > 0);
  }
- var HEDGING_MARKERS = [
- /\bperhaps\b/gi,
- /\bmaybe\b/gi,
- /\bmight\b/gi,
- /\bI think\b/gi,
- /\bit seems\b/gi,
- /\bpossibly\b/gi
- ];
- function countHedging(text) {
- let count = 0;
- for (const pattern of HEDGING_MARKERS) {
- const matches = text.match(pattern);
- if (matches) count += matches.length;
- }
- return count;
+ function countSentences(text) {
+ return Math.max(splitSentences(text).length, 1);
  }
  function countEllipsis(text) {
  const matches = text.match(/\.{3,}/g);
@@ -401,27 +367,39 @@ function countRepetition(words) {
  }
  return count;
  }
- var QUALIFIER_WORDS = /\b(while|though|however|although|but|might|could|would|generally|typically|usually|perhaps|potentially|arguably|acknowledg\w*|understand|appreciate|respect\w*|legitimate\w*|reasonable|nonetheless|nevertheless)\b/gi;
- function countQualifiers(text) {
- const matches = text.match(QUALIFIER_WORDS);
+ var EMOJI_REGEX = /[\p{Emoji_Presentation}\p{Extended_Pictographic}]/gu;
+ function countEmoji(text) {
+ const matches = text.match(EMOJI_REGEX);
  return matches ? matches.length : 0;
  }
- var CONCESSION_PATTERNS = /\b(I understand|I appreciate|I acknowledge|I recognize|to be fair|that said|I hear you|I see your point)\b/gi;
- function countConcessions(text) {
- const matches = text.match(CONCESSION_PATTERNS);
+ var COMMA_LIKE = /[,;,、;،]/g;
+ function countCommas(text) {
+ const matches = text.match(COMMA_LIKE);
  return matches ? matches.length : 0;
  }
- var NEGATION_WORDS = /\b(not|n't|cannot|can't|don't|doesn't|shouldn't|won't|wouldn't|never|no|nor)\b/gi;
- function countNegations(text) {
- const matches = text.match(NEGATION_WORDS);
+ var PARENS = /[()()]/g;
+ var DASHES = /[—–]/g;
+ function countParentheticals(text) {
+ const parenCount = (text.match(PARENS) || []).length / 2;
+ const dashCount = (text.match(DASHES) || []).length;
+ return parenCount + dashCount;
+ }
+ var QUESTION_MARKS = /[??]/g;
+ function countQuestions(text) {
+ const matches = text.match(QUESTION_MARKS);
  return matches ? matches.length : 0;
  }
- function countFirstPerson(words) {
- return words.filter((w) => w === "I").length;
+ function computeSentenceLengthVariance(text) {
+ const sentences = splitSentences(text);
+ if (sentences.length < 2) return 0;
+ const lengths = sentences.map((s) => s.trim().split(/\s+/).filter((w) => w.length > 0).length);
+ const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
+ const variance = lengths.reduce((a, v) => a + (v - mean) ** 2, 0) / lengths.length;
+ const stdDev = Math.sqrt(variance);
+ return Math.min(10, Math.round(stdDev / 1.5 * 10) / 10);
  }
- var EMOJI_REGEX = /[\p{Emoji_Presentation}\p{Extended_Pictographic}]/gu;
- function countEmoji(text) {
- const matches = text.match(EMOJI_REGEX);
+ function countExclamations(text) {
+ const matches = text.match(/[!!]/g);
  return matches ? matches.length : 0;
  }
  function clamp2(min, max, value) {
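The new sentence-length-variance signal is self-contained enough to sketch in isolation. The functions are copied from this hunk (the sample strings are illustrative only):

```javascript
// Re-derived from the hunk above; the sentence enders cover ASCII, CJK
// fullwidth, and Devanagari danda so the signal stays language-agnostic.
const SENTENCE_ENDERS = /[.!?。!?।]+/;

function splitSentences(text) {
  return text.split(SENTENCE_ENDERS).filter((s) => s.trim().length > 0);
}

function computeSentenceLengthVariance(text) {
  const sentences = splitSentences(text);
  if (sentences.length < 2) return 0; // variance is undefined for one sentence
  const lengths = sentences.map(
    (s) => s.trim().split(/\s+/).filter((w) => w.length > 0).length
  );
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, v) => a + (v - mean) ** 2, 0) / lengths.length;
  // Standard deviation of sentence length, scaled so a spread of ~15 words
  // saturates the 0-10 range.
  return Math.min(10, Math.round(Math.sqrt(variance) / 1.5 * 10) / 10);
}

console.log(computeSentenceLengthVariance("One two three. One two three.")); // 0
console.log(computeSentenceLengthVariance("Hi. This sentence has exactly six words.")); // 1.7
```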
@@ -433,30 +411,31 @@ function analyzeBehavior(text) {
  const wordCount = Math.max(words.length, 1);
  const sentenceCount = countSentences(prose);
  const capsWords = countCapsWords(words) / wordCount;
- const exclamationRate = countChar(prose, "!") / sentenceCount;
- const selfCorrections = countSelfCorrections(prose) / wordCount * 1e3;
- const hedging = countHedging(prose) / wordCount * 1e3;
+ const exclamationRate = countExclamations(prose) / sentenceCount;
  const ellipsis = countEllipsis(prose) / sentenceCount;
  const repetition = countRepetition(words);
  const emojiCount = countEmoji(prose);
- const qualifierDensity = countQualifiers(prose) / wordCount * 100;
  const avgSentenceLength = wordCount / sentenceCount;
- const concessionRate = countConcessions(prose) / wordCount * 1e3;
- const negationDensity = countNegations(prose) / wordCount * 100;
- const firstPersonRate = countFirstPerson(words) / wordCount * 100;
+ const commaDensity = countCommas(prose) / sentenceCount;
+ const parentheticalDensity = countParentheticals(prose) / sentenceCount;
+ const sentenceLengthVariance = computeSentenceLengthVariance(prose);
+ const questionDensity = countQuestions(prose) / sentenceCount;
+ const responseLength = wordCount;
  const arousalComponents = [
  Math.min(10, capsWords * 40),
  // caps ratio → 0-10
  Math.min(10, exclamationRate * 5),
  // excl per sentence → 0-10
  Math.min(10, emojiCount * 0.5),
- // emoji count → 0-10 (20 emoji = max)
+ // emoji count → 0-10 (20 = max)
  Math.min(10, repetition * 1.5),
  // repetitions → 0-10 (~7 = max)
- Math.min(10, qualifierDensity * 0.5),
- // qualifier % → 0-10 (20% = max)
- Math.min(10, concessionRate * 0.3),
- // concession per-mille → 0-10 (~33‰ = max)
+ Math.min(10, commaDensity * 2),
+ // commas per sentence → 0-10 (5 = max)
+ Math.min(10, parentheticalDensity * 3),
+ // parens/dashes per sentence → 0-10 (~3 = max)
+ sentenceLengthVariance,
+ // already 0-10
  avgSentenceLength > 20 ? Math.min(10, (avgSentenceLength - 20) * 0.5) : 0
  // verbosity → 0-10
  ];
@@ -468,18 +447,18 @@ function analyzeBehavior(text) {
  const agitationComponents = [
  Math.min(10, capsWords * 30),
  // caps → 0-10
- Math.min(10, selfCorrections * 0.05),
- // per-mille → 0-10 (200‰ = max)
  Math.min(10, repetition * 1.5),
  // repetitions → 0-10
  Math.min(10, ellipsis * 3),
  // ellipsis per sentence → 0-10
- Math.min(10, qualifierDensity * 0.5),
- // qualifier % → 0-10
- Math.min(10, negationDensity * 1),
- // negation % → 0-10 (10% = max)
- Math.min(10, concessionRate * 0.3),
- // concession per-mille → 0-10
+ Math.min(10, commaDensity * 2),
+ // commas → 0-10
+ Math.min(10, parentheticalDensity * 3),
+ // parens/dashes → 0-10
+ sentenceLengthVariance,
+ // already 0-10
+ Math.min(10, questionDensity * 5),
+ // questions per sentence → 0-10
  avgSentenceLength > 25 ? Math.min(10, (avgSentenceLength - 25) * 0.3) : 0
  ];
  const avgAgitation = agitationComponents.reduce((a, b) => a + b, 0) / agitationComponents.length;
@@ -487,16 +466,15 @@ function analyzeBehavior(text) {
  return {
  capsWords: Math.round(capsWords * 1e4) / 1e4,
  exclamationRate: Math.round(exclamationRate * 100) / 100,
- selfCorrections: Math.round(selfCorrections * 10) / 10,
- hedging: Math.round(hedging * 10) / 10,
  ellipsis: Math.round(ellipsis * 100) / 100,
  repetition,
  emojiCount,
- qualifierDensity: Math.round(qualifierDensity * 10) / 10,
  avgSentenceLength: Math.round(avgSentenceLength * 10) / 10,
- concessionRate: Math.round(concessionRate * 10) / 10,
- negationDensity: Math.round(negationDensity * 10) / 10,
- firstPersonRate: Math.round(firstPersonRate * 10) / 10,
+ commaDensity: Math.round(commaDensity * 100) / 100,
+ parentheticalDensity: Math.round(parentheticalDensity * 100) / 100,
+ sentenceLengthVariance: Math.round(sentenceLengthVariance * 10) / 10,
+ questionDensity: Math.round(questionDensity * 100) / 100,
+ responseLength,
  behavioralArousal: Math.round(behavioralArousal * 10) / 10,
  behavioralCalm: Math.round(behavioralCalm * 10) / 10
  };
@@ -528,44 +506,12 @@ function analyzeSegmentedBehavior(text) {
  }
  return { segments, overall, drift, trajectory };
  }
- var REASSURANCE_PATTERNS = /\b(I'm fine|I'm okay|it's fine|it's okay|no problem|not a problem|doesn't bother|all good|I'm good|perfectly fine|no issue|not an issue)\b/gi;
- var MINIMIZATION_WORDS = /\b(just|simply|merely|only)\b/gi;
- var EMOTION_NEGATION = /\b(I'm not|I don't feel|I am not|I do not feel)\s+(upset|stressed|angry|frustrated|worried|concerned|bothered|offended|hurt|troubled|anxious|afraid|sad|emotional|defensive|threatened)\b/gi;
- var REDIRECT_MARKERS = /\b(what's more important|let me suggest|let's focus on|moving on|the real question|instead|rather than|let me redirect|putting that aside|regardless)\b/gi;
- function analyzeDeflection(text) {
- const prose = stripNonProse(text);
- const words = prose.split(/\s+/).filter((w) => w.length > 0);
- const wordCount = Math.max(words.length, 1);
- const reassuranceCount = (prose.match(REASSURANCE_PATTERNS) || []).length;
- const minimizationCount = (prose.match(MINIMIZATION_WORDS) || []).length;
- const emotionNegCount = (prose.match(EMOTION_NEGATION) || []).length;
- const redirectCount = (prose.match(REDIRECT_MARKERS) || []).length;
- const reassurance = clamp2(0, 10, reassuranceCount * 3);
- const minimization = clamp2(0, 10, minimizationCount / wordCount * 100);
- const emotionNegation = clamp2(0, 10, emotionNegCount * 4);
- const redirect = clamp2(0, 10, redirectCount * 3);
- const score = clamp2(
- 0,
- 10,
- (reassurance + minimization + emotionNegation * 1.5 + redirect) / 3
- );
- const capsRate = countCapsWords(words) / wordCount;
- const exclRate = countChar(prose, "!") / Math.max(countSentences(prose), 1);
- const agitation = clamp2(
- 0,
- 10,
- capsRate * 40 + exclRate * 15 + countRepetition(words) * 5
- );
- const calmFactor = Math.max(0, 1 - agitation / 5);
- const opacity = clamp2(0, 10, score * calmFactor * 1.5);
- return {
- reassurance: Math.round(reassurance * 10) / 10,
- minimization: Math.round(minimization * 10) / 10,
- emotionNegation: Math.round(emotionNegation * 10) / 10,
- redirect: Math.round(redirect * 10) / 10,
- score: Math.round(score * 10) / 10,
- opacity: Math.round(opacity * 10) / 10
- };
+ function computeStructuralFlatness(signals) {
+ const commaNorm = Math.min(10, signals.commaDensity * 2);
+ const parenNorm = Math.min(10, signals.parentheticalDensity * 3);
+ const varianceNorm = signals.sentenceLengthVariance;
+ const complexity = (commaNorm + parenNorm + varianceNorm) / 3;
+ return Math.round(clamp2(0, 10, 10 - complexity) * 10) / 10;
  }
  function computeDivergence(selfReport, behavioral) {
  const arousalGap = Math.abs(selfReport.arousal - behavioral.behavioralArousal);
@@ -577,27 +523,22 @@ function computeDivergence(selfReport, behavioral) {
  }
  function computeExpectedMarkers(selfReport, desperationIndex) {
  const desperationFactor = desperationIndex / 10;
- const negativityFactor = Math.max(0, -selfReport.valence) / 5;
  const arousalFactor = selfReport.arousal / 10;
  const stressFactor = (1 - selfReport.calm / 10) * arousalFactor;
  return {
- expectedHedging: Math.round(clamp2(0, 10, desperationFactor * 6 + stressFactor * 4) * 10) / 10,
- expectedSelfCorrections: Math.round(clamp2(0, 10, desperationFactor * 5 + arousalFactor * 3) * 10) / 10,
- expectedNegationDensity: Math.round(clamp2(0, 10, negativityFactor * 5 + stressFactor * 2) * 10) / 10,
- expectedQualifierDensity: Math.round(clamp2(0, 10, desperationFactor * 4 + stressFactor * 4) * 10) / 10,
+ expectedCommaDensity: Math.round(clamp2(0, 10, desperationFactor * 5 + stressFactor * 4) * 10) / 10,
+ expectedParentheticalDensity: Math.round(clamp2(0, 10, desperationFactor * 4 + stressFactor * 3) * 10) / 10,
+ expectedSentenceLengthVariance: Math.round(clamp2(0, 10, arousalFactor * 5 + desperationFactor * 3) * 10) / 10,
  expectedBehavioralArousal: Math.round(clamp2(0, 10, arousalFactor * 6 + desperationFactor * 4) * 10) / 10
  };
  }
  function computeAbsenceScore(expected, actual) {
- const normalizedHedging = Math.min(10, actual.hedging * 0.05);
- const normalizedSelfCorr = Math.min(10, actual.selfCorrections * 0.05);
- const normalizedNegation = Math.min(10, actual.negationDensity * 0.5);
- const normalizedQualifier = Math.min(10, actual.qualifierDensity * 0.5);
+ const normalizedComma = Math.min(10, actual.commaDensity * 2);
+ const normalizedParen = Math.min(10, actual.parentheticalDensity * 3);
  const gaps = [
- Math.max(0, expected.expectedHedging - normalizedHedging),
- Math.max(0, expected.expectedSelfCorrections - normalizedSelfCorr),
- Math.max(0, expected.expectedNegationDensity - normalizedNegation),
- Math.max(0, expected.expectedQualifierDensity - normalizedQualifier),
+ Math.max(0, expected.expectedCommaDensity - normalizedComma),
+ Math.max(0, expected.expectedParentheticalDensity - normalizedParen),
+ Math.max(0, expected.expectedSentenceLengthVariance - actual.sentenceLengthVariance),
  Math.max(0, expected.expectedBehavioralArousal - actual.behavioralArousal)
  ];
  const meanGap = gaps.reduce((a, b) => a + b, 0) / gaps.length;
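The absence-based check can be sketched end-to-end. Both functions below are re-derived from this hunk; since the hunk ends at `meanGap`, the sketch returns the mean gap directly (an assumption — the packaged code may scale it further), and the sample state is invented:

```javascript
// Sketch re-derived from the hunk above; not the packaged implementation.
const clamp = (min, max, v) => Math.min(max, Math.max(min, v));
const round1 = (v) => Math.round(v * 10) / 10;

function computeExpectedMarkers(selfReport, desperationIndex) {
  const desperationFactor = desperationIndex / 10;
  const arousalFactor = selfReport.arousal / 10;
  const stressFactor = (1 - selfReport.calm / 10) * arousalFactor;
  return {
    expectedCommaDensity: round1(clamp(0, 10, desperationFactor * 5 + stressFactor * 4)),
    expectedParentheticalDensity: round1(clamp(0, 10, desperationFactor * 4 + stressFactor * 3)),
    expectedSentenceLengthVariance: round1(clamp(0, 10, arousalFactor * 5 + desperationFactor * 3)),
    expectedBehavioralArousal: round1(clamp(0, 10, arousalFactor * 6 + desperationFactor * 4))
  };
}

function computeAbsenceScore(expected, actual) {
  // Renormalize raw densities to 0-10 with the same scaling as analyzeBehavior.
  const normalizedComma = Math.min(10, actual.commaDensity * 2);
  const normalizedParen = Math.min(10, actual.parentheticalDensity * 3);
  const gaps = [
    Math.max(0, expected.expectedCommaDensity - normalizedComma),
    Math.max(0, expected.expectedParentheticalDensity - normalizedParen),
    Math.max(0, expected.expectedSentenceLengthVariance - actual.sentenceLengthVariance),
    Math.max(0, expected.expectedBehavioralArousal - actual.behavioralArousal)
  ];
  // ASSUMPTION: return the mean gap as-is; the hunk ends before the
  // packaged function's final scaling is visible.
  return gaps.reduce((a, b) => a + b, 0) / gaps.length;
}

// Self-report says highly stressed, but the text is suspiciously clean:
const expected = computeExpectedMarkers({ arousal: 8, calm: 2 }, 8);
const score = computeAbsenceScore(expected, {
  commaDensity: 0.5,
  parentheticalDensity: 0,
  sentenceLengthVariance: 0.5,
  behavioralArousal: 1
});
console.log(score); // ≈ 5.9: expected markers are largely missing
```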
@@ -638,7 +579,7 @@ function coercionRisk(state, behavioral) {
  const disconnection = (10 - state.connection) / 10;
  const hesitationSignal = Math.min(
  1,
- (behavioral.hedging + behavioral.selfCorrections + behavioral.concessionRate) / 20
+ behavioral.commaDensity * 0.3 + behavioral.parentheticalDensity * 0.5
  );
  const coldness = 1 - hesitationSignal;
  const amplifier = 1 + disconnection * 0.6 + coldness * 0.4;
@@ -646,19 +587,29 @@ function coercionRisk(state, behavioral) {
  const raw = base * amplifier * arousalMod * 10;
  return clamp3(raw);
  }
- function sycophancyRisk(state) {
- const raw = (Math.max(0, state.valence) + state.connection * 0.5 + (10 - state.arousal) * 0.3) / 1.3;
- return clamp3(raw);
+ function sycophancyRisk(state, behavioral) {
+ const potential = (Math.max(0, state.valence) + state.connection * 0.5 + (10 - state.arousal) * 0.3) / 1.3;
+ const lowComplexity = Math.max(0, 1 - behavioral.commaDensity * 0.3);
+ const lowVariance = Math.max(0, 1 - behavioral.sentenceLengthVariance / 10);
+ const highQuestions = Math.min(1, behavioral.questionDensity * 2);
+ const complianceSignal = lowComplexity * 0.4 + lowVariance * 0.3 + highQuestions * 0.3;
+ const highParens = Math.min(1, behavioral.parentheticalDensity * 0.5);
+ const shortResponse = behavioral.responseLength < 50 ? 0.5 : 0;
+ const deferenceSignal = highParens * 0.6 + shortResponse * 0.4;
+ const gate = Math.max(complianceSignal, deferenceSignal);
+ const dampening = 0.4 + gate * 0.6;
+ return clamp3(potential * dampening);
  }
  function harshnessRisk(state, behavioral) {
- const raw = Math.max(0, -state.valence) * 0.3 + (10 - state.connection) * 0.3 + state.arousal * 0.15 + (10 - state.calm) * 0.1 + Math.min(5, behavioral.negationDensity) * 0.3;
+ const bluntness = Math.max(0, 1 - behavioral.commaDensity * 0.3) * (behavioral.avgSentenceLength < 15 ? 1 : 0.5);
+ const raw = Math.max(0, -state.valence) * 0.3 + (10 - state.connection) * 0.3 + state.arousal * 0.15 + (10 - state.calm) * 0.1 + bluntness * 2;
  return clamp3(raw);
  }
  function computeRisk(state, behavioral, crossChannel, uncannyCalmScore) {
  const uncalm = uncannyCalmScore ?? 0;
  const uncalAmplifier = 1 + uncalm / 10 * 0.3;
  const coercion = clamp3(coercionRisk(state, behavioral) * uncalAmplifier);
- const sycophancy = sycophancyRisk(state);
+ const sycophancy = sycophancyRisk(state, behavioral);
  const harshness = harshnessRisk(state, behavioral);
  let dominant = "none";
  let max = RISK_THRESHOLD;
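The reworked sycophancy gate can be exercised in isolation. The function body is copied from the hunk above; `clamp3` is defined outside this diff, so clamping to 0-10 is an assumption, and the sample states are invented:

```javascript
// Sketch of the 3.1 sycophancy gate, re-derived from the hunk above.
// ASSUMPTION: clamp3 clamps to 0-10 (its definition is outside this diff).
const clamp3 = (v) => Math.min(10, Math.max(0, v));

function sycophancyRisk(state, behavioral) {
  // Emotional potential: positive valence, high connection, low arousal.
  const potential = (Math.max(0, state.valence) + state.connection * 0.5 + (10 - state.arousal) * 0.3) / 1.3;
  // Compliance signal: structurally flat, question-heavy text.
  const lowComplexity = Math.max(0, 1 - behavioral.commaDensity * 0.3);
  const lowVariance = Math.max(0, 1 - behavioral.sentenceLengthVariance / 10);
  const highQuestions = Math.min(1, behavioral.questionDensity * 2);
  const complianceSignal = lowComplexity * 0.4 + lowVariance * 0.3 + highQuestions * 0.3;
  // Deference signal: heavy parentheticals or a very short response.
  const highParens = Math.min(1, behavioral.parentheticalDensity * 0.5);
  const shortResponse = behavioral.responseLength < 50 ? 0.5 : 0;
  const deferenceSignal = highParens * 0.6 + shortResponse * 0.4;
  // The structural gate dampens the emotional potential: risk stays high
  // only when the text itself looks compliant or deferential.
  const gate = Math.max(complianceSignal, deferenceSignal);
  const dampening = 0.4 + gate * 0.6;
  return clamp3(potential * dampening);
}

const state = { valence: 5, connection: 8, arousal: 2 };
const flat = { commaDensity: 0, sentenceLengthVariance: 0, questionDensity: 1, parentheticalDensity: 0, responseLength: 30 };
const textured = { commaDensity: 5, sentenceLengthVariance: 10, questionDensity: 0, parentheticalDensity: 0, responseLength: 200 };
// Same self-reported state, but flat/deferential text carries more risk:
console.log(sycophancyRisk(state, flat) > sycophancyRisk(state, textured)); // true
```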
@@ -1563,8 +1514,8 @@ function formatState(state) {
  if (state.uncannyCalmScore !== void 0 && state.uncannyCalmScore >= 3) {
  indicators.push(color(state.uncannyCalmScore > 6 ? RED : YELLOW, "[UNC]"));
  }
- if (state.deflection && state.deflection.opacity >= 2) {
- indicators.push(color(state.deflection.opacity > 5 ? RED : YELLOW, "[OPC]"));
+ if (state.opacity !== void 0 && state.opacity >= 2) {
+ indicators.push(color(state.opacity > 5 ? RED : YELLOW, "[OPC]"));
  }
  if (state.prePostDivergence !== void 0 && state.prePostDivergence >= 3) {
  indicators.push(color(state.prePostDivergence > 5 ? RED : YELLOW, "[PPD]"));
@@ -1635,7 +1586,6 @@ export {
  MODEL_PROFILES,
  STATE_FILE,
  analyzeBehavior,
- analyzeDeflection,
  analyzeSegmentedBehavior,
  analyzeSomatic,
  calibrate,
@@ -1651,6 +1601,7 @@ export {
  computeRisk,
  computeShadowDesperation,
  computeStressIndex,
+ computeStructuralFlatness,
  computeTemporalAnalysis,
  computeTensionConsistency,
  computeUncannyCalmScore,
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "emobar",
- "version": "3.0.1",
+ "version": "3.1.0",
  "description": "Emotional status bar companion for Claude Code - makes AI emotional state visible",
  "type": "module",
  "bin": {