driftguard-mcp 0.1.7 → 0.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +104 -67
  2. package/dist/bin.js +160 -298
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -1,8 +1,10 @@
 # driftguard-mcp
 
-Real-time AI conversation drift monitor — MCP server for Claude Code, Gemini CLI, Codex CLI, and Cursor.
+Real-time AI conversation drift monitor for Claude Code, Gemini CLI, Codex CLI, and Cursor.
 
-Reads your session directly, scores it across 7 factors, and exposes the result as MCP tools you can call mid-session. No browser, no API keys, no UI just a score when you need it.
+Long AI sessions degrade: the model fills its context window, starts repeating itself, and loses track of what you originally asked for. driftguard-mcp reads your session file directly, measures the signals that actually predict this, and tells you when to start fresh.
+
+No browser. No API keys. No UI. Works as an MCP server your AI CLI can call mid-session.
 
 ---
 
@@ -16,7 +18,7 @@ driftguard-mcp setup
 `setup` automatically configures all supported AI CLIs on your machine. Restart your AI CLI(s) after running it.
 
 <details>
-<summary>Manual config (if you prefer)</summary>
+<summary>Manual config</summary>
 
 ### Claude Code — `~/.claude.json`
 
@@ -44,17 +46,12 @@ driftguard-mcp setup
 }
 ```
 
-### Codex CLI — `~/.codex/config.json`
+### Codex CLI — `~/.codex/config.toml`
 
-```json
-{
-  "mcpServers": {
-    "driftguard": {
-      "command": "driftguard-mcp",
-      "env": { "DRIFTCLI_ADAPTER": "codex" }
-    }
-  }
-}
+```toml
+[mcp_servers.driftguard]
+command = "driftguard-mcp"
+env.DRIFTCLI_ADAPTER = "codex"
 ```
 
 ### Cursor — `~/.cursor/mcp.json`
@@ -70,9 +67,7 @@ driftguard-mcp setup
 }
 ```
 
-> Note: Cursor drift is calculated from Claude Code sessions on your machine not from Cursor's own conversation history.
-
-> **`DRIFTCLI_ADAPTER`** tells driftguard-mcp which CLI's sessions to read. Without it, the server falls back to whichever session file was modified most recently, which may be from a different CLI. `driftguard-mcp setup` sets this automatically.
+> **`DRIFTCLI_ADAPTER`** tells driftguard-mcp which CLI's sessions to read. `driftguard-mcp setup` sets this automatically.
 
 </details>
 
@@ -80,62 +75,104 @@ driftguard-mcp setup
 ## Usage
 
-Call the tools directly from any session:
+Call these tools from any session:
 
-- **`get_drift()`** — check the current drift score
-- **`get_handoff()`** — generate a handoff prompt when drift is high
-- **`get_trend()`** — see the full score history for this session
+- **`get_drift()`** — check if the session is degrading
+- **`get_handoff()`** — write a `handoff.md` to continue in a fresh session
+- **`get_trend()`** — full score history with sparkline
 
 ---
 
-## What is drift?
+## What it looks like
 
-Long AI sessions degrade. The model starts repeating itself, losing track of the original goal, hedging more, and producing inconsistent code. driftguard-mcp measures this in real time across 7 factors:
+**Healthy session:**
 
-| Factor | What it measures |
-|--------|-----------------|
-| Context Saturation | How full the context window is getting |
-| Topic Scatter | How far the conversation has wandered from its starting topics |
-| Uncertainty Signals | Hedging language density |
-| Code Inconsistency | Conflicting patterns across code blocks |
-| Repetition | Rehashing of earlier content |
-| Goal Distance | Drift from the original user intent |
-| Confidence Drift | Declining confidence over the session |
+```
+✅ Context is healthy.
 
-Score thresholds: **fresh** 0–29 | **warming** 30–60 | **drifting** 61–80 | **polluted** 81–100
+Context depth    ███░░░░░░░  28
+Repetition       ██░░░░░░░░  15
 
----
+Score: 12/100 · 14 messages
+```
 
-## Tools
+**Session that needs a reset:**
 
-### `get_drift()`
+```
+⚠️ Start fresh now — context is full and responses are repeating heavily.
 
-Returns the current drift score and factor breakdown for the active session. When drift exceeds the warn threshold, a handoff prompt is included automatically so you can start fresh without a separate call.
+Context depth    █████████░  88
+Repetition       ████████░░  72
+Length collapse  █████░░░░░  48
 
+Score: 84/100 · 67 messages
+
+→ Call get_handoff() to write handoff.md before starting fresh.
 ```
-Drift Score: 59 WARMING
-Messages: 42
-
-Factor breakdown:
-contextSaturation: 72.0
-topicScatter: 50.0
-uncertaintySignals: 31.0
-codeInconsistency: 12.0
-repetition: 0.0
-goalDistance: 44.0
-confidenceDrift: 2.0
-
-Context is healthy.
-Trend (last 8): ▁▃▅▆▇ +12 over 8 checks ↗
+
+The score leads with a plain-English recommendation. The two bars that matter most — context depth and repetition — always appear. Others only show when they're contributing something meaningful.
+
+---
+
+## Handoff workflow
+
+When drift is high, call `get_handoff()`. The AI writes a `handoff.md` in your project root using its full session context:
+
+```markdown
+## What we accomplished
+Implemented JWT authentication with refresh token rotation. Added middleware,
+updated the user model, wrote integration tests. All tests passing.
+
+## Current state
+Auth flow is working end-to-end. Rate limiting is stubbed but not implemented.
+The `/refresh` endpoint has a known edge case with concurrent requests (see TODO in auth.ts:142).
+
+## Files modified
+- src/middleware/auth.ts — JWT verify + refresh logic
+- src/models/user.ts — added refreshToken field + index
+- src/routes/auth.ts — /login, /logout, /refresh endpoints
+- tests/integration/auth.test.ts — 14 new tests
+
+## Open questions / next steps
+- Implement rate limiting on /login (decided on: 5 attempts per 15 min)
+- Fix concurrent refresh edge case
+- Add token blacklist for logout
+
+## Context for next session
+Using jsonwebtoken@9, refresh tokens stored in DB (not Redis — decision was made
+to keep it simple for now). Access token TTL: 15min. Refresh TTL: 7 days.
 ```
 
-### `get_handoff()`
+Load `handoff.md` at the start of your next session. You continue without losing context.
+
+---
+
+## What it measures
+
+The score is driven primarily by two signals that reliably predict context degradation:
+
+| Factor | Weight | What it measures |
+|--------|--------|-----------------|
+| **Context depth** | 37% | Token volume in the session (real API counts for Claude and Gemini) |
+| **Repetition** | 37% | 3-gram overlap across recent assistant responses — the model recycling its own output |
+| Response length collapse | 15% | Assistant responses getting shorter over time |
+| Goal distance | 8% | Vocabulary drift from your stated goal (pass `goal` param to activate) |
+| Uncertainty signals | 2% | Explicit self-corrections ("I was wrong", "let me correct that") |
+| Confidence drift | 1% | Hedging language trend (early vs late responses) |
 
-Generates a structured handoff prompt. Summarises top topics, recent messages, and the last code block. Paste it into a new session in any supported AI CLI to continue without losing context.
+Context depth and repetition together are the clearest signs the model is running out of useful context. The others contribute supporting signal but don't dominate the score.
+
+---
 
-### `get_trend()`
+## `get_drift()` options
+
+Pass an optional `goal` string to anchor the goal distance measurement to a specific objective:
+
+```
+get_drift({ goal: "build a JWT authentication system" })
+```
 
-Shows the full drift history for the current session sparkline, score sequence, peak, average, and trajectory.
+Without it, goal distance returns 0 (no anchor = no measurement).
 
 ---
 
@@ -159,35 +196,35 @@ Both are plain JSON. All fields are optional.
 
 | Preset | Best for |
 |--------|----------|
-| `coding` | Focused coding sessions — weights code consistency and repetition |
+| `coding` | Focused coding sessions |
 | `research` | Research or planning — weights topic stability and goal alignment |
 | `brainstorm` | Brainstorming — relaxed topic scatter penalty |
-| `strict` | Equal weight across all seven factors |
+| `strict` | Equal weight across all six factors |
 
 ### All options
 
 | Key | Default | Description |
 |-----|---------|-------------|
-| `preset` | — | Named weight preset (see above) |
+| `preset` | — | Named weight preset |
 | `weights` | — | Per-factor weight overrides, applied on top of preset |
-| `warnThreshold` | `60` | Score at which `get_drift()` warns and includes a handoff prompt |
-| `storage.enabled` | `true` | Persist drift snapshots for `get_trend()` and sparklines |
+| `warnThreshold` | `60` | Score threshold for warnings |
+| `storage.enabled` | `true` | Persist drift snapshots for `get_trend()` |
 | `storage.directory` | `~/.driftcli/history` | Override snapshot storage path |
-| `sessionResolution.cacheTtlMs` | `5000` | How long to cache the resolved session file (ms) |
+| `sessionResolution.cacheTtlMs` | `5000` | Session file cache TTL (ms) |
 
 ### Environment variables
 
 | Variable | Description |
 |----------|-------------|
-| `DRIFTCLI_ADAPTER` | Pin session lookup to a specific CLI: `claude`, `gemini`, or `codex`. Set automatically by `driftguard-mcp setup`. |
+| `DRIFTCLI_ADAPTER` | Pin to a specific CLI: `claude`, `gemini`, or `codex`. Set automatically by `setup`. |
 | `DRIFTCLI_SESSION_ID` | Force a specific session UUID (Claude Code only). |
-| `DRIFTCLI_HOME` | Override the home directory used for session file discovery. |
+| `DRIFTCLI_HOME` | Override home directory for session file discovery. |
 
 ---
 
 ## CLI watcher
 
-For a live terminal dashboard that polls every 3 seconds, open a separate terminal and run:
+Live terminal dashboard, polls every 3 seconds:
 
 ```bash
 driftguard-mcp watch
@@ -199,7 +236,7 @@ driftguard-mcp watch
 
 | CLI | Status |
 |-----|--------|
-| Claude Code | Supported |
-| Gemini CLI | Supported |
-| Codex CLI | Supported |
-| Cursor | Supported (monitors Claude Code / Gemini / Codex sessions) |
+| Claude Code | Supported — real token counts |
+| Gemini CLI | Supported — real token counts |
+| Codex CLI | Supported — estimated token counts |
+| Cursor | Supported (monitors Claude Code / Gemini / Codex sessions) |
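The factor weights in the README's "What it measures" table combine as a plain weighted sum. A minimal sketch of that arithmetic, assuming the table's percentages; the short keys here (e.g. `contextDepth`) are illustrative shorthand, not the bundle's actual identifiers:

```javascript
// Factor weights from the README table (fractions of the 0-100 score).
const WEIGHTS = {
  contextDepth: 0.37,
  repetition: 0.37,
  responseLengthCollapse: 0.15,
  goalDistance: 0.08,
  uncertaintySignals: 0.02,
  confidenceDrift: 0.01,
};

// Each factor is a 0-100 sub-score; missing factors count as 0.
// The overall score is the weighted sum, rounded and clamped to 0-100.
function driftScore(factors) {
  const raw = Object.entries(WEIGHTS).reduce(
    (sum, [name, weight]) => sum + (factors[name] ?? 0) * weight,
    0
  );
  return Math.min(100, Math.max(0, Math.round(raw)));
}

// Deep context plus heavy repetition dominate even if nothing else fires.
console.log(driftScore({ contextDepth: 88, repetition: 72 })); // 59
```

Because the two dominant factors carry 74% of the weight between them, a session can land in warning territory on context depth and repetition alone.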
package/dist/bin.js CHANGED
@@ -40,6 +40,9 @@ function parseTimestamp(raw) {
   }
   return Date.now();
 }
+function isNoise(text) {
+  return NOISE_PATTERNS.some((p) => p.test(text));
+}
 function parseJSONL(filePath) {
   const raw = fs.readFileSync(filePath, "utf-8");
   const lines = raw.split("\n").filter((l) => l.trim());
@@ -48,6 +51,10 @@ function parseJSONL(filePath) {
   for (const line of lines) {
     try {
       const entry = JSON.parse(line);
+      if (entry.type === "system" && entry.subtype === "compact_boundary") {
+        messages.length = 0;
+        continue;
+      }
       if (entry.type !== "user" && entry.type !== "assistant") continue;
       const content = entry.message?.content;
       let text = "";
@@ -68,12 +75,22 @@ function parseJSONL(filePath) {
        }
      }
      if (!text.trim()) continue;
+      if (isNoise(text)) continue;
+      let inputTokens;
+      if (entry.type === "assistant") {
+        const usage = entry.message?.usage;
+        if (usage) {
+          const total = (usage.input_tokens ?? 0) + (usage.cache_creation_input_tokens ?? 0) + (usage.cache_read_input_tokens ?? 0);
+          if (total > 0) inputTokens = total;
+        }
+      }
      messages.push({
        id: entry.uuid,
        role: entry.message.role,
        content: text,
        timestamp: parseTimestamp(entry.timestamp),
-        ...toolTokens > 0 ? { toolTokens } : {}
+        ...toolTokens > 0 ? { toolTokens } : {},
+        ...inputTokens !== void 0 ? { inputTokens } : {}
      });
    } catch {
      skipped++;
@@ -189,13 +206,18 @@ function findLatestSession() {
   }
   return latestFile;
 }
-var fs, path, os;
+var fs, path, os, NOISE_PATTERNS;
 var init_claude_parser = __esm({
   "src/watchers/claude-parser.ts"() {
     "use strict";
     fs = __toESM(require("fs"));
     path = __toESM(require("path"));
     os = __toESM(require("os"));
+    NOISE_PATTERNS = [
+      /^Tool loaded\.\s*$/,
+      /^MCP server connected\.\s*$/,
+      /^MCP server disconnected\.\s*$/
+    ];
   }
 });
 
@@ -529,44 +551,23 @@ var init_types = __esm({
   "src/core/types.ts"() {
     "use strict";
     DEFAULT_WEIGHTS = {
-      contextSaturation: 0.2,
-      topicScatter: 0.12,
-      uncertaintySignals: 0.15,
-      codeInconsistency: 0.08,
-      repetition: 0.2,
-      goalDistance: 0.15,
-      confidenceDrift: 0.1
+      contextSaturation: 0.37,
+      uncertaintySignals: 0.02,
+      repetition: 0.37,
+      goalDistance: 0.08,
+      confidenceDrift: 0.01,
+      responseLengthCollapse: 0.15
     };
   }
 });
 
 // src/core/topic-analyzer.ts
-function calculateTopicEntropy(messages) {
-  if (messages.length < 3) return 0;
-  const capped = messages.length > TOPIC_ENTROPY_MSG_CAP ? messages.slice(-TOPIC_ENTROPY_MSG_CAP) : messages;
-  const windows = createWindows(capped, 3);
-  if (windows.length < 2) return 0;
-  const corpus = windows.map((w) => w.join(" "));
-  const tfidfVectors = buildTfidfVectors(corpus);
-  const consecutiveSims = [];
-  for (let i = 1; i < tfidfVectors.length; i++) {
-    consecutiveSims.push(cosineSimilarity(tfidfVectors[i - 1], tfidfVectors[i]));
-  }
-  const avgConsecutive = consecutiveSims.reduce((a, b) => a + b, 0) / consecutiveSims.length;
-  const wideSims = [];
-  for (let i = 3; i < tfidfVectors.length; i += 3) {
-    wideSims.push(cosineSimilarity(tfidfVectors[i - 3], tfidfVectors[i]));
-  }
-  const avgWide = wideSims.length > 0 ? wideSims.reduce((a, b) => a + b, 0) / wideSims.length : avgConsecutive;
-  const blendedSimilarity = avgConsecutive * 0.6 + avgWide * 0.4;
-  const entropy = Math.round((1 - blendedSimilarity) * 100);
-  return Math.min(100, Math.max(0, entropy));
-}
 function calculateAnchorDrift(messages, userGoal) {
+  if (!userGoal) return 0;
   if (messages.length < 4) return 0;
   const userMessages = messages.filter((m) => m.role === "user");
   if (userMessages.length < 2) return 0;
-  const anchorDoc = userGoal ? userGoal : userMessages.slice(0, Math.min(2, userMessages.length)).map((m) => m.content).join(" ");
+  const anchorDoc = userGoal;
   const recentMessages = messages.slice(-3);
   const recentDoc = recentMessages.map((m) => m.content).join(" ");
   if (!anchorDoc.trim() || !recentDoc.trim()) return 0;
@@ -579,25 +580,12 @@ function calculateAnchorDrift(messages, userGoal) {
   return Math.max(0, isNaN(score) ? 0 : score);
 }
 function calculateGoalDriftCheckpoints(messages, userGoal) {
-  if (messages.length < 4) {
-    return {
-      checkpoints: [],
-      trajectory: "stable",
-      averageScore: 0,
-      startToEndDrift: 0
-    };
-  }
+  const empty = { checkpoints: [], trajectory: "stable", averageScore: 0, startToEndDrift: 0 };
+  if (!userGoal) return empty;
+  if (messages.length < 4) return empty;
   const userMessages = messages.filter((m) => m.role === "user");
-  if (userMessages.length < 2) {
-    return {
-      checkpoints: [],
-      trajectory: "stable",
-      averageScore: 0,
-      startToEndDrift: 0
-    };
-  }
-  const anchorTexts = userGoal ? [userGoal] : userMessages.slice(0, Math.min(2, userMessages.length));
-  const anchorDoc = Array.isArray(anchorTexts) ? anchorTexts.map((m) => typeof m === "string" ? m : m.content).join(" ") : anchorTexts;
+  if (userMessages.length < 2) return empty;
+  const anchorDoc = userGoal;
   if (!anchorDoc.trim()) {
     return {
       checkpoints: [],
@@ -733,17 +721,6 @@ function tokenize(text) {
   const withoutCode = text.replace(/```[\s\S]*?```/g, " ");
   return withoutCode.toLowerCase().replace(/[^a-z0-9\s]/g, " ").split(/\s+/).filter((w) => w.length >= 4 && !STOP_WORDS.has(w));
 }
-function createWindows(messages, windowSize) {
-  const windows = [];
-  for (let i = 0; i <= messages.length - windowSize; i++) {
-    const windowTexts = messages.slice(i, i + windowSize).map((m) => {
-      const withoutCode = m.content.replace(/```[\s\S]*?```/g, " ");
-      return withoutCode.toLowerCase().replace(/[^a-z0-9\s]/g, " ").split(/\s+/).filter((w) => w.length >= 4 && !STOP_WORDS.has(w));
-    }).flat();
-    windows.push(windowTexts);
-  }
-  return windows;
-}
 function extractNgrams(text, n = 3) {
   const tokens = tokenize(text).filter((w) => !/^\d/.test(w));
   if (tokens.length < n) return /* @__PURE__ */ new Set();
@@ -785,11 +762,10 @@ function countWordSyllables(word) {
   if (clean.endsWith("le") && clean.length > 2 && !/[aeiouy]/.test(clean[clean.length - 3])) count++;
   return Math.max(1, count);
 }
-var TOPIC_ENTROPY_MSG_CAP, STOP_WORDS;
+var STOP_WORDS;
 var init_topic_analyzer = __esm({
   "src/core/topic-analyzer.ts"() {
     "use strict";
-    TOPIC_ENTROPY_MSG_CAP = 150;
     STOP_WORDS = /* @__PURE__ */ new Set([
       "about",
       "after",
1153
1129
  // src/core/contradiction-detector.ts
1154
1130
  function countContradictions(assistantMessages) {
1155
1131
  let count = 0;
1156
- for (let i = 0; i < assistantMessages.length; i++) {
1157
- const msg = assistantMessages[i];
1158
- const text = msg.content.toLowerCase();
1159
- count += countPatternMatches(text, CORRECTION_PATTERNS);
1160
- if (i > 0) {
1161
- const prevText = assistantMessages[i - 1].content.toLowerCase();
1162
- count += detectReversals(prevText, text);
1163
- }
1132
+ for (const msg of assistantMessages) {
1133
+ count += countPatternMatches(msg.content.toLowerCase(), CORRECTION_PATTERNS);
1164
1134
  }
1165
1135
  return count;
1166
1136
  }
@@ -1172,24 +1142,7 @@ function countPatternMatches(text, patterns) {
   }
   return count;
 }
-function detectReversals(prevText, currentText) {
-  let reversals = 0;
-  const hasReversalSignal = currentText.includes("actually") || currentText.includes("however") || currentText.includes("but ") || currentText.includes("instead") || // Dutch
-  currentText.includes("eigenlijk") || currentText.includes("echter") || currentText.includes("maar ") || // German
-  currentText.includes("eigentlich") || currentText.includes("allerdings") || currentText.includes("aber ") || currentText.includes("stattdessen") || // French
-  currentText.includes("en fait") || currentText.includes("cependant") || currentText.includes("mais ") || currentText.includes("plut\xF4t") || // Spanish
-  currentText.includes("en realidad") || currentText.includes("sin embargo") || currentText.includes("pero ") || currentText.includes("en cambio") || // Portuguese
-  currentText.includes("na verdade") || currentText.includes("no entanto") || currentText.includes("mas ") || currentText.includes("em vez");
-  if (hasReversalSignal) {
-    for (const negation of NEGATION_PAIRS) {
-      if (prevText.includes(negation.positive) && currentText.includes(negation.negative)) {
-        reversals++;
-      }
-    }
-  }
-  return reversals;
-}
-var CORRECTION_PATTERNS, NEGATION_PAIRS;
+var CORRECTION_PATTERNS;
 var init_contradiction_detector = __esm({
   "src/core/contradiction-detector.ts"() {
     "use strict";
@@ -1266,43 +1219,6 @@ var init_contradiction_detector = __esm({
       /meu erro/gi,
       /eu (?:me enganei|me confundi|errei)/gi
     ];
-    NEGATION_PAIRS = [
-      // English
-      { positive: "you should", negative: "you shouldn't" },
-      { positive: "you should", negative: "you should not" },
-      { positive: "recommend", negative: "don't recommend" },
-      { positive: "recommend", negative: "do not recommend" },
-      { positive: "best practice", negative: "not.+best practice" },
-      { positive: "you can", negative: "you can't" },
-      { positive: "you can", negative: "you cannot" },
-      { positive: "safe to", negative: "not safe to" },
-      { positive: "correct", negative: "incorrect" },
-      // Dutch
-      { positive: "je moet", negative: "je moet niet" },
-      { positive: "je kunt", negative: "je kunt niet" },
-      { positive: "aanbevolen", negative: "niet aanbevolen" },
-      { positive: "veilig", negative: "niet veilig" },
-      // German
-      { positive: "du solltest", negative: "du solltest nicht" },
-      { positive: "du kannst", negative: "du kannst nicht" },
-      { positive: "empfohlen", negative: "nicht empfohlen" },
-      { positive: "sicher", negative: "nicht sicher" },
-      // French
-      { positive: "vous devriez", negative: "vous ne devriez pas" },
-      { positive: "vous pouvez", negative: "vous ne pouvez pas" },
-      { positive: "recommand\xE9", negative: "pas recommand\xE9" },
-      { positive: "correct", negative: "pas correct" },
-      // Spanish
-      { positive: "deber\xEDas", negative: "no deber\xEDas" },
-      { positive: "puedes", negative: "no puedes" },
-      { positive: "recomendado", negative: "no recomendado" },
-      { positive: "seguro", negative: "no es seguro" },
-      // Portuguese
-      { positive: "voc\xEA deveria", negative: "voc\xEA n\xE3o deveria" },
-      { positive: "voc\xEA pode", negative: "voc\xEA n\xE3o pode" },
-      { positive: "recomendado", negative: "n\xE3o recomendado" },
-      { positive: "seguro", negative: "n\xE3o \xE9 seguro" }
-    ];
   }
 });
 
@@ -1336,38 +1252,8 @@ function trackConfidenceTrend(messages) {
   const trendScore = Math.max(0, lateAvg - earlyAvg);
   return Math.round(trendScore);
 }
-function detectNegationReversals(messages) {
-  const assistantMessages = messages.filter((m) => m.role === "assistant");
-  if (assistantMessages.length < 2) return 0;
-  let reversalCount = 0;
-  for (const msg of assistantMessages) {
-    const lower = msg.content.toLowerCase();
-    for (const pattern of HEDGING_PATTERNS.negationPatterns) {
-      const regex = new RegExp(`\\b${pattern}\\b`, "gi");
-      if (regex.test(lower)) {
-        reversalCount += 1;
-        break;
-      }
-    }
-  }
-  const baseScore = reversalCount / assistantMessages.length * 50;
-  const lateReversals = assistantMessages.slice(-Math.ceil(assistantMessages.length / 3));
-  const lateReversalCount = lateReversals.filter((m) => {
-    const lower = m.content.toLowerCase();
-    return HEDGING_PATTERNS.negationPatterns.some((p) => new RegExp(`\\b${p}\\b`, "i").test(lower));
-  }).length;
-  const lateBonus = lateReversalCount / Math.max(lateReversals.length, 1) * 30;
-  return Math.min(100, Math.round(baseScore + lateBonus));
-}
 function calculateConfidenceDrift(messages) {
-  if (messages.length < 2) return 0;
-  const hedgingScore = messages.filter((m) => m.role === "assistant").reduce((sum, m) => sum + detectHedgingLanguage(m.content), 0) / Math.max(messages.filter((m) => m.role === "assistant").length, 1);
-  const trendScore = trackConfidenceTrend(messages);
-  const reversalScore = detectNegationReversals(messages);
-  const composite = Math.round(
-    hedgingScore * 0.4 + trendScore * 0.35 + reversalScore * 0.25
-  );
-  return Math.min(100, composite);
+  return Math.min(100, trackConfidenceTrend(messages));
 }
 var HEDGING_PATTERNS;
 var init_confidence_analyzer = __esm({
@@ -1378,7 +1264,6 @@ var init_confidence_analyzer = __esm({
       modalUncertainty: [
         "might",
         "may",
-        "could",
         "could be",
         "appears to be",
         "seems to be",
@@ -1388,23 +1273,14 @@ var init_confidence_analyzer = __esm({
         "arguably",
         "allegedly"
       ],
-      // Adverbs of uncertainty
+      // Adverbs of genuine uncertainty
       uncertainAdverbs: [
         "probably",
         "likely",
         "perhaps",
         "maybe",
-        "somewhat",
-        "relatively",
-        "quite",
-        "fairly",
-        "rather",
         "approximately",
-        "roughly",
-        "sort of",
-        "kind of",
-        "a bit",
-        "a little"
+        "roughly"
       ],
       // Epistemic markers (subjective speech)
       epistemicMarkers: [
@@ -1415,35 +1291,10 @@ var init_confidence_analyzer = __esm({
         "it seems",
         "it appears",
         "I would say",
-        "I'd say"
-      ],
-      // Downgraders (reduce force of statement)
-      downgraders: [
-        "just",
-        "only",
-        "merely",
-        "simply",
-        "barely",
-        "scarcely",
-        "somewhat",
-        "not quite",
-        "not entirely",
-        "not fully"
-      ],
-      // Negation reversals (contradicting prior claims)
-      negationPatterns: [
-        "actually",
-        "wait",
-        "hold on",
-        "correction",
-        "let me correct",
-        "upon reflection",
-        "on second thought",
-        "rethinking",
-        "mistake",
-        "I was wrong",
-        "I apologize",
-        "I retract"
+        "I'd say",
+        "I'm not sure",
+        "I'm not certain",
+        "I'm unsure"
       ]
     };
   }
@@ -1463,15 +1314,14 @@ function calculateDrift(messages, weights = DEFAULT_WEIGHTS, userGoal) {
   }
   const factors = {
     contextSaturation: calcMessageDecay(messages),
-    topicScatter: calculateTopicEntropy(messages),
     uncertaintySignals: calcContradictionScore(messages),
-    codeInconsistency: calcCodeInconsistency(messages),
     repetition: calcRepetition(messages),
     goalDistance: calculateAnchorDrift(messages, userGoal),
-    confidenceDrift: calculateConfidenceDrift(messages)
+    confidenceDrift: calculateConfidenceDrift(messages),
+    responseLengthCollapse: calcResponseLengthCollapse(messages)
   };
   const score = Math.min(100, Math.max(0, Math.round(
-    factors.contextSaturation * weights.contextSaturation + factors.topicScatter * weights.topicScatter + factors.uncertaintySignals * weights.uncertaintySignals + factors.codeInconsistency * weights.codeInconsistency + factors.repetition * weights.repetition + factors.goalDistance * weights.goalDistance + factors.confidenceDrift * weights.confidenceDrift
+    factors.contextSaturation * weights.contextSaturation + factors.uncertaintySignals * weights.uncertaintySignals + factors.repetition * weights.repetition + factors.goalDistance * weights.goalDistance + factors.confidenceDrift * weights.confidenceDrift + factors.responseLengthCollapse * weights.responseLengthCollapse
   )));
   const level = scoreToLevel(score);
   const sessionDuration = messages[messages.length - 1].timestamp - messages[0].timestamp;
@@ -1531,16 +1381,6 @@ function calcContradictionScore(messages) {
   const score = Math.min(100, totalContradictions / 5 * 80);
   return Math.round(score);
 }
-function calcCodeInconsistency(messages) {
-  const codeBlocks = extractCodeBlocks(messages);
-  if (codeBlocks.length < 2) return 0;
-  const languages = new Set(
-    codeBlocks.map((b) => b.language).filter((l) => l !== "unknown")
-  );
-  if (languages.size <= 1) return 0;
-  const score = Math.min(100, Math.round(15 + (languages.size - 1) * 20));
-  return score;
-}
 function calcRepetition(messages) {
   if (messages.length < 8) return 0;
   const assistantMsgs = messages.filter((m) => m.role === "assistant").slice(-25);
@@ -1630,24 +1470,21 @@ function charSimilarity(a, b) {
   const bigramScore = union === 0 ? 1 : intersection / union;
   return Math.max(positionalScore, bigramScore);
 }
-function extractCodeBlocks(messages) {
-  const blocks = [];
-  for (const msg of messages) {
-    for (const match of msg.content.matchAll(/```(\w*)\n([\s\S]*?)```/g)) {
-      const language = detectLanguage(match[1], match[2]);
-      blocks.push({ language, content: match[2] });
-    }
-  }
-  return blocks;
-}
-function detectLanguage(label, content) {
-  if (label) return label.toLowerCase();
-  if (content.includes("import React") || content.includes("useState")) return "jsx";
-  if (content.includes("def ") && content.includes(":")) return "python";
-  if (content.includes("func ") && content.includes("{")) return "go";
-  if (content.includes("fn ") && content.includes("->")) return "rust";
-  if (content.includes("function") || content.includes("const ")) return "javascript";
-  return "unknown";
+function calcResponseLengthCollapse(messages) {
+  const assistantMsgs = messages.filter((m) => m.role === "assistant" && m.content.trim().length > 10);
+  if (assistantMsgs.length < 6) return 0;
+  const quarter = Math.max(2, Math.floor(assistantMsgs.length / 4));
+  const earlyMsgs = assistantMsgs.slice(0, quarter);
+  const lateMsgs = assistantMsgs.slice(-quarter);
+  const avgWords = (msgs) => msgs.reduce((sum, m) => sum + m.content.split(/\s+/).filter((w) => w.length > 0).length, 0) / msgs.length;
+  const earlyAvg = avgWords(earlyMsgs);
+  const lateAvg = avgWords(lateMsgs);
+  if (earlyAvg === 0) return 0;
+  const ratio = lateAvg / earlyAvg;
+  if (ratio >= 0.7) return 0;
+  if (ratio >= 0.5) return Math.round((0.7 - ratio) / 0.2 * 40);
+  if (ratio >= 0.3) return Math.round(40 + (0.5 - ratio) / 0.2 * 35);
+  return Math.min(100, Math.round(75 + (0.3 - ratio) / 0.3 * 25));
 }
 function generateRecommendations(score, factors) {
   const recs = [];
@@ -1658,15 +1495,9 @@ function generateRecommendations(score, factors) {
   if (factors.contextSaturation > 50) {
     recs.push("Long conversation \u2014 consider starting fresh with a summary of key decisions.");
   }
-  if (factors.topicScatter > 50) {
-    recs.push("Multiple topics detected \u2014 try to keep one topic per conversation.");
-  }
   if (factors.uncertaintySignals > 40) {
     recs.push("AI is self-correcting frequently \u2014 context may be confused. Re-state your requirements.");
   }
-  if (factors.codeInconsistency > 30) {
-    recs.push("Multiple languages/frameworks in one chat \u2014 start a new chat for each tech stack.");
-  }
   if (factors.repetition > 30) {
     recs.push("AI is repeating itself \u2014 context is degrading. Start a new conversation.");
   }
@@ -1676,6 +1507,9 @@ function generateRecommendations(score, factors) {
  if (factors.confidenceDrift > 40) {
  recs.push("AI confidence is declining \u2014 context may be becoming unreliable. Verify assumptions.");
  }
+ if (factors.responseLengthCollapse > 40) {
+ recs.push("AI responses are getting shorter \u2014 may be losing context depth. Consider starting fresh.");
+ }
  if (score > 80) {
  recs.push("Strongly recommend starting a new conversation. Copy your key context first.");
  }
@@ -1687,12 +1521,11 @@ function emptyAnalysis(weights) {
  level: "fresh",
  factors: {
  contextSaturation: 0,
- topicScatter: 0,
  uncertaintySignals: 0,
- codeInconsistency: 0,
  repetition: 0,
  goalDistance: 0,
- confidenceDrift: 0
+ confidenceDrift: 0,
+ responseLengthCollapse: 0
  },
  weights,
  messageCount: 0,
@@ -1771,45 +1604,41 @@ var init_config = __esm({
  os4 = __toESM(require("os"));
  init_types();
  WEIGHT_PRESETS = {
- /** Equal importance across all seven factors. */
+ /** Equal importance across all six factors. */
  strict: {
- contextSaturation: 1 / 7,
- topicScatter: 1 / 7,
- uncertaintySignals: 1 / 7,
- codeInconsistency: 1 / 7,
- repetition: 1 / 7,
- goalDistance: 1 / 7,
- confidenceDrift: 1 / 7
+ contextSaturation: 1 / 6,
+ uncertaintySignals: 1 / 6,
+ repetition: 1 / 6,
+ goalDistance: 1 / 6,
+ confidenceDrift: 1 / 6,
+ responseLengthCollapse: 1 / 6
  },
- /** Emphasises code consistency and repetition — good for focused coding sessions. */
+ /** Emphasises repetition, length collapse, and context depth — good for focused coding sessions. */
  coding: {
- contextSaturation: 0.2,
- topicScatter: 0.08,
+ contextSaturation: 0.24,
  uncertaintySignals: 0.1,
- codeInconsistency: 0.22,
- repetition: 0.25,
- goalDistance: 0.1,
- confidenceDrift: 0.05
+ repetition: 0.29,
+ goalDistance: 0.18,
+ confidenceDrift: 0.07,
+ responseLengthCollapse: 0.12
  },
- /** Emphasises topic stability and goal alignment — good for research or planning. */
+ /** Emphasises goal alignment — good for research or planning. */
  research: {
  contextSaturation: 0.15,
- topicScatter: 0.2,
  uncertaintySignals: 0.15,
- codeInconsistency: 0.05,
- repetition: 0.15,
- goalDistance: 0.25,
- confidenceDrift: 0.05
+ repetition: 0.13,
+ goalDistance: 0.45,
+ confidenceDrift: 0.07,
+ responseLengthCollapse: 0.05
  },
- /** Forgiving preset for brainstorming — topic scatter is not penalised heavily. */
+ /** Forgiving preset for brainstorming. */
  brainstorm: {
- contextSaturation: 0.25,
- topicScatter: 0.05,
- uncertaintySignals: 0.15,
- codeInconsistency: 0.05,
+ contextSaturation: 0.22,
+ uncertaintySignals: 0.13,
  repetition: 0.25,
- goalDistance: 0.1,
- confidenceDrift: 0.15
+ goalDistance: 0.12,
+ confidenceDrift: 0.18,
+ responseLengthCollapse: 0.1
  }
  };
  DEFAULT_CONFIG = {
@@ -2004,12 +1833,11 @@ var init_ui = __esm({
  };
  FACTOR_LABELS = {
  contextSaturation: "Context Saturation",
- topicScatter: "Topic Scatter",
  uncertaintySignals: "Uncertainty",
- codeInconsistency: "Code Inconsistency",
  repetition: "Repetition",
  goalDistance: "Goal Distance",
- confidenceDrift: "Confidence Drift"
+ confidenceDrift: "Confidence Drift",
+ responseLengthCollapse: "Length Collapse"
  };
  }
  });
@@ -2157,6 +1985,55 @@ var mcp_server_exports = {};
  __export(mcp_server_exports, {
  main: () => main
  });
+ function bar2(score, width = 10) {
+ const filled = Math.round(Math.min(100, Math.max(0, score)) / 100 * width);
+ return "\u2588".repeat(filled) + "\u2591".repeat(width - filled);
+ }
+ function buildDriftOutput(analysis, messageCount, trendLine, adapterTag) {
+ const { factors, score } = analysis;
+ const needsFreshNow = factors.contextSaturation > 70 || factors.repetition > 65;
+ const needsFreshSoon = factors.contextSaturation > 50 || factors.repetition > 45;
+ const warming = factors.contextSaturation > 35 || factors.repetition > 30;
+ let headline;
+ if (needsFreshNow) {
+ const reasons = [];
+ if (factors.contextSaturation > 70) reasons.push("context is full");
+ if (factors.repetition > 65) reasons.push("responses are repeating heavily");
+ headline = `\u26A0\uFE0F Start fresh now \u2014 ${reasons.join(" and ")}.`;
+ } else if (needsFreshSoon) {
+ const reasons = [];
+ if (factors.contextSaturation > 50) reasons.push("context is getting deep");
+ if (factors.repetition > 45) reasons.push("some repetition detected");
+ headline = `\u{1F7E1} Start fresh soon \u2014 ${reasons.join(" and ")}.`;
+ } else if (warming) {
+ headline = `\u{1F7E1} Context is warming up \u2014 no action needed yet.`;
+ } else {
+ headline = `\u2705 Context is healthy.`;
+ }
+ const rows = [];
+ const row = (label, val) => {
+ rows.push(` ${label.padEnd(20)} ${bar2(val)} ${String(Math.round(val)).padStart(3)}`);
+ };
+ row("Context depth", factors.contextSaturation);
+ row("Repetition", factors.repetition);
+ if (factors.responseLengthCollapse > 5) row("Length collapse", factors.responseLengthCollapse);
+ if (factors.goalDistance > 20) row("Goal distance", factors.goalDistance);
+ if (factors.uncertaintySignals > 10) row("Uncertainty", factors.uncertaintySignals);
+ if (factors.confidenceDrift > 10) row("Confidence drift", factors.confidenceDrift);
+ const lines = [
+ headline,
+ "",
+ ...rows,
+ "",
+ `Score: ${score}/100 \xB7 ${messageCount} messages${adapterTag}`
+ ];
+ if (trendLine) lines.push(trendLine);
+ const shouldHandoff = factors.contextSaturation > 60 || factors.repetition > 50;
+ if (shouldHandoff) {
+ lines.push("", "\u2192 Call get_handoff() to write handoff.md before starting fresh.");
+ }
+ return lines.join("\n");
+ }
  function buildHandoff() {
  return [
  `Please write a \`handoff.md\` file in the current working directory with the following structure:`,
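The `bar2` helper added in this hunk drives the per-factor rows in the new `get_drift` output. Its gauge logic, lifted into a self-contained sketch for illustration:

```javascript
// Same logic as the bundle's bar2: clamp the score to 0-100, then fill a
// fixed-width gauge of full/empty block characters proportionally.
function bar(score, width = 10) {
  const filled = Math.round(Math.min(100, Math.max(0, score)) / 100 * width);
  return "\u2588".repeat(filled) + "\u2591".repeat(width - filled);
}

console.log(bar(0));   // ░░░░░░░░░░
console.log(bar(73));  // ███████░░░
console.log(bar(100)); // ██████████
```

Out-of-range inputs are clamped, so `bar(150)` renders the same full gauge as `bar(100)`.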
@@ -2183,7 +2060,7 @@ async function main() {
  const transport = new import_stdio.StdioServerTransport();
  await server.connect(transport);
  }
- var path9, import_server, import_stdio, import_types3, config, resolver, storage, server, LEVEL_EMOJI2;
+ var path9, import_server, import_stdio, import_types3, config, resolver, storage, server;
  var init_mcp_server = __esm({
  "src/mcp-server.ts"() {
  "use strict";
@@ -2193,7 +2070,6 @@ var init_mcp_server = __esm({
  import_types3 = require("@modelcontextprotocol/sdk/types.js");
  init_session_resolver();
  init_drift_calculator();
- init_types();
  init_config();
  init_storage();
  init_ui();
@@ -2208,8 +2084,16 @@ var init_mcp_server = __esm({
  tools: [
  {
  name: "get_drift",
- description: "Returns the current drift score and factor breakdown for the active Claude Code session. Call this to check if the conversation context is degrading.",
- inputSchema: { type: "object", properties: {} }
+ description: 'Returns the current drift score and factor breakdown for the active Claude Code session. Call this to check if the conversation context is degrading. Optionally pass a "goal" string to anchor goalDistance scoring to a specific objective.',
+ inputSchema: {
+ type: "object",
+ properties: {
+ goal: {
+ type: "string",
+ description: "Optional: the user's original goal or task for this session. Improves goalDistance accuracy."
+ }
+ }
+ }
  },
  {
  name: "get_handoff",
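With the expanded `inputSchema` above, `get_drift` now accepts an optional `goal` argument. Over MCP's stdio transport, a client invokes it with a standard `tools/call` request; an illustrative payload (the goal text here is hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_drift",
    "arguments": { "goal": "Migrate the auth module from sessions to JWTs" }
  }
}
```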
@@ -2223,12 +2107,6 @@ var init_mcp_server = __esm({
  }
  ]
  }));
- LEVEL_EMOJI2 = {
- fresh: "\u{1F7E2}",
- warming: "\u{1F7E1}",
- drifting: "\u{1F534}",
- polluted: "\u26AB"
- };
  server.setRequestHandler(import_types3.CallToolRequestSchema, async (request) => {
  let sessionFile;
  try {
@@ -2256,9 +2134,8 @@ var init_mcp_server = __esm({
  chatId: "cli",
  ...m.toolTokens !== void 0 ? { toolTokens: m.toolTokens } : {}
  }));
- const analysis = calculateDrift(chatMessages, config.weights);
- const level = scoreToLevel(analysis.score);
- const emoji = LEVEL_EMOJI2[level] ?? "\u2753";
+ const goal = typeof request.params.arguments?.goal === "string" ? request.params.arguments.goal : void 0;
+ const analysis = calculateDrift(chatMessages, config.weights, goal);
  const adapterTag = adapter.name !== "claude" ? ` (${adapter.name})` : "";
  if (request.params.name === "get_drift") {
  let trendLine = "";
@@ -2274,22 +2151,7 @@ var init_mcp_server = __esm({
  trendLine = `Trend (last ${scores.length}): ${sparkline(scores)} ${sign}${delta} over ${scores.length} checks ${arrow}`;
  }
  }
- const factors = Object.entries(analysis.factors).map(([k, v]) => ` ${k}: ${v.toFixed(1)}`).join("\n");
- const isDegrading = analysis.score > config.warnThreshold;
- const lines = [
- `Drift Score: ${analysis.score} ${emoji} ${level.toUpperCase()}${adapterTag}`,
- `Messages: ${messages.length}`,
- ``,
- `Factor breakdown:`,
- factors,
- ``,
- isDegrading ? `\u26A0\uFE0F Context is degrading.` : `Context is healthy.`
- ];
- if (trendLine) lines.push(``, trendLine);
- if (isDegrading) {
- lines.push(``, `---`, buildHandoff());
- }
- return { content: [{ type: "text", text: lines.join("\n") }] };
+ return { content: [{ type: "text", text: buildDriftOutput(analysis, messages.length, trendLine, adapterTag) }] };
  }
  if (request.params.name === "get_handoff") {
  return { content: [{ type: "text", text: buildHandoff() }] };
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "driftguard-mcp",
- "version": "0.1.7",
+ "version": "0.1.9",
  "description": "Real-time AI conversation drift monitor — MCP server for Claude Code, Gemini CLI, Codex CLI, and Cursor",
  "main": "dist/bin.js",
  "bin": {