mcp-memory-keeper 0.10.0 → 0.10.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -7,8 +7,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.10.1] - 2025-01-11
+
 ### Fixed
 
+- **Token Limit Enforcement** - Fixed MCP protocol token limit errors
+
+  - Added automatic response truncation when approaching 25,000 token limit
+  - Implemented `calculateSafeItemCount()` helper to determine safe result size
+  - Enhanced pagination metadata with `truncated` and `truncatedCount` fields
+  - Improved warning messages with specific pagination instructions
+  - Prevents "response exceeds maximum allowed tokens" errors from MCP clients
+
 - **Pagination Defaults in context_get** - Improved consistency
   - Added proper validation of pagination parameters at handler level
   - Default limit of 100 items now properly applied when not specified
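The truncation described above hinges on a rough characters-per-token estimate (the `estimateTokens` helper shipped in `dist/index.js`, shown later in this diff). A minimal sketch of how that heuristic trips the limit:

```javascript
// estimateTokens as shipped in dist/index.js: roughly 4 characters per token.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// A 120,000-character response estimates to 30,000 tokens, which is over
// MCP's 25,000-token cap, so the new truncation logic would kick in.
const bigResponse = 'X'.repeat(120000);
console.log(estimateTokens(bigResponse)); // 30000
console.log(estimateTokens(bigResponse) > 25000); // true
```

The heuristic is deliberately coarse; the server compensates by enforcing a conservative 20,000-token internal limit plus a 10% buffer.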
package/README.md CHANGED
@@ -1,11 +1,106 @@
 # MCP Memory Keeper - Claude Code Context Management
 
+[![npm version](https://img.shields.io/npm/v/mcp-memory-keeper.svg)](https://www.npmjs.com/package/mcp-memory-keeper)
+[![npm downloads](https://img.shields.io/npm/dm/mcp-memory-keeper.svg)](https://www.npmjs.com/package/mcp-memory-keeper)
 [![CI](https://github.com/mkreyman/mcp-memory-keeper/actions/workflows/ci.yml/badge.svg)](https://github.com/mkreyman/mcp-memory-keeper/actions/workflows/ci.yml)
 [![codecov](https://codecov.io/gh/mkreyman/mcp-memory-keeper/branch/main/graph/badge.svg)](https://codecov.io/gh/mkreyman/mcp-memory-keeper)
-[![npm version](https://badge.fury.io/js/mcp-memory-keeper.svg)](https://badge.fury.io/js/mcp-memory-keeper)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
 A Model Context Protocol (MCP) server that provides persistent context management for Claude AI coding assistants. Never lose context during compaction again! This MCP server helps Claude Code maintain context across sessions, preserving your work history, decisions, and progress.
 
+## 🚀 Quick Start
+
+Get started in under 30 seconds:
+
+```bash
+# Add memory-keeper to Claude
+claude mcp add memory-keeper npx mcp-memory-keeper
+
+# Start a new Claude session and use it!
+# Try: Analyze the current repo and save your analysis in memory-keeper
+```
+
+That's it! Memory Keeper is now available in all your Claude sessions. Your context is stored in `~/mcp-data/memory-keeper/` and persists across sessions.
+
+## 🚀 Practical Memory Keeper Workflow Example
+
+### **Custom Command + CLAUDE.md = Automatic Context Management**
+
+#### **CLAUDE.md** (condensed example)
+
+```markdown
+# Project Configuration
+
+## Development Rules
+
+- Always use memory-keeper to track progress
+- Save architectural decisions and test results
+- Create checkpoints before context limits
+
+## Quality Standards
+
+- All tests must pass before marking complete
+- Document actual vs claimed results
+```
+
+#### **Custom Command Example: `/my-dev-workflow`**
+
+```markdown
+# My Development Workflow
+
+When working on the provided project:
+
+- Use memory-keeper with channel: <project_name>
+- Save progress at every major milestone
+- Document all decisions with category: "decision"
+- Track implementation status with category: "progress"
+- Before claiming anything is complete, save test results
+
+## Workflow Steps
+
+1. Initialize session with project name as channel
+2. Save findings during investigation
+3. Create checkpoint before major changes
+4. Document what actually works vs what should work
+```
+
+#### **Usage Example**
+
+```
+User: /my-dev-workflow authentication-service
+
+AI: Setting up workflow for authentication-service.
+[Uses memory-keeper with channel "authentication-service"]
+
+[... AI works, automatically saving context ...]
+
+User: "Getting close to context limit. Create checkpoint and give me a key"
+
+AI: "Checkpoint created: authentication-service-checkpoint-20250126-143026"
+
+[Continue working until context reset or compact manually]
+
+User: "Restore from key: authentication-service-checkpoint-20250126-143026"
+
+AI: "Restored! Continuing OAuth implementation. We completed the token validation, working on refresh logic..."
+```
+
+**The Pattern:**
+
+1. Custom command includes instructions to use memory-keeper
+2. AI follows those instructions automatically
+3. **When you notice the conversation getting long, YOU ask Claude to save a checkpoint** (like saving your game before a boss fight!)
+4. **When Claude runs out of space and starts fresh, YOU tell it to restore using the checkpoint key**
+
+**🎯 Key Feature:** Memory Keeper is a shared board! You can:
+
+- Continue in the same session after reset
+- Start a completely new session and restore
+- Have multiple Claude sessions running in parallel, all sharing the same memory
+- One session can save context that another session retrieves
+
+This enables powerful workflows like having one Claude session doing research while another implements code, both sharing discoveries through Memory Keeper!
+
 ## Why MCP Memory Keeper?
 
 Claude Code users often face context loss when the conversation window fills up. This MCP server solves that problem by providing a persistent memory layer for Claude AI. Whether you're working on complex refactoring, multi-file changes, or long debugging sessions, Memory Keeper ensures your Claude assistant remembers important context, decisions, and progress.
@@ -39,31 +134,33 @@ Claude Code users often face context loss when the conversation window fills up.
 
 ## Installation
 
-### Method 1: NPX (Recommended)
-
-The simplest way to use memory-keeper with Claude:
+### Recommended: NPX Installation
 
 ```bash
 claude mcp add memory-keeper npx mcp-memory-keeper
 ```
 
-That's it! This will:
+This single command:
 
-- Always use the latest version
-- Handle all dependencies automatically
-- Create the data directory at `~/mcp-data/memory-keeper/`
-- Work across different platforms
+- Always uses the latest version
+- Handles all dependencies automatically
+- Works across macOS, Linux, and Windows
+- No manual building or native module issues
 
-### Method 2: Global Installation
+### Alternative Installation Methods
 
-If you prefer a global installation:
+<details>
+<summary>Global Installation</summary>
 
 ```bash
 npm install -g mcp-memory-keeper
 claude mcp add memory-keeper mcp-memory-keeper
 ```
 
-### Method 3: From Source
+</details>
+
+<details>
+<summary>From Source (for development)</summary>
 
 ```bash
 # 1. Clone the repository
@@ -80,6 +177,8 @@ npm run build
 claude mcp add memory-keeper node /absolute/path/to/mcp-memory-keeper/dist/index.js
 ```
 
+</details>
+
 ## Configuration
 
 ### Environment Variables
@@ -96,13 +195,13 @@ Choose where to save the configuration:
 
 ```bash
 # Project-specific (default) - only for you in this project
-claude mcp add memory-keeper node /path/to/mcp-memory-keeper/dist/index.js
+claude mcp add memory-keeper npx mcp-memory-keeper
 
 # Shared with team via .mcp.json
-claude mcp add --scope project memory-keeper node /path/to/mcp-memory-keeper/dist/index.js
+claude mcp add --scope project memory-keeper npx mcp-memory-keeper
 
 # Available across all your projects
-claude mcp add --scope user memory-keeper node /path/to/mcp-memory-keeper/dist/index.js
+claude mcp add --scope user memory-keeper npx mcp-memory-keeper
 ```
 
 #### Verify Configuration
@@ -126,20 +225,14 @@ claude mcp get memory-keeper
 {
   "mcpServers": {
     "memory-keeper": {
-      "command": "node",
-      "args": ["/absolute/path/to/mcp-memory-keeper/dist/index.js"]
+      "command": "npx",
+      "args": ["mcp-memory-keeper"]
     }
   }
 }
 ```
 
-**Important**: Replace `/absolute/path/to/mcp-memory-keeper` with the actual path where you cloned/installed the project.
-
-### Example paths:
-
-- macOS: `/Users/username/projects/mcp-memory-keeper/dist/index.js`
-- Windows: `C:\\Users\\username\\projects\\mcp-memory-keeper\\dist\\index.js`
-- Linux: `/home/username/projects/mcp-memory-keeper/dist/index.js`
+That's it! No paths needed - npx automatically handles everything.
 
 ### Verify Installation
 
@@ -166,7 +259,7 @@ If Memory Keeper isn't working:
 ```bash
 # Remove and re-add the server
 claude mcp remove memory-keeper
-claude mcp add memory-keeper node /absolute/path/to/mcp-memory-keeper/dist/index.js
+claude mcp add memory-keeper npx mcp-memory-keeper
 
 # Check logs for errors
 # The server output will appear in Claude Code's output panel
@@ -174,26 +267,19 @@ claude mcp add memory-keeper node /absolute/path/to/mcp-memory-keeper/dist/index
 
 ### Updating to Latest Version
 
-To get the latest features and bug fixes:
-
-```bash
-# 1. Navigate to your Memory Keeper directory
-cd /path/to/mcp-memory-keeper
+With the npx installation method, you automatically get the latest version every time! No manual updates needed.
 
-# 2. Pull the latest changes
-git pull
+If you're using the global installation method:
 
-# 3. Install any new dependencies (if package.json changed)
-npm install
-
-# 4. Rebuild the project
-npm run build
+```bash
+# Update to latest version
+npm update -g mcp-memory-keeper
 
-# 5. Start a new Claude session
+# Start a new Claude session
 # The updated features will be available immediately
 ```
 
-**Note**: You don't need to reconfigure the MCP server in Claude after updating. Just pull, build, and start a new session!
+**Note**: You don't need to reconfigure the MCP server in Claude after updating. Just start a new session!
 
 ## Usage
 
@@ -27,7 +27,9 @@ const serverPath = path.join(__dirname, '..', 'dist', 'index.js');
 // Check if the server is built
 if (!fs.existsSync(serverPath)) {
   console.error('Error: Server not built. This should not happen with the npm package.');
-  console.error('Please report this issue at: https://github.com/mkreyman/mcp-memory-keeper/issues');
+  console.error(
+    'Please report this issue at: https://github.com/mkreyman/mcp-memory-keeper/issues'
+  );
   process.exit(1);
 }
 
@@ -37,16 +39,16 @@ process.chdir(DATA_DIR);
 // Spawn the server
 const child = spawn(process.execPath, [serverPath, ...process.argv.slice(2)], {
   stdio: 'inherit',
-  env: process.env
+  env: process.env,
 });
 
 // Handle exit
-child.on('exit', (code) => {
+child.on('exit', code => {
   process.exit(code);
 });
 
 // Handle errors
-child.on('error', (err) => {
+child.on('error', err => {
   console.error('Failed to start memory-keeper server:', err);
   process.exit(1);
-});
+});
@@ -0,0 +1,134 @@
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+const globals_1 = require("@jest/globals");
+// Helper functions from the main index.ts file
+function estimateTokens(text) {
+    return Math.ceil(text.length / 4);
+}
+function calculateSafeItemCount(items, tokenLimit) {
+    if (items.length === 0)
+        return 0;
+    let safeCount = 0;
+    let currentTokens = 0;
+    // Include base response structure in token calculation
+    const baseResponse = {
+        items: [],
+        pagination: {
+            total: 0,
+            returned: 0,
+            offset: 0,
+            hasMore: false,
+            nextOffset: null,
+            totalCount: 0,
+            page: 1,
+            pageSize: 0,
+            totalPages: 1,
+            hasNextPage: false,
+            hasPreviousPage: false,
+            previousOffset: null,
+            totalSize: 0,
+            averageSize: 0,
+            defaultsApplied: {},
+            truncated: false,
+            truncatedCount: 0,
+        },
+    };
+    // Estimate tokens for base response structure
+    const baseTokens = estimateTokens(JSON.stringify(baseResponse, null, 2));
+    currentTokens = baseTokens;
+    // Add items one by one until we approach the token limit
+    for (let i = 0; i < items.length; i++) {
+        const itemTokens = estimateTokens(JSON.stringify(items[i], null, 2));
+        // Leave some buffer (10%) to account for formatting and additional metadata
+        if (currentTokens + itemTokens > tokenLimit * 0.9) {
+            break;
+        }
+        currentTokens += itemTokens;
+        safeCount++;
+    }
+    // Always return at least 1 item if any exist, even if it exceeds limit
+    // This prevents infinite loops and ensures progress
+    return Math.max(safeCount, items.length > 0 ? 1 : 0);
+}
+(0, globals_1.describe)('Token Limit Enforcement Unit Tests', () => {
+    (0, globals_1.describe)('calculateSafeItemCount', () => {
+        (0, globals_1.it)('should return 0 for empty items array', () => {
+            const result = calculateSafeItemCount([], 20000);
+            (0, globals_1.expect)(result).toBe(0);
+        });
+        (0, globals_1.it)('should return at least 1 item if any exist', () => {
+            const largeItem = {
+                key: 'large.item',
+                value: 'X'.repeat(100000), // Very large item
+                category: 'test',
+                priority: 'high',
+            };
+            const result = calculateSafeItemCount([largeItem], 20000);
+            (0, globals_1.expect)(result).toBe(1);
+        });
+        (0, globals_1.it)('should truncate items when approaching token limit', () => {
+            // Create multiple medium-sized items
+            const items = [];
+            for (let i = 0; i < 50; i++) {
+                items.push({
+                    key: `item.${i}`,
+                    value: 'This is a medium-sized test value that contains enough text to trigger token limit enforcement when many items are returned together. '.repeat(20),
+                    category: 'test',
+                    priority: 'high',
+                });
+            }
+            const result = calculateSafeItemCount(items, 20000);
+            (0, globals_1.expect)(result).toBeLessThan(50);
+            (0, globals_1.expect)(result).toBeGreaterThan(0);
+        });
+        (0, globals_1.it)('should handle small items that all fit within limit', () => {
+            const items = [];
+            for (let i = 0; i < 10; i++) {
+                items.push({
+                    key: `small.item.${i}`,
+                    value: 'Small value',
+                    category: 'test',
+                    priority: 'high',
+                });
+            }
+            const result = calculateSafeItemCount(items, 20000);
+            (0, globals_1.expect)(result).toBe(10);
+        });
+        (0, globals_1.it)('should respect token limit with buffer', () => {
+            // Create items that would exceed token limit
+            const items = [];
+            const itemValue = 'X'.repeat(2000); // 2KB item that will definitely cause truncation
+            for (let i = 0; i < 100; i++) {
+                items.push({
+                    key: `large.buffer.item.${i}`,
+                    value: itemValue,
+                    category: 'test',
+                    priority: 'high',
+                });
+            }
+            const result = calculateSafeItemCount(items, 20000);
+            // Should be significantly less than all items due to token limits
+            (0, globals_1.expect)(result).toBeLessThan(100);
+            (0, globals_1.expect)(result).toBeGreaterThan(0);
+            // Verify that the result respects the buffer by checking actual tokens
+            const actualTokens = result * estimateTokens(JSON.stringify(items[0], null, 2));
+            (0, globals_1.expect)(actualTokens).toBeLessThan(20000 * 0.9); // Should be under 90% of limit
+        });
+    });
+    (0, globals_1.describe)('estimateTokens', () => {
+        (0, globals_1.it)('should estimate tokens correctly', () => {
+            const text = 'This is a test string';
+            const tokens = estimateTokens(text);
+            (0, globals_1.expect)(tokens).toBe(Math.ceil(text.length / 4));
+        });
+        (0, globals_1.it)('should handle empty strings', () => {
+            const tokens = estimateTokens('');
+            (0, globals_1.expect)(tokens).toBe(0);
+        });
+        (0, globals_1.it)('should handle large strings', () => {
+            const largeText = 'X'.repeat(10000);
+            const tokens = estimateTokens(largeText);
+            (0, globals_1.expect)(tokens).toBe(2500); // 10000 / 4
+        });
+    });
+});
package/dist/index.js CHANGED
@@ -185,6 +185,52 @@ function calculateResponseMetrics(items) {
     const averageSize = items.length > 0 ? Math.round(totalSize / items.length) : 0;
     return { totalSize, estimatedTokens, averageSize };
 }
+// Helper to calculate how many items can fit within token limit
+function calculateSafeItemCount(items, tokenLimit) {
+    if (items.length === 0)
+        return 0;
+    let safeCount = 0;
+    let currentTokens = 0;
+    // Include base response structure in token calculation
+    const baseResponse = {
+        items: [],
+        pagination: {
+            total: 0,
+            returned: 0,
+            offset: 0,
+            hasMore: false,
+            nextOffset: null,
+            totalCount: 0,
+            page: 1,
+            pageSize: 0,
+            totalPages: 1,
+            hasNextPage: false,
+            hasPreviousPage: false,
+            previousOffset: null,
+            totalSize: 0,
+            averageSize: 0,
+            defaultsApplied: {},
+            truncated: false,
+            truncatedCount: 0,
+        },
+    };
+    // Estimate tokens for base response structure
+    const baseTokens = estimateTokens(JSON.stringify(baseResponse, null, 2));
+    currentTokens = baseTokens;
+    // Add items one by one until we approach the token limit
+    for (let i = 0; i < items.length; i++) {
+        const itemTokens = estimateTokens(JSON.stringify(items[i], null, 2));
+        // Leave some buffer (10%) to account for formatting and additional metadata
+        if (currentTokens + itemTokens > tokenLimit * 0.9) {
+            break;
+        }
+        currentTokens += itemTokens;
+        safeCount++;
+    }
+    // Always return at least 1 item if any exist, even if it exceeds limit
+    // This prevents infinite loops and ensures progress
+    return Math.max(safeCount, items.length > 0 ? 1 : 0);
+}
 // Helper to parse relative time strings
 function parseRelativeTime(relativeTime) {
     const now = new Date();
@@ -625,16 +671,35 @@ server.setRequestHandler(types_js_1.CallToolRequestSchema, async (request) => {
         // Calculate response metrics
         const metrics = calculateResponseMetrics(result.items);
         const TOKEN_LIMIT = 20000; // Conservative limit to stay well under MCP's 25k limit
-        // Check if we're approaching token limits
+        // Check if we're approaching token limits and enforce truncation
         const isApproachingLimit = metrics.estimatedTokens > TOKEN_LIMIT;
+        let actualItems = result.items;
+        let wasTruncated = false;
+        let truncatedCount = 0;
+        if (isApproachingLimit) {
+            // Calculate how many items we can safely return
+            const safeItemCount = calculateSafeItemCount(result.items, TOKEN_LIMIT);
+            if (safeItemCount < result.items.length) {
+                actualItems = result.items.slice(0, safeItemCount);
+                wasTruncated = true;
+                truncatedCount = result.items.length - safeItemCount;
+            }
+        }
         // Calculate pagination metadata
         // Use the validated limit and offset from paginationValidation
         const effectiveLimit = limit; // Already validated and defaulted
         const effectiveOffset = offset; // Already validated and defaulted
         const currentPage = effectiveLimit > 0 ? Math.floor(effectiveOffset / effectiveLimit) + 1 : 1;
         const totalPages = effectiveLimit > 0 ? Math.ceil(result.totalCount / effectiveLimit) : 1;
-        const hasNextPage = currentPage < totalPages;
+        // Update pagination to account for truncation
+        const hasNextPage = wasTruncated || currentPage < totalPages;
         const hasPreviousPage = currentPage > 1;
+        // Calculate next offset accounting for truncation
+        const nextOffset = hasNextPage
+            ? wasTruncated
+                ? effectiveOffset + actualItems.length
+                : effectiveOffset + effectiveLimit
+            : null;
         // Track whether defaults were applied
         const defaultsApplied = {
             limit: rawLimit === undefined,
@@ -642,7 +707,7 @@ server.setRequestHandler(types_js_1.CallToolRequestSchema, async (request) => {
         };
         // Enhanced response format
         if (includeMetadata) {
-            const itemsWithMetadata = result.items.map(item => ({
+            const itemsWithMetadata = actualItems.map(item => ({
                 key: item.key,
                 value: item.value,
                 category: item.category,
@@ -657,10 +722,10 @@ server.setRequestHandler(types_js_1.CallToolRequestSchema, async (request) => {
             items: itemsWithMetadata,
             pagination: {
                 total: result.totalCount,
-                returned: result.items.length,
+                returned: actualItems.length,
                 offset: effectiveOffset,
                 hasMore: hasNextPage,
-                nextOffset: hasNextPage ? effectiveOffset + effectiveLimit : null,
+                nextOffset: nextOffset,
                 // Extended pagination metadata
                 totalCount: result.totalCount,
                 page: currentPage,
@@ -676,12 +741,20 @@ server.setRequestHandler(types_js_1.CallToolRequestSchema, async (request) => {
                 averageSize: metrics.averageSize,
                 // Defaults applied
                 defaultsApplied: defaultsApplied,
+                // Truncation information
+                truncated: wasTruncated,
+                truncatedCount: truncatedCount,
             },
         };
         // Add warning if approaching token limits
         if (isApproachingLimit) {
-            response.pagination.warning =
-                'Large result set. Consider using smaller limit or more specific filters.';
+            if (wasTruncated) {
+                response.pagination.warning = `Response truncated due to token limits. ${truncatedCount} items omitted. Use pagination with offset=${nextOffset} to retrieve remaining items.`;
+            }
+            else {
+                response.pagination.warning =
+                    'Large result set. Consider using smaller limit or more specific filters.';
+            }
         }
         return {
             content: [
@@ -694,19 +767,27 @@ server.setRequestHandler(types_js_1.CallToolRequestSchema, async (request) => {
         }
         // Return enhanced format for all queries to support pagination
         const response = {
-            items: result.items,
+            items: actualItems,
             pagination: {
                 total: result.totalCount,
-                returned: result.items.length,
+                returned: actualItems.length,
                 offset: effectiveOffset,
                 hasMore: hasNextPage,
-                nextOffset: hasNextPage ? effectiveOffset + effectiveLimit : null,
+                nextOffset: nextOffset,
+                // Truncation information
+                truncated: wasTruncated,
+                truncatedCount: truncatedCount,
             },
         };
         // Add warning if approaching token limits
         if (isApproachingLimit) {
-            response.pagination.warning =
-                'Large result set. Consider using smaller limit or more specific filters.';
+            if (wasTruncated) {
+                response.pagination.warning = `Response truncated due to token limits. ${truncatedCount} items omitted. Use pagination with offset=${nextOffset} to retrieve remaining items.`;
+            }
+            else {
+                response.pagination.warning =
+                    'Large result set. Consider using smaller limit or more specific filters.';
+            }
         }
         return {
             content: [
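From a client's perspective, the new `truncated` and `nextOffset` fields make retrieving a large result set mechanical: follow `nextOffset` until the server reports no more items. A hypothetical consumer loop (`fetchPage` is a stand-in for whatever issues the actual `context_get` call, not part of this package):

```javascript
// Hypothetical client loop: keep following pagination.nextOffset until the
// server reports no more items. Truncation-adjusted offsets are respected
// automatically, since the server now sets nextOffset to the first omitted item.
async function fetchAllItems(fetchPage) {
  const all = [];
  let offset = 0;
  for (;;) {
    const { items, pagination } = await fetchPage(offset);
    all.push(...items);
    if (!pagination.hasMore || pagination.nextOffset === null) break;
    offset = pagination.nextOffset;
  }
  return all;
}

// Mock server: 5 items served in truncated pages of 2, mimicking token-limit cuts.
const data = ['a', 'b', 'c', 'd', 'e'];
function mockFetchPage(offset) {
  const items = data.slice(offset, offset + 2);
  const hasMore = offset + items.length < data.length;
  return Promise.resolve({
    items,
    pagination: {
      truncated: hasMore,
      truncatedCount: data.length - offset - items.length,
      hasMore,
      nextOffset: hasMore ? offset + items.length : null,
    },
  });
}

fetchAllItems(mockFetchPage).then(all => console.log(all.join(''))); // "abcde"
```

The key design point in the fix above is that `nextOffset` advances by the number of items *actually returned* when truncation occurs, rather than by the requested limit, so no items are skipped between pages.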
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "mcp-memory-keeper",
-  "version": "0.10.0",
+  "version": "0.10.1",
   "description": "MCP server for persistent context management in AI coding assistants",
   "main": "dist/index.js",
   "bin": {