@aeriondyseti/vector-memory-mcp 0.5.0 → 0.9.0-dev.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,445 +1,129 @@
  # Vector Memory MCP Server
 
- > Replace static markdown context files with intelligent, semantically-searchable memories that understand what you're working on.
+ > Semantic memory storage for AI assistants. Store decisions, patterns, and context that persists across sessions.
 
- A production-ready MCP (Model Context Protocol) server that provides semantic memory storage for AI assistants. Uses local embeddings and vector search to automatically retrieve relevant context without cloud dependencies.
-
- **Perfect for:** Software teams maintaining architectural knowledge, developers juggling multiple projects, and anyone building with AI assistants like Claude Code.
+ A local-first MCP server that provides vector-based memory storage. Uses local embeddings and LanceDB for fast, private semantic search.
 
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![TypeScript](https://img.shields.io/badge/TypeScript-5.0+-blue.svg)](https://www.typescriptlang.org/)
- [![Bun](https://img.shields.io/badge/Bun-Required-black.svg)](https://bun.sh/)
- [![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-green.svg)](https://modelcontextprotocol.io)
+ [![npm version](https://img.shields.io/npm/v/@aeriondyseti/vector-memory-mcp.svg)](https://www.npmjs.com/package/@aeriondyseti/vector-memory-mcp)
 
  ---
 
- ## Features
-
- ### 🔒 **Local-First & Private**
- - All embeddings generated locally (no cloud APIs)
- - Data stored in local LanceDB databases
- - Complete privacy and control over your memories
-
- ### 🎯 **Intelligent Semantic Search**
- - Vector similarity with multi-factor scoring
- - Considers relevance, recency, priority, and usage frequency
- - Context-aware retrieval based on conversation flow
-
- ### 📊 **Smart Memory Storage**
- - Stores memories in `~/.local/share/vector-memory-mcp/memories.db`
- - Fast LanceDB-based storage with vector search capabilities
- - Memories persist across sessions and projects
-
- ### ⚡ **High Performance**
- - Sub-100ms search latency for 1000+ memories
- - Efficient storage (<10MB per 1000 memories)
- - CPU-optimized local embeddings (no GPU required)
+ ## Features
 
- ### 🔌 **MCP Native Integration**
- - Works seamlessly with Claude Code
- - Session hooks for automatic context injection
- - Standard MCP protocol (compatible with future clients)
-
- ### 🛠️ **Developer-Friendly**
- - Zero-configuration setup
- - Built with Bun for maximum performance
- - Simple MCP tools for storing and searching
- - TypeScript for type safety
+ - **Local & Private** - All embeddings generated locally, data stored in local LanceDB
+ - **Semantic Search** - Vector similarity search with configurable scoring
+ - **Batch Operations** - Store, update, delete, and retrieve multiple memories at once
+ - **Session Handoffs** - Save and restore project context between sessions
+ - **MCP Native** - Standard protocol, works with any MCP-compatible client
 
  ---
 
- ## 🚀 Quick Start
+ ## Quick Start
 
  ### Prerequisites
 
  - [Bun](https://bun.sh/) 1.0+
- - Claude Code or another MCP-compatible client
-
- > **Note:** This server requires Bun to run.
+ - An MCP-compatible client (Claude Code, Claude Desktop, etc.)
 
- ### Installation & Configuration
+ ### Install
 
- #### Option 1: Global Install (Recommended)
-
- **Install:**
  ```bash
  bun install -g @aeriondyseti/vector-memory-mcp
  ```
 
- > **Note:** The installation automatically downloads ML models (~90MB) and verifies native dependencies. This may take a minute on first install.
-
- **Configure Claude Code** - Add to `~/.claude/config.json`:
- ```json
- {
-   "mcpServers": {
-     "memory": {
-       "type": "stdio",
-       "command": "bunx",
-       "args": [
-         "--bun",
-         "@aeriondyseti/vector-memory-mcp"
-       ],
-       "env": {}
-     }
-   }
- }
- ```
+ > First install downloads ML models (~90MB). This may take a minute.
 
- #### Option 2: Local Development
+ ### Configure
 
- **Install:**
- ```bash
- git clone https://github.com/AerionDyseti/vector-memory-mcp.git
- cd vector-memory-mcp
- bun install
- ```
+ Add to your MCP client config (e.g., `~/.claude/settings.json`):
 
- **Configure Claude Code** - Add to `~/.claude/config.json`:
  ```json
  {
    "mcpServers": {
-     "memory": {
-       "command": "bun",
-       "args": ["run", "/absolute/path/to/vector-memory-mcp/src/index.ts"]
+     "vector-memory": {
+       "type": "stdio",
+       "command": "bunx",
+       "args": ["--bun", "@aeriondyseti/vector-memory-mcp"]
      }
    }
  }
  ```
- *Replace `/absolute/path/to/` with your actual installation path.*
 
- ---
+ ### Use
 
- **What gets installed:**
- - The vector-memory-mcp package and all dependencies
- - Native binaries for ONNX Runtime (~32MB) and image processing (~10MB)
- - ML model files automatically downloaded during installation (~90MB, cached in `~/.cache/huggingface/`)
- - **Total first-time setup:** ~130MB of downloads
+ Restart your MCP client. You now have access to:
 
- > 💡 **Tip:** If you need to re-download models or verify dependencies, run: `vector-memory-mcp warmup`
-
- ### Start Using It
-
- That's it! Restart Claude Code and you'll have access to memory tools:
- - `store_memory` - Save information for later recall
- - `search_memories` - Find relevant memories semantically
- - `get_memory` - Retrieve a specific memory by ID
- - `delete_memory` - Remove a memory
+ | Tool | Description |
+ |------|-------------|
+ | `store_memories` | Save memories (accepts array) |
+ | `search_memories` | Find relevant memories semantically |
+ | `get_memories` | Retrieve memories by ID (accepts array) |
+ | `update_memories` | Update existing memories |
+ | `delete_memories` | Remove memories (accepts array) |
+ | `store_handoff` | Save session context for later |
+ | `get_handoff` | Restore session context |
 
  ---
 
- ## 📖 Usage
-
- ### Storing Memories
-
- Ask Claude Code to remember things for you:
+ ## Usage
 
+ **Store a memory:**
  ```
  You: "Remember that we use Drizzle ORM for database access"
- Claude: [calls store_memory tool]
+ Assistant: [calls store_memories]
  ```
 
- Or Claude Code can store memories directly:
- ```json
- {
-   "content": "Use Drizzle ORM for type-safe database access",
-   "metadata": {
-     "tags": ["architecture", "database"],
-     "category": "tooling"
-   }
- }
- ```
-
- ### Searching Memories
-
- Claude Code automatically searches memories when relevant, or you can ask:
-
+ **Search memories:**
  ```
  You: "What did we decide about the database?"
- Claude: [calls search_memories with query about database decisions]
+ Assistant: [calls search_memories with relevant query]
  ```
 
- Search parameters:
- ```json
- {
-   "query": "authentication strategy",
-   "limit": 10
- }
+ **Session handoffs:**
  ```
-
- ### Managing Memories
-
- Retrieve a specific memory:
- ```json
- {
-   "id": "memory-id-here"
- }
- ```
-
- Delete a memory:
- ```json
- {
-   "id": "memory-id-here"
- }
+ You: "Save context for next session"
+ Assistant: [calls store_handoff with summary, completed items, next steps]
  ```
 
  ---
 
- ## 🏗️ Architecture
+ ## Configuration
 
- ```
- vector-memory-mcp/
- ├── src/
- │   ├── index.ts                   # Entry point
- │   ├── config/                    # Configuration management
- │   ├── db/                        # Database layer (LanceDB)
- │   ├── services/
- │   │   ├── embeddings.service.ts  # Embeddings via @huggingface/transformers
- │   │   └── memory.service.ts      # Core memory operations
- │   └── mcp/
- │       ├── server.ts              # MCP server setup
- │       ├── tools.ts               # MCP tool definitions
- │       └── handlers.ts            # Tool request handlers
- ├── tests/
- │   ├── memory.test.ts
- │   └── embeddings.test.ts
- ├── bin/
- │   └── vector-memory-mcp.js       # Executable entry point
- └── package.json
- ```
-
- ### Technology Stack
-
- - **MCP Framework**: @modelcontextprotocol/sdk (official SDK)
- - **Vector Database**: LanceDB (fast, local, vector search)
- - **Embeddings**: [@huggingface/transformers](https://huggingface.co/docs/transformers.js) (Xenova/all-MiniLM-L6-v2, 384 dimensions)
- - **Language**: TypeScript 5.0+
- - **Runtime**: Bun 1.0+
- - **Testing**: Bun test
-
- ---
-
- ## 🎨 How It Works
-
- ### 1. Memory Storage
+ Environment variables:
 
- ```
- Claude Code calls store_memory tool
-
- Content → @huggingface/transformers → 384d vector
-
- Store in LanceDB with metadata
-
- ~/.local/share/vector-memory-mcp/memories.db
- ```
-
- ### 2. Memory Retrieval
-
- ```
- Claude Code calls search_memories
-
- Query → @huggingface/transformers → 384d vector
-
- Vector search in LanceDB
-
- Vector similarity scoring
-
- Return top N relevant memories
- ```
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `VECTOR_MEMORY_DB_PATH` | `.vector-memory/memories.db` | Database location |
+ | `VECTOR_MEMORY_MODEL` | `Xenova/all-MiniLM-L6-v2` | Embedding model |
+ | `VECTOR_MEMORY_HTTP_PORT` | `3271` | HTTP server port |
 
  ---
 
- ## 🔧 Configuration
-
- The server uses environment variables for configuration:
-
- - `VECTOR_MEMORY_DB_PATH` - Custom database path (default: `~/.local/share/vector-memory-mcp/memories.db`)
- - `VECTOR_MEMORY_MODEL` - Embedding model to use (default: `Xenova/all-MiniLM-L6-v2`)
+ ## Development
 
- Example:
  ```bash
- export VECTOR_MEMORY_DB_PATH="/path/to/custom/memories.db"
- export VECTOR_MEMORY_MODEL="Xenova/all-MiniLM-L6-v2"
- ```
-
- Or in your Claude Code config:
- ```json
- {
-   "mcpServers": {
-     "memory": {
-       "command": "vector-memory-mcp",
-       "env": {
-         "VECTOR_MEMORY_DB_PATH": "/custom/path/memories.db"
-       }
-     }
-   }
- }
- ```
-
- ---
-
- ## 🧪 Development
-
- ### Running Tests
-
- ```bash
- # Run all tests
- bun test
-
- # Run with coverage
- bun test --coverage
-
- # Type checking
- bun run typecheck
- ```
-
- ### Development Mode
-
- ```bash
- # Watch mode - auto-restart on file changes
- bun run dev
-
- # Run directly without building
- bun run src/index.ts
- ```
-
- ### Building
-
- ```bash
- # Build for production
- bun run build
+ git clone https://github.com/AerionDyseti/vector-memory-mcp.git
+ cd vector-memory-mcp
+ bun install
 
- # Output will be in dist/
+ bun run test       # Run all tests
+ bun run dev        # Watch mode
+ bun run typecheck  # Type checking
  ```
 
- ---
-
- ## 🗺️ Roadmap
-
- ### ✅ Phase 1: Foundation (Current)
- - ✅ Core database with LanceDB
- - ✅ Embedding generation with @huggingface/transformers
- - ✅ Basic MCP tools (store, search, get, delete)
- - ✅ TypeScript implementation with Drizzle ORM
-
- ### 🚧 Phase 2: Enhanced Search & Scoring
- - Multi-factor scoring algorithm (similarity, recency, priority, usage frequency)
- - Configurable scoring weights
- - Priority levels for memories
- - Usage tracking and frequency-based ranking
- - Metadata filtering and advanced tagging
-
- ### 📋 Phase 3: Dual-Level Memory System
- - Project-specific memories (`.memory/db` in repo)
- - Global memories (`~/.local/share/vector-memory-mcp/`)
- - Automatic precedence handling (project overrides global)
- - Project detection and context switching
-
- ### 🎯 Phase 4: Smart Automation
- - Auto-detect architectural decisions
- - Capture bug fixes and solutions automatically
- - Generate session-end summaries
- - Natural language trigger detection (85%+ accuracy)
- - Continuous conversation monitoring
-
- ### 🔮 Phase 5: Advanced Features
- - Memory deduplication with similarity threshold
- - Batch operations (import/export)
- - Markdown import/export
- - Memory clustering and visualization
- - Cross-project insights
- - Multi-modal memories (images, diagrams)
- - Session hooks for automatic context injection
- - Multi-CLI support (Cursor, Windsurf, etc.)
- - Smart priority suggestions
-
- ---
-
- ## 🤝 Contributing
-
- Contributions are welcome! This project is in active development.
-
- ### Areas We'd Love Help With:
- - Testing and bug reports
- - Documentation improvements
- - Performance optimizations
- - New feature ideas
-
- See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines *(coming soon)*.
+ See [CHANGELOG.md](CHANGELOG.md) for release history and [ROADMAP.md](ROADMAP.md) for planned features.
 
  ---
 
- ## 📄 License
+ ## Contributing
 
- MIT License - see [LICENSE](LICENSE) for details.
+ Contributions welcome! See [issues](https://github.com/AerionDyseti/vector-memory-mcp/issues) for areas we'd love help with.
 
- ---
+ ## License
 
- ## 🙏 Acknowledgments
-
- - Built with [@modelcontextprotocol/sdk](https://github.com/modelcontextprotocol/typescript-sdk) - Official MCP TypeScript SDK
- - Uses [LanceDB](https://lancedb.com/) for fast, local vector search
- - Powered by [@huggingface/transformers](https://huggingface.co/docs/transformers.js) for local embeddings
- - Database layer via [Drizzle ORM](https://orm.drizzle.team/)
- - Inspired by [doobidoo's mcp-memory-service](https://github.com/doobidoo/mcp-memory-service)
+ MIT - see [LICENSE](LICENSE)
 
  ---
 
- ## 🔗 Related Projects
-
- - [Model Context Protocol](https://modelcontextprotocol.io) - Official MCP specification
- - [Claude Code](https://claude.ai/code) - AI coding assistant from Anthropic
- - [LanceDB](https://lancedb.com/) - Fast, local vector search
- - [Transformers.js](https://huggingface.co/docs/transformers.js) - Run transformers in JavaScript
-
- ---
-
- ## 💬 Support
-
- - **Issues**: [GitHub Issues](https://github.com/AerionDyseti/vector-memory-mcp/issues)
- - **Discussions**: [GitHub Discussions](https://github.com/AerionDyseti/vector-memory-mcp/discussions)
- - **Documentation**: Check the `docs/` directory
-
- ---
-
- ## ⚡ Quick Examples
-
- ### Example 1: Storing a Decision
-
- ```
- You: "Remember that we decided to use Drizzle ORM for type-safe database access"
- Claude: I'll store that for you.
- [Calls store_memory tool with content and metadata]
- ✓ Memory stored successfully
- ```
-
- ### Example 2: Searching Memories
-
- ```
- You: "What did we decide about database tooling?"
- Claude: Let me search for that...
- [Calls search_memories with query about database]
- Found: "Use Drizzle ORM for type-safe database access"
-
- Based on our previous decision, we're using Drizzle ORM...
- ```
-
- ### Example 3: Managing Memories
-
- ```
- You: "Show me what you remember about authentication"
- Claude: [Searches for authentication-related memories]
- Found 3 memories:
- 1. "Use JWT tokens for API authentication"
- 2. "Store refresh tokens in httpOnly cookies"
- 3. "Implement rate limiting on auth endpoints"
- ```
-
- ---
-
- <div align="center">
-
- **[⬆ Back to Top](#vector-memory-mcp-server)**
-
- Made with ❤️ for developers who value context continuity
-
- </div>
+ Built with [MCP SDK](https://github.com/modelcontextprotocol/typescript-sdk), [LanceDB](https://lancedb.com/), and [Transformers.js](https://huggingface.co/docs/transformers.js)
package/hooks/session-start.ts ADDED
@@ -0,0 +1,100 @@
+ #!/usr/bin/env bun
+ /**
+  * SessionStart hook for Claude Code
+  *
+  * Fetches config from the running vector-memory server's /health endpoint,
+  * then retrieves and outputs the latest handoff.
+  *
+  * Requires the server to be running with HTTP enabled.
+  *
+  * Usage in ~/.claude/settings.json:
+  * {
+  *   "hooks": {
+  *     "SessionStart": [{
+  *       "hooks": [{
+  *         "type": "command",
+  *         "command": "bun /path/to/vector-memory-mcp/hooks/session-start.ts"
+  *       }]
+  *     }]
+  *   }
+  * }
+  */
+
+ import { existsSync } from "fs";
+ import { connectToDatabase } from "../src/db/connection.js";
+ import { MemoryRepository } from "../src/db/memory.repository.js";
+ import { EmbeddingsService } from "../src/services/embeddings.service.js";
+ import { MemoryService } from "../src/services/memory.service.js";
+
+ const VECTOR_MEMORY_URL = process.env.VECTOR_MEMORY_URL ?? "http://127.0.0.1:3271";
+
+ interface HealthResponse {
+   status: string;
+   config: {
+     dbPath: string;
+     embeddingModel: string;
+     embeddingDimension: number;
+   };
+ }
+
+ async function main() {
+   // Get config from running server
+   let health: HealthResponse;
+   try {
+     const response = await fetch(`${VECTOR_MEMORY_URL}/health`);
+     if (!response.ok) {
+       throw new Error(`Server returned ${response.status}`);
+     }
+     health = await response.json();
+   } catch (error) {
+     if (error instanceof Error && error.message.includes("ECONNREFUSED")) {
+       console.log("Vector memory server not running. Starting fresh session.");
+       return;
+     }
+     throw error;
+   }
+
+   const { dbPath, embeddingModel, embeddingDimension } = health.config;
+
+   // Check if DB exists
+   if (!existsSync(dbPath)) {
+     console.log("Vector memory database not found. Starting fresh session.");
+     return;
+   }
+
+   const db = await connectToDatabase(dbPath);
+   const repository = new MemoryRepository(db);
+   const embeddings = new EmbeddingsService(embeddingModel, embeddingDimension);
+   const service = new MemoryService(repository, embeddings);
+
+   const handoff = await service.getLatestHandoff();
+
+   if (!handoff) {
+     console.log("No handoff found. Starting fresh session.");
+     return;
+   }
+
+   // Fetch referenced memories if any
+   const memoryIds = (handoff.metadata.memory_ids as string[] | undefined) ?? [];
+   let memoriesSection = "";
+
+   if (memoryIds.length > 0) {
+     const memories: string[] = [];
+     for (const id of memoryIds) {
+       const memory = await service.get(id);
+       if (memory) {
+         memories.push(`### Memory: ${id}\n${memory.content}`);
+       }
+     }
+     if (memories.length > 0) {
+       memoriesSection = `\n\n## Referenced Memories\n\n${memories.join("\n\n")}`;
+     }
+   }
+
+   console.log(handoff.content + memoriesSection);
+ }
+
+ main().catch((err) => {
+   console.error("Error loading handoff:", err.message);
+   process.exit(1);
+ });
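The hook's final step assembles the handoff content plus an optional "Referenced Memories" section. That assembly can be sketched in isolation; `buildHandoffOutput` is a hypothetical helper mirroring the logic above, not an export of the package:

```typescript
// Hypothetical helper mirroring the hook's output assembly: emit the
// handoff content, then append a "Referenced Memories" section only
// when at least one referenced memory ID resolved to stored content.
function buildHandoffOutput(
  handoffContent: string,
  memories: { id: string; content: string }[],
): string {
  if (memories.length === 0) return handoffContent;
  const blocks = memories.map((m) => `### Memory: ${m.id}\n${m.content}`);
  return `${handoffContent}\n\n## Referenced Memories\n\n${blocks.join("\n\n")}`;
}

// Example: one resolved memory is appended under its own heading.
const out = buildHandoffOutput("Resume work on search scoring.", [
  { id: "mem-1", content: "Use Drizzle ORM for database access" },
]);
console.log(out);
```

Skipping the section entirely when no IDs resolve matches the hook's behavior of printing the bare handoff content.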
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@aeriondyseti/vector-memory-mcp",
-   "version": "0.5.0",
+   "version": "0.9.0-dev.1",
    "description": "A zero-configuration RAG memory server for MCP clients",
    "type": "module",
    "main": "src/index.ts",
@@ -10,6 +10,7 @@
    "files": [
      "src",
      "scripts",
+     "hooks",
      "README.md",
      "LICENSE"
    ],
@@ -25,6 +26,7 @@
    "scripts": {
      "start": "bun run src/index.ts",
      "dev": "bun --watch run src/index.ts",
+     "build": "bun run typecheck",
      "typecheck": "bunx tsc --noEmit",
      "test": "bun run scripts/test-runner.ts",
      "test:raw": "bun test --preload ./tests/preload.ts",
@@ -32,14 +34,24 @@
      "test:coverage": "bun test --preload ./tests/preload.ts --coverage",
      "test:preload": "bun run tests/preload.ts",
      "warmup": "bun run scripts/warmup.ts",
-     "postinstall": "bun run scripts/warmup.ts"
+     "postinstall": "bun run scripts/warmup.ts",
+     "publish:check": "bun run scripts/publish.ts --dry-run",
+     "publish:npm": "bun run scripts/publish.ts",
+     "publish:dev": "npm publish --access public --tag dev"
    },
-   "keywords": ["mcp", "memory", "rag", "embeddings", "lancedb"],
+   "keywords": [
+     "mcp",
+     "memory",
+     "rag",
+     "embeddings",
+     "lancedb"
+   ],
    "license": "MIT",
    "dependencies": {
      "@huggingface/transformers": "^3.8.0",
      "@lancedb/lancedb": "^0.22.3",
      "@modelcontextprotocol/sdk": "^1.0.0",
+     "arg": "^5.0.2",
      "hono": "^4.11.3"
    },
    "devDependencies": {
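The renamed tools in this release (`store_memories`, `get_memories`, `delete_memories`) are batch-oriented: they accept arrays rather than single items. Here is a minimal in-memory sketch of that array-in/array-out shape; the types and helpers are illustrative assumptions, not the package's actual tool schema:

```typescript
// Illustrative stand-in for the batch-style tools; the real server
// persists to LanceDB. This only demonstrates array-accepting semantics.
type StoredMemory = { id: string; content: string };
const memoryStore = new Map<string, StoredMemory>();
let nextId = 0;

// store_memories-like: accept an array of items, return the new IDs.
function storeMemories(items: { content: string }[]): string[] {
  return items.map(({ content }) => {
    const id = `mem-${++nextId}`;
    memoryStore.set(id, { id, content });
    return id;
  });
}

// delete_memories-like: accept an array of IDs, return the number removed.
function deleteMemories(ids: string[]): number {
  return ids.filter((id) => memoryStore.delete(id)).length;
}

const ids = storeMemories([
  { content: "Use Drizzle ORM for database access" },
  { content: "Rate-limit auth endpoints" },
]);
const removed = deleteMemories(ids);
console.log(ids, removed);
```

Batching lets a client persist or prune several memories in a single tool round-trip instead of one call per item.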