imprint-memory 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,70 @@
# imprint-memory configuration
# Copy to ~/.imprint/.env for hooks/receiver, or export only the values you need.

# Storage
IMPRINT_DATA_DIR=~/.imprint
# IMPRINT_DB=~/.imprint/memory.db
TZ_OFFSET=0

# Display labels and output language
IMPRINT_USER_NAME=User
IMPRINT_AGENT_NAME=Assistant
IMPRINT_LOCALE=en

# Embeddings. Default is Ollama; if it is offline, search falls back to FTS5/LIKE.
EMBED_PROVIDER=ollama
EMBED_MODEL=bge-m3
OLLAMA_URL=http://localhost:11434

# OpenAI-compatible embeddings
# EMBED_PROVIDER=openai
# OPENAI_API_KEY=sk-...
# EMBED_API_BASE=https://api.openai.com
# EMBED_MODEL=text-embedding-3-small

# Google Gemini embeddings and optional summary fallback
# EMBED_PROVIDER=google
# GOOGLE_API_KEY=
# GOOGLE_API_KEYS=
# GEMINI_SUMMARY_MODEL=gemini-2.5-flash-lite

# Cloudflare Workers AI for query expansion, reranking, and chunk summaries
# CF_ACCOUNT_ID=
# CF_API_TOKEN=
CF_RERANK_MODEL=@cf/meta/llama-3.3-70b-instruct-fp8-fast
CF_SUMMARY_MODEL=@cf/meta/llama-3.3-70b-instruct-fp8-fast

# MCP HTTP mode
IMPRINT_HTTP_HOST=0.0.0.0
IMPRINT_HTTP_PORT=8000
# IMPRINT_OAUTH_FILE=~/.imprint-oauth.json
# OAUTH_CLIENT_ID=
# OAUTH_CLIENT_SECRET=
# OAUTH_ACCESS_TOKEN=

# Chat-sync receiver
IMPRINT_RECEIVER_HOST=127.0.0.1
IMPRINT_RECEIVER_PORT=8001
IMPRINT_RECEIVER_EMBED_DELAY=0.7
IMPRINT_RECEIVER_SHIFT_THRESHOLD=0.50
IMPRINT_RECEIVER_CORS_ORIGIN_REGEX=^chrome-extension://.*$

# Hook
# Set this if Claude Code's shell cannot find the Python that installed imprint-memory.
# IMPRINT_PYTHON=python3.12
# IMPRINT_ENV_FILE=~/.imprint/.env
IMPRINT_HOOK_LANG=en

# Search/chunk tuning
STOPWORD_THRESHOLD=0.15
IMPRINT_STOPWORD_SKIP_PLATFORMS=cc
IMPRINT_CHUNK_SKIP_PLATFORMS=cc
# IMPRINT_CAUSAL_BLACKLIST=topic1,topic2

# Message bus and compression helpers
MESSAGE_BUS_LIMIT=40
OLLAMA_CHAT_URL=http://localhost:11434/api/chat
OLLAMA_CHAT_MODEL=gemma4:e4b
COMPRESS_MODEL=qwen3:8b
COMPRESS_KEEP=30
COMPRESS_THRESHOLD=50
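The hook and receiver read this file as plain `KEY=VALUE` lines, and the changelog notes the hook tolerates malformed `.env` lines. A minimal sketch of a loader in that spirit (illustrative only; `load_env_file` is not part of the package) that skips comments, blanks, and lines without `=`:

```python
import os

def load_env_file(path):
    """Load KEY=VALUE pairs from an imprint-style .env file into os.environ.

    Blank lines, comments, and malformed lines (no '=') are skipped,
    mirroring the tolerant parsing the hook script is described as using.
    """
    loaded = {}
    try:
        with open(os.path.expanduser(path), encoding="utf-8") as fh:
            for raw in fh:
                line = raw.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no env file is fine; everything has a default
    os.environ.update(loaded)
    return loaded
```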
@@ -0,0 +1,31 @@
# Python
__pycache__/
*.py[cod]
*.egg-info/
dist/
build/
*.egg

# Data (user-specific)
*.db
*.db-wal
*.db-shm
memory/
MEMORY.md

# Environment
.env
.venv/
venv/

# IDE
.idea/
.vscode/
*.swp

# OS
.DS_Store
Thumbs.db

# OAuth credentials
.imprint-oauth.json
@@ -0,0 +1,12 @@
# Changelog

## 0.1.1 - 2026-05-17

- Rewrote onboarding docs around three setup paths: MCP-only, browser sync, and surfacing hook.
- Added `.env.example` covering storage, embeddings, receiver, HTTP/OAuth, hook, and search tuning.
- Removed non-memory WebDriverAgent tools and hardcoded personal device/network settings from the public MCP server.
- Made HTTP host/port, OAuth file path, receiver host/port, receiver CORS, summary models, and chunk skip platforms configurable.
- Switched default embeddings to local Ollama with graceful FTS5/LIKE fallback when no provider is available.
- Fixed fresh database setup for browser-ingested conversations by adding the missing `conversation_log.model` column.
- Hardened `hooks/memory-check.sh` for macOS/Linux shells, env files with spaces, missing Python, and malformed `.env` lines.
- Added CI import smoke tests for Python 3.10, 3.11, and 3.12.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Qizhan7

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,339 @@
Metadata-Version: 2.4
Name: imprint-memory
Version: 0.1.0
Summary: Persistent memory system for Claude Code — hybrid search (FTS5 + vector), message bus, task queue
Project-URL: Homepage, https://github.com/Qizhan7/imprint-memory
Project-URL: Repository, https://github.com/Qizhan7/imprint-memory
Project-URL: Issues, https://github.com/Qizhan7/imprint-memory/issues
Author: Qizhan7
License-Expression: MIT
License-File: LICENSE
Keywords: agent,ai,claude,mcp,memory
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: mcp[cli]>=1.0.0
Provides-Extra: all
Requires-Dist: jieba>=0.42.0; extra == 'all'
Requires-Dist: jionlp>=1.5.0; extra == 'all'
Requires-Dist: numpy>=1.24.0; extra == 'all'
Requires-Dist: starlette>=0.27.0; extra == 'all'
Requires-Dist: uvicorn>=0.34.0; extra == 'all'
Provides-Extra: chinese
Requires-Dist: jieba>=0.42.0; extra == 'chinese'
Requires-Dist: jionlp>=1.5.0; extra == 'chinese'
Provides-Extra: http
Requires-Dist: starlette>=0.27.0; extra == 'http'
Requires-Dist: uvicorn>=0.34.0; extra == 'http'
Provides-Extra: receiver
Requires-Dist: numpy>=1.24.0; extra == 'receiver'
Requires-Dist: starlette>=0.27.0; extra == 'receiver'
Requires-Dist: uvicorn>=0.34.0; extra == 'receiver'
Provides-Extra: vectors
Requires-Dist: numpy>=1.24.0; extra == 'vectors'
Description-Content-Type: text/markdown

# imprint-memory

[中文文档](README_zh.md)

Local long-term memory for Claude Code users who want searchable, private recall across notes, conversations, and Claude.ai history.

```
Claude Code ── MCP stdio ───────────────────────────────┐

manual memories / daily logs / bank/*.md ───────────────┤
                                                          imprint-memory
                                                          SQLite + FTS5 + vectors

claude.ai ── Chrome extension ── POST :8001/api/ingest ─┤
                                                          log → embed → chunk → graph edges → search

UserPromptSubmit hook ── surfacing_search ── <recall> ──┘
```

## Capabilities

| Capability | What it does |
| --- | --- |
| Memory CRUD | Store, update, delete, pin, tag, and link memories. |
| Hybrid retrieval | Combines FTS5 keyword search, exact/LIKE matches, vector similarity, RRF fusion, and optional LLM reranking. |
| Time-aware search | Parses explicit `after`/`before` filters and Chinese temporal expressions such as `昨天`, `上次`, `三周前`, `去年冬天` when `jionlp` is installed. |
| Query expansion | Uses Cloudflare Workers AI when configured to add colloquial query variants before retrieval. |
| Chunk-level conversation retrieval | Searches conversation summaries, then expands the best chunks with matching source messages. |
| Graph neighbors | Surfaces linked memories and neighboring conversation chunks when they add useful context. |
| Browser conversation sync | Receives claude.ai conversations from `imprint-chat-sync` through `POST /api/ingest`. |
| Passive surfacing | A Claude Code `UserPromptSubmit` hook can inject compact `<recall>` blocks when a prompt looks memory-related. |
| CJK-friendly FTS | Uses `jieba` when available, with character-level fallback, so Chinese/Japanese/Korean text remains searchable. |
| Zero-provider fallback | If Ollama/API embeddings are unavailable, tools still work with FTS5 and exact matching. |

## Quick Start

### 1. I just want memory in Claude Code

```bash
pip install imprint-memory
claude mcp add -s user imprint-memory -- imprint-memory
```

Restart Claude Code. You should see the `imprint-memory` MCP tools. No API key is required. By default the server stores data in `~/.imprint/memory.db` and tries Ollama embeddings at `http://localhost:11434`; if Ollama is not running, search falls back to keyword-only.

Optional local embeddings:

```bash
ollama pull bge-m3
ollama serve
```

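The embedding fallback can be probed directly. The sketch below is not part of the package; it calls Ollama's `/api/embeddings` endpoint and returns `None` when Ollama is unreachable, which is exactly the case where search degrades to keyword-only:

```python
import json
import urllib.error
import urllib.request

def try_ollama_embedding(text, model="bge-m3", base_url="http://localhost:11434"):
    """Return an embedding vector from Ollama, or None if Ollama is offline.

    None signals the caller to fall back to FTS5/LIKE keyword search.
    """
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.load(resp).get("embedding")
    except (urllib.error.URLError, OSError):
        return None  # Ollama offline: keyword-only mode
```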
### 2. I also want to sync my claude.ai conversations

```bash
pip install 'imprint-memory[receiver]'
imprint-memory-receiver
```

The receiver listens on `127.0.0.1:8001`. Then install the companion extension:

```bash
git clone https://github.com/Qizhan7/imprint-chat-sync.git
```

Open Chrome → `chrome://extensions/` → enable Developer mode → Load unpacked → select the cloned `imprint-chat-sync` folder. Stay logged in to [claude.ai](https://claude.ai), then use the extension popup to sync.

### 3. I want the full experience with the surfacing hook

Install the hook script:

```bash
mkdir -p ~/.claude/hooks
HOOK_PATH="$(python - <<'PY'
from importlib.resources import files
print(files("imprint_memory") / "hooks" / "memory-check.sh")
PY
)"
cp "$HOOK_PATH" ~/.claude/hooks/memory-check.sh
chmod +x ~/.claude/hooks/memory-check.sh
```

Add this to `~/.claude/settings.json`:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash $HOME/.claude/hooks/memory-check.sh"
          }
        ]
      }
    ]
  }
}
```

For API keys used by the hook, copy `.env.example` to `~/.imprint/.env` and fill in only the values you use.
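Claude Code passes `UserPromptSubmit` hooks a JSON payload on stdin and injects the hook's stdout into the conversation context. As a toy stand-in for what `memory-check.sh` does (illustrative only: `make_recall_block` and its keyword trigger are invented for this sketch; the real hook runs `surfacing_search` against the database):

```python
import json
import sys

def make_recall_block(payload):
    """Given a UserPromptSubmit hook payload, return a <recall> block or ''.

    The real hook decides via surfacing_search whether the prompt looks
    memory-related; this sketch fakes that decision with a keyword check.
    """
    prompt = payload.get("prompt", "")
    if "remember" not in prompt.lower():
        return ""  # nothing memory-related to surface
    return f"<recall>\n(memories related to: {prompt[:60]})\n</recall>"

if __name__ == "__main__":
    # Claude Code supplies the payload on stdin; stdout becomes context.
    print(make_recall_block(json.load(sys.stdin)), end="")
```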

## Configuration

All configuration is via environment variables. Defaults are chosen for local, private use.

| Variable | Default | Description |
| --- | --- | --- |
| `IMPRINT_DATA_DIR` | `~/.imprint` | Base directory for the database, daily logs, and bank files. |
| `IMPRINT_DB` | `$IMPRINT_DATA_DIR/memory.db` | SQLite database path. |
| `TZ_OFFSET` | `0` | Local timezone offset from UTC, in hours. |
| `IMPRINT_USER_NAME` | `User` | Human speaker label used in summaries and chunk expansion. |
| `IMPRINT_AGENT_NAME` | `Assistant` | Assistant speaker label used in summaries and chunk expansion. |
| `IMPRINT_LOCALE` | `en` | Search output labels: `en` or `zh`. |
| `EMBED_PROVIDER` | `ollama` | Embedding provider: `ollama`, `openai`, or `google`. |
| `EMBED_MODEL` | provider default | Embedding model. Defaults: `bge-m3`, `text-embedding-3-small`, or `gemini-embedding-2`. |
| `OLLAMA_URL` | `http://localhost:11434` | Ollama base URL for embeddings. |
| `OPENAI_API_KEY` | unset | API key for OpenAI or OpenAI-compatible embedding providers. |
| `EMBED_API_BASE` | `https://api.openai.com` | Base URL for OpenAI-compatible embedding APIs. |
| `GOOGLE_API_KEY` | unset | Single Google Gemini API key. |
| `GOOGLE_API_KEYS` | unset | Comma-separated Google keys for round-robin embedding calls. |
| `CF_ACCOUNT_ID` | unset | Cloudflare account ID for query expansion, reranking, and summaries. |
| `CF_API_TOKEN` | unset | Cloudflare API token. |
| `CF_RERANK_MODEL` | `@cf/meta/llama-3.3-70b-instruct-fp8-fast` | Cloudflare model for query expansion and reranking. |
| `CF_SUMMARY_MODEL` | `@cf/meta/llama-3.3-70b-instruct-fp8-fast` | Cloudflare model for chunk summaries. |
| `GEMINI_SUMMARY_MODEL` | `gemini-2.5-flash-lite` | Gemini model used as a summary fallback. |
| `OLLAMA_CHAT_URL` | `http://localhost:11434/api/chat` | Ollama chat endpoint for summary fallback. |
| `OLLAMA_CHAT_MODEL` | `gemma4:e4b` | Ollama chat model for summaries and causal-edge prediction. |
| `IMPRINT_HTTP_HOST` | `0.0.0.0` in HTTP mode | MCP HTTP bind host. |
| `IMPRINT_HTTP_PORT` | `8000` | MCP HTTP port. |
| `IMPRINT_OAUTH_FILE` | `~/.imprint-oauth.json` | OAuth credential file for HTTP mode. |
| `OAUTH_CLIENT_ID` | unset | OAuth client ID fallback when no credential file exists. |
| `OAUTH_CLIENT_SECRET` | unset | OAuth client secret fallback. |
| `OAUTH_ACCESS_TOKEN` | unset | Bearer token used by HTTP mode. |
| `IMPRINT_RECEIVER_HOST` | `127.0.0.1` | Chat-sync receiver bind host. Legacy `HOST` is also accepted. |
| `IMPRINT_RECEIVER_PORT` | `8001` | Chat-sync receiver port. Legacy `PORT` is also accepted. |
| `IMPRINT_RECEIVER_EMBED_DELAY` | `0.7` | Delay between background embedding calls, in seconds. |
| `IMPRINT_RECEIVER_SHIFT_THRESHOLD` | `0.50` | Topic-shift cosine threshold for adjacent user messages. |
| `IMPRINT_RECEIVER_CORS_ORIGIN_REGEX` | `^chrome-extension://.*$` | Allowed browser-extension origins. |
| `IMPRINT_PYTHON` | auto-detect `python3.12`, `python3.11`, `python3.10`, then generic Python names | Python interpreter used by `hooks/memory-check.sh`. |
| `IMPRINT_ENV_FILE` | unset | Extra `KEY=VALUE` file loaded by the hook before recall. |
| `IMPRINT_HOOK_LANG` | `en` | Hook reminder language: `en` or `zh`. |
| `STOPWORD_THRESHOLD` | `0.15` | Document-frequency threshold for auto-stopwords. |
| `IMPRINT_STOPWORD_SKIP_PLATFORMS` | `cc` | Platforms ignored when building stopwords. |
| `IMPRINT_CHUNK_SKIP_PLATFORMS` | `cc` | Platforms ignored by conversation chunking and chunk expansion. |
| `IMPRINT_CAUSAL_BLACKLIST` | unset | Extra comma-separated terms ignored by causal-edge discovery. |
| `MESSAGE_BUS_LIMIT` | `40` | Max messages retained in the shared message bus. |
| `COMPRESS_MODEL` | `qwen3:8b` | Ollama model for `imprint_memory.compress`. |
| `COMPRESS_KEEP` | `30` | Recent context lines kept uncompressed. |
| `COMPRESS_THRESHOLD` | `50` | Line count that triggers compression. |

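As an example of how the embedding settings in the table compose, the sketch below (a hypothetical helper, not package code) resolves the effective model the way the table describes: an explicit `EMBED_MODEL` wins, otherwise the per-provider default applies.

```python
import os

# Per-provider defaults, taken from the EMBED_MODEL row of the table above.
PROVIDER_DEFAULT_MODEL = {
    "ollama": "bge-m3",
    "openai": "text-embedding-3-small",
    "google": "gemini-embedding-2",
}

def resolve_embed_model(env=os.environ):
    """Return (provider, model): explicit EMBED_MODEL wins, else the default."""
    provider = env.get("EMBED_PROVIDER", "ollama")
    return provider, env.get("EMBED_MODEL", PROVIDER_DEFAULT_MODEL.get(provider))
```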
## MCP Tool Reference

| Tool | Description |
| --- | --- |
| `memory_remember` | Store a memory with category, source, importance, dedup, and embedding when available. |
| `memory_search` | Search memories, bank files, conversations, chunks, and graph neighbors. |
| `memory_list` | List recent active memories, optionally by category or time range. |
| `memory_update` | Update content, category, or importance by memory ID. |
| `memory_delete` | Delete one memory by ID. |
| `memory_forget` | Delete memories containing a keyword. |
| `memory_daily_log` | Append a timestamped entry to today’s daily log. |
| `memory_pin` / `memory_unpin` | Mark core memories as exempt from search time decay, or restore normal decay. |
| `memory_add_tags` | Add comma-separated tags to a memory. |
| `memory_add_edge` | Link two memories with a typed relationship and short context. |
| `memory_get_graph` | Show a memory’s tags, edges, and neighbor previews. |
| `memory_find_duplicates` | Read-only semantic duplicate audit. |
| `memory_find_stale` | Read-only stale-memory audit. |
| `memory_decay` | Preview or apply importance decay for inactive memories. |
| `memory_reindex` | Rebuild memory and bank embeddings after provider/model changes. |
| `stopwords_build` | Rebuild auto-stopwords from document frequency. |
| `stopwords_show` | Show current stopwords and metadata. |
| `stopwords_add` / `stopwords_remove` | Manually add or suppress stopwords. |
| `conversation_search` | Keyword search over conversation logs. |
| `conversation_search_semantic` | Vector search over chunks first, then message vectors. |
| `search_telegram` | Convenience search over `telegram` and `heartbeat` platforms. |
| `search_channel` | Search any named conversation platform. |
| `message_bus_read` / `message_bus_post` | Read or write the shared message timeline. |
| `experience_append` | Append a technical note to `memory/bank/experience.md`. |

## Receiver API Reference

Run:

```bash
imprint-memory-receiver --host 127.0.0.1 --port 8001
```

### `POST /api/ingest`

Submit one conversation batch.

```bash
curl -X POST http://127.0.0.1:8001/api/ingest \
  -H 'Content-Type: application/json' \
  -d '{
    "platform": "claude.ai",
    "conversation_id": "conv_123",
    "conversation_title": "Planning session",
    "model": "claude-opus-4-1",
    "messages": [
      {
        "direction": "in",
        "speaker": "User",
        "content": "Remember that I prefer short PR summaries.",
        "created_at": "2026-05-16 12:30:00",
        "uuid": "msg_1"
      },
      {
        "direction": "out",
        "speaker": "Assistant",
        "content": "Got it.",
        "created_at": "2026-05-16 12:30:04",
        "uuid": "msg_2"
      }
    ]
  }'
```

Response:

```json
{
  "ok": true,
  "ingested": 2,
  "skipped": 0,
  "errors": 0
}
```

The receiver returns quickly, then embeds, detects topic shifts, summarizes chunks, and updates graph edges in background tasks.

### `GET /api/health`

```json
{ "ok": true, "service": "imprint-chat-sync-receiver" }
```

### `GET /api/status`

```json
{
  "ok": true,
  "recent_count": 5,
  "last_message": "2026-05-16 12:30:04",
  "vectors": 42
}
```

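The curl example above can be reproduced from Python. This sketch (hypothetical client code, not part of the package) builds a batch in the documented shape and posts it with the standard library:

```python
import json
import urllib.request

def build_ingest_batch(conversation_id, title, model, messages, platform="claude.ai"):
    """Build one /api/ingest batch in the shape shown in the curl example."""
    return {
        "platform": platform,
        "conversation_id": conversation_id,
        "conversation_title": title,
        "model": model,
        "messages": messages,
    }

def post_batch(batch, base_url="http://127.0.0.1:8001"):
    """POST a batch to the receiver and return its JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/ingest",
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Each message dict carries the fields from the example (`direction`, `speaker`, `content`, `created_at`, `uuid`); the receiver deduplicates on `uuid`, per the `skipped` counter in the response.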
## How Search Works

1. The query is optionally time-parsed (`昨天`, `上次`, `三周前`) and optionally expanded with Cloudflare Workers AI.
2. The system embeds the expanded query when an embedding provider is available.
3. Each pool searches independently: memories, bank chunks, raw conversation rows, summarized conversation chunks, and exact/LIKE matches.
4. Ranked channels are fused with Reciprocal Rank Fusion.
5. Pool-specific rerankers adjust for recency, importance, pinned memories, and file freshness.
6. Optional Cloudflare reranking scores the top candidates for semantic relevance.
7. Chunk hits expand to source messages; memory and chunk graph neighbors are appended when useful.
8. Results update recall counters unless the search is an internal surfacing pass.

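Step 4's fusion can be sketched generically. Reciprocal Rank Fusion scores each document by summing reciprocal ranks across channels; the constant `k = 60` below is the common default from the RRF literature, not necessarily the value imprint-memory uses:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of document ids with Reciprocal Rank Fusion.

    Each list is ordered best-first; a document scores sum(1 / (k + rank))
    over every list it appears in, so items ranked well by multiple
    channels rise to the top.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```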
## Data Layout

```
~/.imprint/
├── memory.db
├── MEMORY.md
└── memory/
    ├── 2026-05-16.md
    └── bank/
        └── experience.md
```

## Development

```bash
git clone https://github.com/Qizhan7/imprint-memory.git
cd imprint-memory
pip install -e '.[all]'
pip install pytest
python -c "from imprint_memory import server"
pytest
```

Run the stdio server:

```bash
imprint-memory
```

Run HTTP MCP mode:

```bash
imprint-memory --http --host 0.0.0.0 --port 8000
```

## License

MIT