better-mem0-mcp 1.1.0b22__tar.gz → 1.1.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.github/workflows/cd.yml +5 -42
  2. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.mise.toml +1 -1
  3. better_mem0_mcp-1.1.3/.vscode/better-mem0-mcp.code-workspace +8 -0
  4. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/CHANGELOG.md +2 -23
  5. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/CONTRIBUTING.md +0 -49
  6. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/PKG-INFO +34 -58
  7. better_mem0_mcp-1.1.3/README.md +120 -0
  8. better_mem0_mcp-1.1.3/SECURITY.md +27 -0
  9. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/pyproject.toml +2 -2
  10. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/config.py +21 -14
  11. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/docs/memory.md +1 -14
  12. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/server.py +9 -16
  13. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/uv.lock +505 -4
  14. better_mem0_mcp-1.1.0b22/.vscode/better-mem0-mcp.code-workspace +0 -23
  15. better_mem0_mcp-1.1.0b22/CODE_OF_CONDUCT.md +0 -82
  16. better_mem0_mcp-1.1.0b22/README.md +0 -144
  17. better_mem0_mcp-1.1.0b22/SECURITY.md +0 -32
  18. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.dockerignore +0 -0
  19. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.editorconfig +0 -0
  20. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.github/scripts/check-ci-cd-status.sh +0 -0
  21. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.github/scripts/merge-with-auto-resolve.sh +0 -0
  22. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.github/workflows/ci.yml +0 -0
  23. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.gitignore +0 -0
  24. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.pre-commit-config.yaml +0 -0
  25. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.python-version +0 -0
  26. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.releaserc.json +0 -0
  27. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/Dockerfile +0 -0
  28. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/LICENSE +0 -0
  29. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/package-lock.json +0 -0
  30. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/package.json +0 -0
  31. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/scripts/clean-venv.mjs +0 -0
  32. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/__init__.py +0 -0
  33. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/__main__.py +0 -0
  34. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/graph.py +0 -0
  35. {better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/py.typed +0 -0
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.github/workflows/cd.yml
@@ -46,49 +46,12 @@ jobs:
           chmod +x .github/scripts/check-ci-cd-status.sh
           ./.github/scripts/check-ci-cd-status.sh --branch=dev
 
-      - name: Sync main into dev (resolve version conflicts)
-        run: |
-          git config user.name "github-actions[bot]"
-          git config user.email "github-actions[bot]@users.noreply.github.com"
-          git checkout dev
-          git fetch origin main
-          git merge origin/main --no-edit -X ours || true
-          git push origin dev
-
-      - name: Create PR to promote dev to main
+      - name: Merge dev to main
         env:
-          GH_TOKEN: ${{ secrets.GH_PAT }}
+          AUTO_RESOLVE_FILES: "CHANGELOG.md,pyproject.toml"
         run: |
-          # Check if PR already exists
-          EXISTING_PR=$(gh pr list --base main --head dev --json number --jq '.[0].number')
-          if [ -n "$EXISTING_PR" ]; then
-            echo "PR #$EXISTING_PR already exists for dev -> main"
-            echo "URL: https://github.com/${{ github.repository }}/pull/$EXISTING_PR"
-            exit 0
-          fi
-
-          # Get latest release version from dev
-          LATEST_TAG=$(git describe --tags --abbrev=0 origin/dev 2>/dev/null || echo "")
-          if [ -z "$LATEST_TAG" ]; then
-            PR_TITLE="chore: promote dev to main"
-          else
-            PR_TITLE="chore: promote dev to main ($LATEST_TAG)"
-          fi
-
-          # Create PR
-          gh pr create \
-            --base main \
-            --head dev \
-            --title "$PR_TITLE" \
-            --body "## Promote dev to main
-
-          This PR promotes the latest changes from \`dev\` branch to \`main\`.
-
-          ### Pre-checks passed:
-          - ✅ CI workflow passed on dev
-          - ✅ CD workflow passed on dev
-
-          ### Latest beta version: $LATEST_TAG"
+          chmod +x .github/scripts/merge-with-auto-resolve.sh
+          ./.github/scripts/merge-with-auto-resolve.sh --source=dev --target=main --files="$AUTO_RESOLVE_FILES"
 
   release:
     name: Semantic Release
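The promotion step now delegates to `merge-with-auto-resolve.sh` instead of opening a PR via `gh`. The script itself is unchanged in this diff, so the following is only a hypothetical Python sketch of the strategy its name and arguments suggest: merge `dev` into `main` and, if the merge conflicts, resolve the listed release artifacts in favor of the incoming branch.

```python
import subprocess


def merge_with_auto_resolve(source: str, target: str, files: list[str]) -> None:
    """Hypothetical sketch: merge `source` into `target`, auto-resolving
    conflicts in `files` toward the incoming (source) side."""
    subprocess.run(["git", "checkout", target], check=True)
    merge = subprocess.run(["git", "merge", source, "--no-edit"])
    if merge.returncode != 0:
        # Conflicted merge: keep the source branch's version of the
        # release artifacts (e.g. CHANGELOG.md, pyproject.toml), then commit.
        for path in files:
            subprocess.run(["git", "checkout", "--theirs", path], check=True)
            subprocess.run(["git", "add", path], check=True)
        subprocess.run(["git", "commit", "--no-edit"], check=True)
    subprocess.run(["git", "push", "origin", target], check=True)


merge_with_auto_resolve("dev", "main", ["CHANGELOG.md", "pyproject.toml"])
```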
@@ -108,7 +71,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v6
         with:
-          node-version: "24"
+          node-version: "22"
 
       - name: Install uv (for toml-cli)
         uses: astral-sh/setup-uv@v5
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/.mise.toml
@@ -5,7 +5,7 @@
 
 [tools]
 python = "3.13"
-node = "24"
+node = "22"
 uv = "latest"
 
 [settings]
better_mem0_mcp-1.1.3/.vscode/better-mem0-mcp.code-workspace
@@ -0,0 +1,8 @@
+{
+  "folders": [
+    {
+      "path": ".."
+    }
+  ],
+  "settings": {}
+}
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/CHANGELOG.md
@@ -1,30 +1,9 @@
-# [1.1.0-beta.22](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.0-beta.21...v1.1.0-beta.22) (2026-02-05)
+## [1.1.3](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.2...v1.1.3) (2026-01-05)
 
 
 ### Bug Fixes
 
-* Pin Python version to 3.13 and remove Python 3.14 compatibility from the dependency lock file. ([ec134ef](https://github.com/n24q02m/better-mem0-mcp/commit/ec134ef10ab6de3eaf800d51a441fe2b44f2d4dc))
-
-# [1.1.0-beta.21](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.0-beta.20...v1.1.0-beta.21) (2026-01-12)
-
-
-### Bug Fixes
-
-* update Node.js version to 24 in CI/CD configuration and Mise setup ([92ab0a9](https://github.com/n24q02m/better-mem0-mcp/commit/92ab0a96cd30643b9ecf927c68b46cde6472d732))
-
-# [1.1.0-beta.20](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.0-beta.19...v1.1.0-beta.20) (2026-01-05)
-
-
-### Bug Fixes
-
-* correct search response parsing and improve documentation ([d4d4753](https://github.com/n24q02m/better-mem0-mcp/commit/d4d4753a7d0d3c8eb23ef15c051a7bcb1b000310))
-
-# [1.1.0-beta.19](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.0-beta.18...v1.1.0-beta.19) (2026-01-05)
-
-
-### Bug Fixes
-
-* correct Mem0 API response parsing for search and get_all ([8a612b1](https://github.com/n24q02m/better-mem0-mcp/commit/8a612b1443c59c229e7583ffedced6d38f1b13d1))
+* Add `__main__.py` entry point and update README with `better-mem0` server configuration. ([5c44f58](https://github.com/n24q02m/better-mem0-mcp/commit/5c44f58a5357c90c586009f75ddd876c45ef6f66))
 
 # [1.1.0-beta.18](https://github.com/n24q02m/better-mem0-mcp/compare/v1.1.0-beta.17...v1.1.0-beta.18) (2026-01-05)
 
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/CONTRIBUTING.md
@@ -84,8 +84,6 @@ We use [Conventional Commits](https://www.conventionalcommits.org/):
 - `perf`: Performance improvements
 - `test`: Adding or updating tests
 - `chore`: Maintenance tasks
-- `ci`: CI/CD changes
-- `build`: Build system changes
 
 ### Examples
 
@@ -95,44 +93,6 @@ fix: handle database connection timeout
 docs: update configuration examples
 ```
 
-## Release Process
-
-Releases are automated using **Semantic Release**. We strictly follow the **Conventional Commits** specification to determine version bumps and generate changelogs automatically.
-
-### How to Release
-
-1. Create a Pull Request with your changes.
-2. Ensure your commit messages follow the convention above.
-3. Merge the PR to `main`.
-4. The CI pipeline will automatically:
-   - Analyze the new commits
-   - Determine the next version number
-   - Generate release notes
-   - Update `CHANGELOG.md`
-   - Publish to PyPI
-   - Create a GitHub Release
-   - Build and push Docker images
-
-You do **not** need to create manual tags or changelog entries.
-
-## Pull Request Guidelines
-
-- Keep PRs focused on a single feature or fix
-- Update documentation if needed
-- Add tests for new functionality
-- Ensure all checks pass
-
-### PR Checklist
-
-Before submitting your PR, ensure:
-
-- [ ] Code follows Python best practices
-- [ ] All tests pass (`uv run pytest`)
-- [ ] Linting passes (`uv run ruff check .`)
-- [ ] Formatting is correct (`uv run ruff format --check .`)
-- [ ] Commit messages follow **Conventional Commits**
-- [ ] Documentation updated (if needed)
-
 ## Code Style
 
 This project uses **Ruff** for formatting and linting.
@@ -166,15 +126,6 @@ better-mem0-mcp/
 └── README.md
 ```
 
-## Questions?
-
-Feel free to open an issue for:
-
-- Bug reports
-- Feature requests
-- Questions about the codebase
-- Discussion about architecture
-
 ## License
 
 By contributing, you agree that your contributions will be licensed under the MIT License.
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: better-mem0-mcp
-Version: 1.1.0b22
+Version: 1.1.3
 Summary: Zero-setup MCP Server for AI memory - works with Neon/Supabase
 Project-URL: Homepage, https://github.com/n24q02m/better-mem0-mcp
 Project-URL: Repository, https://github.com/n24q02m/better-mem0-mcp.git
@@ -17,7 +17,7 @@ Classifier: Operating System :: OS Independent
 Classifier: Programming Language :: Python :: 3
 Classifier: Programming Language :: Python :: 3.13
 Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
-Requires-Python: ==3.13.*
+Requires-Python: >=3.13
 Requires-Dist: google-genai>=1.0.0
 Requires-Dist: litellm>=1.0.0
 Requires-Dist: loguru>=0.7.0
@@ -29,28 +29,16 @@ Description-Content-Type: text/markdown
 
 # better-mem0-mcp
 
-**Self-hosted MCP Server for AI memory with PostgreSQL (pgvector).**
+**Zero-setup** MCP Server for AI memory. Works with Neon/Supabase free tier.
 
-[![PyPI](https://img.shields.io/pypi/v/better-mem0-mcp)](https://pypi.org/project/better-mem0-mcp/)
-[![Docker](https://img.shields.io/docker/v/n24q02m/better-mem0-mcp?label=docker)](https://hub.docker.com/r/n24q02m/better-mem0-mcp)
-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
-
-## Features
-
-- **Self-hosted PostgreSQL** - Your data stays with you (Neon/Supabase free tier supported)
-- **Graph Memory** - SQL-based relationship tracking alongside vector memory
-- **Multi-provider LLM** - Gemini, OpenAI, Anthropic, Groq, DeepSeek, Mistral
-- **Fallback chains** - Multi-key per provider + multi-model fallback
-- **Zero manual setup** - Just `DATABASE_URL` + `API_KEYS`
-
----
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
 ## Quick Start
 
 ### 1. Get Prerequisites
 
-- **Database**: [Neon](https://neon.tech) or [Supabase](https://supabase.com) (free tier works)
-- **API Key**: Any supported provider ([Google AI Studio](https://aistudio.google.com/apikey) is free)
+- **Database**: [Neon](https://neon.tech) or [Supabase](https://supabase.com) (free tier)
+- **API Key**: [Google AI Studio](https://aistudio.google.com/apikey) (free tier)
 
 ### 2. Add to mcp.json
 
@@ -64,7 +52,7 @@ Description-Content-Type: text/markdown
       "args": ["better-mem0-mcp@latest"],
       "env": {
         "DATABASE_URL": "postgresql://user:pass@xxx.neon.tech/neondb?sslmode=require",
-        "API_KEYS": "GOOGLE_API_KEY:AIza..."
+        "API_KEYS": "gemini:AIza..."
       }
     }
   }
@@ -81,7 +69,7 @@ Description-Content-Type: text/markdown
       "args": ["run", "-i", "--rm", "-e", "DATABASE_URL", "-e", "API_KEYS", "n24q02m/better-mem0-mcp:latest"],
       "env": {
         "DATABASE_URL": "postgresql://...",
-        "API_KEYS": "GOOGLE_API_KEY:AIza..."
+        "API_KEYS": "gemini:AIza..."
      }
    }
  }
@@ -90,7 +78,7 @@
 
 ### 3. Done!
 
-Ask your AI: "Remember that I prefer dark mode and use FastAPI"
+Ask Claude: "Remember that I prefer dark mode and use FastAPI"
 
 ---
 
@@ -98,25 +86,22 @@ Ask your AI: "Remember that I prefer dark mode and use FastAPI"
 
 | Variable | Required | Description |
 |----------|----------|-------------|
-| `DATABASE_URL` | Yes | PostgreSQL with pgvector extension |
-| `API_KEYS` | Yes | `ENV_VAR:key` pairs, comma-separated |
-| `LLM_MODELS` | No | Model fallback chain |
-| `EMBEDDER_MODELS` | No | Embedding model chain |
+| `DATABASE_URL` | Yes | PostgreSQL connection string |
+| `API_KEYS` | Yes | `provider:key,...` (multi-key per provider OK) |
+| `LLM_MODELS` | No | `provider/model,...` (fallback chain) |
+| `EMBEDDER_MODELS` | No | `provider/model,...` (fallback chain) |
 
-### Supported LiteLLM Providers
+### Examples
 
-Use environment variable names from [LiteLLM docs](https://docs.litellm.ai/):
-`GOOGLE_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GROQ_API_KEY`, etc.
-
-**Single provider:**
-```bash
-API_KEYS=GOOGLE_API_KEY:AIza...
+**Minimal (Gemini only):**
+```
+API_KEYS=gemini:AIza...
 ```
 
 **Multi-key with fallback:**
-```bash
-API_KEYS=GOOGLE_API_KEY:AIza-1,GOOGLE_API_KEY:AIza-2,OPENAI_API_KEY:sk-xxx
-LLM_MODELS=gemini/gemini-3-flash-preview,openai/gpt-4o-mini
+```
+API_KEYS=gemini:AIza-1,gemini:AIza-2,openai:sk-xxx
+LLM_MODELS=gemini/gemini-2.5-flash,openai/gpt-4o-mini
 EMBEDDER_MODELS=gemini/gemini-embedding-001,openai/text-embedding-3-small
 ```
 
@@ -124,7 +109,7 @@ EMBEDDER_MODELS=gemini/gemini-embedding-001,openai/text-embedding-3-small
 
 | Setting | Default |
 |---------|---------|
-| `LLM_MODELS` | `gemini/gemini-3-flash-preview` |
+| `LLM_MODELS` | `gemini/gemini-2.5-flash` |
 | `EMBEDDER_MODELS` | `gemini/gemini-embedding-001` |
 
 ---
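On the `LLM_MODELS` fallback chain documented above: the README promises that models are tried in order until one succeeds. A minimal conceptual sketch of that behavior, assuming the `litellm` dependency declared in PKG-INFO (an illustration only, not the package's actual implementation):

```python
import litellm


def complete_with_fallback(llm_models: str, prompt: str) -> str:
    """Try each `provider/model` entry in order until one succeeds."""
    last_error: Exception | None = None
    for model in llm_models.split(","):
        try:
            response = litellm.completion(
                model=model.strip(),
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as err:  # fall through to the next model in the chain
            last_error = err
    raise RuntimeError(f"All models in the chain failed: {last_error}")


# Mirrors LLM_MODELS=gemini/gemini-2.5-flash,openai/gpt-4o-mini
print(complete_with_fallback("gemini/gemini-2.5-flash,openai/gpt-4o-mini", "Hi"))
```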
@@ -133,41 +118,32 @@
 | Tool | Description |
 |------|-------------|
-| `memory` | Memory operations: `add`, `search`, `list`, `delete` |
-| `help` | Get full documentation for tools |
+| `memory` | `action`: add, search, list, delete |
+| `help` | Detailed documentation |
 
-### Usage Examples
+### Usage
 
 ```json
 {"action": "add", "content": "I prefer TypeScript over JavaScript"}
-{"action": "search", "query": "programming preferences"}
+{"action": "search", "query": "preferences"}
 {"action": "list"}
 {"action": "delete", "memory_id": "abc123"}
 ```
 
 ---
 
-## Build from Source
+## Why better-mem0-mcp?
 
-```bash
-git clone https://github.com/n24q02m/better-mem0-mcp
-cd better-mem0-mcp
-
-# Setup (requires mise: https://mise.jdx.dev/)
-mise run setup
-
-# Run
-uv run better-mem0-mcp
-```
-
-**Requirements:** Python 3.13+
+| Feature | Official mem0-mcp | better-mem0-mcp |
+|---------|-------------------|-----------------|
+| Storage | Mem0 Cloud | **Self-hosted PostgreSQL** |
+| Graph Memory | No | **Yes (SQL-based)** |
+| LLM Provider | OpenAI only | **Any (Gemini/OpenAI/Ollama/...)** |
+| Fallback | No | **Yes (multi-key + multi-model)** |
+| Setup | API Key | **DATABASE_URL + API_KEYS** |
 
 ---
 
-## Contributing
-
-See [CONTRIBUTING.md](CONTRIBUTING.md)
-
 ## License
 
-MIT - See [LICENSE](LICENSE)
+MIT
better_mem0_mcp-1.1.3/README.md
@@ -0,0 +1,120 @@
+# better-mem0-mcp
+
+**Zero-setup** MCP Server for AI memory. Works with Neon/Supabase free tier.
+
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+## Quick Start
+
+### 1. Get Prerequisites
+
+- **Database**: [Neon](https://neon.tech) or [Supabase](https://supabase.com) (free tier)
+- **API Key**: [Google AI Studio](https://aistudio.google.com/apikey) (free tier)
+
+### 2. Add to mcp.json
+
+#### uvx (Recommended)
+
+```json
+{
+  "mcpServers": {
+    "better-mem0": {
+      "command": "uvx",
+      "args": ["better-mem0-mcp@latest"],
+      "env": {
+        "DATABASE_URL": "postgresql://user:pass@xxx.neon.tech/neondb?sslmode=require",
+        "API_KEYS": "gemini:AIza..."
+      }
+    }
+  }
+}
+```
+
+#### Docker
+
+```json
+{
+  "mcpServers": {
+    "better-mem0": {
+      "command": "docker",
+      "args": ["run", "-i", "--rm", "-e", "DATABASE_URL", "-e", "API_KEYS", "n24q02m/better-mem0-mcp:latest"],
+      "env": {
+        "DATABASE_URL": "postgresql://...",
+        "API_KEYS": "gemini:AIza..."
+      }
+    }
+  }
+}
+```
+
+### 3. Done!
+
+Ask Claude: "Remember that I prefer dark mode and use FastAPI"
+
+---
+
+## Configuration
+
+| Variable | Required | Description |
+|----------|----------|-------------|
+| `DATABASE_URL` | Yes | PostgreSQL connection string |
+| `API_KEYS` | Yes | `provider:key,...` (multi-key per provider OK) |
+| `LLM_MODELS` | No | `provider/model,...` (fallback chain) |
+| `EMBEDDER_MODELS` | No | `provider/model,...` (fallback chain) |
+
+### Examples
+
+**Minimal (Gemini only):**
+```
+API_KEYS=gemini:AIza...
+```
+
+**Multi-key with fallback:**
+```
+API_KEYS=gemini:AIza-1,gemini:AIza-2,openai:sk-xxx
+LLM_MODELS=gemini/gemini-2.5-flash,openai/gpt-4o-mini
+EMBEDDER_MODELS=gemini/gemini-embedding-001,openai/text-embedding-3-small
+```
+
+### Defaults
+
+| Setting | Default |
+|---------|---------|
+| `LLM_MODELS` | `gemini/gemini-2.5-flash` |
+| `EMBEDDER_MODELS` | `gemini/gemini-embedding-001` |
+
+---
+
+## Tools
+
+| Tool | Description |
+|------|-------------|
+| `memory` | `action`: add, search, list, delete |
+| `help` | Detailed documentation |
+
+### Usage
+
+```json
+{"action": "add", "content": "I prefer TypeScript over JavaScript"}
+{"action": "search", "query": "preferences"}
+{"action": "list"}
+{"action": "delete", "memory_id": "abc123"}
+```
+
+---
+
+## Why better-mem0-mcp?
+
+| Feature | Official mem0-mcp | better-mem0-mcp |
+|---------|-------------------|-----------------|
+| Storage | Mem0 Cloud | **Self-hosted PostgreSQL** |
+| Graph Memory | No | **Yes (SQL-based)** |
+| LLM Provider | OpenAI only | **Any (Gemini/OpenAI/Ollama/...)** |
+| Fallback | No | **Yes (multi-key + multi-model)** |
+| Setup | API Key | **DATABASE_URL + API_KEYS** |
+
+---
+
+## License
+
+MIT
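The README configures the server only through `mcp.json`. As a complement, here is a hedged sketch of exercising the `memory` tool from the official `mcp` Python client over stdio; the tool name and arguments come from the Tools/Usage sections above, and the launch parameters mirror the uvx config. Treat it as an illustration under those assumptions.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server the same way the uvx config above does.
    params = StdioServerParameters(
        command="uvx",
        args=["better-mem0-mcp@latest"],
        env={
            "DATABASE_URL": "postgresql://...",  # your Neon/Supabase URL
            "API_KEYS": "gemini:AIza...",
        },
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the `memory` tool with an "add" action from the Usage table.
            result = await session.call_tool(
                "memory",
                {"action": "add", "content": "I prefer dark mode and use FastAPI"},
            )
            print(result.content)


asyncio.run(main())
```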
better_mem0_mcp-1.1.3/SECURITY.md
@@ -0,0 +1,27 @@
+# Security Policy
+
+## Supported Versions
+
+| Version | Supported |
+| ------- | ------------------ |
+| 0.x.x | :white_check_mark: |
+
+## Reporting a Vulnerability
+
+If you discover a security vulnerability, please report it by:
+
+1. **Do NOT** create a public GitHub issue
+2. Email the maintainer directly or use GitHub's private vulnerability reporting
+3. Include detailed steps to reproduce the issue
+4. Allow reasonable time for a fix before public disclosure
+
+We take security seriously and will respond promptly to valid reports.
+
+## Security Best Practices
+
+When using better-mem0-mcp:
+
+- **Never commit API keys** to version control
+- Use environment variables or secure secret management
+- Keep dependencies updated
+- Use `sslmode=require` for production database connections
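To make the last two recommendations concrete: load the connection string from the environment rather than from source, and fail fast when TLS is not requested. `psycopg` is assumed here purely for illustration; it is not pinned by this diff.

```python
import os

import psycopg  # assumed driver, for illustration only

# Secrets come from the environment (or a secret manager), never from code.
database_url = os.environ["DATABASE_URL"]

# Enforce the SECURITY.md advice: require TLS for production connections.
if "sslmode=require" not in database_url:
    raise RuntimeError("refusing to connect without sslmode=require")

with psycopg.connect(database_url) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())
```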
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "better-mem0-mcp"
-version = "1.1.0-beta.22"
+version = "1.1.3"
 description = "Zero-setup MCP Server for AI memory - works with Neon/Supabase"
 readme = "README.md"
 license = { text = "MIT" }
@@ -16,7 +16,7 @@ classifiers = [
     "Programming Language :: Python :: 3.13",
     "Topic :: Scientific/Engineering :: Artificial Intelligence",
 ]
-requires-python = "==3.13.*"
+requires-python = ">=3.13"
 dependencies = [
     # MCP Server
     "mcp[cli]>=1.0.0",
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/config.py
@@ -33,36 +33,43 @@ class Settings(BaseSettings):
     embedder_models: str = "gemini/gemini-embedding-001"
 
     def setup_api_keys(self) -> dict[str, list[str]]:
-        """Parse API_KEYS (format: ENV_VAR:key,...) and set env vars.
-
-        Example:
-            API_KEYS="GOOGLE_API_KEY:abc,GOOGLE_API_KEY:def,OPENAI_API_KEY:xyz"
+        """
+        Parse API_KEYS and set environment variables for LiteLLM.
 
         Returns:
-            Dict mapping env var name to list of API keys.
+            Dict mapping provider to list of API keys.
         """
-        keys_by_env: dict[str, list[str]] = {}
+        env_map = {
+            "gemini": "GOOGLE_API_KEY",
+            "openai": "OPENAI_API_KEY",
+            "anthropic": "ANTHROPIC_API_KEY",
+            "groq": "GROQ_API_KEY",
+            "deepseek": "DEEPSEEK_API_KEY",
+            "mistral": "MISTRAL_API_KEY",
+        }
+
+        keys_by_provider: dict[str, list[str]] = {}
 
         for pair in self.api_keys.split(","):
             pair = pair.strip()
             if ":" not in pair:
                 continue
 
-            env_var, key = pair.split(":", 1)
-            env_var = env_var.strip()
+            provider, key = pair.split(":", 1)
+            provider = provider.strip().lower()
             key = key.strip()
 
             if not key:
                 continue
 
-            keys_by_env.setdefault(env_var, []).append(key)
+            keys_by_provider.setdefault(provider, []).append(key)
 
-        # Set first key of each env var (LiteLLM reads from env)
-        for env_var, keys in keys_by_env.items():
-            if keys:
-                os.environ[env_var] = keys[0]
+        # Set first key of each provider as env var (LiteLLM reads from env)
+        for provider, keys in keys_by_provider.items():
+            if provider in env_map and keys:
+                os.environ[env_map[provider]] = keys[0]
 
-        return keys_by_env
+        return keys_by_provider
 
     def parse_database_url(self) -> dict:
         """Parse DATABASE_URL into connection parameters."""
{better_mem0_mcp-1.1.0b22 → better_mem0_mcp-1.1.3}/src/better_mem0_mcp/docs/memory.md
@@ -17,8 +17,6 @@ Save information to long-term memory. Mem0 automatically:
 {"action": "add", "content": "User prefers dark mode and uses FastAPI"}
 ```
 
-**Response:** `Saved: {results: [{id, memory, event}]}`
-
 ### search
 Semantic search across stored memories. Combines vector search with graph context.
 
@@ -26,8 +24,6 @@ Semantic search across stored memories. Combines vector search with graph contex
 {"action": "search", "query": "coding preferences", "limit": 5}
 ```
 
-**Response:** Formatted list of matching memories with optional graph context.
-
 ### list
 Get all stored memories for a user.
 
@@ -35,8 +31,6 @@ Get all stored memories for a user.
 {"action": "list"}
 ```
 
-**Response:** `Memories (N): - [id] memory text`
-
 ### delete
 Remove a memory by ID.
 
@@ -44,17 +38,10 @@ Remove a memory by ID.
 {"action": "delete", "memory_id": "abc12345-..."}
 ```
 
-**Response:** `Deleted: {memory_id}`
-
 ## Parameters
 - `action` - Required: add, search, list, delete
 - `content` - Required for add: information to remember
 - `query` - Required for search: what to search for
 - `memory_id` - Required for delete: ID of memory to remove
 - `limit` - Optional for search: max results (default: 5)
-- `user_id` - Optional: scope memories to a specific user (default: "default")
-
-## Technical Notes
-- Mem0 API returns `{"results": [...]}` for both `search()` and `get_all()`
-- Graph context is automatically included in search results when available
-- Embeddings use 1536 dimensions for pgvector HNSW compatibility
+- `user_id` - Optional: scope memories to a specific user
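A closing note on the parameter list kept by this hunk: the optional fields compose with the required ones. A hedged example of a single tool call combining `limit` and `user_id` on a search — the field names come from the list above; the values are invented:

```json
{"action": "search", "query": "coding preferences", "limit": 3, "user_id": "alice"}
```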