mem-llm 1.2.0-py3-none-any.whl → 1.3.1-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

This version of mem-llm has been flagged as potentially problematic.

mem_llm-1.2.0.dist-info/METADATA → mem_llm-1.3.1.dist-info/METADATA

@@ -1,13 +1,13 @@
  Metadata-Version: 2.2
  Name: mem-llm
- Version: 1.2.0
- Summary: Memory-enabled AI assistant with local LLM support - Now with data import/export and multi-database support
+ Version: 1.3.1
+ Summary: Memory-enabled AI assistant with multi-backend LLM support (Ollama, LM Studio, Gemini) - Local and cloud ready
  Author-email: "C. Emre Karataş" <karatasqemre@gmail.com>
  License: MIT
  Project-URL: Homepage, https://github.com/emredeveloper/Mem-LLM
  Project-URL: Bug Reports, https://github.com/emredeveloper/Mem-LLM/issues
  Project-URL: Source, https://github.com/emredeveloper/Mem-LLM
- Keywords: llm,ai,memory,agent,chatbot,ollama,local
+ Keywords: llm,ai,memory,agent,chatbot,ollama,lmstudio,gemini,multi-backend,local
  Classifier: Development Status :: 4 - Beta
  Classifier: Intended Audience :: Developers
  Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
@@ -22,6 +22,7 @@ Description-Content-Type: text/markdown
  Requires-Dist: requests>=2.31.0
  Requires-Dist: pyyaml>=6.0.1
  Requires-Dist: click>=8.1.0
+ Requires-Dist: google-generativeai>=0.3.0
  Provides-Extra: dev
  Requires-Dist: pytest>=7.4.0; extra == "dev"
  Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
@@ -58,9 +59,9 @@ Requires-Dist: pymongo>=4.6.0; extra == "all"
  [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- **Memory-enabled AI assistant with local LLM support**
+ **Memory-enabled AI assistant with multi-backend LLM support (Ollama, LM Studio, Gemini)**

- Mem-LLM is a powerful Python library that brings persistent memory capabilities to local Large Language Models. Build AI assistants that remember user interactions, manage knowledge bases, and work completely offline with Ollama.
+ Mem-LLM is a powerful Python library that brings persistent memory capabilities to Large Language Models. Build AI assistants that remember user interactions, manage knowledge bases, and choose between local (Ollama, LM Studio) and cloud (Gemini) backends.

  ## 🔗 Links

@@ -69,29 +70,31 @@ Mem-LLM is a powerful Python library that brings persistent memory capabilities
  - **Issues**: https://github.com/emredeveloper/Mem-LLM/issues
  - **Documentation**: See examples/ directory

- ## 🆕 What's New in v1.2.0
+ ## 🆕 What's New in v1.3.0

- - **Conversation Summarization**: Automatic conversation compression (~40-60% token reduction)
- - 📤 **Data Export/Import**: JSON, CSV, SQLite, PostgreSQL, MongoDB support
- - 🗄️ **Multi-Database**: Enterprise-ready PostgreSQL & MongoDB integration
- - 💾 **In-Memory DB**: Use `:memory:` for temporary operations
- - **Cleaner Logs**: Default WARNING level for production-ready output
- - **Bug Fixes**: Database path handling, organized SQLite files
+ - 🔌 **Multi-Backend Support**: Choose between Ollama (local), LM Studio (local), or Google Gemini (cloud)
+ - 🏗️ **Factory Pattern**: Clean, extensible architecture for easy backend switching
+ - 🔍 **Auto-Detection**: Automatically finds and uses available local LLM services
+ - **Unified API**: The same code works across all backends - just change one parameter
+ - 📚 **New Examples**: 4 additional examples showing multi-backend usage
+ - 🎯 **Backward Compatible**: All v1.2.0 code still works without changes

- [See full changelog](CHANGELOG.md#120---2025-10-21)
+ [See full changelog](CHANGELOG.md#130---2025-10-31)

  ## ✨ Key Features

+ - 🔌 **Multi-Backend Support** (v1.3.0+) - Choose Ollama, LM Studio, or Gemini with a unified API
+ - 🔍 **Auto-Detection** (v1.3.0+) - Automatically find and use available LLM services
  - 🧠 **Persistent Memory** - Remembers conversations across sessions
- - 🤖 **Universal Ollama Support** - Works with ALL Ollama models (Qwen3, DeepSeek, Llama3, Granite, etc.)
+ - 🤖 **Universal Model Support** - Works with 100+ Ollama models, LM Studio models, and Gemini
  - 💾 **Dual Storage Modes** - JSON (simple) or SQLite (advanced) memory backends
  - 📚 **Knowledge Base** - Built-in FAQ/support system with categorized entries
  - 🎯 **Dynamic Prompts** - Context-aware system prompts that adapt to active features
  - 👥 **Multi-User Support** - Separate memory spaces for different users (see the sketch after this list)
  - 🔧 **Memory Tools** - Search, export, and manage stored memories
  - 🎨 **Flexible Configuration** - Personal or business usage modes
- - 📊 **Production Ready** - Comprehensive test suite with 34+ automated tests
- - 🔒 **100% Local & Private** - No cloud dependencies, your data stays yours
+ - 📊 **Production Ready** - Comprehensive test suite with 50+ automated tests
+ - 🔒 **Privacy Options** - 100% local (Ollama/LM Studio) or cloud (Gemini)
  - 🛡️ **Prompt Injection Protection** (v1.1.0+) - Advanced security against prompt attacks (opt-in)
  - ⚡ **High Performance** (v1.1.0+) - Thread-safe operations, 15K+ msg/s throughput
  - 🔄 **Retry Logic** (v1.1.0+) - Automatic exponential backoff for network errors
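
To make the multi-user and memory-tools items above concrete, here is a minimal sketch built only from calls that appear elsewhere in this README (`set_user`, `chat`, `get_memory_stats`). It assumes a running Ollama instance with `granite4:tiny-h` pulled:

```python
from mem_llm import MemAgent

agent = MemAgent(model="granite4:tiny-h")  # Ollama backend by default

# Separate memory spaces per user
agent.set_user("alice")
agent.chat("I prefer dark mode.")

agent.set_user("bob")
agent.chat("I prefer light mode.")

# Inspect what has been stored for the active user
agent.set_user("alice")
print(agent.get_memory_stats())
```
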
@@ -129,8 +132,9 @@ pip install -U mem-llm

  ### Prerequisites

- Install and start [Ollama](https://ollama.ai):
+ **Choose one of the following LLM backends:**

+ #### Option 1: Ollama (Local, Privacy-First)
  ```bash
  # Install Ollama (visit https://ollama.ai)
  # Then pull a model
@@ -140,15 +144,38 @@ ollama pull granite4:tiny-h
  ollama serve
  ```

+ #### Option 2: LM Studio (Local, GUI-Based)
+ ```bash
+ # 1. Download and install LM Studio: https://lmstudio.ai
+ # 2. Download a model from the UI
+ # 3. Start the local server (default port: 1234)
+ ```
+
+ #### Option 3: Google Gemini (Cloud, Powerful)
+ ```bash
+ # Get an API key from https://makersuite.google.com/app/apikey
+ # Set the environment variable
+ export GEMINI_API_KEY="your-api-key-here"
+ ```
+
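
With the key exported as above, it can be read from the environment instead of being hard-coded. A small sketch (whether `MemAgent` also picks up `GEMINI_API_KEY` on its own is not stated here, so the key is passed explicitly):

```python
import os

from mem_llm import MemAgent

# Pass the key from the environment rather than embedding it in source
agent = MemAgent(
    backend="gemini",
    model="gemini-2.5-flash",
    api_key=os.environ["GEMINI_API_KEY"],
)
```
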
  ### Basic Usage

  ```python
  from mem_llm import MemAgent

- # Create an agent
+ # Option 1: Use Ollama (default)
  agent = MemAgent(model="granite4:tiny-h")

- # Set user and chat
+ # Option 2: Use LM Studio
+ agent = MemAgent(backend='lmstudio', model='local-model')
+
+ # Option 3: Use Gemini
+ agent = MemAgent(backend='gemini', model='gemini-2.5-flash', api_key='your-key')
+
+ # Option 4: Auto-detect available backend
+ agent = MemAgent(auto_detect_backend=True)
+
+ # Set user and chat (same for all backends!)
  agent.set_user("alice")
  response = agent.chat("My name is Alice and I love Python!")
  print(response)
@@ -158,10 +185,34 @@ response = agent.chat("What's my name and what do I love?")
  print(response)  # Agent remembers: "Your name is Alice and you love Python!"
  ```

- That's it! Just 5 lines of code to get started.
+ That's it! Just 5 lines of code to get started with any backend.

  ## 📖 Usage Examples

+ ### Multi-Backend Examples (v1.3.0+)
+
+ ```python
+ from mem_llm import MemAgent
+
+ # LM Studio - Fast local inference
+ agent = MemAgent(
+     backend='lmstudio',
+     model='local-model',
+     base_url='http://localhost:1234'
+ )
+
+ # Google Gemini - Cloud power
+ agent = MemAgent(
+     backend='gemini',
+     model='gemini-2.5-flash',
+     api_key='your-api-key'
+ )
+
+ # Auto-detect - Universal compatibility
+ agent = MemAgent(auto_detect_backend=True)
+ print(f"Using: {agent.llm.get_backend_info()['name']}")
+ ```
+
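
A natural companion to auto-detection is a fallback path for when no local service is running. The exception raised in that case is not documented in this excerpt, so the broad `except` below is an assumption; treat this as a sketch:

```python
import os

from mem_llm import MemAgent

try:
    # Probe for a local backend (Ollama or LM Studio)
    agent = MemAgent(auto_detect_backend=True)
except Exception:
    # Fall back to the cloud backend when nothing local is reachable
    agent = MemAgent(
        backend="gemini",
        model="gemini-2.5-flash",
        api_key=os.environ["GEMINI_API_KEY"],
    )
```
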
  ### Multi-User Conversations

  ```python
@@ -378,16 +429,21 @@ Mem-LLM works with **ALL Ollama models**, including:
  ```
  mem-llm/
  ├── mem_llm/
- │   ├── mem_agent.py          # Main agent class
- │   ├── memory_manager.py     # JSON memory backend
- │   ├── memory_db.py          # SQL memory backend
- │   ├── llm_client.py         # Ollama API client
- │   ├── knowledge_loader.py   # Knowledge base system
- │   ├── dynamic_prompt.py     # Context-aware prompts
- │   ├── memory_tools.py       # Memory management tools
- │   ├── config_manager.py     # Configuration handler
- │   └── cli.py                # Command-line interface
- └── examples/                 # Usage examples
+ │   ├── mem_agent.py          # Main agent class (multi-backend)
+ │   ├── base_llm_client.py    # Abstract LLM interface
+ │   ├── llm_client_factory.py # Backend factory pattern
+ │   ├── clients/              # LLM backend implementations
+ │   │   ├── ollama_client.py      # Ollama integration
+ │   │   ├── lmstudio_client.py    # LM Studio integration
+ │   │   └── gemini_client.py      # Google Gemini integration
+ │   ├── memory_manager.py     # JSON memory backend
+ │   ├── memory_db.py          # SQL memory backend
+ │   ├── knowledge_loader.py   # Knowledge base system
+ │   ├── dynamic_prompt.py     # Context-aware prompts
+ │   ├── memory_tools.py       # Memory management tools
+ │   ├── config_manager.py     # Configuration handler
+ │   └── cli.py                # Command-line interface
+ └── examples/                 # Usage examples (14 total)
  ```

  ## 🔥 Advanced Features
@@ -429,10 +485,12 @@ stats = agent.get_memory_stats()
  ## 📦 Project Structure

  ### Core Components
- - **MemAgent**: Main interface for building AI assistants
+ - **MemAgent**: Main interface for building AI assistants (multi-backend support)
+ - **LLMClientFactory**: Factory pattern for backend creation (illustrated below)
+ - **BaseLLMClient**: Abstract interface for all LLM backends
+ - **OllamaClient / LMStudioClient / GeminiClient**: Backend implementations
  - **MemoryManager**: JSON-based memory storage (simple)
  - **SQLMemoryManager**: SQLite-based storage (advanced)
- - **OllamaClient**: LLM communication handler
  - **KnowledgeLoader**: Knowledge base management
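
A short sketch of the factory/abstract-client arrangement described above. This illustrates the pattern only, not the package's actual source; the `generate` method name and the registry dict are assumptions:

```python
from abc import ABC, abstractmethod

class BaseLLMClient(ABC):
    """Abstract interface every backend implements (illustrative)."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class OllamaClient(BaseLLMClient):
    def generate(self, prompt: str) -> str:
        return f"[ollama] {prompt}"  # a real client would call the HTTP API

class LMStudioClient(BaseLLMClient):
    def generate(self, prompt: str) -> str:
        return f"[lmstudio] {prompt}"

class LLMClientFactory:
    """Maps a backend name to a client class, keeping the agent backend-agnostic."""

    _backends = {"ollama": OllamaClient, "lmstudio": LMStudioClient}

    @classmethod
    def create(cls, backend: str, **kwargs) -> BaseLLMClient:
        try:
            return cls._backends[backend](**kwargs)
        except KeyError:
            raise ValueError(f"Unknown backend: {backend}") from None
```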
 
  ### Optional Features
@@ -456,14 +514,19 @@ The `examples/` directory contains ready-to-run demonstrations:
  8. **08_conversation_summarization.py** - Token compression with auto-summary (v1.2.0+)
  9. **09_data_export_import.py** - Multi-format export/import demo (v1.2.0+)
  10. **10_database_connection_test.py** - Enterprise PostgreSQL/MongoDB migration (v1.2.0+)
+ 11. **11_lmstudio_example.py** - Using the LM Studio backend (v1.3.0+)
+ 12. **12_gemini_example.py** - Using the Google Gemini API (v1.3.0+)
+ 13. **13_multi_backend_comparison.py** - Compare different backends (v1.3.0+, sketched below)
+ 14. **14_auto_detect_backend.py** - Auto-detection feature demo (v1.3.0+)
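
A rough shape of what the comparison example might look like, built from the constructor arguments shown earlier. That `backend='ollama'` is an accepted value is an assumption (only the LM Studio and Gemini values appear explicitly above), and both local services must be running with the named models available:

```python
from mem_llm import MemAgent

PROMPT = "In one sentence, what is persistent memory for an LLM agent?"

# Ask each backend the same question and print the answers side by side
for backend, model in [("ollama", "granite4:tiny-h"), ("lmstudio", "local-model")]:
    agent = MemAgent(backend=backend, model=model)
    agent.set_user("demo")
    print(f"{backend}: {agent.chat(PROMPT)}")
```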
 
  ## 📊 Project Status

- - **Version**: 1.2.0
+ - **Version**: 1.3.0
  - **Status**: Production Ready
- - **Last Updated**: October 21, 2025
- - **Test Coverage**: 16/16 automated tests (100% success rate)
+ - **Last Updated**: October 31, 2025
+ - **Test Coverage**: 50+ automated tests (100% success rate)
  - **Performance**: Thread-safe operations, <1ms search latency
+ - **Backends**: Ollama, LM Studio, Google Gemini
  - **Databases**: SQLite, PostgreSQL, MongoDB, In-Memory

  ## 📈 Roadmap

@@ -475,10 +538,14 @@ The `examples/` directory contains ready-to-run demonstrations:
  - [x] ~~Conversation Summarization~~ (v1.2.0)
  - [x] ~~Multi-Database Export/Import~~ (v1.2.0)
  - [x] ~~In-Memory Database~~ (v1.2.0)
+ - [x] ~~Multi-Backend Support (Ollama, LM Studio, Gemini)~~ (v1.3.0)
+ - [x] ~~Auto-Detection~~ (v1.3.0)
+ - [x] ~~Factory Pattern Architecture~~ (v1.3.0)
+ - [ ] OpenAI & Claude backends
+ - [ ] Streaming support
  - [ ] Web UI dashboard
  - [ ] REST API server
  - [ ] Vector database integration
- - [ ] Advanced analytics dashboard

  ## 📄 License

mem_llm-1.2.0.dist-info/RECORD → mem_llm-1.3.1.dist-info/RECORD

@@ -1,4 +1,5 @@
- mem_llm/__init__.py,sha256=0ZWXpX9U5-gen1seDNqO8nFz3_D1bWZgO9EwerCiF64,2200
+ mem_llm/__init__.py,sha256=e65UMPGqGj69SwDeUkvVE_W4KijKAtI8Jxfn_40Xuc4,2636
+ mem_llm/base_llm_client.py,sha256=aCpr8ZnvOsu-a-zp9quTDP42XvjAC1uci6r11s0QdVA,5218
  mem_llm/cli.py,sha256=DiqQyBZknN8pVagY5jXH85_LZ6odVGopfpa-7DILNNE,8666
  mem_llm/config.yaml.example,sha256=lgmfaU5pxnIm4zYxwgCcgLSohNx1Jw6oh3Qk0Xoe2DE,917
  mem_llm/config_from_docs.py,sha256=YFhq1SWyK63C-TNMS73ncNHg8sJ-XGOf2idWVCjxFco,4974
@@ -8,16 +9,21 @@ mem_llm/data_export_import.py,sha256=gQIdD0hBY23qcRvx139yE15RWHXPinL_EoRNY7iabj0
  mem_llm/dynamic_prompt.py,sha256=8H99QVDRJSVtGb_o4sdEPnG1cJWuer3KiD-nuL1srTA,10244
  mem_llm/knowledge_loader.py,sha256=oSNhfYYcx7DlZLVogxnbSwaIydq_Q3__RDJFeZR2XVw,2699
  mem_llm/llm_client.py,sha256=3F04nlnRWRlhkQ3aZO-OfsxeajB2gwbIDfClu04cyb0,8709
+ mem_llm/llm_client_factory.py,sha256=jite-4CkgFBd9e0b2cIaZzP-zTqA7tjNqXnJ5CQgcbs,9325
  mem_llm/logger.py,sha256=dZUmhGgFXtDsDBU_D4kZlJeMp6k-VNPaBcyTt7rZYKE,4507
- mem_llm/mem_agent.py,sha256=1HFg-cmDe2D4y-jY--XSJiuyEkReDph6fbCLrybvt_I,33246
+ mem_llm/mem_agent.py,sha256=Y4qCHNtdPlOJssQLG1GJdy02FsztYe9sjnbh54qAWWU,37221
  mem_llm/memory_db.py,sha256=4HbxgfhPrijbBKsEv4ncmjZeK-RhtLkyWBrg-quCsNE,14715
  mem_llm/memory_manager.py,sha256=CZI3A8pFboHQIgeiXB1h2gZK7mgfbVSU3IxuqE-zXtc,9978
  mem_llm/memory_tools.py,sha256=ARANFqu_bmL56SlV1RzTjfQsJj-Qe2QvqY0pF92hDxU,8678
  mem_llm/prompt_security.py,sha256=ehAi6aLiXj0gFFhpyjwEr8LentSTJwOQDLbINV7SaVM,9960
  mem_llm/retry_handler.py,sha256=z5ZcSQKbvVeNK7plagTLorvOeoYgRpQcsX3PpNqUjKM,6389
  mem_llm/thread_safe_db.py,sha256=Fq-wSn4ua1qiR6M4ZTIy7UT1IlFj5xODNExgub1blbU,10328
- mem_llm-1.2.0.dist-info/METADATA,sha256=63n22mzPVN414NOxGthMxHkY8YqAhP1_DFdMlcno80w,15442
- mem_llm-1.2.0.dist-info/WHEEL,sha256=beeZ86-EfXScwlR_HKu4SllMC9wUEj_8Z_4FJ3egI2w,91
- mem_llm-1.2.0.dist-info/entry_points.txt,sha256=z9bg6xgNroIobvCMtnSXeFPc-vI1nMen8gejHCdnl0U,45
- mem_llm-1.2.0.dist-info/top_level.txt,sha256=_fU1ML-0JwkaxWdhqpwtmTNaJEOvDMQeJdA8d5WqDn8,8
- mem_llm-1.2.0.dist-info/RECORD,,
+ mem_llm/clients/__init__.py,sha256=Nvr4NuL9ZlDF_dUjr-ZMFxRRrBdHoUOjqncZs3n5Wow,475
+ mem_llm/clients/gemini_client.py,sha256=dmRZRd8f-x6J2W7luzcB1BOx_4UpXpCF4YiPGUccWCw,14432
+ mem_llm/clients/lmstudio_client.py,sha256=IxUX3sVRfXN46hfEUTCrspGTOeqsn4YAu9WzFuGh940,10156
+ mem_llm/clients/ollama_client.py,sha256=2BfYSBiOowhFg9UiCXkILlBG9_4Vri3-Iny_gH6-um0,9710
+ mem_llm-1.3.1.dist-info/METADATA,sha256=d6sfUEBBg7Ir7y2495WAg6rqrKO9how5d3AgD-1NxU0,18217
+ mem_llm-1.3.1.dist-info/WHEEL,sha256=beeZ86-EfXScwlR_HKu4SllMC9wUEj_8Z_4FJ3egI2w,91
+ mem_llm-1.3.1.dist-info/entry_points.txt,sha256=z9bg6xgNroIobvCMtnSXeFPc-vI1nMen8gejHCdnl0U,45
+ mem_llm-1.3.1.dist-info/top_level.txt,sha256=_fU1ML-0JwkaxWdhqpwtmTNaJEOvDMQeJdA8d5WqDn8,8
+ mem_llm-1.3.1.dist-info/RECORD,,