mem-llm 1.3.0-py3-none-any.whl → 1.3.1-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of mem-llm might be problematic.

mem_llm/__init__.py CHANGED
@@ -63,7 +63,7 @@ try:
  except ImportError:
      __all_export_import__ = []

- __version__ = "1.3.0"
+ __version__ = "1.3.1"
  __author__ = "C. Emre Karataş"

  # Multi-backend LLM support (v1.3.0+)
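
The only functional change in `mem_llm/__init__.py` is the version bump shown above; a one-liner to confirm the installed version (using nothing beyond the `__version__` attribute visible in this hunk) is:

```python
# Print the package version string that the hunk above bumps to 1.3.1.
import mem_llm

print(mem_llm.__version__)  # expected: "1.3.1" once the new wheel is installed
```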

mem_llm-1.3.0.dist-info/METADATA → mem_llm-1.3.1.dist-info/METADATA RENAMED
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: mem-llm
- Version: 1.3.0
+ Version: 1.3.1
  Summary: Memory-enabled AI assistant with multi-backend LLM support (Ollama, LM Studio, Gemini) - Local and cloud ready
  Author-email: "C. Emre Karataş" <karatasqemre@gmail.com>
  License: MIT
@@ -59,9 +59,9 @@ Requires-Dist: pymongo>=4.6.0; extra == "all"
  [![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- **Memory-enabled AI assistant with local LLM support**
+ **Memory-enabled AI assistant with multi-backend LLM support (Ollama, LM Studio, Gemini)**

- Mem-LLM is a powerful Python library that brings persistent memory capabilities to local Large Language Models. Build AI assistants that remember user interactions, manage knowledge bases, and work completely offline with Ollama.
+ Mem-LLM is a powerful Python library that brings persistent memory capabilities to Large Language Models. Build AI assistants that remember user interactions, manage knowledge bases, and choose between local (Ollama, LM Studio) or cloud (Gemini) backends.

  ## 🔗 Links

@@ -70,29 +70,31 @@ Mem-LLM is a powerful Python library that brings persistent memory capabilities
  - **Issues**: https://github.com/emredeveloper/Mem-LLM/issues
  - **Documentation**: See examples/ directory

- ## 🆕 What's New in v1.2.0
+ ## 🆕 What's New in v1.3.0

- - **Conversation Summarization**: Automatic conversation compression (~40-60% token reduction)
- - 📤 **Data Export/Import**: JSON, CSV, SQLite, PostgreSQL, MongoDB support
- - 🗄️ **Multi-Database**: Enterprise-ready PostgreSQL & MongoDB integration
- - �️ **In-Memory DB**: Use `:memory:` for temporary operations
- - **Cleaner Logs**: Default WARNING level for production-ready output
- - **Bug Fixes**: Database path handling, organized SQLite files
+ - 🔌 **Multi-Backend Support**: Choose between Ollama (local), LM Studio (local), or Google Gemini (cloud)
+ - 🏗️ **Factory Pattern**: Clean, extensible architecture for easy backend switching
+ - 🔍 **Auto-Detection**: Automatically finds and uses available local LLM services
+ - **Unified API**: Same code works across all backends - just change one parameter
+ - 📚 **New Examples**: 4 additional examples showing multi-backend usage
+ - 🎯 **Backward Compatible**: All v1.2.0 code still works without changes

- [See full changelog](CHANGELOG.md#120---2025-10-21)
+ [See full changelog](CHANGELOG.md#130---2025-10-31)

  ## ✨ Key Features

+ - 🔌 **Multi-Backend Support** (v1.3.0+) - Choose Ollama, LM Studio, or Gemini with unified API
+ - 🔍 **Auto-Detection** (v1.3.0+) - Automatically find and use available LLM services
  - 🧠 **Persistent Memory** - Remembers conversations across sessions
- - 🤖 **Universal Ollama Support** - Works with ALL Ollama models (Qwen3, DeepSeek, Llama3, Granite, etc.)
+ - 🤖 **Universal Model Support** - Works with 100+ Ollama models, LM Studio models, and Gemini
  - 💾 **Dual Storage Modes** - JSON (simple) or SQLite (advanced) memory backends
  - 📚 **Knowledge Base** - Built-in FAQ/support system with categorized entries
  - 🎯 **Dynamic Prompts** - Context-aware system prompts that adapt to active features
  - 👥 **Multi-User Support** - Separate memory spaces for different users
  - 🔧 **Memory Tools** - Search, export, and manage stored memories
  - 🎨 **Flexible Configuration** - Personal or business usage modes
- - 📊 **Production Ready** - Comprehensive test suite with 34+ automated tests
- - 🔒 **100% Local & Private** - No cloud dependencies, your data stays yours
+ - 📊 **Production Ready** - Comprehensive test suite with 50+ automated tests
+ - 🔒 **Privacy Options** - 100% local (Ollama/LM Studio) or cloud (Gemini)
  - 🛡️ **Prompt Injection Protection** (v1.1.0+) - Advanced security against prompt attacks (opt-in)
  - ⚡ **High Performance** (v1.1.0+) - Thread-safe operations, 15K+ msg/s throughput
  - 🔄 **Retry Logic** (v1.1.0+) - Automatic exponential backoff for network errors
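
The auto-detection bullets above only name the feature. As a rough sketch of how such detection could work (not the package's actual implementation): probe the documented default local ports, Ollama on 11434 and LM Studio on 1234. The endpoint paths and the `detect_local_backend` helper are assumptions made for this illustration.

```python
# Illustrative sketch of local-backend auto-detection: probe well-known default
# ports and return the first service that responds. Endpoint paths are assumed.
from typing import Optional

import requests

CANDIDATES = [
    ("ollama", "http://localhost:11434/api/tags"),    # Ollama default port
    ("lmstudio", "http://localhost:1234/v1/models"),  # LM Studio default port
]

def detect_local_backend(timeout: float = 0.5) -> Optional[str]:
    """Return the name of the first reachable local LLM service, else None."""
    for name, probe_url in CANDIDATES:
        try:
            if requests.get(probe_url, timeout=timeout).ok:
                return name
        except requests.RequestException:
            continue
    return None

print(detect_local_backend() or "no local backend found")
```

The packaged `auto_detect_backend=True` option presumably wraps logic of this general shape, though the actual probing strategy may differ.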
@@ -130,8 +132,9 @@ pip install -U mem-llm

  ### Prerequisites

- Install and start [Ollama](https://ollama.ai):
+ **Choose one of the following LLM backends:**

+ #### Option 1: Ollama (Local, Privacy-First)
  ```bash
  # Install Ollama (visit https://ollama.ai)
  # Then pull a model
@@ -141,15 +144,38 @@ ollama pull granite4:tiny-h
  ollama serve
  ```

+ #### Option 2: LM Studio (Local, GUI-Based)
+ ```bash
+ # 1. Download and install LM Studio: https://lmstudio.ai
+ # 2. Download a model from the UI
+ # 3. Start the local server (default port: 1234)
+ ```
+
+ #### Option 3: Google Gemini (Cloud, Powerful)
+ ```bash
+ # Get API key from: https://makersuite.google.com/app/apikey
+ # Set environment variable
+ export GEMINI_API_KEY="your-api-key-here"
+ ```
+
  ### Basic Usage

  ```python
  from mem_llm import MemAgent

- # Create an agent
+ # Option 1: Use Ollama (default)
  agent = MemAgent(model="granite4:tiny-h")

- # Set user and chat
+ # Option 2: Use LM Studio
+ agent = MemAgent(backend='lmstudio', model='local-model')
+
+ # Option 3: Use Gemini
+ agent = MemAgent(backend='gemini', model='gemini-2.5-flash', api_key='your-key')
+
+ # Option 4: Auto-detect available backend
+ agent = MemAgent(auto_detect_backend=True)
+
+ # Set user and chat (same for all backends!)
  agent.set_user("alice")
  response = agent.chat("My name is Alice and I love Python!")
  print(response)
@@ -159,10 +185,34 @@ response = agent.chat("What's my name and what do I love?")
  print(response)  # Agent remembers: "Your name is Alice and you love Python!"
  ```

- That's it! Just 5 lines of code to get started.
+ That's it! Just 5 lines of code to get started with any backend.

  ## 📖 Usage Examples

+ ### Multi-Backend Examples (v1.3.0+)
+
+ ```python
+ from mem_llm import MemAgent
+
+ # LM Studio - Fast local inference
+ agent = MemAgent(
+     backend='lmstudio',
+     model='local-model',
+     base_url='http://localhost:1234'
+ )
+
+ # Google Gemini - Cloud power
+ agent = MemAgent(
+     backend='gemini',
+     model='gemini-2.5-flash',
+     api_key='your-api-key'
+ )
+
+ # Auto-detect - Universal compatibility
+ agent = MemAgent(auto_detect_backend=True)
+ print(f"Using: {agent.llm.get_backend_info()['name']}")
+ ```
+
  ### Multi-User Conversations

  ```python
@@ -379,16 +429,21 @@ Mem-LLM works with **ALL Ollama models**, including:
  ```
  mem-llm/
  ├── mem_llm/
- │   ├── mem_agent.py          # Main agent class
- │   ├── memory_manager.py     # JSON memory backend
- │   ├── memory_db.py          # SQL memory backend
- │   ├── llm_client.py         # Ollama API client
- │   ├── knowledge_loader.py   # Knowledge base system
- │   ├── dynamic_prompt.py     # Context-aware prompts
- │   ├── memory_tools.py       # Memory management tools
- │   ├── config_manager.py     # Configuration handler
- │   └── cli.py                # Command-line interface
- └── examples/                 # Usage examples
+ │   ├── mem_agent.py           # Main agent class (multi-backend)
+ │   ├── base_llm_client.py     # Abstract LLM interface
+ │   ├── llm_client_factory.py  # Backend factory pattern
+ │   ├── clients/               # LLM backend implementations
+ │   │   ├── ollama_client.py   # Ollama integration
+ │   │   ├── lmstudio_client.py # LM Studio integration
+ │   │   └── gemini_client.py   # Google Gemini integration
+ │   ├── memory_manager.py      # JSON memory backend
+ │   ├── memory_db.py           # SQL memory backend
+ │   ├── knowledge_loader.py    # Knowledge base system
+ │   ├── dynamic_prompt.py      # Context-aware prompts
+ │   ├── memory_tools.py        # Memory management tools
+ │   ├── config_manager.py      # Configuration handler
+ │   └── cli.py                 # Command-line interface
+ └── examples/                  # Usage examples (14 total)
  ```

  ## 🔥 Advanced Features
@@ -430,10 +485,12 @@ stats = agent.get_memory_stats()
  ## 📦 Project Structure

  ### Core Components
- - **MemAgent**: Main interface for building AI assistants
+ - **MemAgent**: Main interface for building AI assistants (multi-backend support)
+ - **LLMClientFactory**: Factory pattern for backend creation
+ - **BaseLLMClient**: Abstract interface for all LLM backends
+ - **OllamaClient / LMStudioClient / GeminiClient**: Backend implementations
  - **MemoryManager**: JSON-based memory storage (simple)
  - **SQLMemoryManager**: SQLite-based storage (advanced)
- - **OllamaClient**: LLM communication handler
  - **KnowledgeLoader**: Knowledge base management

  ### Optional Features
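
The new core components listed in this hunk describe a factory architecture: `BaseLLMClient` defines the contract and `LLMClientFactory` maps a backend name to a concrete client. A minimal sketch of that shape follows; the method signatures and the placeholder `chat` body are simplified assumptions, not the package's real API.

```python
# Simplified sketch of the factory architecture named above (illustrative only).
from abc import ABC, abstractmethod


class BaseLLMClient(ABC):
    """Abstract contract every backend client fulfils."""

    @abstractmethod
    def chat(self, prompt: str) -> str:
        """Send a prompt to the backend and return the reply text."""


class OllamaClient(BaseLLMClient):
    def __init__(self, model: str, base_url: str = "http://localhost:11434"):
        self.model, self.base_url = model, base_url

    def chat(self, prompt: str) -> str:
        # Placeholder: a real client would call the Ollama HTTP API here.
        return f"[ollama:{self.model}] reply to: {prompt}"


class LLMClientFactory:
    """Maps a backend name to a client class, keeping the agent backend-agnostic."""

    _registry = {"ollama": OllamaClient}

    @classmethod
    def create(cls, backend: str, **kwargs) -> BaseLLMClient:
        try:
            return cls._registry[backend](**kwargs)
        except KeyError:
            raise ValueError(f"Unknown backend: {backend!r}") from None


client = LLMClientFactory.create("ollama", model="granite4:tiny-h")
print(client.chat("Hello"))
```

Adding a further backend (for example the OpenAI and Claude entries on the roadmap below) then only means registering another `BaseLLMClient` subclass.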
@@ -457,14 +514,19 @@ The `examples/` directory contains ready-to-run demonstrations:
  8. **08_conversation_summarization.py** - Token compression with auto-summary (v1.2.0+)
  9. **09_data_export_import.py** - Multi-format export/import demo (v1.2.0+)
  10. **10_database_connection_test.py** - Enterprise PostgreSQL/MongoDB migration (v1.2.0+)
+ 11. **11_lmstudio_example.py** - Using LM Studio backend (v1.3.0+)
+ 12. **12_gemini_example.py** - Using Google Gemini API (v1.3.0+)
+ 13. **13_multi_backend_comparison.py** - Compare different backends (v1.3.0+)
+ 14. **14_auto_detect_backend.py** - Auto-detection feature demo (v1.3.0+)

  ## 📊 Project Status

- - **Version**: 1.2.0
+ - **Version**: 1.3.0
  - **Status**: Production Ready
- - **Last Updated**: October 21, 2025
- - **Test Coverage**: 16/16 automated tests (100% success rate)
+ - **Last Updated**: October 31, 2025
+ - **Test Coverage**: 50+ automated tests (100% success rate)
  - **Performance**: Thread-safe operations, <1ms search latency
+ - **Backends**: Ollama, LM Studio, Google Gemini
  - **Databases**: SQLite, PostgreSQL, MongoDB, In-Memory

  ## 📈 Roadmap
@@ -476,10 +538,14 @@ The `examples/` directory contains ready-to-run demonstrations:
  - [x] ~~Conversation Summarization~~ (v1.2.0)
  - [x] ~~Multi-Database Export/Import~~ (v1.2.0)
  - [x] ~~In-Memory Database~~ (v1.2.0)
+ - [x] ~~Multi-Backend Support (Ollama, LM Studio, Gemini)~~ (v1.3.0)
+ - [x] ~~Auto-Detection~~ (v1.3.0)
+ - [x] ~~Factory Pattern Architecture~~ (v1.3.0)
+ - [ ] OpenAI & Claude backends
+ - [ ] Streaming support
  - [ ] Web UI dashboard
  - [ ] REST API server
  - [ ] Vector database integration
- - [ ] Advanced analytics dashboard

  ## 📄 License

mem_llm-1.3.0.dist-info/RECORD → mem_llm-1.3.1.dist-info/RECORD RENAMED
@@ -1,4 +1,4 @@
- mem_llm/__init__.py,sha256=Nx_7o8uFoK7WzLjY4Ko2sVITQoAcYtwNsUK4AugddbE,2636
+ mem_llm/__init__.py,sha256=e65UMPGqGj69SwDeUkvVE_W4KijKAtI8Jxfn_40Xuc4,2636
  mem_llm/base_llm_client.py,sha256=aCpr8ZnvOsu-a-zp9quTDP42XvjAC1uci6r11s0QdVA,5218
  mem_llm/cli.py,sha256=DiqQyBZknN8pVagY5jXH85_LZ6odVGopfpa-7DILNNE,8666
  mem_llm/config.yaml.example,sha256=lgmfaU5pxnIm4zYxwgCcgLSohNx1Jw6oh3Qk0Xoe2DE,917
@@ -22,8 +22,8 @@ mem_llm/clients/__init__.py,sha256=Nvr4NuL9ZlDF_dUjr-ZMFxRRrBdHoUOjqncZs3n5Wow,4
  mem_llm/clients/gemini_client.py,sha256=dmRZRd8f-x6J2W7luzcB1BOx_4UpXpCF4YiPGUccWCw,14432
  mem_llm/clients/lmstudio_client.py,sha256=IxUX3sVRfXN46hfEUTCrspGTOeqsn4YAu9WzFuGh940,10156
  mem_llm/clients/ollama_client.py,sha256=2BfYSBiOowhFg9UiCXkILlBG9_4Vri3-Iny_gH6-um0,9710
- mem_llm-1.3.0.dist-info/METADATA,sha256=Ov-FBPV2qYjgWWYv9l0WidhSmr7vGx-NW1uIqxAToi4,15518
- mem_llm-1.3.0.dist-info/WHEEL,sha256=beeZ86-EfXScwlR_HKu4SllMC9wUEj_8Z_4FJ3egI2w,91
- mem_llm-1.3.0.dist-info/entry_points.txt,sha256=z9bg6xgNroIobvCMtnSXeFPc-vI1nMen8gejHCdnl0U,45
- mem_llm-1.3.0.dist-info/top_level.txt,sha256=_fU1ML-0JwkaxWdhqpwtmTNaJEOvDMQeJdA8d5WqDn8,8
- mem_llm-1.3.0.dist-info/RECORD,,
+ mem_llm-1.3.1.dist-info/METADATA,sha256=d6sfUEBBg7Ir7y2495WAg6rqrKO9how5d3AgD-1NxU0,18217
+ mem_llm-1.3.1.dist-info/WHEEL,sha256=beeZ86-EfXScwlR_HKu4SllMC9wUEj_8Z_4FJ3egI2w,91
+ mem_llm-1.3.1.dist-info/entry_points.txt,sha256=z9bg6xgNroIobvCMtnSXeFPc-vI1nMen8gejHCdnl0U,45
+ mem_llm-1.3.1.dist-info/top_level.txt,sha256=_fU1ML-0JwkaxWdhqpwtmTNaJEOvDMQeJdA8d5WqDn8,8
+ mem_llm-1.3.1.dist-info/RECORD,,
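
Each RECORD line above pairs a file path with a `sha256=` digest (urlsafe base64 with the padding stripped, the standard wheel RECORD encoding) and a byte size. A small sketch for checking those digests against an unpacked wheel is below; the extraction directory name is a hypothetical example.

```python
# Verify wheel RECORD entries: recompute each file's sha256 and compare it with
# the urlsafe-base64 (unpadded) digest recorded in the RECORD file.
import base64
import csv
import hashlib
from pathlib import Path

root = Path("mem_llm-1.3.1-unpacked")  # hypothetical directory of the unzipped wheel

def matches(path: Path, hash_spec: str) -> bool:
    algo, _, expected = hash_spec.partition("=")
    digest = hashlib.new(algo, path.read_bytes()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii") == expected

with open(root / "mem_llm-1.3.1.dist-info" / "RECORD", newline="") as fh:
    for name, hash_spec, _size in csv.reader(fh):
        if hash_spec:  # the RECORD file itself is listed with empty hash and size
            print(name, "OK" if matches(root / name, hash_spec) else "MISMATCH")
```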