mem-llm 1.0.0__py3-none-any.whl

mem_llm-1.0.0.dist-info/METADATA ADDED
@@ -0,0 +1,382 @@
+ Metadata-Version: 2.4
+ Name: mem-llm
+ Version: 1.0.0
+ Summary: Memory-enabled AI assistant with local LLM support
+ Home-page: https://github.com/emredeveloper/Mem-LLM
+ Author: C. Emre Karataş
+ Author-email: karatasqemre@gmail.com
+ Project-URL: Bug Reports, https://github.com/emredeveloper/Mem-LLM/issues
+ Project-URL: Source, https://github.com/emredeveloper/Mem-LLM
+ Keywords: llm ai memory agent chatbot ollama local
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Developers
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.8
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ Requires-Dist: requests>=2.31.0
+ Requires-Dist: pyyaml>=6.0.1
+ Provides-Extra: dev
+ Requires-Dist: pytest>=7.4.0; extra == "dev"
+ Requires-Dist: black>=23.7.0; extra == "dev"
+ Requires-Dist: flake8>=6.1.0; extra == "dev"
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: keywords
+ Dynamic: project-url
+ Dynamic: provides-extra
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
+
+ # 🧠 Mem-Agent: Memory-Enabled Mini Assistant
+
+ <div align="center">
+
+ [![Python](https://img.shields.io/badge/Python-3.8%2B-blue.svg)](https://www.python.org/downloads/)
+ [![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
+ [![Ollama](https://img.shields.io/badge/Ollama-Compatible-orange.svg)](https://ollama.ai/)
+
+ **A local AI assistant that remembers user interactions and responds with context awareness, powered by a lightweight 4-billion-parameter LLM.**
+
+ [Quick Start](#-quick-start) • [Features](#-features) • [Documentation](#-documentation) • [Examples](#-usage-examples)
+
+ </div>
+
+ ---
+
+ ## 🎯 Why Mem-Agent?
+
+ Most Large Language Models (LLMs) treat every conversation as new and don't remember past interactions. **Mem-Agent** uses a small, locally running model to:
+
+ - ✅ **Remember user history** - Separate memory for each customer/user
+ - ✅ **Context awareness** - Responds based on previous conversations
+ - ✅ **Fully local** - No internet connection required
+ - ✅ **Lightweight & fast** - Only 2.5 GB model size
+ - ✅ **Easy integration** - Get started with 3 lines of code (see the sketch after this list)
+
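+ As a taste of the three-line claim, here is a minimal sketch (it assumes the default setup from the Quick Start below and Ollama already running locally):
+
+ ```python
+ from memory_llm import MemAgent
+
+ agent = MemAgent(model="granite4:tiny-h")
+ print(agent.chat("Hello!", user_id="demo"))  # user_id can be passed directly to chat()
+ ```
+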
+ ## 🚀 Quick Start
+
+ ### 1. Install Ollama
+
+ ```bash
+ # macOS/Linux install script (Windows: download the installer from https://ollama.ai/download)
+ curl https://ollama.ai/install.sh | sh
+
+ # Start the service
+ ollama serve
+ ```
+
+ ### 2. Download the Model
+
+ ```bash
+ ollama pull granite4:tiny-h
+ ```
+
+ ### 3. Use Mem-Agent
+
+ ```python
+ from memory_llm import MemAgent
+
+ # Create the agent
+ agent = MemAgent(model="granite4:tiny-h")
+
+ # System check
+ status = agent.check_setup()
+ if status['status'] == 'ready':
+     print("✅ System ready!")
+ else:
+     print("❌ Error:", status)
+
+ # Set the active user
+ agent.set_user("user123")
+
+ # First conversation
+ response = agent.chat("Hello, my name is Ali")
+ print(response)
+
+ # Second conversation - the agent remembers the name!
+ response = agent.chat("Do you remember my name?")
+ print(response)
+ ```
+
+ ## 📚 Example Scripts
+
+ ### 1. Simple Test
+
+ ```bash
+ python examples/example_simple.py
+ ```
+
+ ### 2. Customer Service Simulation
+
+ ```bash
+ python examples/example_customer_service.py
+ ```
+
+ ## 🏗️ Project Structure
+
+ ```
+ Memory LLM/
+ ├── memory_llm/              # Main package
+ │   ├── __init__.py          # Package initialization
+ │   ├── mem_agent.py         # Main assistant class
+ │   ├── memory_manager.py    # Memory management
+ │   ├── memory_db.py         # SQL database support
+ │   ├── llm_client.py        # Ollama integration
+ │   ├── memory_tools.py      # User tools
+ │   ├── knowledge_loader.py  # Knowledge base loader
+ │   ├── prompt_templates.py  # Prompt templates
+ │   └── config_manager.py    # Configuration manager
+ ├── examples/                # Example scripts
+ ├── tests/                   # Test files
+ ├── setup.py                 # Installation script
+ ├── requirements.txt         # Dependencies
+ └── README.md                # This file
+ ```
+
+ ## 🔧 API Usage
+
+ ### MemAgent Class
+
+ ```python
+ from memory_llm import MemAgent
+
+ agent = MemAgent(
+     model="granite4:tiny-h",             # Ollama model name
+     memory_dir="memories",               # Memory directory
+     ollama_url="http://localhost:11434"  # Ollama API URL
+ )
+ ```
+
+ #### Basic Methods
+
+ ```python
+ # Set the active user
+ agent.set_user("user_id")
+
+ # Chat
+ response = agent.chat(
+     message="Hello",
+     user_id="optional_user_id",  # only needed if set_user() was not called
+     metadata={"key": "value"}    # additional information to store
+ )
+
+ # Get a memory summary
+ summary = agent.memory_manager.get_summary("user_id")
+
+ # Search the history
+ results = agent.search_user_history("keyword", "user_id")
+
+ # Update the profile
+ agent.update_user_info({
+     "name": "Ali",
+     "preferences": {"language": "en"}
+ })
+
+ # Get statistics
+ stats = agent.get_statistics()
+
+ # Export memory
+ json_data = agent.export_memory("user_id")
+
+ # Clear memory (WARNING: irreversible!)
+ agent.clear_user_memory("user_id", confirm=True)
+ ```
+
+ ### MemoryManager Class
+
+ ```python
+ from memory_llm import MemoryManager
+
+ memory = MemoryManager(memory_dir="memories")
+
+ # Load memory
+ data = memory.load_memory("user_id")
+
+ # Add an interaction
+ memory.add_interaction(
+     user_id="user_id",
+     user_message="Hello",
+     bot_response="Hello! How can I help you?",
+     metadata={"timestamp": "2025-01-13"}
+ )
+
+ # Get recent conversations
+ recent = memory.get_recent_conversations("user_id", limit=5)
+
+ # Search
+ results = memory.search_memory("user_id", "order")
+ ```
+
+ ### OllamaClient Class
+
+ ```python
+ from memory_llm import OllamaClient
+
+ client = OllamaClient(model="granite4:tiny-h")
+
+ # Simple generation
+ response = client.generate("Hello world!")
+
+ # Chat format
+ response = client.chat([
+     {"role": "system", "content": "You are a helpful assistant"},
+     {"role": "user", "content": "Hello"}
+ ])
+
+ # Connection check
+ is_ready = client.check_connection()
+
+ # List available models
+ models = client.list_models()
+ ```
+
+ ## 💡 Usage Scenarios
+
+ ### 1. Customer Service Bot
+ - Remembers customer history
+ - Knows about previous issues
+ - Makes personalized recommendations (sketched below)
+
+ ### 2. Personal Assistant
+ - Tracks daily activities
+ - Learns preferences
+ - Sets reminders
+
+ ### 3. Education Assistant
+ - Tracks student progress
+ - Adjusts difficulty level
+ - Remembers past mistakes
+
+ ### 4. Support Ticket System
+ - Stores ticket history
+ - Finds related old tickets
+ - Suggests solutions
+
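+ As a concrete example, the first scenario can be wired up with the API above (a minimal sketch; the user id and metadata keys are illustrative, not part of the API):
+
+ ```python
+ from memory_llm import MemAgent
+
+ agent = MemAgent(model="granite4:tiny-h")
+ agent.set_user("customer_4211")  # hypothetical customer id; one memory per customer
+
+ # The reply can draw on this customer's stored history from earlier sessions
+ response = agent.chat(
+     "My order still hasn't arrived",
+     metadata={"topic": "shipping"}  # illustrative metadata key
+ )
+ print(response)
+ ```
+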
+ ## 📊 Memory Format
+
+ Memories are stored in JSON format:
+
+ ```json
+ {
+   "conversations": [
+     {
+       "timestamp": "2025-01-13T10:30:00",
+       "user_message": "Hello",
+       "bot_response": "Hello! How can I help you?",
+       "metadata": {
+         "topic": "greeting"
+       }
+     }
+   ],
+   "profile": {
+     "user_id": "user123",
+     "first_seen": "2025-01-13T10:30:00",
+     "preferences": {},
+     "summary": {}
+   },
+   "last_updated": "2025-01-13T10:35:00"
+ }
+ ```
+
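+ Because memories are plain JSON on disk, they can be inspected with the standard library alone (a sketch; it assumes one `<user_id>.json` file per user under the default `memories/` directory):
+
+ ```python
+ import json
+
+ # Assumed layout: memories/<user_id>.json (matches the default memory_dir)
+ with open("memories/user123.json", encoding="utf-8") as f:
+     memory = json.load(f)
+
+ # Walk the conversation log shown above
+ for conv in memory["conversations"]:
+     print(conv["timestamp"], "-", conv["user_message"])
+ ```
+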
+ ## 🔒 Privacy and Security
+
+ - ✅ Works completely locally (no internet connection required)
+ - ✅ Data is stored on your own computer
+ - ✅ No data is sent to third-party services
+ - ✅ Memories are plain JSON files and easy to delete (see below)
+
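+ Deleting a user's data is a single call with the API shown earlier (the explicit `confirm=True` guard is required):
+
+ ```python
+ # Irreversibly removes everything stored for this user
+ agent.clear_user_memory("user123", confirm=True)
+ ```
+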
+ ## 🛠️ Development
+
+ ### Test Mode
+
+ ```python
+ # Simple chat without memory (for testing)
+ response = agent.simple_chat("Test message")
+ ```
+
+ ### Using Your Own Model
+
+ ```python
+ # Use a different Ollama model
+ agent = MemAgent(model="llama2:7b")
+
+ # Or target another LLM API by customizing llm_client.py
+ ```
+
+ ## 🐛 Troubleshooting
+
+ ### Ollama Connection Error
+
+ ```bash
+ # Start the Ollama service
+ ollama serve
+
+ # Check the port (Windows; on Linux/macOS use `grep` instead of `findstr`)
+ netstat -an | findstr "11434"
+ ```
+
+ ### Model Not Found
+
+ ```bash
+ # Check the model list
+ ollama list
+
+ # Download the model
+ ollama pull granite4:tiny-h
+ ```
+
+ ### Memory Issues
+
+ ```python
+ import os
+
+ # Check that the memory directory exists
+ print(os.path.exists("memories"))
+
+ # List memory files
+ print(os.listdir("memories"))
+ ```
+
+ ## 📈 Performance
+
+ - **Model Size**: ~2.5 GB
+ - **Response Time**: ~1-3 seconds (CPU-dependent)
+ - **Memory Usage**: ~4-6 GB RAM
+ - **Disk Usage**: ~10-50 KB per user
+
+ ## 🤝 Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -m 'feat: Add amazing feature'`)
+ 4. Push the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ## 📝 License
+
+ MIT License - see the LICENSE file for details.
+
+ ## 🙏 Acknowledgments
+
+ - [Ollama](https://ollama.ai/) - Local LLM server
+ - [Granite](https://www.ibm.com/granite) - IBM Granite models
+
+ ## 📞 Contact
+
+ Please open an issue for any questions.
+
+ ---
+
+ **Note**: This project is intended for educational and research purposes. Please test it thoroughly before using it in a production environment.
mem_llm-1.0.0.dist-info/RECORD ADDED
@@ -0,0 +1,14 @@
+ memory_llm/__init__.py,sha256=74hTFnqEMUtTnTLUtZllFo-8NM-JghqZgPH9SDgQj0g,827
+ memory_llm/config.yaml.example,sha256=lgmfaU5pxnIm4zYxwgCcgLSohNx1Jw6oh3Qk0Xoe2DE,917
+ memory_llm/config_manager.py,sha256=8PIHs21jZWlI-eG9DgekjOvNxU3-U4xH7SbT8Gr-Z6M,7075
+ memory_llm/knowledge_loader.py,sha256=oSNhfYYcx7DlZLVogxnbSwaIydq_Q3__RDJFeZR2XVw,2699
+ memory_llm/llm_client.py,sha256=tLNulVEV_tWdktvcQUokdhd0gTkIISUHipglRt17IWk,5255
+ memory_llm/mem_agent.py,sha256=AMw8X5cFdHoyphyHf9B4eBXDFGTLEv9nkDBXnO_fGL4,19907
+ memory_llm/memory_db.py,sha256=OGWTIHBHh1qETGvmrlZWfmv9szSaFuSCzJGMZg6HBww,12329
+ memory_llm/memory_manager.py,sha256=-JM0Qb5dYm1Rj4jd3FQfDpZSaya-ly9rcgEjyvnyDzk,8052
+ memory_llm/memory_tools.py,sha256=ARANFqu_bmL56SlV1RzTjfQsJj-Qe2QvqY0pF92hDxU,8678
+ memory_llm/prompt_templates.py,sha256=tCiQJw3QQKIaH8NsxEKOIaIVxw4XT43PwdmyfCINzzM,6536
+ mem_llm-1.0.0.dist-info/METADATA,sha256=Pdiho_vUo-vCZgKde5WYCqfabFtXoubMJA97u_qLjaY,9359
+ mem_llm-1.0.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ mem_llm-1.0.0.dist-info/top_level.txt,sha256=7I8wePWMtiZ-viJGXLYAiHpxiwpwPbFhNn1cyufySok,11
+ mem_llm-1.0.0.dist-info/RECORD,,
mem_llm-1.0.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (80.9.0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+
mem_llm-1.0.0.dist-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ memory_llm
memory_llm/__init__.py ADDED
@@ -0,0 +1,34 @@
+ """
+ Memory-LLM: Memory-Enabled Mini Assistant
+ An AI library that remembers user interactions
+ """
+
+ from .mem_agent import MemAgent
+ from .memory_manager import MemoryManager
+ from .llm_client import OllamaClient
+
+ # Tools (optional)
+ try:
+     from .memory_tools import MemoryTools, ToolExecutor
+     __all_tools__ = ["MemoryTools", "ToolExecutor"]
+ except ImportError:
+     __all_tools__ = []
+
+ # Pro version imports (optional)
+ try:
+     from .memory_db import SQLMemoryManager
+     from .prompt_templates import prompt_manager
+     from .config_manager import get_config
+     __all_pro__ = ["SQLMemoryManager", "prompt_manager", "get_config"]
+ except ImportError:
+     __all_pro__ = []
+
+ __version__ = "1.0.0"
+ __author__ = "C. Emre Karataş"
+
+ __all__ = [
+     "MemAgent",
+     "MemoryManager",
+     "OllamaClient",
+ ] + __all_tools__ + __all_pro__
+
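Because the optional imports above fail soft, callers can check at runtime which extras were actually exported (a small usage sketch based on the `__all__` assembly above):

```python
import memory_llm

# Pro names appear in __all__ only when their optional imports succeeded
if "SQLMemoryManager" in memory_llm.__all__:
    print("SQL memory backend is available")
else:
    # The JSON backend is always exported
    memory = memory_llm.MemoryManager(memory_dir="memories")
```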
memory_llm/config.yaml.example ADDED
@@ -0,0 +1,52 @@
+ # Memory-LLM Configuration File
+ # Copy this file to config.yaml and edit as needed
+
+ # Usage Mode: "personal" or "business"
+ usage_mode: "personal"
+
+ # LLM Settings
+ llm:
+   model: "granite4:tiny-h"
+   base_url: "http://localhost:11434"
+   temperature: 0.7
+   max_tokens: 500
+
+ # Memory Settings
+ memory:
+   backend: "json"  # "json" or "sql"
+   json_dir: "memories"
+   db_path: "memories.db"
+
+ # System Prompt Template
+ prompt:
+   template: "personal_assistant"
+   variables:
+     user_name: "User"
+     tone: "friendly"
+
+ # Knowledge Base
+ knowledge_base:
+   enabled: true
+   auto_load: true
+   default_kb: "ecommerce"
+   search_limit: 5
+
+ # Response Settings
+ response:
+   use_knowledge_base: true
+   use_memory: true
+   recent_conversations_limit: 5
+
+ # Logging
+ logging:
+   enabled: true
+   level: "INFO"
+   file: "mem_agent.log"
+
+ # Security
+ security:
+   filter_sensitive_data: true
+   rate_limit:
+     enabled: true
+     max_requests_per_minute: 60