mem-llm 1.0.10__py3-none-any.whl → 1.1.0__py3-none-any.whl

This diff shows the changes between publicly released package versions as they appear in their public registries and is provided for informational purposes only.

Note: this release of mem-llm has been flagged as potentially problematic.

@@ -1,1028 +0,0 @@
Metadata-Version: 2.2
Name: mem-llm
Version: 1.0.10
Summary: Memory-enabled AI assistant with local LLM support
Author-email: "C. Emre Karataş" <karatasqemre@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/emredeveloper/Mem-LLM
Project-URL: Bug Reports, https://github.com/emredeveloper/Mem-LLM/issues
Project-URL: Source, https://github.com/emredeveloper/Mem-LLM
Keywords: llm,ai,memory,agent,chatbot,ollama,local
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.31.0
Requires-Dist: pyyaml>=6.0.1
Requires-Dist: click>=8.1.0
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: black>=23.7.0; extra == "dev"
Requires-Dist: flake8>=6.1.0; extra == "dev"
Provides-Extra: web
Requires-Dist: flask>=3.0.0; extra == "web"
Requires-Dist: flask-cors>=4.0.0; extra == "web"
Provides-Extra: api
Requires-Dist: fastapi>=0.104.0; extra == "api"
Requires-Dist: uvicorn>=0.24.0; extra == "api"

# 🧠 mem-llm

**Memory-enabled AI assistant that remembers conversations using local LLMs**

[![Python](https://img.shields.io/badge/Python-3.8%2B-blue.svg)](https://www.python.org/downloads/)
[![PyPI](https://img.shields.io/pypi/v/mem-llm?label=PyPI)](https://pypi.org/project/mem-llm/)
[![License](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)

---

## 🎯 What is mem-llm?

`mem-llm` is a lightweight Python library that adds **persistent memory** to your local LLM chatbots. Each user gets their own conversation history that persists across sessions.

**Use Cases:**

- 💬 Customer service bots
- 🤖 Personal assistants
- 📝 Context-aware applications
- 🏢 Business automation solutions

---

## ⚡ Quick Start

### 0. Check the requirements

- Python 3.8 or newer
- [Ollama](https://ollama.ai/) installed and running
- At least 4GB RAM and 5GB of free disk space

### 1. Install the package

```bash
pip install mem-llm
```

### 2. Start Ollama and download a model (one-time setup)

```bash
# Start Ollama service
ollama serve

# Download lightweight model (~2.5GB)
ollama pull granite4:tiny-h
```

> 💡 Keep `ollama serve` running in one terminal, run your Python code in another.

### 3. Create your first agent

```python
from mem_llm import MemAgent

# Create agent in one line
agent = MemAgent()

# Set user (each user gets separate memory)
agent.set_user("john")

# Chat with memory!
response = agent.chat("My name is John")
print(response)

response = agent.chat("What's my name?")
print(response)  # Output: "Your name is John"
```

### 4. Verify your setup (optional)

```bash
# Using the CLI
mem-llm check
```

```python
# Or in Python
agent.check_setup()
# {'ollama': 'running', 'model': 'granite4:tiny-h', 'memory_backend': 'sql', ...}
```

> If you run into problems during setup, see the 🐛 Troubleshooting section below.
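
Because each user's history is kept in local JSON or SQLite storage, the memory should survive a process restart. A minimal sketch of that idea, assuming the default storage location is reused between runs (the quoted reply is illustrative, not guaranteed output):

```python
from mem_llm import MemAgent

# First run: the fact is stored under this user's history
agent = MemAgent()
agent.set_user("john")
agent.chat("My name is John and I live in Berlin")

# Later run (new process, same machine): a fresh agent instance
# reads the same local storage, so the earlier context is still available
agent = MemAgent()
agent.set_user("john")
print(agent.chat("Where do I live?"))  # expected to mention Berlin
```
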
---

## 💡 Features

| Feature | Description |
|---------|-------------|
| 🧠 **Memory** | Remembers each user's conversation history |
| 👥 **Multi-user** | Separate memory for each user |
| 🔒 **Privacy** | 100% local, no cloud/API needed |
| ⚡ **Fast** | Lightweight SQLite/JSON storage |
| 🎯 **Simple** | 3 lines of code to get started |
| 📚 **Knowledge Base** | Load information from documents |
| 🌍 **Multi-language** | Works with any language (Turkish, English, etc.) |
| 🛠️ **CLI Tool** | Built-in command-line interface |
| 🔌 **Tool Integration** | Extensible through an advanced tool system |

---

## 🧑‍🏫 Tutorial

To work through complete examples step by step, follow the guides in the [examples](examples) directory. It contains both basic usage scenarios and more advanced integrations. Highlights:

- [Basic usage walkthrough](examples/basic_usage.py) – shows how to build your first memory-enabled agent.
- [Customer support workflow](examples/customer_support.py) – a multi-user customer support scenario.
- [Knowledge base ingestion](examples/knowledge_base.py) – loading information from documents.

Each file includes explanations alongside the code; copy and run the snippets to see the results for yourself.

---

## 📖 Usage Examples

### Example 1: Basic Conversation with Memory

```python
from mem_llm import MemAgent

# Create agent
print("🤖 Creating AI agent...")
agent = MemAgent()

# Set user
print("👤 Setting user: alice\n")
agent.set_user("alice")

# First conversation
print("💬 User: I love pizza")
response1 = agent.chat("I love pizza")
print(f"🤖 Bot: {response1}\n")

# Memory test - bot remembers!
print("💬 User: What's my favorite food?")
response2 = agent.chat("What's my favorite food?")
print(f"🤖 Bot: {response2}")
```

**Output:**

```
🤖 Creating AI agent...
👤 Setting user: alice

💬 User: I love pizza
🤖 Bot: That's great! Pizza is a popular choice...

💬 User: What's my favorite food?
🤖 Bot: Based on our conversation, your favorite food is pizza!
```

---

### Example 2: Multi-User Support

```python
from mem_llm import MemAgent

agent = MemAgent()

# Customer 1
print("=" * 60)
print("👤 Customer 1: John")
print("=" * 60)
agent.set_user("customer_john")

print("💬 John: My order #12345 is delayed")
response = agent.chat("My order #12345 is delayed")
print(f"🤖 Bot: {response}\n")

# Customer 2 - SEPARATE MEMORY!
print("=" * 60)
print("👤 Customer 2: Sarah")
print("=" * 60)
agent.set_user("customer_sarah")

print("💬 Sarah: I want to return item #67890")
response = agent.chat("I want to return item #67890")
print(f"🤖 Bot: {response}\n")

# Back to Customer 1 - remembers previous conversation!
print("=" * 60)
print("👤 Back to Customer 1: John")
print("=" * 60)
agent.set_user("customer_john")

print("💬 John: What was my order number?")
response = agent.chat("What was my order number?")
print(f"🤖 Bot: {response}")
```

**Output:**

```
============================================================
👤 Customer 1: John
============================================================
💬 John: My order #12345 is delayed
🤖 Bot: I'll help you check your order status...

============================================================
👤 Customer 2: Sarah
============================================================
💬 Sarah: I want to return item #67890
🤖 Bot: I can help you with the return process...

============================================================
👤 Back to Customer 1: John
============================================================
💬 John: What was my order number?
🤖 Bot: Your order number is #12345, which you mentioned was delayed.
```

---

### Example 3: Turkish Language Support

```python
from mem_llm import MemAgent

agent = MemAgent()

print("🇹🇷 Türkçe Konuşma Örneği")
print("=" * 60)

agent.set_user("ahmet")

print("💬 Kullanıcı: Benim adım Ahmet ve İstanbul'da yaşıyorum")
response = agent.chat("Benim adım Ahmet ve İstanbul'da yaşıyorum")
print(f"🤖 Bot: {response}\n")

print("💬 Kullanıcı: Nerede yaşıyorum?")
response = agent.chat("Nerede yaşıyorum?")
print(f"🤖 Bot: {response}\n")

print("💬 Kullanıcı: Adımı hatırlıyor musun?")
response = agent.chat("Adımı hatırlıyor musun?")
print(f"🤖 Bot: {response}")
```

**Output:**

```
🇹🇷 Türkçe Konuşma Örneği
============================================================
💬 Kullanıcı: Benim adım Ahmet ve İstanbul'da yaşıyorum
🤖 Bot: Memnun oldum Ahmet! İstanbul güzel bir şehir...

💬 Kullanıcı: Nerede yaşıyorum?
🤖 Bot: İstanbul'da yaşıyorsunuz.

💬 Kullanıcı: Adımı hatırlıyor musun?
🤖 Bot: Evet, adınız Ahmet!
```

---

### Example 4: User Profile Extraction

```python
from mem_llm import MemAgent

agent = MemAgent()
agent.set_user("alice")

print("📝 Building user profile...")
print("=" * 60)

# Have natural conversations
conversations = [
    "My name is Alice and I'm 28 years old",
    "I live in New York City",
    "I work as a software engineer",
    "My favorite food is pizza"
]

for msg in conversations:
    print(f"💬 User: {msg}")
    response = agent.chat(msg)
    print(f"🤖 Bot: {response}\n")

# Extract profile automatically
print("=" * 60)
print("📊 Extracted User Profile:")
print("=" * 60)
profile = agent.get_user_profile()

for key, value in profile.items():
    print(f"  {key}: {value}")
```

**Output:**

```
📝 Building user profile...
============================================================
💬 User: My name is Alice and I'm 28 years old
🤖 Bot: Nice to meet you, Alice!...

💬 User: I live in New York City
🤖 Bot: New York City is a vibrant place...

💬 User: I work as a software engineer
🤖 Bot: That's an interesting career...

💬 User: My favorite food is pizza
🤖 Bot: Pizza is delicious!...

============================================================
📊 Extracted User Profile:
============================================================
  name: Alice
  age: 28
  location: New York City
  occupation: Software Engineer
  favorite_food: Pizza
```

---

### Example 5: Complete Customer Service Workflow

```python
from mem_llm import MemAgent

# Initialize customer service agent
print("🏢 Customer Service Bot Initializing...")
agent = MemAgent(use_sql=True)  # SQL for better performance

# Simulate customer support session
def handle_customer(customer_id, customer_name, order_no):
    print("\n" + "=" * 70)
    print(f"📞 New Customer Session: {customer_name} (ID: {customer_id})")
    print("=" * 70)

    agent.set_user(customer_id, name=customer_name)

    # Customer introduces issue
    print(f"\n💬 {customer_name}: Hi, my order hasn't arrived yet")
    response = agent.chat("Hi, my order hasn't arrived yet")
    print(f"🤖 Support: {response}")

    # Ask for details
    print(f"\n💬 {customer_name}: My order number is {order_no}")
    response = agent.chat(f"My order number is {order_no}")
    print(f"🤖 Support: {response}")

    # Follow up later in conversation
    print(f"\n💬 {customer_name}: Can you remind me what we were discussing?")
    response = agent.chat("Can you remind me what we were discussing?")
    print(f"🤖 Support: {response}")

# Handle multiple customers
handle_customer("cust_001", "Emma", "#45678")
handle_customer("cust_002", "Michael", "#78901")

# Return to first customer - memory persists!
print("\n" + "=" * 70)
print("📞 Returning Customer: Emma (ID: cust_001)")
print("=" * 70)
agent.set_user("cust_001")

print("\n💬 Emma: What was my order number again?")
response = agent.chat("What was my order number again?")
print(f"🤖 Support: {response}")
# Output: "Your order number is #45678"
```

**Output:**

```
🏢 Customer Service Bot Initializing...

======================================================================
📞 New Customer Session: Emma (ID: cust_001)
======================================================================

💬 Emma: Hi, my order hasn't arrived yet
🤖 Support: I'm sorry to hear that. I'll help you track your order...

💬 Emma: My order number is #45678
🤖 Support: Thank you for providing order #45678. Let me check...

💬 Emma: Can you remind me what we were discussing?
🤖 Support: We're discussing your order #45678 that hasn't arrived yet...

======================================================================
📞 New Customer Session: Michael (ID: cust_002)
======================================================================

💬 Michael: Hi, my order hasn't arrived yet
🤖 Support: I'm sorry to hear that. I'll help you track your order...

💬 Michael: My order number is #78901
🤖 Support: Thank you for providing order #78901...

======================================================================
📞 Returning Customer: Emma (ID: cust_001)
======================================================================

💬 Emma: What was my order number again?
🤖 Support: Your order number is #45678.
```

---

## 🔧 Configuration Options

### JSON Memory (Simple, Default)

```python
agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=False,          # JSON file-based memory
    memory_dir="memories"
)
```

### SQL Memory (Advanced, Recommended for Production)

```python
agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,           # SQLite-based memory
    memory_dir="memories.db"
)
```

### Custom Configuration

```python
agent = MemAgent(
    model="llama2",                       # Any Ollama model
    ollama_url="http://localhost:11434",
    check_connection=True                 # Verify setup on startup
)
```

---

## 🛠️ Command Line Interface

```bash
# Start interactive chat
mem-llm chat --user john

# Check system status
mem-llm check

# View statistics
mem-llm stats

# Export user data
mem-llm export john --format json

# Clear user data
mem-llm clear john

# Get help
mem-llm --help
```

**Available CLI Commands:**

| Command | Description | Example |
|---------|-------------|---------|
| `chat` | Interactive chat session | `mem-llm chat --user alice` |
| `check` | Verify system setup | `mem-llm check` |
| `stats` | Show statistics | `mem-llm stats --user john` |
| `export` | Export user data | `mem-llm export john` |
| `clear` | Delete user data | `mem-llm clear john` |

---

## 🔄 Memory Backend Comparison

Choose the right backend for your needs:

| Feature | JSON Mode | SQL Mode |
|---------|-----------|----------|
| **Setup** | ✅ Zero config | ⚙️ Minimal config |
| **Conversation Memory** | ✅ Yes | ✅ Yes |
| **User Profiles** | ✅ Yes | ✅ Yes |
| **Knowledge Base** | ❌ No | ✅ Yes |
| **Advanced Search** | ❌ No | ✅ Yes |
| **Multi-user Performance** | ⭐⭐ Good | ⭐⭐⭐ Excellent |
| **Data Queries** | ❌ Limited | ✅ Full SQL |
| **Best For** | 🏠 Personal use | 🏢 Business use |

**Recommendation:**
- **JSON Mode**: Perfect for personal assistants and quick prototypes
- **SQL Mode**: Ideal for customer service, multi-user apps, and production (a sketch follows below)
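
A minimal sketch of acting on that recommendation, using only the constructor parameters documented in this README (`model`, `use_sql`, `memory_dir`); the file names are illustrative:

```python
from mem_llm import MemAgent

# JSON backend: zero config, fine for a personal assistant or prototype
personal = MemAgent(model="granite4:tiny-h", use_sql=False, memory_dir="memories")

# SQL backend: SQLite storage, better suited to multi-user/production workloads
support = MemAgent(model="granite4:tiny-h", use_sql=True, memory_dir="memories.db")

# The calling code stays the same regardless of the backend
for agent, user in ((personal, "me"), (support, "customer_001")):
    agent.set_user(user)
    agent.chat("Hello!")
```
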
---

## 📚 API Reference

### MemAgent Class

```python
# Initialize
agent = MemAgent(
    model="granite4:tiny-h",
    use_sql=True,
    memory_dir=None,
    ollama_url="http://localhost:11434",
    check_connection=False
)

# Set active user
agent.set_user(user_id: str, name: Optional[str] = None)

# Chat (returns response string)
response = agent.chat(message: str, metadata: Optional[Dict] = None) -> str

# Get user profile (auto-extracted from conversations)
profile = agent.get_user_profile(user_id: Optional[str] = None) -> Dict

# System check
status = agent.check_setup() -> Dict
```
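
Putting the reference together, a minimal end-to-end sketch that uses only the calls listed above (the user id and messages are illustrative):

```python
from mem_llm import MemAgent

agent = MemAgent(model="granite4:tiny-h", use_sql=True)

# Confirm Ollama and the model are reachable before chatting
status = agent.check_setup()
print(status)

agent.set_user("demo_user", name="Demo User")
print(agent.chat("Hi, I'm evaluating mem-llm for a support bot"))

# Profile fields are extracted automatically from the conversation
print(agent.get_user_profile())
```
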
---

## 🗂 Knowledge Base and Configuration from Documents

Generate a `config.yaml` automatically from your company documents:

```python
from mem_llm import create_config_from_document

# Generate config.yaml from a PDF
create_config_from_document(
    doc_path="company_info.pdf",
    output_path="config.yaml",
    company_name="Acme Corp"
)

# Use the generated configuration
agent = MemAgent(config_file="config.yaml")
```

---

## 🔥 Supported Models

Works with any [Ollama](https://ollama.ai/) model. Recommended models:

| Model | Size | Speed | Quality | Best For |
|-------|------|-------|---------|----------|
| `granite4:tiny-h` | 2.5GB | ⚡⚡⚡ | ⭐⭐ | Quick testing |
| `llama2` | 4GB | ⚡⚡ | ⭐⭐⭐ | General use |
| `mistral` | 4GB | ⚡⚡ | ⭐⭐⭐⭐ | Balanced |
| `llama3` | 5GB | ⚡ | ⭐⭐⭐⭐⭐ | Best quality |

```bash
# Download a model
ollama pull <model-name>

# List installed models
ollama list
```
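
Any model pulled with `ollama pull` can then be passed to the agent by name; a short sketch, assuming `mistral` from the table above has already been downloaded:

```python
from mem_llm import MemAgent

# Trade a larger download for better answer quality (see the table above)
agent = MemAgent(model="mistral")
agent.set_user("reviewer")
print(agent.chat("Summarize what you remember about me so far"))
```
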
---

## 📦 Requirements

- Python 3.8+
- [Ollama](https://ollama.ai/) (for the LLM)
- Minimum 4GB RAM
- 5GB disk space

**Python Dependencies (auto-installed):**
- `requests >= 2.31.0`
- `pyyaml >= 6.0.1`
- `click >= 8.1.0`
- `sqlite3` (bundled with Python)

---

## 🐛 Troubleshooting

### Ollama not running?

```bash
ollama serve
```

### Model not found error?

```bash
# Download the model
ollama pull granite4:tiny-h

# Check installed models
ollama list
```

### Connection error?

```bash
# Check if Ollama is running
curl http://localhost:11434

# Restart Ollama
ollama serve
```
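
If `curl` is not available, the same connectivity check can be done from Python with the `requests` dependency that mem-llm already installs; a minimal sketch, assuming the default Ollama address:

```python
import requests

try:
    # Ollama answers on its root endpoint when the service is up
    resp = requests.get("http://localhost:11434", timeout=5)
    print("Ollama reachable:", resp.status_code)
except requests.exceptions.ConnectionError:
    print("Ollama is not running - start it with `ollama serve`")
```
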
### Import error?

```bash
# Upgrade to latest version
pip install --upgrade mem-llm
```

> If issues persist, run `mem-llm check` or `agent.check_setup()` and share the output when opening an issue.

---

## 📄 License

MIT License - free to use in personal and commercial projects.

---

## 🔗 Links

- **PyPI:** https://pypi.org/project/mem-llm/
- **GitHub:** https://github.com/emredeveloper/Mem-LLM
- **Ollama:** https://ollama.ai/
- **Documentation:** [GitHub Wiki](https://github.com/emredeveloper/Mem-LLM/wiki)

---

## 🌟 Support Us

If you find this project useful, please ⭐ [star it on GitHub](https://github.com/emredeveloper/Mem-LLM)!

---

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

---

<div align="center">
Made with ❤️ by <a href="https://github.com/emredeveloper">C. Emre Karataş</a>
</div>