chuk-ai-session-manager 0.4__py3-none-any.whl → 0.5__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

```
Metadata-Version: 2.4
Name: chuk-ai-session-manager
Version: 0.5
Summary: Session manager for AI applications
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: chuk-sessions>=0.3
Requires-Dist: chuk-tool-processor>=0.4.1
Requires-Dist: pydantic>=2.11.3
Provides-Extra: tiktoken
Requires-Dist: tiktoken>=0.9.0; extra == "tiktoken"
Provides-Extra: redis
Requires-Dist: redis>=4.0.0; extra == "redis"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: redis>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"
Provides-Extra: full
```

# CHUK AI Session Manager Documentation

A powerful session management system for AI applications that provides automatic conversation tracking, token usage monitoring, tool call logging, infinite context support, and hierarchical session relationships.

## Table of Contents

1. [Overview](#overview)
2. [Import Structure](#import-structure)
3. [Core Architecture](#core-architecture)
4. [Quick Start](#quick-start)
5. [Simple API](#simple-api)
6. [SessionManager Class](#sessionmanager-class)
7. [Core Models](#core-models)
8. [Infinite Context](#infinite-context)
9. [Tool Integration](#tool-integration)
10. [Session Storage](#session-storage)
11. [Prompt Building](#prompt-building)
12. [Configuration](#configuration)

## Overview

The CHUK AI Session Manager is designed to solve common challenges in AI application development:

- **Conversation Tracking**: Automatically track user-AI interactions
- **Token Management**: Monitor usage and costs across different models
- **Infinite Context**: Handle conversations that exceed token limits through automatic summarization
- **Tool Integration**: Log tool calls and results seamlessly
- **Session Hierarchy**: Create parent-child relationships between conversation segments
- **Flexible Storage**: Built on CHUK Sessions for reliable persistence

### Key Features

- **Zero-friction API**: Simple functions for common tasks
- **Async-first**: Built for modern Python async/await patterns
- **Token-aware**: Automatic token counting and cost estimation
- **Provider-agnostic**: Works with any LLM provider (OpenAI, Anthropic, etc.)
- **Hierarchical sessions**: Support for complex conversation structures
- **Automatic summarization**: Maintains context across session segments

## Import Structure

All components are available at the top level of the package:

```python
from chuk_ai_session_manager import (
    # Simple API - primary interface for most users
    SessionManager,
    track_conversation,
    track_llm_call,
    quick_conversation,
    track_infinite_conversation,
    track_tool_use,
    get_session_stats,
    get_conversation_history,

    # Core models
    Session,
    SessionEvent,
    SessionMetadata,
    SessionRun,
    RunStatus,

    # Enums
    EventSource,
    EventType,

    # Token management
    TokenUsage,
    TokenSummary,

    # Advanced components
    InfiniteConversationManager,
    SummarizationStrategy,
    SessionAwareToolProcessor,
    build_prompt_from_session,
    PromptStrategy,
    truncate_prompt_to_token_limit,

    # Storage
    setup_chuk_sessions_storage,

    # Exceptions
    SessionManagerError,
    SessionNotFound,
    SessionAlreadyExists,
    InvalidSessionOperation,
    TokenLimitExceeded,
    StorageError,
    ToolProcessingError,

    # Utilities
    configure_storage,
    get_version,
    is_available
)
```

### Verifying Installation

Check that everything is working:

```python
import chuk_ai_session_manager as casm

print(f"Version: {casm.get_version()}")
print("Available components:", casm.is_available())

# This should show all components as True:
# {
#     "core_enums": True,
#     "core_models": True,
#     "simple_api": True,
#     "storage": True,
#     "infinite_context": True,
#     "tool_processor": True,
#     "prompt_builder": True,
#     "token_tracking": True,
#     "exceptions": True,
#     "session_manager": True
# }
```

## Core Architecture

The system is built around several key components working together to provide seamless conversation management:

```
┌──────────────────────┐   ┌────────────────────┐   ┌────────────────┐
│      Simple API      │   │   SessionManager   │   │  Core Models   │
│                      │   │                    │   │                │
│ track_conversation() │   │ High-level API     │   │ Session        │
│ track_llm_call()     │   │ Infinite context   │   │ SessionEvent   │
│ quick_conversation() │   │ Auto-summarization │   │ TokenUsage     │
└──────────────────────┘   └────────────────────┘   └────────────────┘
           │                         │                       │
           └─────────────────────────┼───────────────────────┘
                                     │
┌──────────────────────┐   ┌────────────────────┐   ┌────────────────┐
│    Tool Processor    │   │  Storage Backend   │   │ Prompt Builder │
│                      │   │                    │   │                │
│ Session-aware        │   │ CHUK Sessions      │   │ Multiple       │
│ Tool execution       │   │ JSON persistence   │   │ strategies     │
│ Retry & caching      │   │ TTL management     │   │ Token limits   │
└──────────────────────┘   └────────────────────┘   └────────────────┘
```

**Key Components:**
- **Simple API**: One-line functions for common operations
- **SessionManager**: High-level conversation management with infinite context
- **Core Models**: Session, SessionEvent, TokenUsage for data modeling
- **Tool Processor**: Automatic tool call tracking with retry and caching
- **Storage Backend**: CHUK Sessions for reliable persistence
- **Prompt Builder**: Intelligent context building for LLM calls

## Quick Start

### Installation

```bash
# Install the package
uv add chuk-ai-session-manager

# Or with pip
pip install chuk-ai-session-manager
```

### Basic Usage

```python
from chuk_ai_session_manager import track_conversation

# Track a simple conversation
session_id = await track_conversation(
    user_message="What's the weather like?",
    ai_response="I don't have access to real-time weather data.",
    model="gpt-3.5-turbo",
    provider="openai"
)

print(f"Conversation tracked in session: {session_id}")
```

### With Statistics

```python
from chuk_ai_session_manager import quick_conversation

stats = await quick_conversation(
    user_message="Explain quantum computing",
    ai_response="Quantum computing uses quantum mechanical phenomena...",
    model="gpt-4",
    provider="openai"
)

print(f"Tokens used: {stats['total_tokens']}")
print(f"Estimated cost: ${stats['estimated_cost']:.4f}")
```
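
The `estimated_cost` figure is derived from per-token prices for the model in use. As a rough illustration of the arithmetic (the prices below are placeholders for this sketch, not the library's actual pricing table):

```python
# Illustrative cost estimate. The per-1K-token prices here are placeholders,
# not the rates chuk-ai-session-manager actually uses.
PRICES_PER_1K = {"gpt-4": (0.03, 0.06)}  # (prompt, completion) USD per 1K tokens

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    prompt_price, completion_price = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * prompt_price + (completion_tokens / 1000) * completion_price

# 120 prompt tokens and 450 completion tokens at the placeholder gpt-4 rates
print(f"${estimate_cost('gpt-4', 120, 450):.4f}")  # $0.0306
```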

### Basic Integration with Your LLM

```python
from chuk_ai_session_manager import track_llm_call
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def my_openai_call(prompt):
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Track your LLM call automatically
response, session_id = await track_llm_call(
    user_input="Explain machine learning",
    llm_function=my_openai_call,
    model="gpt-3.5-turbo",
    provider="openai"
)

print(f"AI Response: {response}")
print(f"Tracked in session: {session_id}")
```

## Simple API

The Simple API provides convenient functions for common tasks:

```python
from chuk_ai_session_manager import (
    track_conversation,
    track_llm_call,
    quick_conversation,
    track_infinite_conversation,
    track_tool_use,
    get_session_stats,
    get_conversation_history
)
```

### `track_conversation()`

The simplest way to track a conversation turn - perfect for one-off tracking:

```python
session_id = await track_conversation(
    user_message="Hello!",
    ai_response="Hi there! How can I help?",
    model="gpt-3.5-turbo",
    provider="openai",
    session_id=None,         # Optional: continue an existing session
    infinite_context=False,  # Set True to enable infinite context
    token_threshold=4000     # Token limit for segmentation
)

# Returns the session ID for continuing the conversation later
```

### `track_llm_call()`

Wrap your LLM function calls for automatic tracking:

```python
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def call_openai(prompt):
    # Your OpenAI API call here
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

response, session_id = await track_llm_call(
    user_input="Explain machine learning",
    llm_function=call_openai,
    model="gpt-3.5-turbo",
    provider="openai"
)
```

### `track_infinite_conversation()`

For long conversations that might exceed token limits:

```python
# Start a conversation
session_id = await track_infinite_conversation(
    user_message="Tell me about the history of computing",
    ai_response="Computing history begins with ancient calculating devices...",
    model="gpt-4",
    token_threshold=4000,  # Auto-segment after 4000 tokens
    max_turns=20           # Or after 20 conversation turns
)

# Continue the conversation
session_id = await track_infinite_conversation(
    user_message="What about quantum computers?",
    ai_response="Quantum computing represents a fundamental shift...",
    session_id=session_id,  # Continue the same conversation
    model="gpt-4"
)
```
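
Segmentation is triggered by whichever limit is hit first. A minimal sketch of that decision rule (my own illustration, not the library's internal code):

```python
# Illustration of the two segmentation triggers described above: a segment
# closes when EITHER the token threshold OR the turn limit is reached.
def should_start_new_segment(segment_tokens: int, segment_turns: int,
                             token_threshold: int = 4000, max_turns: int = 20) -> bool:
    return segment_tokens >= token_threshold or segment_turns >= max_turns

print(should_start_new_segment(4100, 7))   # True: token threshold exceeded
print(should_start_new_segment(1200, 20))  # True: turn limit reached
print(should_start_new_segment(1200, 7))   # False: neither limit hit
```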

### `track_tool_use()`

Track tool/function calls:

```python
session_id = await track_tool_use(
    tool_name="calculator",
    arguments={"operation": "add", "a": 5, "b": 3},
    result={"result": 8},
    session_id=session_id,
    error=None  # Optional error message
)
```

## SessionManager Class

For more control and persistent conversations, use the `SessionManager` class directly:

```python
from chuk_ai_session_manager import SessionManager

# Create a session manager
sm = SessionManager(
    system_prompt="You are a helpful assistant specialized in Python programming.",
    infinite_context=True,
    token_threshold=4000,
    max_turns_per_segment=20
)

# Track conversations
await sm.user_says("How do I create a list comprehension?")
await sm.ai_responds(
    "A list comprehension is a concise way to create lists in Python...",
    model="gpt-4",
    provider="openai"
)

# Track tool usage
await sm.tool_used(
    tool_name="code_executor",
    arguments={"code": "print([x**2 for x in range(5)])"},
    result={"output": "[0, 1, 4, 9, 16]"}
)

# Get session statistics
stats = await sm.get_stats()
print(f"Session {stats['session_id']}: {stats['total_messages']} messages, ${stats['estimated_cost']:.4f}")
```

### Working with System Prompts

```python
# Set initial system prompt
sm = SessionManager(system_prompt="You are a creative writing assistant.")

# Update system prompt later
await sm.update_system_prompt("You are now a technical documentation writer.")

# Get messages including system prompt for your LLM calls
messages = await sm.get_messages_for_llm(include_system=True)
# [{"role": "system", "content": "You are now a technical documentation writer."}, ...]
```

### SessionManager Properties

```python
sm = SessionManager()

# Access session information
print(f"Session ID: {sm.session_id}")
print(f"System Prompt: {sm.system_prompt}")
print(f"Infinite Context: {sm.is_infinite}")

# Check if this is a new session (note: a private attribute,
# but useful for initialization logic)
print(f"Is new session: {sm._is_new}")
```

### Managing Long Conversations

```python
# Enable infinite context with custom settings
sm = SessionManager(
    infinite_context=True,
    token_threshold=3000,     # Segment at 3000 tokens
    max_turns_per_segment=15  # Or 15 conversation turns
)

# The session auto-segments when limits are reached;
# you don't need to do anything - it happens automatically.

# Get full conversation across all segments
full_conversation = await sm.get_conversation(include_all_segments=True)

# Get session chain (list of session IDs in the conversation)
session_chain = await sm.get_session_chain()
print(f"Conversation spans {len(session_chain)} sessions: {session_chain}")
```
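
Under the hood, each segment records its parent, so the chain can be recovered by walking parent links back to the root. A self-contained sketch of that idea (plain dicts stand in for stored sessions; this is not the library's storage format):

```python
# Toy sessions: each segment points at its predecessor via "parent_id".
sessions = {
    "seg-1": {"parent_id": None},
    "seg-2": {"parent_id": "seg-1"},
    "seg-3": {"parent_id": "seg-2"},
}

def session_chain(session_id: str) -> list[str]:
    """Walk parent links from the given segment back to the root, oldest first."""
    chain = []
    current = session_id
    while current is not None:
        chain.append(current)
        current = sessions[current]["parent_id"]
    return list(reversed(chain))

print(session_chain("seg-3"))  # ['seg-1', 'seg-2', 'seg-3']
```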

## Core Models

### Session

The main container for a conversation:

```python
from chuk_ai_session_manager import Session

# Create a new session
session = await Session.create(
    parent_id=None,  # Optional parent session
    metadata={"user_id": "user123", "topic": "programming"}
)

# Session properties
print(f"Session ID: {session.id}")
print(f"Created: {session.metadata.created_at}")
print(f"Total tokens: {session.total_tokens}")
print(f"Total cost: ${session.total_cost:.4f}")

# Add events
from chuk_ai_session_manager import SessionEvent, EventSource, EventType

event = await SessionEvent.create_with_tokens(
    message="Hello world!",
    prompt="Hello world!",
    model="gpt-3.5-turbo",
    source=EventSource.USER,
    type=EventType.MESSAGE
)

await session.add_event_and_save(event)
```

### SessionEvent

Individual events within a session:

```python
from chuk_ai_session_manager import SessionEvent, EventSource, EventType

# Create an event with automatic token counting
event = await SessionEvent.create_with_tokens(
    message="What is machine learning?",
    prompt="What is machine learning?",
    completion=None,  # For user messages
    model="gpt-3.5-turbo",
    source=EventSource.USER,
    type=EventType.MESSAGE
)

# Event properties
print(f"Event ID: {event.id}")
print(f"Tokens used: {event.token_usage.total_tokens}")
print(f"Source: {event.source.value}")
print(f"Type: {event.type.value}")

# Update metadata
await event.set_metadata("user_id", "user123")
await event.set_metadata("intent", "question")

# Check metadata
user_id = await event.get_metadata("user_id")
has_intent = await event.has_metadata("intent")
```

### TokenUsage

Tracks token consumption and costs:

```python
from chuk_ai_session_manager import TokenUsage

# Create from text
usage = await TokenUsage.from_text(
    prompt="What is the capital of France?",
    completion="The capital of France is Paris.",
    model="gpt-3.5-turbo"
)

print(f"Prompt tokens: {usage.prompt_tokens}")
print(f"Completion tokens: {usage.completion_tokens}")
print(f"Total tokens: {usage.total_tokens}")
print(f"Estimated cost: ${usage.estimated_cost_usd:.6f}")

# Update token usage
await usage.update(prompt_tokens=10, completion_tokens=5)

# Count tokens for any text
token_count = await TokenUsage.count_tokens("Hello world!", "gpt-4")
```

### Event Source and Type Enums

```python
from chuk_ai_session_manager import EventSource, EventType

# Event sources
EventSource.USER    # User input
EventSource.LLM     # AI model response
EventSource.SYSTEM  # System/tool events

# Event types
EventType.MESSAGE         # Conversation messages
EventType.TOOL_CALL       # Tool/function calls
EventType.SUMMARY         # Session summaries
EventType.REFERENCE       # References to other content
EventType.CONTEXT_BRIDGE  # Context bridging events
```

## Infinite Context

The infinite context system automatically handles conversations that exceed token limits by creating linked sessions with summaries.

### InfiniteConversationManager

```python
from chuk_ai_session_manager import (
    InfiniteConversationManager,
    SummarizationStrategy,
    EventSource
)

# Create manager with custom settings
icm = InfiniteConversationManager(
    token_threshold=3000,
    max_turns_per_segment=15,
    summarization_strategy=SummarizationStrategy.KEY_POINTS
)

# Process messages (automatically segments when needed)
async def my_llm_callback(messages):
    # Your LLM call here
    return "Summary of the conversation..."

current_session_id = await icm.process_message(
    session_id="session-123",
    message="Tell me about quantum computing",
    source=EventSource.USER,
    llm_callback=my_llm_callback,
    model="gpt-4"
)

# Build context for LLM calls
context = await icm.build_context_for_llm(
    session_id=current_session_id,
    max_messages=10,
    include_summaries=True
)

# Get session chain
chain = await icm.get_session_chain(current_session_id)
print(f"Conversation chain: {[s.id for s in chain]}")
```

### Summarization Strategies

```python
from chuk_ai_session_manager import SummarizationStrategy

# Different summarization approaches
SummarizationStrategy.BASIC          # General overview
SummarizationStrategy.KEY_POINTS     # Focus on key information
SummarizationStrategy.TOPIC_BASED    # Organize by topics
SummarizationStrategy.QUERY_FOCUSED  # Focus on user questions
```
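
In effect, the chosen strategy changes the instruction given to the summarizing LLM. A rough sketch of how such a dispatch might look (the instruction wording here is purely illustrative, not the library's actual prompts):

```python
# Hypothetical mapping from strategy name to a summarization instruction.
SUMMARY_INSTRUCTIONS = {
    "BASIC": "Write a short general overview of the conversation so far.",
    "KEY_POINTS": "List the key facts and decisions from the conversation.",
    "TOPIC_BASED": "Summarize the conversation organized by topic.",
    "QUERY_FOCUSED": "Summarize with emphasis on the user's questions and whether they were answered.",
}

def build_summary_prompt(strategy: str, transcript: str) -> str:
    # Prepend the strategy-specific instruction to the raw transcript.
    return f"{SUMMARY_INSTRUCTIONS[strategy]}\n\nConversation:\n{transcript}"

prompt = build_summary_prompt("KEY_POINTS", "user: Hi\nassistant: Hello!")
print(prompt.splitlines()[0])  # List the key facts and decisions from the conversation.
```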

## Tool Integration

### SessionAwareToolProcessor

Integrates with `chuk_tool_processor` for automatic tool call tracking:

```python
from chuk_ai_session_manager import SessionAwareToolProcessor

# Create processor for a session
processor = await SessionAwareToolProcessor.create(
    session_id="session-123",
    enable_caching=True,
    max_retries=2,
    retry_delay=1.0
)

# Process an LLM message containing tool calls
llm_response = {
    "tool_calls": [
        {
            "function": {
                "name": "calculator",
                "arguments": '{"operation": "add", "a": 5, "b": 3}'
            }
        }
    ]
}

results = await processor.process_llm_message(llm_response, None)
for result in results:
    print(f"Tool: {result.tool}, Result: {result.result}")
```
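
The retry and caching behavior can be pictured as a thin wrapper around each tool call: return a cached result when the same call was seen before, otherwise retry failures up to a limit. A self-contained concept sketch (not the processor's actual implementation):

```python
import asyncio

async def call_with_cache_and_retry(tool, args: dict, cache: dict,
                                    max_retries: int = 2, retry_delay: float = 0.0):
    """Concept sketch: cache successful results, retry failed calls."""
    key = (tool.__name__, tuple(sorted(args.items())))
    if key in cache:
        return cache[key]  # cache hit: skip the call entirely
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            result = await tool(**args)
            cache[key] = result
            return result
        except Exception as exc:
            last_error = exc
            await asyncio.sleep(retry_delay)
    raise last_error  # all attempts failed

async def add(a, b):
    return a + b

cache = {}
print(asyncio.run(call_with_cache_and_retry(add, {"a": 5, "b": 3}, cache)))  # 8
```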

### Sample Tools

```python
# The package includes sample tools for demonstration
from chuk_ai_session_manager.sample_tools import (
    CalculatorTool,
    WeatherTool,
    SearchTool
)

# These are registered with chuk_tool_processor;
# use them as a reference for structuring your own tools
```

## Session Storage

### CHUK Sessions Backend

The storage layer is built on CHUK Sessions:

```python
from chuk_ai_session_manager import (
    setup_chuk_sessions_storage,
    SessionStorage,
    ChukSessionsStore
)

# Set up storage (usually done automatically)
backend = setup_chuk_sessions_storage(
    sandbox_id="my-ai-app",
    default_ttl_hours=48
)

# Get the store
store = ChukSessionsStore(backend)

# Manual session operations
session = await store.get("session-123")
await store.save(session)
await store.delete("session-123")
session_ids = await store.list_sessions(prefix="user-")
```

### Storage Configuration

```python
# Configure storage at import time
from chuk_ai_session_manager import configure_storage

success = configure_storage(
    sandbox_id="my-application",
    default_ttl_hours=72  # 3-day TTL
)

if success:
    print("Storage configured successfully")
else:
    print("Storage configuration failed")
```
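
TTL-based expiry means each session is stored with a deadline and treated as missing once the deadline passes. A minimal sketch of the idea (my illustration of the concept, not CHUK Sessions' implementation):

```python
import time

class TTLStore:
    """Concept sketch of TTL expiry: entries vanish after their deadline."""
    def __init__(self, default_ttl_hours: float):
        self._ttl_seconds = default_ttl_hours * 3600
        self._data = {}  # key -> (value, expires_at)

    def save(self, key, value, now=None):
        now = time.time() if now is None else now
        self._data[key] = (value, now + self._ttl_seconds)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._data.get(key)
        if entry is None or now > entry[1]:
            self._data.pop(key, None)  # lazily evict expired entries
            return None
        return entry[0]

store = TTLStore(default_ttl_hours=72)
store.save("session-123", {"events": []}, now=0)
print(store.get("session-123", now=1000))       # {'events': []}
print(store.get("session-123", now=73 * 3600))  # None (expired)
```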

## Prompt Building

### Building Prompts from Sessions

```python
from chuk_ai_session_manager import (
    build_prompt_from_session,
    PromptStrategy,
    truncate_prompt_to_token_limit
)

# Build prompts with different strategies
prompt = await build_prompt_from_session(
    session,
    strategy=PromptStrategy.CONVERSATION,  # Include conversation history
    max_tokens=3000,
    model="gpt-4",
    include_parent_context=True,
    max_history=10
)

# Prompt strategies
PromptStrategy.MINIMAL       # Just task and latest context
PromptStrategy.TASK_FOCUSED  # Focus on the task
PromptStrategy.TOOL_FOCUSED  # Emphasize tool usage
PromptStrategy.CONVERSATION  # Include conversation history
PromptStrategy.HIERARCHICAL  # Include parent session context
```

### Token Limit Management

```python
from chuk_ai_session_manager import truncate_prompt_to_token_limit

# Ensure the prompt fits within token limits
truncated_prompt = await truncate_prompt_to_token_limit(
    prompt=messages,
    max_tokens=3000,
    model="gpt-3.5-turbo"
)
```
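
A common way to enforce such a limit is to drop the oldest messages until the remainder fits the budget. A self-contained sketch using a crude length-based token estimate (illustrative only; the library's truncation is model-aware):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def truncate_oldest_first(messages: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest messages until the total estimate fits the budget."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m["content"]) for m in kept) > max_tokens:
        kept.pop(0)  # discard from the front, i.e. the oldest turn
    return kept

history = [
    {"role": "user", "content": "a" * 400},       # ~100 tokens
    {"role": "assistant", "content": "b" * 400},  # ~100 tokens
    {"role": "user", "content": "c" * 40},        # ~10 tokens
]
print(len(truncate_oldest_first(history, max_tokens=120)))  # 2
```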

## Configuration

### Package Configuration

```python
import chuk_ai_session_manager as casm

# Check what's available
print("Package version:", casm.get_version())
availability = casm.is_available()
print("Available components:", availability)

# Configure storage
success = casm.configure_storage(
    sandbox_id="my-app",
    default_ttl_hours=24
)
```

### Environment Setup

The package depends on several components:

1. **Required**: `chuk_sessions` - for storage backend
2. **Required**: `pydantic` - for data models
3. **Optional**: `tiktoken` - for accurate token counting (falls back to approximation)
4. **Optional**: `chuk_tool_processor` - for tool integration
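
When `tiktoken` is unavailable, token counts fall back to an approximation. The sketch below shows the general shape of such a fallback (the library's exact heuristic is not documented here; characters divided by four is a common stand-in):

```python
def count_tokens_approx(text: str) -> int:
    # Common fallback heuristic when no tokenizer is available:
    # assume roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def count_tokens(text: str, model: str) -> int:
    try:
        import tiktoken  # optional dependency
        encoding = tiktoken.encoding_for_model(model)
        return len(encoding.encode(text))
    except Exception:
        return count_tokens_approx(text)

print(count_tokens_approx("Hello world, this is a test."))  # 7
```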

### Error Handling

```python
from chuk_ai_session_manager import (
    SessionManagerError,
    SessionNotFound,
    TokenLimitExceeded,
    StorageError
)

try:
    session_id = await track_conversation("Hello", "Hi there")
except SessionNotFound as e:
    print(f"Session not found: {e}")
except TokenLimitExceeded as e:
    print(f"Token limit exceeded: {e}")
except StorageError as e:
    print(f"Storage error: {e}")
except SessionManagerError as e:
    print(f"General session error: {e}")
```

## 🌟 What Makes CHUK Special?

| Feature | Other Libraries | CHUK AI Session Manager |
|---------|-----------------|-------------------------|
| **Setup Complexity** | Complex configuration | 3 lines of code |
| **Cost Tracking** | Manual calculation | Automatic across all providers |
| **Long Conversations** | Token limit errors | Infinite context with auto-segmentation |
| **Multi-Provider** | Provider-specific code | Works with any LLM |
| **Production Ready** | Requires additional work | Built for production |
| **Learning Curve** | Steep | 5 minutes to productivity |
| **Tool Integration** | Manual tracking | Automatic tool call logging |
| **Session Management** | Build from scratch | Complete session hierarchy |

## 🎯 Quick Decision Guide

**Choose CHUK AI Session Manager if you want:**

- ✅ Simple conversation tracking with zero configuration
- ✅ Automatic cost monitoring across all LLM providers
- ✅ Infinite conversation length without token limit errors
- ✅ Production-ready session management out of the box
- ✅ Complete conversation analytics and observability
- ✅ Framework-agnostic solution that works with any LLM library
- ✅ Built-in tool call tracking and retry mechanisms
- ✅ Hierarchical session relationships for complex workflows

## 📊 Monitoring & Analytics

```python
# Get comprehensive session analytics
stats = await sm.get_stats(include_all_segments=True)

print(f"""
🚀 Session Analytics Dashboard
==============================
Session ID:        {stats['session_id']}
Total Messages:    {stats['total_messages']}
User Messages:     {stats['user_messages']}
AI Messages:       {stats['ai_messages']}
Tool Calls:        {stats['tool_calls']}
Total Tokens:      {stats['total_tokens']}
Total Cost:        ${stats['estimated_cost']:.6f}
Session Segments:  {stats.get('session_segments', 1)}
Created:           {stats['created_at']}
Last Update:       {stats['last_update']}
Infinite Context:  {stats.get('infinite_context', False)}
""")

# Get conversation history
conversation = await sm.get_conversation(include_all_segments=True)
for i, turn in enumerate(conversation):
    print(f"{i + 1}. {turn['role']}: {turn['content'][:50]}...")
```

## 🔧 Dependencies Check

The required dependencies are installed automatically with the package; the optional ones are listed under [Environment Setup](#environment-setup) above. To verify which components are available:

```python
import chuk_ai_session_manager as casm

# Check if all components are available
availability = casm.is_available()
for component, available in availability.items():
    status = "✅" if available else "❌"
    print(f"{status} {component}")
```

## 🤝 Community & Support

- 📖 **Full Documentation**: Complete API reference and tutorials
- 🐛 **Issues**: Report bugs and request features on GitHub
- 💡 **Examples**: Check the `/examples` directory for working code
- 📧 **Support**: Enterprise support available

## 📝 License

MIT License - build amazing AI applications with confidence!

---

**🎉 Ready to build better AI applications?**

```bash
uv add chuk-ai-session-manager
```

**Get started in 30 seconds with one line of code!**