kalibr 1.0.18__py3-none-any.whl → 1.0.20__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,173 @@
# Enhanced Kalibr SDK Examples

This directory contains examples demonstrating both the original function-level Kalibr capabilities and the new enhanced app-level features.

## Examples Included

### 1. Basic Kalibr Example (`basic_kalibr_example.py`)
Demonstrates the original Kalibr SDK capabilities:
- Simple function decoration with `@sdk.action()`
- Basic parameter handling and type inference
- Compatible with GPT Actions and Claude MCP
- Simple API endpoints

**Features shown:**
- Text processing functions
- Mathematical calculations
- Email validation
- Text statistics

**To run:**
```bash
kalibr serve basic_kalibr_example.py
```

**Test endpoints:**
- `POST /proxy/greet` - Greeting function
- `POST /proxy/calculate` - Basic calculator
- `POST /proxy/validate_email` - Email validation
- `POST /proxy/text_stats` - Text analysis

### 2. Enhanced Kalibr Example (`enhanced_kalibr_example.py`)
Demonstrates the new enhanced app-level capabilities:
- File upload handling
- Session management
- Streaming responses
- Complex workflows
- Multi-model schema generation

**Features shown:**
- File upload and analysis
- Session-based note taking
- Real-time streaming data
- Multi-step workflows
- Advanced parameter handling

**To run:**
```bash
kalibr serve enhanced_kalibr_example.py --app-mode
```

**Test endpoints:**
- `POST /upload/analyze_document` - File upload analysis
- `POST /session/save_note` - Session-based note saving
- `GET /stream/count_with_progress` - Streaming counter
- `POST /workflow/process_text_analysis` - Complex text workflow

## Multi-Model Integration

Both examples automatically generate schemas for multiple AI models:

### Available Schema Endpoints:
- **Claude MCP**: `/mcp.json`
- **GPT Actions**: `/openapi.json`
- **Gemini Extensions**: `/schemas/gemini`
- **Microsoft Copilot**: `/schemas/copilot`

### Management Endpoints:
- **Health Check**: `/health`
- **Supported Models**: `/models/supported`
- **API Documentation**: `/docs`
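
A quick way to confirm what a running server exposes is to request these endpoints directly; the commands below are a minimal sketch assuming the default local address used throughout these examples:

```bash
# Schema documents for each supported model
curl http://localhost:8000/mcp.json
curl http://localhost:8000/openapi.json
curl http://localhost:8000/schemas/gemini
curl http://localhost:8000/schemas/copilot

# Health and capability checks
curl http://localhost:8000/health
curl http://localhost:8000/models/supported
```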

## Usage Examples

### Basic Function Call:
```bash
curl -X POST http://localhost:8000/proxy/greet \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "greeting": "Hi"}'
```

### File Upload:
```bash
curl -X POST http://localhost:8000/upload/analyze_document \
  -F "file=@example.txt"
```

### Session Management:
```bash
# Save a note (creates session)
curl -X POST http://localhost:8000/session/save_note \
  -H "Content-Type: application/json" \
  -d '{"note_title": "My Note", "note_content": "This is a test note"}'

# Get notes (use session ID from previous response)
curl -X POST http://localhost:8000/session/get_notes \
  -H "Content-Type: application/json" \
  -H "x-session-id: <session-id-here>" \
  -d '{}'
```

### Streaming Data:
```bash
curl "http://localhost:8000/stream/count_with_progress?max_count=5&delay_seconds=1"
```
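
Because the stream endpoints emit Server-Sent Events, passing `-N` keeps curl from buffering output so events print as they arrive. The commented lines sketch the expected shape of the stream (field names taken from the example code), not verbatim server output:

```bash
curl -N "http://localhost:8000/stream/count_with_progress?max_count=5&delay_seconds=1"
# data: {"count": 0, "max_count": 5, "progress_percent": 0.0, ...}
# data: {"count": 1, "max_count": 5, "progress_percent": 20.0, ...}
```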

### Complex Workflow:
```bash
curl -X POST http://localhost:8000/workflow/process_text_analysis \
  -H "Content-Type: application/json" \
  -d '{"text": "This is a sample text for analysis. It contains multiple sentences and words for testing the workflow capabilities."}'
```

## Integration with AI Models

### GPT Actions Setup:
1. Copy the OpenAPI schema from `/openapi.json`
2. Create a new GPT Action in ChatGPT
3. Paste the schema and set the base URL
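
If you prefer to work from a file, one option is to download the generated schema first (assuming the server is running locally as in the examples above):

```bash
curl -s http://localhost:8000/openapi.json -o openapi.json
```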

### Claude MCP Setup:
1. Add the MCP server configuration:
```json
{
  "mcp": {
    "servers": {
      "kalibr": {
        "command": "curl",
        "args": ["http://localhost:8000/mcp.json"]
      }
    }
  }
}
```

### Gemini Extensions:
1. Use the schema from `/schemas/gemini`
2. Configure according to Gemini's extension documentation

### Microsoft Copilot:
1. Use the schema from `/schemas/copilot`
2. Follow Microsoft's plugin development guidelines

## Advanced Features

### Authentication (Optional):
Uncomment the authentication line in the enhanced example:
```python
app.enable_auth("your-secret-jwt-key-here")
```

### Custom Schema Generation:
The framework supports extensible schema generation for future AI models through the `CustomModelSchemaGenerator` class.

### Error Handling:
All endpoints include comprehensive error handling with meaningful error messages.
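
For example, the `calculate` action in `basic_kalibr_example.py` returns a structured error payload instead of raising, so an invalid request still yields usable JSON (the response shape below is taken from that function's return value):

```bash
curl -X POST http://localhost:8000/proxy/calculate \
  -H "Content-Type: application/json" \
  -d '{"operation": "divide", "a": 1, "b": 0}'
# => {"error": "Invalid operation 'divide' or division by zero"}
```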

### Type Safety:
Full support for Python type hints with automatic schema generation.

## Development Notes

- The enhanced framework is backward compatible with original Kalibr apps
- Session data is stored in memory (use external storage for production)
- File uploads are handled in memory (implement persistent storage as needed)
- Streaming uses Server-Sent Events (SSE) format
- All examples include proper async/await handling where needed

## Next Steps

1. Try the examples with different AI models
2. Modify the examples to fit your specific use case
3. Explore the source code in `/app/backend/kalibr/` for advanced customization
4. Build your own enhanced Kalibr applications!
@@ -0,0 +1,66 @@
"""
Basic Kalibr SDK Example - Function-level API integration
This demonstrates the original function-level capabilities of Kalibr.
"""

from kalibr import Kalibr

# Create a basic Kalibr instance
sdk = Kalibr(title="Basic Kalibr Demo", base_url="http://localhost:8000")

@sdk.action("greet", "Greet someone with a personalized message")
def greet_user(name: str, greeting: str = "Hello"):
    """Simple greeting function"""
    return {"message": f"{greeting}, {name}! Welcome to Kalibr SDK."}

@sdk.action("calculate", "Perform basic mathematical operations")
def calculate(operation: str, a: float, b: float):
    """Basic calculator functionality"""
    operations = {
        "add": a + b,
        "subtract": a - b,
        "multiply": a * b,
        "divide": a / b if b != 0 else None
    }

    result = operations.get(operation)
    if result is None:
        return {"error": f"Invalid operation '{operation}' or division by zero"}

    return {
        "operation": operation,
        "operands": [a, b],
        "result": result
    }

@sdk.action("validate_email", "Check if an email address is valid")
def validate_email(email: str):
    """Simple email validation"""
    import re

    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    is_valid = bool(re.match(pattern, email))

    return {
        "email": email,
        "is_valid": is_valid,
        "message": "Valid email address" if is_valid else "Invalid email format"
    }

@sdk.action("text_stats", "Get statistics about a text string")
def text_statistics(text: str):
    """Analyze text and return statistics"""
    words = text.split()
    # Split on sentence-ending punctuation; concatenating separate splits on
    # '.', '!' and '?' would count each sentence several times
    sentences = [s.strip() for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]

    return {
        "character_count": len(text),
        "word_count": len(words),
        "sentence_count": len(sentences),
        "average_word_length": sum(len(word) for word in words) / len(words) if words else 0,
        "longest_word": max(words, key=len) if words else None
    }

# The SDK instance is automatically discovered by the Kalibr CLI
# To run this: kalibr serve basic_kalibr_example.py
@@ -0,0 +1,347 @@
"""
Enhanced Kalibr App Example - App-level capabilities
This demonstrates the new enhanced capabilities including file uploads,
sessions, streaming, workflows, and multi-model schema generation.
"""

from kalibr import KalibrApp
from kalibr.types import FileUpload, Session, StreamingResponse, WorkflowState, AuthenticatedUser
import asyncio
import json
from datetime import datetime
from typing import List

# Create an enhanced KalibrApp instance
app = KalibrApp(title="Enhanced Kalibr Demo", base_url="http://localhost:8000")

# Basic action (compatible with original Kalibr)
@app.action("hello", "Say hello with enhanced capabilities")
def hello_enhanced(name: str = "World", include_timestamp: bool = False):
    """Enhanced hello function with optional timestamp"""
    message = f"Hello, {name}! This is Enhanced Kalibr v2.0"

    response = {"message": message}
    if include_timestamp:
        response["timestamp"] = datetime.now().isoformat()

    return response

# File upload handler
@app.file_handler("analyze_document", [".txt", ".md", ".py", ".js", ".json"])
async def analyze_document(file: FileUpload):
    """Analyze uploaded document and return insights"""
    try:
        # Decode file content
        content = file.content.decode('utf-8')

        # Basic analysis
        lines = content.split('\n')
        words = content.split()

        # Language detection based on file extension
        language = "text"
        if file.filename.endswith('.py'):
            language = "python"
        elif file.filename.endswith('.js'):
            language = "javascript"
        elif file.filename.endswith('.json'):
            language = "json"
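            # JSON files get a structured key summary; if parsing fails, fall through to the generic text analysis below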
            try:
                json_data = json.loads(content)
                return {
                    "upload_id": file.upload_id,
                    "filename": file.filename,
                    "analysis": {
                        "type": "json",
                        "valid_json": True,
                        "keys": list(json_data.keys()) if isinstance(json_data, dict) else None,
                        "size_bytes": file.size
                    }
                }
            except json.JSONDecodeError:
                pass

        return {
            "upload_id": file.upload_id,
            "filename": file.filename,
            "analysis": {
                "language": language,
                "line_count": len(lines),
                "word_count": len(words),
                "character_count": len(content),
                "size_bytes": file.size,
                "non_empty_lines": len([line for line in lines if line.strip()]),
                "estimated_reading_time_minutes": len(words) / 200  # Average reading speed
            }
        }
    except UnicodeDecodeError:
        return {
            "upload_id": file.upload_id,
            "filename": file.filename,
            "error": "File is not text-readable (binary file)",
            "size_bytes": file.size
        }

# Session-aware action
@app.session_action("save_note", "Save a note to user session")
async def save_note(session: Session, note_title: str, note_content: str):
    """Save a note to the user's session"""

    # Initialize notes if not exists
    if 'notes' not in session.data:
        session.data['notes'] = []

    # Create note object
    note = {
        "id": len(session.data['notes']) + 1,
        "title": note_title,
        "content": note_content,
        "created_at": datetime.now().isoformat(),
        "updated_at": datetime.now().isoformat()
    }

    session.data['notes'].append(note)
    session.set('last_note_id', note['id'])

    return {
        "status": "saved",
        "note": note,
        "total_notes": len(session.data['notes']),
        "session_id": session.session_id
    }

@app.session_action("get_notes", "Retrieve all notes from session")
async def get_notes(session: Session):
    """Get all notes from the user's session"""
    notes = session.get('notes', [])

    return {
        "notes": notes,
        "count": len(notes),
        "session_id": session.session_id,
        "last_note_id": session.get('last_note_id')
    }

# Streaming action
@app.stream_action("count_with_progress", "Stream counting with progress updates")
async def count_with_progress(max_count: int = 10, delay_seconds: float = 1.0):
    """Stream counting numbers with progress indication"""

    for i in range(max_count + 1):
        progress_percent = (i / max_count) * 100

        yield {
            "count": i,
            "max_count": max_count,
            "progress_percent": progress_percent,
            "message": f"Counting: {i}/{max_count}",
            "timestamp": datetime.now().isoformat(),
            "is_complete": (i == max_count)
        }

        if i < max_count:  # Don't delay after the last item
            await asyncio.sleep(delay_seconds)

@app.stream_action("generate_fibonacci", "Stream Fibonacci sequence")
async def generate_fibonacci(count: int = 20, delay_seconds: float = 0.5):
    """Generate Fibonacci sequence as a stream"""

    a, b = 0, 1
    for i in range(count):
        yield {
            "position": i + 1,
            "fibonacci_number": a,
            "sequence_so_far": f"F({i+1}) = {a}",
            "timestamp": datetime.now().isoformat()
        }

        a, b = b, a + b
        await asyncio.sleep(delay_seconds)

# Complex workflow
@app.workflow("process_text_analysis", "Complete text analysis workflow")
async def text_analysis_workflow(text: str, workflow_state: WorkflowState):
    """Multi-step text analysis workflow"""

    # Step 1: Validation
    workflow_state.step = "validation"
    workflow_state.status = "processing"

    if not text or len(text.strip()) < 10:
        workflow_state.status = "error"
        return {"error": "Text must be at least 10 characters long"}

    await asyncio.sleep(1)  # Simulate processing time

    # Step 2: Basic analysis
    workflow_state.step = "basic_analysis"
    workflow_state.data["validation_passed"] = True

    words = text.split()
    sentences = [s.strip() for s in text.replace('!', '.').replace('?', '.').split('.') if s.strip()]

    basic_stats = {
        "character_count": len(text),
        "word_count": len(words),
        "sentence_count": len(sentences),
        "paragraph_count": len([p for p in text.split('\n\n') if p.strip()])
    }

    workflow_state.data["basic_stats"] = basic_stats
    await asyncio.sleep(1)

    # Step 3: Advanced analysis
    workflow_state.step = "advanced_analysis"

    # Word frequency
    word_freq = {}
    for word in words:
        clean_word = word.lower().strip('.,!?";:')
        word_freq[clean_word] = word_freq.get(clean_word, 0) + 1

    # Top words
    top_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)[:5]

    advanced_stats = {
        "unique_words": len(word_freq),
        "average_word_length": sum(len(word) for word in words) / len(words) if words else 0,
        "longest_word": max(words, key=len) if words else None,
        "top_words": top_words,
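        # Rough heuristic: longer average sentence length lowers the score (not a standard readability metric)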
        "readability_score": min(100, max(0, 100 - (len(words) / len(sentences) if sentences else 1) * 2))
    }

    workflow_state.data["advanced_stats"] = advanced_stats
    await asyncio.sleep(1)

    # Step 4: Final compilation
    workflow_state.step = "compilation"

    result = {
        "workflow_id": workflow_state.workflow_id,
        "analysis_type": "complete_text_analysis",
        "input_text_preview": text[:100] + "..." if len(text) > 100 else text,
        "basic_statistics": basic_stats,
        "advanced_statistics": advanced_stats,
        "processing_steps": ["validation", "basic_analysis", "advanced_analysis", "compilation"],
        "completed_at": datetime.now().isoformat()
    }

    workflow_state.step = "completed"
    workflow_state.status = "success"
    workflow_state.data["final_result"] = result

    return result

# Data processing workflow
@app.workflow("batch_text_processor", "Process multiple texts in batch")
async def batch_text_processor(texts: List[str], workflow_state: WorkflowState):
    """Process multiple texts as a batch workflow"""

    workflow_state.step = "initialization"
    workflow_state.status = "processing"

    if not texts or len(texts) == 0:
        workflow_state.status = "error"
        return {"error": "No texts provided for processing"}

    results = []
    workflow_state.data["total_texts"] = len(texts)

    for i, text in enumerate(texts):
        workflow_state.step = f"processing_text_{i+1}"
        workflow_state.data["current_text"] = i + 1
        workflow_state.data["progress_percent"] = ((i + 1) / len(texts)) * 100

        # Process each text
        words = text.split()
        analysis = {
            "text_id": i + 1,
            "text_preview": text[:50] + "..." if len(text) > 50 else text,
            "word_count": len(words),
            "character_count": len(text),
            "sentence_count": len([s for s in text.split('.') if s.strip()])
        }

        results.append(analysis)
        await asyncio.sleep(0.5)  # Simulate processing time

    # Final aggregation
    workflow_state.step = "aggregation"

    total_words = sum(r["word_count"] for r in results)
    total_chars = sum(r["character_count"] for r in results)

    final_result = {
        "workflow_id": workflow_state.workflow_id,
        "batch_summary": {
            "total_texts_processed": len(results),
            "total_words": total_words,
            "total_characters": total_chars,
            "average_words_per_text": total_words / len(results) if results else 0,
            "average_chars_per_text": total_chars / len(results) if results else 0
        },
        "individual_results": results,
        "completed_at": datetime.now().isoformat()
    }

    workflow_state.step = "completed"
    workflow_state.status = "success"
    workflow_state.data["final_result"] = final_result

    return final_result

# Advanced action with multiple parameters
@app.action("advanced_search", "Perform advanced search with multiple filters")
def advanced_search(
    query: str,
    category: str = "all",
    min_score: float = 0.0,
    max_results: int = 10,
    include_metadata: bool = False,
    sort_by: str = "relevance"
):
    """Advanced search function demonstrating complex parameter handling"""

    # Simulate search results
    mock_results = [
        {"id": 1, "title": f"Result matching '{query}'", "score": 0.95, "category": category},
        {"id": 2, "title": f"Another match for '{query}'", "score": 0.87, "category": category},
        {"id": 3, "title": f"Related to '{query}'", "score": 0.73, "category": category},
    ]

    # Filter by score
    filtered_results = [r for r in mock_results if r["score"] >= min_score]

    # Limit results
    filtered_results = filtered_results[:max_results]

    # Sort results
    if sort_by == "score":
        filtered_results.sort(key=lambda x: x["score"], reverse=True)

    response = {
        "query": query,
        "filters": {
            "category": category,
            "min_score": min_score,
            "max_results": max_results,
            "sort_by": sort_by
        },
        "results": filtered_results,
        "result_count": len(filtered_results)
    }

    if include_metadata:
        response["metadata"] = {
            "search_performed_at": datetime.now().isoformat(),
            "processing_time_ms": 45,
            "total_available": len(mock_results)
        }

    return response

# Enable authentication (optional)
# app.enable_auth("your-secret-jwt-key-here")

# The app instance is automatically discovered by the Kalibr CLI
# To run this: kalibr serve enhanced_kalibr_example.py --app-mode