rakam-systems-agent 0.1.1rc7__py3-none-any.whl

@@ -0,0 +1,533 @@
+ # LLM Gateway Tools
+
+ A comprehensive set of tools that expose LLM Gateway generation functions, enabling agents to use LLM capabilities as tools for meta-reasoning, delegation, and specialized NLP tasks.
+
+ ## Overview
+
+ The LLM Gateway Tools bridge the gap between agents and LLM generation capabilities, allowing agents to:
+
+ - **Meta-Reasoning**: use LLMs to reason about complex problems
+ - **Task Delegation**: delegate subtasks to specialized models
+ - **Multi-Model Workflows**: compare outputs or build consensus across models
+ - **Specialized Operations**: use LLMs for summarization, entity extraction, translation, and more
+
+ ## Available Tools
+
+ ### Core Generation Tools
+
+ #### 1. `llm_generate`
+ Generate text using an LLM through the gateway.
+
+ **Parameters:**
+ - `user_prompt` (required): The main prompt/question for the LLM
+ - `system_prompt` (optional): System prompt to set context/behavior
+ - `model` (optional): Model string (e.g., "openai:gpt-4o", "mistral:mistral-large-latest")
+ - `temperature` (optional): Temperature for generation (0.0-1.0)
+ - `max_tokens` (optional): Maximum tokens to generate
+
+ **Returns:**
+ - `content`: The generated text
+ - `model`: Model used for generation
+ - `usage`: Token usage information
+ - `finish_reason`: Why generation stopped
+ - `metadata`: Additional metadata
+
+ **Use Cases:**
+ - Multi-step reasoning
+ - Delegation to specialized models
+ - Meta-reasoning workflows
+
+ **Example:**
+ ```python
+ from ai_agents.components.tools.llm_gateway_tools import llm_generate
+
+ result = await llm_generate(
+     user_prompt="Explain quantum entanglement",
+     system_prompt="You are a physics expert",
+     model="openai:gpt-4o",
+     temperature=0.7
+ )
+ print(result['content'])
+ ```
+
+ #### 2. `llm_generate_structured`
+ Generate structured output conforming to a JSON schema.
+
+ **Parameters:**
+ - `user_prompt` (required): The main prompt/question
+ - `schema` (required): JSON schema defining expected output structure
+ - `system_prompt` (optional): System prompt
+ - `model` (optional): Model string
+ - `temperature` (optional): Temperature (0.0-1.0)
+ - `max_tokens` (optional): Maximum tokens
+
+ **Returns:**
+ - `structured_output`: The parsed structured output
+ - `raw_content`: The raw text response
+ - `model`: Model used
+ - `usage`: Token usage information
+
+ **Use Cases:**
+ - Extracting structured data
+ - Ensuring consistent output format
+ - Type-safe responses
+
+ **Example:**
+ ```python
+ schema = {
+     "type": "object",
+     "properties": {
+         "title": {"type": "string"},
+         "author": {"type": "string"},
+         "year": {"type": "integer"}
+     }
+ }
+
+ result = await llm_generate_structured(
+     user_prompt="Tell me about '1984' by George Orwell",
+     schema=schema
+ )
+ print(result['structured_output'])
+ ```
+
+ #### 3. `llm_count_tokens`
+ Count tokens in text using the LLM gateway's tokenizer.
+
+ **Parameters:**
+ - `text` (required): Text to count tokens for
+ - `model` (optional): Model string for tokenization
+
+ **Returns:**
+ - `token_count`: Number of tokens
+ - `model`: Model used
+ - `text_length`: Character length
+
+ **Use Cases:**
+ - Checking prompt lengths before generation
+ - Estimating API costs
+ - Managing context windows
+
+ **Example:**
+ ```python
+ result = await llm_count_tokens(
+     text="The quick brown fox jumps over the lazy dog",
+     model="openai:gpt-4o"
+ )
+ print(f"Tokens: {result['token_count']}")
+ ```
+
+ #### 4. `llm_multi_model_generate`
+ Generate responses from multiple models in parallel.
+
+ **Parameters:**
+ - `user_prompt` (required): The main prompt
+ - `models` (required): List of model strings
+ - `system_prompt` (optional): System prompt
+ - `temperature` (optional): Temperature (0.0-1.0)
+ - `max_tokens` (optional): Maximum tokens
+
+ **Returns:**
+ - `responses`: List of responses from each model
+ - `model_count`: Number of models queried
+
+ **Use Cases:**
+ - Comparing outputs across models
+ - Building consensus
+ - Model ensemble approaches
+ - A/B testing prompts
+
+ **Example:**
+ ```python
+ result = await llm_multi_model_generate(
+     user_prompt="What is the meaning of life?",
+     models=["openai:gpt-4o", "mistral:mistral-large-latest"]
+ )
+
+ for response in result['responses']:
+     print(f"{response['model']}: {response['content']}")
+ ```
+
+ ### Specialized NLP Tools
+
+ #### 5. `llm_summarize`
+ Summarize text using an LLM.
+
+ **Parameters:**
+ - `text` (required): Text to summarize
+ - `model` (optional): Model string
+ - `max_length` (optional): Maximum length of the summary in words
+
+ **Returns:**
+ - `summary`: The generated summary
+ - `original_length`: Length of original text (words)
+ - `summary_length`: Length of summary (words)
+ - `model`: Model used
+ - `usage`: Token usage
+
+ **Example:**
+ ```python
+ result = await llm_summarize(
+     text="Long article text...",
+     max_length=100
+ )
+ print(result['summary'])
+ ```
+
+ #### 6. `llm_extract_entities`
+ Extract named entities from text.
+
+ **Parameters:**
+ - `text` (required): Text to extract entities from
+ - `entity_types` (optional): List of entity types to extract (e.g., ["person", "organization", "location"])
+ - `model` (optional): Model string
+
+ **Returns:**
+ - `entities`: Extracted entities (as JSON)
+ - `model`: Model used
+ - `usage`: Token usage
+
+ **Example:**
+ ```python
+ result = await llm_extract_entities(
+     text="Apple Inc. was founded by Steve Jobs in Cupertino.",
+     entity_types=["person", "organization", "location"]
+ )
+ print(result['entities'])
+ ```
+
+ #### 7. `llm_translate`
+ Translate text using an LLM.
+
+ **Parameters:**
+ - `text` (required): Text to translate
+ - `target_language` (required): Target language (e.g., "Spanish", "French")
+ - `source_language` (optional): Source language (auto-detected if not specified)
+ - `model` (optional): Model string
+
+ **Returns:**
+ - `translation`: The translated text
+ - `source_language`: Source language used
+ - `target_language`: Target language
+ - `model`: Model used
+ - `usage`: Token usage
+
+ **Example:**
+ ```python
+ result = await llm_translate(
+     text="Hello, how are you?",
+     target_language="Spanish"
+ )
+ print(result['translation'])
+ ```
+
+ ## Usage Patterns
+
+ ### 1. Direct Usage
+
+ Use the tools directly in your code:
+
+ ```python
+ from ai_agents.components.tools.llm_gateway_tools import llm_generate
+
+ async def my_function():
+     result = await llm_generate(
+         user_prompt="What is AI?",
+         temperature=0.7
+     )
+     return result['content']
+ ```
+
+ ### 2. Registration with Tool Registry
+
+ Register tools for use with agents:
+
+ ```python
+ from ai_core.interfaces.tool_registry import ToolRegistry
+ from ai_agents.components.tools.llm_gateway_tools import get_all_llm_gateway_tools
+
+ # Create registry
+ registry = ToolRegistry()
+
+ # Register all LLM gateway tools
+ tool_configs = get_all_llm_gateway_tools()
+ for config in tool_configs:
+     registry.register_direct_tool(
+         name=config["name"],
+         function=config["function"],
+         description=config["description"],
+         json_schema=config["json_schema"],
+         category=config.get("category"),
+         tags=config.get("tags", []),
+     )
+ ```
+
+ ### 3. Configuration-Based Loading
+
+ Load tools from YAML configuration:
+
+ ```yaml
+ # llm_gateway_tools_config.yaml
+ tools:
+   - name: llm_generate
+     type: direct
+     module: ai_agents.components.tools.llm_gateway_tools
+     function: llm_generate
+     description: Generate text using an LLM
+     category: llm
+     tags: [generation, llm]
+     schema:
+       type: object
+       properties:
+         user_prompt:
+           type: string
+           description: The main prompt
+       required: [user_prompt]
+ ```
+
+ ```python
+ from ai_core.interfaces.tool_registry import ToolRegistry
+ from ai_core.interfaces.tool_loader import ToolLoader
+
+ registry = ToolRegistry()
+ loader = ToolLoader(registry)
+ loader.load_from_yaml("llm_gateway_tools_config.yaml")
+ ```
+
+ ### 4. Use with Agents
+
+ Create agents with LLM gateway tools:
+
+ ```python
+ from ai_agents.components.base_agent import BaseAgent
+ from ai_core.interfaces.tool_registry import ToolRegistry
+
+ # Create registry and register tools
+ registry = ToolRegistry()
+ # ... register tools ...
+
+ # Create agent with tools
+ agent = BaseAgent(
+     name="meta_reasoning_agent",
+     model="openai:gpt-4o",
+     system_prompt="You are an agent with LLM tool access",
+     tool_registry=registry
+ )
+
+ # Agent can now use LLM tools
+ result = await agent.arun(
+     "Summarize this article and extract key entities: ..."
+ )
+ ```
+
+ ## Advanced Patterns
+
+ ### Meta-Reasoning
+
+ Use one LLM to reason about which LLM to use for a task:
+
+ ```python
+ # Step 1: Ask planner LLM which model to use
+ decision = await llm_generate(
+     user_prompt="Which model is best for creative writing?",
+     system_prompt="You are an AI model selection expert",
+     temperature=0.3
+ )
+
+ # Step 2: Use recommended model for the actual task
+ result = await llm_generate(
+     user_prompt="Write a creative story about...",
+     model="mistral:mistral-large-latest",  # Based on recommendation
+     temperature=0.9
+ )
+ ```
+
+ ### Multi-Model Consensus
+
+ Get consensus from multiple models:
+
+ ```python
+ result = await llm_multi_model_generate(
+     user_prompt="Is this statement true: ...",
+     models=["openai:gpt-4o", "mistral:mistral-large-latest"]
+ )
+
+ # Analyze responses for consensus
+ responses = [r['content'] for r in result['responses']]
+ ```
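+
+ One simple way to turn those responses into a decision is a majority vote over normalized answers. The sketch below assumes each model was instructed to answer with a single word; the voting logic is illustrative, not part of the package:
+
+ ```python
+ from collections import Counter
+
+ result = await llm_multi_model_generate(
+     user_prompt="Answer only 'true' or 'false'. Is this statement true: ...",
+     models=["openai:gpt-4o", "mistral:mistral-large-latest"]
+ )
+
+ # Normalize each answer and tally votes across models
+ votes = Counter(r['content'].strip().lower() for r in result['responses'])
+ answer, count = votes.most_common(1)[0]
+ print(f"Consensus: {answer} ({count}/{result['model_count']} models)")
+ ```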
+
+ ### Hierarchical Task Decomposition
+
+ An agent breaks down complex tasks:
+
+ ```python
+ # An agent with LLM tools can:
+ # 1. Use llm_generate to break down a complex task
+ # 2. Use llm_summarize on long inputs
+ # 3. Use llm_extract_entities to find key information
+ # 4. Use llm_translate to handle multilingual content
+ # 5. Use llm_multi_model_generate to verify results
+ ```
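+
+ As a concrete sketch of steps 2-4, the pipeline below chains the documented tools over a long document; the helper name and control flow are illustrative, not part of the package:
+
+ ```python
+ async def analyze_document(text: str) -> dict:
+     # Condense the long input before further processing
+     summary = await llm_summarize(text=text, max_length=150)
+
+     # Pull the key actors and places out of the summary
+     entities = await llm_extract_entities(
+         text=summary['summary'],
+         entity_types=["person", "organization", "location"]
+     )
+
+     # Make the summary available in English as well
+     translation = await llm_translate(
+         text=summary['summary'],
+         target_language="English"
+     )
+
+     return {
+         "summary": summary['summary'],
+         "entities": entities['entities'],
+         "summary_en": translation['translation'],
+     }
+ ```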
+
+ ### Cost Optimization
+
+ Check token counts before generation:
+
+ ```python
+ # Check token count
+ token_result = await llm_count_tokens(
+     text=my_long_prompt,
+     model="openai:gpt-4o"
+ )
+
+ # Only proceed if within budget
+ if token_result['token_count'] < 1000:
+     result = await llm_generate(
+         user_prompt=my_long_prompt,
+         model="openai:gpt-4o"
+     )
+ ```
+
+ ## Configuration
+
+ ### Model Selection
+
+ Tools accept model strings in the `provider:model` format:
+ - `"openai:gpt-4o"` - OpenAI GPT-4o
+ - `"openai:gpt-4o-mini"` - OpenAI GPT-4o-mini
+ - `"mistral:mistral-large-latest"` - Mistral Large
+ - `"mistral:mistral-small-latest"` - Mistral Small
+
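+ The `provider:model` convention can also be split in calling code, e.g. for routing or logging. This sketch shows the convention only; it is not the gateway's own parser:
+
+ ```python
+ def split_model_string(model: str) -> tuple[str, str]:
+     """Split 'openai:gpt-4o' into ('openai', 'gpt-4o')."""
+     provider, _, name = model.partition(":")
+     return provider, name
+ ```
+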
+ ### Environment Variables
+
+ Tools use the LLM Gateway Factory, which respects:
+ - `DEFAULT_LLM_MODEL` - Default model if not specified
+ - `DEFAULT_LLM_TEMPERATURE` - Default temperature
+ - `DEFAULT_LLM_PROVIDER` - Default provider
+ - `OPENAI_API_KEY` - OpenAI API key
+ - `MISTRAL_API_KEY` - Mistral API key
+
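+ For example, process-wide defaults can be set from Python before any tool call (assuming, as above, that the factory reads these variables from the process environment):
+
+ ```python
+ import os
+
+ # Fall back to a cheap default model unless the caller overrides it
+ os.environ.setdefault("DEFAULT_LLM_PROVIDER", "openai")
+ os.environ.setdefault("DEFAULT_LLM_MODEL", "openai:gpt-4o-mini")
+ os.environ.setdefault("DEFAULT_LLM_TEMPERATURE", "0.3")
+
+ # Fail fast with a clear message if no provider key is configured
+ if not (os.getenv("OPENAI_API_KEY") or os.getenv("MISTRAL_API_KEY")):
+     raise RuntimeError("Set OPENAI_API_KEY or MISTRAL_API_KEY before using the tools")
+ ```
+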
+ ### Temperature Guidelines
+
+ - `0.0-0.3`: Factual, deterministic tasks (classification, extraction)
+ - `0.4-0.7`: Balanced tasks (general Q&A, summarization)
+ - `0.8-1.0`: Creative tasks (story writing, brainstorming)
+
+ ## Examples
+
+ See `examples/llm_gateway_tools_example.py` for comprehensive examples including:
+ - Basic generation
+ - Structured output
+ - Token counting
+ - Multi-model comparison
+ - Summarization
+ - Entity extraction
+ - Translation
+ - Meta-reasoning workflows
+ - Agent integration
+
+ Run examples:
+ ```bash
+ cd examples
+ python llm_gateway_tools_example.py
+ ```
+
+ ## Integration with Other Systems
+
+ ### With Base Agent
+
+ ```python
+ agent = BaseAgent(
+     model="openai:gpt-4o",
+     tool_registry=registry  # Contains LLM gateway tools
+ )
+ ```
+
+ ### With Tool Invoker
+
+ ```python
+ from ai_core.interfaces.tool_invoker import ToolInvoker
+
+ invoker = ToolInvoker(registry)
+ result = await invoker.invoke_tool(
+     "llm_generate",
+     {"user_prompt": "Hello"}
+ )
+ ```
+
+ ### With MCP Servers
+
+ LLM gateway tools can be exposed via MCP servers for remote access.
+
+ ## Best Practices
+
+ 1. **Model Selection**: Choose the right model for the task
+    - GPT-4o for complex reasoning
+    - GPT-4o-mini for simple tasks (cost-effective)
+    - Mistral Large for balanced performance
+    - Mistral Small for lightweight tasks
+
+ 2. **Temperature Control**: Adjust based on task type
+    - Low for factual tasks
+    - High for creative tasks
+
+ 3. **Token Management**: Always check token counts for large inputs
+
+ 4. **Error Handling**: Handle API errors and rate limits gracefully (see the retry sketch after this list)
+
+ 5. **Caching**: Consider caching results for repeated queries
+
+ 6. **Cost Optimization**:
+    - Use token counting before generation
+    - Choose appropriate models for task complexity
+    - Set reasonable `max_tokens` limits
+
+ 7. **Security**:
+    - Don't expose sensitive information in prompts
+    - Validate and sanitize user inputs
+    - Use appropriate system prompts to constrain behavior
+
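+ For practice 4, a thin retry wrapper is often enough. The backoff policy below is a sketch; in real code, narrow the exception to the provider's rate-limit error type:
+
+ ```python
+ import asyncio
+
+ from ai_agents.components.tools.llm_gateway_tools import llm_generate
+
+ async def generate_with_retry(user_prompt: str, retries: int = 3, **kwargs) -> dict:
+     """Retry llm_generate with exponential backoff on transient failures."""
+     for attempt in range(retries):
+         try:
+             return await llm_generate(user_prompt=user_prompt, **kwargs)
+         except Exception:  # replace with the provider's rate-limit/transport errors
+             if attempt == retries - 1:
+                 raise
+             await asyncio.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
+ ```
+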
+ ## Troubleshooting
+
+ ### API Key Issues
+ ```
+ Error: No API key found
+ ```
+ **Solution**: Set the `OPENAI_API_KEY` or `MISTRAL_API_KEY` environment variable.
+
+ ### Model Not Found
+ ```
+ Error: Unknown provider 'xyz'
+ ```
+ **Solution**: Use supported model strings (`openai:*`, `mistral:*`).
+
+ ### Token Limits
+ ```
+ Error: Token limit exceeded
+ ```
+ **Solution**: Use `llm_count_tokens` to check input size before generation.
+
+ ### Import Errors
+ ```
+ Error: Module not found
+ ```
+ **Solution**: Ensure all dependencies are installed and import paths are correct.
+
+ ## Contributing
+
+ To add a new LLM gateway tool (a skeleton sketch follows these steps):
+
+ 1. Add the function to `llm_gateway_tools.py`
+ 2. Follow the async function signature pattern
+ 3. Return structured dictionaries
+ 4. Add it to the `get_all_llm_gateway_tools()` configuration list
+ 5. Update this README with documentation
+ 6. Add example usage to `llm_gateway_tools_example.py`
+ 7. Add it to the YAML configuration in `examples/configs/`
+
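+ The skeleton below follows steps 1-3; the tool name and return keys are hypothetical, chosen only to illustrate the pattern:
+
+ ```python
+ # In llm_gateway_tools.py (hypothetical new tool; llm_generate is defined in the same module)
+ async def llm_classify(text: str, labels: list, model: str | None = None) -> dict:
+     """Classify text into one of the given labels via the gateway."""
+     result = await llm_generate(
+         user_prompt=f"Classify the following text as one of {labels}:\n\n{text}",
+         system_prompt="Reply with exactly one label and nothing else.",
+         model=model,
+         temperature=0.0,
+     )
+     # Return a structured dictionary, as the other tools do
+     return {
+         "label": result["content"].strip(),
+         "model": result["model"],
+         "usage": result["usage"],
+     }
+ ```
+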
+ ## Related Documentation
+
+ - [LLM Gateway System](../llm_gateway/README.md)
+ - [Tool System Documentation](../../../../ai_core/interfaces/tool_registry.py)
+ - [Base Agent Documentation](../base_agent.py)
+ - [Configuration System](../../../../ai_core/CONFIG_SYSTEM_README.md)
+
@@ -0,0 +1,46 @@
+ """Tools package - contains example tools and tool components."""
+ from .search_tool import SearchTool
+ from .example_tools import (
+     get_current_weather,
+     calculate_distance,
+     format_currency,
+     translate_text,
+     analyze_sentiment,
+     WebSearchTool,
+     DatabaseQueryTool,
+     FileProcessorTool,
+     get_all_example_tools,
+ )
+ from .llm_gateway_tools import (
+     llm_generate,
+     llm_generate_structured,
+     llm_count_tokens,
+     llm_multi_model_generate,
+     llm_summarize,
+     llm_extract_entities,
+     llm_translate,
+     get_all_llm_gateway_tools,
+ )
+
+ __all__ = [
+     "SearchTool",
+     "get_current_weather",
+     "calculate_distance",
+     "format_currency",
+     "translate_text",
+     "analyze_sentiment",
+     "WebSearchTool",
+     "DatabaseQueryTool",
+     "FileProcessorTool",
+     "get_all_example_tools",
+     # LLM Gateway tools
+     "llm_generate",
+     "llm_generate_structured",
+     "llm_count_tokens",
+     "llm_multi_model_generate",
+     "llm_summarize",
+     "llm_extract_entities",
+     "llm_translate",
+     "get_all_llm_gateway_tools",
+ ]
+