tinyagent-py 0.0.1__py3-none-any.whl → 0.0.4__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,252 @@
1
+ Metadata-Version: 2.4
2
+ Name: tinyagent-py
3
+ Version: 0.0.4
4
+ Summary: Tiny Agent with MCP Client
5
+ Author-email: Mahdi Golchin <golchin@askdev.ai>
6
+ Project-URL: Homepage, https://github.com/askbudi/tinyagent
7
+ Project-URL: Bug Tracker, https://github.com/askbudi/tinyagent/issues
8
+ Project-URL: Chat, https://askdev.ai/github/askbudi/tinyagent
9
+ Requires-Python: >=3.8
10
+ Description-Content-Type: text/markdown
11
+ License-File: LICENSE
12
+ Requires-Dist: mcp
13
+ Requires-Dist: litellm
14
+ Requires-Dist: openai
15
+ Requires-Dist: tiktoken
16
+ Requires-Dist: uuid
17
+ Provides-Extra: dev
18
+ Requires-Dist: pytest; extra == "dev"
19
+ Requires-Dist: black; extra == "dev"
20
+ Requires-Dist: isort; extra == "dev"
21
+ Requires-Dist: mypy; extra == "dev"
22
+ Provides-Extra: postgres
23
+ Requires-Dist: asyncpg>=0.27.0; extra == "postgres"
24
+ Provides-Extra: sqlite
25
+ Requires-Dist: aiosqlite>=0.18.0; extra == "sqlite"
26
+ Provides-Extra: gradio
27
+ Requires-Dist: gradio>=3.50.0; extra == "gradio"
28
+ Provides-Extra: all
29
+ Requires-Dist: asyncpg>=0.27.0; extra == "all"
30
+ Requires-Dist: aiosqlite>=0.18.0; extra == "all"
31
+ Requires-Dist: gradio>=3.50.0; extra == "all"
32
+ Dynamic: license-file
33
+
34
+ # tinyagent
35
+ Tiny Agent: a 100-line agent with MCP
36
+ ![TinyAgent Logo](https://raw.githubusercontent.com/askbudi/tinyagent/main/public/logo.png)
37
+
38
+
39
+
40
+ Inspired by:
41
+ - [Tiny Agents blog post](https://huggingface.co/blog/tiny-agents)
42
+ - [12-factor-agents repository](https://github.com/humanlayer/12-factor-agents)
43
+ - Created by chatting with the source code of the JS Tiny Agent using [AskDev.ai](https://askdev.ai/search)
44
+
45
+ ## Quick Links
46
+ - [Build your own Tiny Agent](https://askdev.ai/github/askbudi/tinyagent)
47
+
48
+ ## Overview
49
+ This is a tiny agent that uses MCP and LiteLLM to interact with a model. You have full control over the agent: you can add any tools you like from MCP servers and extend the agent through its event (hook) system.
50
+
51
+ ## Installation
52
+
53
+ ### Using pip
54
+ ```bash
55
+ # Basic installation
56
+ pip install tinyagent-py
57
+
58
+ # Install with all optional dependencies
59
+ pip install tinyagent-py[all]
60
+
61
+ # Install with PostgreSQL support
62
+ pip install tinyagent-py[postgres]
63
+
64
+ # Install with SQLite support
65
+ pip install tinyagent-py[sqlite]
66
+
67
+ # Install with Gradio UI support
68
+ pip install tinyagent-py[gradio]
69
+
70
+ ```
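+
+ On shells that treat square brackets as glob patterns (zsh, for example), quote the extras, e.g. `pip install "tinyagent-py[all]"`; the same applies to the `uv pip` commands below.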
71
+
72
+ ### Using uv
73
+ ```bash
74
+ # Basic installation
75
+ uv pip install tinyagent-py
76
+
77
+ # Install with PostgreSQL support
78
+ uv pip install tinyagent-py[postgres]
79
+
80
+ # Install with SQLite support
81
+ uv pip install tinyagent-py[sqlite]
82
+
83
+ # Install with Gradio UI support
84
+ uv pip install tinyagent-py[gradio]
85
+
86
+ # Install with all optional dependencies
87
+ uv pip install tinyagent-py[all]
88
+
89
+ # Install with development tools
90
+ uv pip install tinyagent-py[dev]
91
+ ```
92
+
93
+ ## Usage
94
+
95
+ ```python
96
+ from tinyagent import TinyAgent
97
+ from textwrap import dedent
98
+ import asyncio
99
+ import os
100
+
101
+ async def test_agent(task, model="o4-mini", api_key=None):
102
+     # Initialize the agent with model and API key
103
+     agent = TinyAgent(
104
+         model=model,  # or any model supported by LiteLLM
105
+         api_key=api_key or os.environ.get("OPENAI_API_KEY")  # falls back to the env variable
106
+     )
107
+
108
+     try:
109
+         # Connect to an MCP server
110
+         # Replace with your actual server command and args
111
+         await agent.connect_to_server("npx", ["@openbnb/mcp-server-airbnb", "--ignore-robots-txt"])
112
+
113
+         # Run the agent with a user query
114
+         result = await agent.run(task)
115
+         print("\nFinal result:", result)
116
+         return result
117
+     finally:
118
+         # Clean up resources
119
+         await agent.close()
120
+
121
+ # Example usage
122
+ task = dedent("""
123
+     I need accommodation in Toronto between the 15th and 20th of May. Give me 5 options for 2 adults.
124
+ """)
125
+ asyncio.run(test_agent(task, model="gpt-4.1-mini"))
126
+ ```
127
+
128
+ ## How the TinyAgent Hook System Works
129
+
130
+ TinyAgent is designed to be **extensible** via a simple, event-driven hook (callback) system. This allows you to add custom logic, logging, UI, memory, or any other behavior at key points in the agent's lifecycle.
131
+
132
+ ### How Hooks Work
133
+
134
+ - **Hooks** are just callables (functions or classes with `__call__`) that receive events from the agent.
135
+ - You register hooks using `agent.add_callback(hook)`.
136
+ - Hooks are called with:
137
+ `event_name, agent, **kwargs`
138
+ - Events include:
139
+ - `"agent_start"`: Agent is starting a new run
140
+ - `"message_add"`: A new message is added to the conversation
141
+ - `"llm_start"`: LLM is about to be called
142
+ - `"llm_end"`: LLM call finished
143
+ - `"agent_end"`: Agent is done (final result)
144
+ - (MCPClient also emits `"tool_start"` and `"tool_end"` for tool calls; see the timing sketch below)
145
+
146
+ Hooks can be **async** or regular functions. If a hook is a class with an async `__call__`, it will be awaited.
147
+
148
+ #### Example: Adding a Custom Hook
149
+
150
+ ```python
151
+ def my_logger_hook(event_name, agent, **kwargs):
152
+     print(f"[{event_name}] {kwargs}")
153
+
154
+ agent.add_callback(my_logger_hook)
155
+ ```
156
+
157
+ #### Example: Async Hook
158
+
159
+ ```python
160
+ async def my_async_hook(event_name, agent, **kwargs):
161
+     if event_name == "agent_end":
162
+         print("Agent finished with result:", kwargs.get("result"))
163
+
164
+ agent.add_callback(my_async_hook)
165
+ ```
166
+
167
+ #### Example: Class-based Hook
168
+
169
+ ```python
170
+ class MyHook:
171
+     async def __call__(self, event_name, agent, **kwargs):
172
+         if event_name == "llm_start":
173
+             print("LLM is starting...")
174
+
175
+ agent.add_callback(MyHook())
176
+ ```
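+
+ #### Example: Timing Tool Calls
+
+ The same pattern works for the `"tool_start"`/`"tool_end"` events emitted by MCPClient. Here is a minimal sketch that times tool calls; the exact kwargs delivered with these events are an assumption, so only `event_name` is used:
+
+ ```python
+ import time
+
+ class ToolTimerHook:
+     """Measure how long each tool call takes."""
+     def __init__(self):
+         self._started = None
+
+     async def __call__(self, event_name, agent, **kwargs):
+         if event_name == "tool_start":
+             self._started = time.perf_counter()
+         elif event_name == "tool_end" and self._started is not None:
+             print(f"Tool call finished in {time.perf_counter() - self._started:.2f}s")
+             self._started = None
+
+ agent.add_callback(ToolTimerHook())
+ ```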
177
+
178
+ ### How to Extend the Hook System
179
+
180
+ - **Create your own hook**: Write a function or class as above.
181
+ - **Register it**: Use `agent.add_callback(your_hook)`.
182
+ - **Listen for events**: Check `event_name` and use `**kwargs` for event data.
183
+ - **See examples**: Each official hook (see below) includes a `run_example()` in its file.
184
+
185
+ ---
186
+
187
+ ## List of Available Hooks
188
+
189
+ You can import and use these hooks from `tinyagent.hooks`:
190
+
191
+ | Hook Name | Description | Example Import |
192
+ |--------------------------|--------------------------------------------------|-------------------------------------------------|
193
+ | `LoggingManager` | Granular logging control for all modules | `from tinyagent.hooks.logging_manager import LoggingManager` |
194
+ | `RichUICallback` | Rich terminal UI (with [rich](https://github.com/Textualize/rich)) | `from tinyagent.hooks.rich_ui_callback import RichUICallback` |
195
+ | `GradioCallback` | Interactive browser-based chat UI: file uploads, live thinking, tool calls, token stats | `from tinyagent.hooks.gradio_callback import GradioCallback` |
196
+
197
+ To see more details and usage, check the docstrings and `run_example()` in each hook file.
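+
+ For instance, a minimal sketch that attaches the Rich terminal UI to an agent (assuming `RichUICallback` can be constructed with its default arguments):
+
+ ```python
+ import asyncio
+ from tinyagent import TinyAgent
+ from tinyagent.hooks.rich_ui_callback import RichUICallback
+
+ async def main():
+     agent = TinyAgent(model="gpt-4.1-mini")
+     agent.add_callback(RichUICallback())  # assumption: default constructor
+     try:
+         print(await agent.run("What tools do you have available?"))
+     finally:
+         await agent.close()
+
+ asyncio.run(main())
+ ```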
198
+
199
+ ## Using the GradioCallback Hook
200
+
201
+ The `GradioCallback` hook lets you spin up a full-featured web chat interface for your agent in just a few lines.
202
+
203
+ Features:
204
+ - **Browser-based chat** with streaming updates
205
+ - **File uploads** (\*.pdf, \*.docx, \*.txt) that the agent can reference
206
+ - **Live "thinking" view** so you see intermediate thoughts
207
+ - **Collapsible tool-call sections** showing inputs & outputs
208
+ - **Real-time token usage** (prompt, completion, total)
209
+ - **Toggleable display options** for thinking & tool calls
210
+ - **Non-blocking launch** for asyncio apps (`prevent_thread_lock=True`)
211
+
212
+ ```python
213
+ import asyncio
214
+ from tinyagent import TinyAgent
215
+ from tinyagent.hooks.gradio_callback import GradioCallback
216
+ async def main():
217
+     # 1. Initialize your agent
218
+     agent = TinyAgent(model="gpt-4.1-mini", api_key="YOUR_API_KEY")
219
+     # 2. (Optional) Add tools or connect to MCP servers
220
+     # await agent.connect_to_server("npx", ["-y","@openbnb/mcp-server-airbnb","--ignore-robots-txt"])
221
+     # 3. Instantiate the Gradio UI callback
222
+     gradio_ui = GradioCallback(
223
+         file_upload_folder="uploads/",
224
+         show_thinking=True,
225
+         show_tool_calls=True
226
+     )
227
+     # 4. Register the callback with the agent
228
+     agent.add_callback(gradio_ui)
229
+     # 5. Launch the web interface (non-blocking)
230
+     gradio_ui.launch(
231
+         agent,
232
+         title="TinyAgent Chat",
233
+         description="Ask me to plan a trip or fetch data!",
234
+         share=False,
235
+         prevent_thread_lock=True
236
+     )
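+     # 6. Keep the event loop alive while the UI is running: with
+     #    prevent_thread_lock=True, launch() returns immediately, so block
+     #    here until the program is stopped (sketch; adapt to your app).
+     try:
+         await asyncio.Event().wait()  # wait forever (e.g. until Ctrl+C)
+     finally:
+         await agent.close()           # clean up the agent on shutdown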
237
+ if __name__ == "__main__":
238
+     asyncio.run(main())
239
+ ```
240
+ ---
241
+
242
+ ## Contributing Hooks
243
+
244
+ - Place new hooks in the `tinyagent/hooks/` directory.
245
+ - Add an example usage as `async def run_example()` in the same file.
246
+ - Use `"gpt-4.1-mini"` as the default model in examples, as in the sketch below.
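+
+ A hypothetical skeleton that follows these conventions (the file name, class name, and event handling are illustrative, not part of the package):
+
+ ```python
+ # tinyagent/hooks/my_custom_hook.py
+ import asyncio
+
+ class MyCustomHook:
+     """Print every event the agent emits."""
+     async def __call__(self, event_name, agent, **kwargs):
+         print(f"[my_custom_hook] {event_name}")
+
+ async def run_example():
+     from tinyagent import TinyAgent
+
+     agent = TinyAgent(model="gpt-4.1-mini")
+     agent.add_callback(MyCustomHook())
+     try:
+         print(await agent.run("Say hello."))
+     finally:
+         await agent.close()
+
+ if __name__ == "__main__":
+     asyncio.run(run_example())
+ ```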
247
+
248
+ ---
249
+
250
+ ## License
251
+
252
+ MIT License. See [LICENSE](LICENSE).
@@ -0,0 +1,17 @@
1
+ hooks/__init__.py,sha256=UztCHjoqF5JyDolbWwkBsBZkWguDQg23l2GD_zMHt-s,178
2
+ hooks/agno_storage_hook.py,sha256=5qvvjmtraanPa-A46Zstrqq3s1e-sC7Ly0o3zifuw_4,5003
3
+ hooks/gradio_callback.py,sha256=jGsZlObAd6I5lN9cE53dDL_LfiB8I0tBsicuHwwmL-M,44833
4
+ hooks/logging_manager.py,sha256=UpdmpQ7HRPyer-jrmQSXcBwi409tV9LnGvXSHjTcYTI,7935
5
+ hooks/rich_ui_callback.py,sha256=5iCNOiJmhc1lOL7ZjaOt5Sk3rompko4zu_pAxfTVgJQ,22897
6
+ storage/__init__.py,sha256=NebvYxwEGJtvPnRO9dGa-bgOwA7cPkLjFHnMWDxMg5I,261
7
+ storage/agno_storage.py,sha256=ol4qwdH-9jYjBjDvsYkHh7I-vu8uHArPtQylUpoEaCc,4322
8
+ storage/base.py,sha256=GGAMvOoslmm1INLFG_jtwOkRk2Qg39QXx-1LnN7fxDI,1474
9
+ storage/json_file_storage.py,sha256=SYD8lvTHu2-FEHm1tZmsrcgEOirBrlUsUM186X-UPgI,1114
10
+ storage/postgres_storage.py,sha256=IGwan8UXHNnTZFK1F8x4kvMDex3GAAGWUg9ePx_5IF4,9018
11
+ storage/redis_storage.py,sha256=hu3y7wHi49HkpiR-AW7cWVQuTVOUk1WaB8TEPGUKVJ8,1742
12
+ storage/sqlite_storage.py,sha256=7lk1XZpr2t4s2bjVr9-AqrI74w4hwkuK3taWtyJZhBc,5769
13
+ tinyagent_py-0.0.4.dist-info/licenses/LICENSE,sha256=YIogcVQnknaaE4K-oaQylFWo8JGRBWnwmGb3fWB_Pww,1064
14
+ tinyagent_py-0.0.4.dist-info/METADATA,sha256=MDvRoleb36ya8z44BxQvtgSFJ_-WfH5kv7eSWeaMdJQ,8254
15
+ tinyagent_py-0.0.4.dist-info/WHEEL,sha256=zaaOINJESkSfm_4HQVc5ssNzHCPXhJm0kEUakpsEHaU,91
16
+ tinyagent_py-0.0.4.dist-info/top_level.txt,sha256=PfpFqZliMhzue7YU7RrBiZGoAqVBPr9sRc310dWabug,14
17
+ tinyagent_py-0.0.4.dist-info/RECORD,,
@@ -1,5 +1,5 @@
1
1
  Wheel-Version: 1.0
2
- Generator: setuptools (80.3.1)
2
+ Generator: setuptools (80.8.0)
3
3
  Root-Is-Purelib: true
4
4
  Tag: py3-none-any
5
5
 
@@ -0,0 +1,2 @@
1
+ hooks
2
+ storage
tinyagent/__init__.py DELETED
@@ -1,4 +0,0 @@
1
- from .tiny_agent import TinyAgent
2
- from .mcp_client import MCPClient
3
-
4
- __all__ = ["TinyAgent", "MCPClient"]
tinyagent/mcp_client.py DELETED
@@ -1,52 +0,0 @@
1
- import asyncio
2
- import json
3
- import logging
4
- from typing import Dict, List, Optional, Any, Tuple
5
-
6
- # Keep your MCPClient implementation unchanged
7
- import asyncio
8
- from contextlib import AsyncExitStack
9
-
10
- # MCP core imports
11
- from mcp import ClientSession, StdioServerParameters
12
- from mcp.client.stdio import stdio_client
13
-
14
- class MCPClient:
15
- def __init__(self):
16
- self.session = None
17
- self.exit_stack = AsyncExitStack()
18
-
19
- async def connect(self, command: str, args: list[str]):
20
- """
21
- Launches the MCP server subprocess and initializes the client session.
22
- :param command: e.g. "python" or "node"
23
- :param args: list of args to pass, e.g. ["my_server.py"] or ["build/index.js"]
24
- """
25
- # Prepare stdio transport parameters
26
- params = StdioServerParameters(command=command, args=args)
27
- # Open the stdio client transport
28
- self.stdio, self.sock_write = await self.exit_stack.enter_async_context(
29
- stdio_client(params)
30
- )
31
- # Create and initialize the MCP client session
32
- self.session = await self.exit_stack.enter_async_context(
33
- ClientSession(self.stdio, self.sock_write)
34
- )
35
- await self.session.initialize()
36
-
37
- async def list_tools(self):
38
- resp = await self.session.list_tools()
39
- print("Available tools:")
40
- for tool in resp.tools:
41
- print(f" • {tool.name}: {tool.description}")
42
-
43
- async def call_tool(self, name: str, arguments: dict):
44
- """
45
- Invokes a named tool and returns its raw content list.
46
- """
47
- resp = await self.session.call_tool(name, arguments)
48
- return resp.content
49
-
50
- async def close(self):
51
- # Clean up subprocess and streams
52
- await self.exit_stack.aclose()
tinyagent/tiny_agent.py DELETED
@@ -1,247 +0,0 @@
1
- # Import LiteLLM for model interaction
2
- import litellm
3
- import json
4
- import logging
5
- from typing import Dict, List, Optional, Any, Tuple
6
- from .mcp_client import MCPClient
7
-
8
- # Set up logging
9
- logging.basicConfig(level=logging.DEBUG)
10
- logger = logging.getLogger(__name__)
11
- #litellm.callbacks = ["arize_phoenix"]
12
-
13
-
14
-
15
- class TinyAgent:
16
- """
17
- A minimal implementation of an agent powered by MCP and LiteLLM.
18
- This agent is literally just a while loop on top of MCPClient.
19
- """
20
-
21
- def __init__(self, model: str = "gpt-4o", api_key: Optional[str] = None, system_prompt: Optional[str] = None):
22
- """
23
- Initialize the Tiny Agent.
24
-
25
- Args:
26
- model: The model to use with LiteLLM
27
- api_key: The API key for the model provider
28
- system_prompt: Custom system prompt for the agent
29
- """
30
- # Create the MCPClient
31
- self.mcp_client = MCPClient()
32
-
33
- # LiteLLM configuration
34
- self.model = model
35
- self.api_key = api_key
36
- if api_key:
37
- litellm.api_key = api_key
38
-
39
- # Conversation state
40
- self.messages = [{
41
- "role": "system",
42
- "content": system_prompt or (
43
- "You are a helpful AI assistant with access to a variety of tools. "
44
- "Use the tools when appropriate to accomplish tasks. "
45
- "If a tool you need isn't available, just say so."
46
- )
47
- }]
48
-
49
- # Available tools (will be populated after connecting to MCP servers)
50
- self.available_tools = []
51
-
52
- # Control flow tools
53
- self.exit_loop_tools = [
54
- {
55
- "type": "function",
56
- "function": {
57
- "name": "task_complete",
58
- "description": "Call this tool when the task given by the user is complete",
59
- "parameters": {"type": "object", "properties": {}}
60
- }
61
- },
62
- {
63
- "type": "function",
64
- "function": {
65
- "name": "ask_question",
66
- "description": "Ask a question to the user to get more info required to solve or clarify their problem.",
67
- "parameters": {
68
- "type": "object",
69
- "properties": {
70
- "question": {
71
- "type": "string",
72
- "description": "The question to ask the user"
73
- }
74
- },
75
- "required": ["question"]
76
- }
77
- }
78
- }
79
- ]
80
-
81
- async def connect_to_server(self, command: str, args: List[str]) -> None:
82
- """
83
- Connect to an MCP server and fetch available tools.
84
-
85
- Args:
86
- command: The command to run the server
87
- args: List of arguments for the server
88
- """
89
- await self.mcp_client.connect(command, args)
90
-
91
- # Get available tools from the server and format them for LiteLLM
92
- resp = await self.mcp_client.session.list_tools()
93
-
94
- tool_descriptions = []
95
- for tool in resp.tools:
96
- tool_descriptions.append({
97
- "type": "function",
98
- "function": {
99
- "name": tool.name,
100
- "description": tool.description,
101
- "parameters": tool.inputSchema
102
- }
103
- })
104
-
105
- logger.info(f"Added {len(tool_descriptions)} tools from MCP server")
106
- self.available_tools.extend(tool_descriptions)
107
-
108
- async def run(self, user_input: str, max_turns: int = 10) -> str:
109
- """
110
- Run the agent with user input.
111
-
112
- Args:
113
- user_input: The user's request
114
- max_turns: Maximum number of turns before giving up
115
-
116
- Returns:
117
- The final agent response
118
- """
119
- # Add user message to conversation
120
- self.messages.append({"role": "user", "content": user_input})
121
-
122
- # Initialize loop control variables
123
- num_turns = 0
124
- next_turn_should_call_tools = True
125
-
126
- # The main agent loop
127
- while True:
128
- # Get all available tools including exit loop tools
129
- all_tools = self.available_tools + self.exit_loop_tools
130
-
131
- # Call LLM with messages and tools
132
- try:
133
- logger.info(f"Calling LLM with {len(self.messages)} messages and {len(all_tools)} tools")
134
- response = await litellm.acompletion(
135
- model=self.model,
136
- messages=self.messages,
137
- tools=all_tools,
138
- tool_choice="auto"
139
- )
140
-
141
- # Process the response - properly handle the object
142
- response_message = response.choices[0].message
143
- logger.debug(f"🔥🔥🔥🔥🔥🔥 Response : {response_message}")
144
-
145
- # Create a proper message dictionary from the response object's attributes
146
- assistant_message = {
147
- "role": "assistant",
148
- "content": response_message.content if hasattr(response_message, "content") else ""
149
- }
150
-
151
- # Check if the message has tool_calls attribute and it's not empty
152
- has_tool_calls = hasattr(response_message, "tool_calls") and response_message.tool_calls
153
-
154
- if has_tool_calls:
155
- # Add tool_calls to the message if present
156
- assistant_message["tool_calls"] = response_message.tool_calls
157
-
158
- # Add the properly formatted assistant message to conversation
159
- self.messages.append(assistant_message)
160
-
161
- # Process tool calls if they exist
162
- if has_tool_calls:
163
- tool_calls = response_message.tool_calls
164
- logger.info(f"Tool calls detected: {len(tool_calls)}")
165
-
166
- # Process each tool call one by one
167
- for tool_call in tool_calls:
168
- tool_call_id = tool_call.id
169
- function_info = tool_call.function
170
- tool_name = function_info.name
171
-
172
- # Create a tool message
173
- tool_message = {
174
- "role": "tool",
175
- "tool_call_id": tool_call_id,
176
- "name": tool_name,
177
- "content": "" # Default empty content
178
- }
179
-
180
- try:
181
- # Parse tool arguments
182
- try:
183
- tool_args = json.loads(function_info.arguments)
184
- except json.JSONDecodeError:
185
- logger.error(f"Could not parse tool arguments: {function_info.arguments}")
186
- tool_args = {}
187
-
188
- # Handle control flow tools
189
- if tool_name == "task_complete":
190
- # Add a response for this tool call before returning
191
- tool_message["content"] = "Task has been completed successfully."
192
- self.messages.append(tool_message)
193
- return "Task completed."
194
- elif tool_name == "ask_question":
195
- question = tool_args.get("question", "Could you provide more details?")
196
- # Add a response for this tool call before returning
197
- tool_message["content"] = f"Question asked: {question}"
198
- self.messages.append(tool_message)
199
- return f"I need more information: {question}"
200
- else:
201
- # Call the actual tool using MCPClient
202
- try:
203
- content_list = await self.mcp_client.call_tool(tool_name, tool_args)
204
-
205
- # Safely extract text from the content
206
- if content_list:
207
- # Try different ways to extract the content
208
- if hasattr(content_list[0], 'text'):
209
- tool_message["content"] = content_list[0].text
210
- elif isinstance(content_list[0], dict) and 'text' in content_list[0]:
211
- tool_message["content"] = content_list[0]['text']
212
- else:
213
- tool_message["content"] = str(content_list)
214
- else:
215
- tool_message["content"] = "Tool returned no content"
216
- except Exception as e:
217
- logger.error(f"Error calling tool {tool_name}: {str(e)}")
218
- tool_message["content"] = f"Error executing tool {tool_name}: {str(e)}"
219
- except Exception as e:
220
- # If any error occurs during tool call processing, make sure we still have a tool response
221
- logger.error(f"Unexpected error processing tool call {tool_call_id}: {str(e)}")
222
- tool_message["content"] = f"Error processing tool call: {str(e)}"
223
-
224
- # Always add the tool message to ensure each tool call has a response
225
- self.messages.append(tool_message)
226
-
227
- next_turn_should_call_tools = False
228
- else:
229
- # No tool calls in this message
230
- if next_turn_should_call_tools and num_turns > 0:
231
- # If we expected tool calls but didn't get any, we're done
232
- return assistant_message["content"] or ""
233
-
234
- next_turn_should_call_tools = True
235
-
236
- num_turns += 1
237
- if num_turns >= max_turns:
238
- return "Max turns reached. Task incomplete."
239
-
240
- except Exception as e:
241
- logger.error(f"Error in agent loop: {str(e)}")
242
- return f"Error: {str(e)}"
243
-
244
-
245
- async def close(self):
246
- """Clean up resources."""
247
- await self.mcp_client.close()
@@ -1,79 +0,0 @@
1
- Metadata-Version: 2.4
2
- Name: tinyagent-py
3
- Version: 0.0.1
4
- Summary: Tiny Agent with MCP Client
5
- Author-email: Mahdi Golchin <golchin@askdev.ai>
6
- License: MIT
7
- Project-URL: Homepage, https://github.com/askbudi/tinyagent
8
- Project-URL: Bug Tracker, https://github.com/askbudi/tinyagent/issues
9
- Project-URL: Chat, https://askdev.ai/github/askbudi/tinyagent
10
- Requires-Python: >=3.8
11
- Description-Content-Type: text/markdown
12
- License-File: LICENSE
13
- Requires-Dist: mcp
14
- Requires-Dist: litellm
15
- Requires-Dist: openai
16
- Provides-Extra: dev
17
- Requires-Dist: pytest; extra == "dev"
18
- Dynamic: license-file
19
-
20
- # tinyagent
21
- Tiny Agent: 100 lines Agent with MCP
22
-
23
- Inspired by:
24
- - [Tiny Agents blog post](https://huggingface.co/blog/tiny-agents)
25
- - [12-factor-agents repository](https://github.com/humanlayer/12-factor-agents)
26
- - Created by chatting to the source code of JS Tiny Agent using [AskDev.ai](https://askdev.ai/search)
27
-
28
- ## Quick Links
29
- - [Build your own Tiny Agent](https://askdev.ai/github/askbudi/tinyagent)
30
-
31
- ## Overview
32
- This is a tiny agent that uses MCP and LiteLLM to interact with a model. You have full control over the agent, you can add any tools you like from MCP and extend the agent using its event system.
33
-
34
- ## Installation
35
-
36
- ### Using pip
37
- ```bash
38
- pip install tinyagent
39
- ```
40
-
41
- ### Using uv
42
- ```bash
43
- uv pip install tinyagent
44
- ```
45
-
46
- ## Usage
47
-
48
- ```python
49
- from tinyagent import TinyAgent
50
- from textwrap import dedent
51
- import asyncio
52
- import os
53
-
54
- async def test_agent(task, model="o4-mini", api_key=None):
55
- # Initialize the agent with model and API key
56
- agent = TinyAgent(
57
- model=model, # Or any model supported by LiteLLM
58
- api_key=os.environ.get("OPENAI_API_KEY") if not api_key else api_key # Set your API key as an env variable
59
- )
60
-
61
- try:
62
- # Connect to an MCP server
63
- # Replace with your actual server command and args
64
- await agent.connect_to_server("npx", ["@openbnb/mcp-server-airbnb", "--ignore-robots-txt"])
65
-
66
- # Run the agent with a user query
67
- result = await agent.run(task)
68
- print("\nFinal result:", result)
69
- return result
70
- finally:
71
- # Clean up resources
72
- await agent.close()
73
-
74
- # Example usage
75
- task = dedent("""
76
- I need accommodation in Toronto between 15th to 20th of May. Give me 5 options for 2 adults.
77
- """)
78
- await test_agent(task, model="gpt-4.1-mini")
79
- ```
@@ -1,8 +0,0 @@
1
- tinyagent/__init__.py,sha256=V3nU-BA-Ddi8ErLJ5CoYsdRZpV2l-vIE5D5e1nrXhI8,105
2
- tinyagent/mcp_client.py,sha256=vVzItP9fI5khxq8O4HZwNIyo3nv0_7ITWw7GB2tbTyg,1776
3
- tinyagent/tiny_agent.py,sha256=ORVb1ipSXJ09A2tTBqrtdoikGuYu8P6o-zZg6Ra6CRA,10990
4
- tinyagent_py-0.0.1.dist-info/licenses/LICENSE,sha256=YIogcVQnknaaE4K-oaQylFWo8JGRBWnwmGb3fWB_Pww,1064
5
- tinyagent_py-0.0.1.dist-info/METADATA,sha256=puT81QO-JHyF15KO1yQPsz7cBxEgFMY1FzPEgJuh5Jk,2335
6
- tinyagent_py-0.0.1.dist-info/WHEEL,sha256=0CuiUZ_p9E4cD6NyLD6UG80LBXYyiSYZOKDm5lp32xk,91
7
- tinyagent_py-0.0.1.dist-info/top_level.txt,sha256=Ny8aJNchZpc2Vvhp3306L5vjceJakvFxBk-UjjVeA_I,10
8
- tinyagent_py-0.0.1.dist-info/RECORD,,
@@ -1 +0,0 @@
1
- tinyagent