agentic-blocks 0.1.0 (tar.gz)

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 Magnus Bjelkenhed
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
@@ -0,0 +1,234 @@
+ Metadata-Version: 2.4
+ Name: agentic-blocks
+ Version: 0.1.0
+ Summary: Simple building blocks for agentic AI systems with MCP client and conversation management
+ Author-email: Magnus Bjelkenhed <bjelkenhed@gmail.com>
+ License: MIT
+ Project-URL: Homepage, https://github.com/bjelkenhed/agentic-blocks
+ Project-URL: Repository, https://github.com/bjelkenhed/agentic-blocks
+ Project-URL: Issues, https://github.com/bjelkenhed/agentic-blocks/issues
+ Keywords: ai,mcp,model-control-protocol,agent,llm,openai,conversation
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: >=3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: mcp
+ Requires-Dist: requests
+ Requires-Dist: python-dotenv
+ Requires-Dist: openai
+ Provides-Extra: test
+ Requires-Dist: pytest; extra == "test"
+ Provides-Extra: dev
+ Requires-Dist: pytest; extra == "dev"
+ Requires-Dist: build; extra == "dev"
+ Requires-Dist: twine; extra == "dev"
+ Dynamic: license-file
+
+ # Agentic Blocks
+
+ Building blocks for agentic systems with a focus on simplicity and ease of use.
+
+ ## Overview
+
+ Agentic Blocks provides clean, simple components for building AI agent systems, specifically focused on:
+
+ - **MCP Client**: Connect to Model Context Protocol (MCP) endpoints with a sync-by-default API
+ - **Messages**: Manage LLM conversation history in an OpenAI-compatible format
+
+ Both components follow principles of simplicity, maintainability, and ease of use.
+
+ ## Installation
+
+ ```bash
+ pip install -e .
+ ```
+
+ For development:
+ ```bash
+ pip install -e ".[dev]"
+ ```
+
+ ## Quick Start
+
+ ### MCPClient - Connect to MCP Endpoints
+
+ The MCPClient provides a unified interface for connecting to different types of MCP endpoints:
+
+ ```python
+ from agentic_blocks import MCPClient
+
+ # Connect to an SSE endpoint (sync by default)
+ client = MCPClient("https://example.com/mcp/server/sse")
+
+ # List available tools
+ tools = client.list_tools()
+ print(f"Available tools: {len(tools)}")
+
+ # Call a tool
+ result = client.call_tool("search", {"query": "What is MCP?"})
+ print(result)
+ ```
+
+ **Supported endpoint types:**
+ - **SSE endpoints**: URLs with `/sse` in the path
+ - **HTTP endpoints**: URLs with `/mcp` in the path
+ - **Local scripts**: File paths to Python MCP servers
+
+ **Async support for advanced users:**
+ ```python
+ # Async versions available
+ tools = await client.list_tools_async()
+ result = await client.call_tool_async("search", {"query": "async example"})
+ ```
+
+ ### Messages - Manage Conversation History
+
+ The Messages class helps build and manage LLM conversations in an OpenAI-compatible format:
+
+ ```python
+ from agentic_blocks import Messages
+
+ # Initialize with system prompt
+ messages = Messages(
+     system_prompt="You are a helpful assistant.",
+     user_prompt="Hello, how can you help me?",
+     add_date_and_time=True
+ )
+
+ # Add assistant response
+ messages.add_assistant_message("I can help you with various tasks!")
+
+ # Add tool calls
+ tool_call = {
+     "id": "call_123",
+     "type": "function",
+     "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
+ }
+ messages.add_tool_call(tool_call)
+
+ # Add tool response
+ messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
+
+ # Get messages for LLM API
+ conversation = messages.get_messages()
+
+ # View readable format
+ print(messages)
+ ```
+
+ ## Complete Example - Agent with MCP Tools
+
+ ```python
+ from agentic_blocks import MCPClient, Messages
+
+ def simple_agent():
+     # Initialize MCP client and conversation
+     client = MCPClient("https://example.com/mcp/server/sse")
+     messages = Messages(
+         system_prompt="You are a helpful research assistant.",
+         add_date_and_time=True
+     )
+
+     # Get available tools
+     tools = client.list_tools()
+     print(f"Connected to MCP server with {len(tools)} tools")
+
+     # Simulate user query
+     user_query = "What's the latest news about AI?"
+     messages.add_user_message(user_query)
+
+     # Agent decides to use a search tool
+     if tools:
+         search_tool = next((t for t in tools if "search" in t["function"]["name"]), None)
+         if search_tool:
+             # Add tool call to messages
+             tool_call = {
+                 "id": "search_001",
+                 "type": "function",
+                 "function": {
+                     "name": search_tool["function"]["name"],
+                     "arguments": '{"query": "latest AI news"}'
+                 }
+             }
+             messages.add_tool_call(tool_call)
+
+             # Execute the tool
+             result = client.call_tool(
+                 search_tool["function"]["name"],
+                 {"query": "latest AI news"}
+             )
+
+             # Add tool response
+             if result["content"]:
+                 response_text = result["content"][0]["text"]
+                 messages.add_tool_response("search_001", response_text)
+
+     # Add final assistant response
+     messages.add_assistant_message(
+         "Based on my search, here's what I found about the latest AI news..."
+     )
+
+     # Print conversation
+     print("\nConversation:")
+     print(messages)
+
+     return messages.get_messages()
+
+ if __name__ == "__main__":
+     simple_agent()
+ ```
+
+ ## Development Principles
+
+ This project follows these core principles:
+
+ - **Simplicity First**: Keep code simple, readable, and focused on core functionality
+ - **Sync-by-Default**: Primary methods are synchronous for ease of use, with optional async versions
+ - **Minimal Dependencies**: Avoid over-engineering and complex error handling unless necessary
+ - **Clean APIs**: Prefer straightforward method names and clear parameter expectations
+ - **Maintainable Code**: Favor fewer lines of clear code over comprehensive edge-case handling
+
+ ## API Reference
+
+ ### MCPClient
+
+ ```python
+ MCPClient(endpoint: str, timeout: int = 30)
+ ```
+
+ **Methods:**
+ - `list_tools() -> List[Dict]`: Get available tools (sync)
+ - `call_tool(name: str, args: Dict) -> Dict`: Call a tool (sync)
+ - `list_tools_async() -> List[Dict]`: Async version of `list_tools`
+ - `call_tool_async(name: str, args: Dict) -> Dict`: Async version of `call_tool`
+
+ ### Messages
+
+ ```python
+ Messages(system_prompt=None, user_prompt=None, add_date_and_time=False)
+ ```
+
+ **Methods:**
+ - `add_system_message(content: str)`: Add a system message
+ - `add_user_message(content: str)`: Add a user message
+ - `add_assistant_message(content: str)`: Add an assistant message
+ - `add_tool_call(tool_call: Dict)`: Add a tool call to the assistant message
+ - `add_tool_response(call_id: str, content: str)`: Add a tool response
+ - `get_messages() -> List[Dict]`: Get all messages
+ - `has_pending_tool_calls() -> bool`: Check for pending tool calls
+
+ ## Requirements
+
+ - Python >= 3.11
+ - Dependencies: `mcp`, `requests`, `python-dotenv`, `openai`
+
+ ## License
+
+ MIT
@@ -0,0 +1,201 @@
+ # Agentic Blocks
+
+ Building blocks for agentic systems with a focus on simplicity and ease of use.
+
+ ## Overview
+
+ Agentic Blocks provides clean, simple components for building AI agent systems, specifically focused on:
+
+ - **MCP Client**: Connect to Model Context Protocol (MCP) endpoints with a sync-by-default API
+ - **Messages**: Manage LLM conversation history in an OpenAI-compatible format
+
+ Both components follow principles of simplicity, maintainability, and ease of use.
+
+ ## Installation
+
+ ```bash
+ pip install -e .
+ ```
+
+ For development:
+ ```bash
+ pip install -e ".[dev]"
+ ```
+
+ ## Quick Start
+
+ ### MCPClient - Connect to MCP Endpoints
+
+ The MCPClient provides a unified interface for connecting to different types of MCP endpoints:
+
+ ```python
+ from agentic_blocks import MCPClient
+
+ # Connect to an SSE endpoint (sync by default)
+ client = MCPClient("https://example.com/mcp/server/sse")
+
+ # List available tools
+ tools = client.list_tools()
+ print(f"Available tools: {len(tools)}")
+
+ # Call a tool
+ result = client.call_tool("search", {"query": "What is MCP?"})
+ print(result)
+ ```
+
+ **Supported endpoint types:**
+ - **SSE endpoints**: URLs with `/sse` in the path
+ - **HTTP endpoints**: URLs with `/mcp` in the path
+ - **Local scripts**: File paths to Python MCP servers
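As a sketch, the endpoint-routing rules above can be expressed with nothing but the standard library. This mirrors the documented detection behavior (a hypothetical standalone version, not the package's actual implementation; the function name `detect_transport` is made up for illustration):

```python
from urllib.parse import urlparse

def detect_transport(endpoint: str) -> str:
    """Guess the MCP transport from an endpoint string (illustrative only)."""
    if endpoint.startswith(("http://", "https://")):
        path = urlparse(endpoint).path.lower()
        # URLs containing /sse use Server-Sent Events; other URLs
        # (including /mcp paths) fall back to streamable HTTP.
        return "sse" if "/sse" in path else "streamable-http"
    # Anything that is not a URL is treated as a local stdio server script.
    return "stdio"

print(detect_transport("https://example.com/mcp/server/sse"))  # sse
print(detect_transport("https://example.com/mcp/server"))      # streamable-http
print(detect_transport("path/to/server.py"))                   # stdio
```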
+
+ **Async support for advanced users:**
+ ```python
+ # Async versions available
+ tools = await client.list_tools_async()
+ result = await client.call_tool_async("search", {"query": "async example"})
+ ```
+
+ ### Messages - Manage Conversation History
+
+ The Messages class helps build and manage LLM conversations in an OpenAI-compatible format:
+
+ ```python
+ from agentic_blocks import Messages
+
+ # Initialize with system prompt
+ messages = Messages(
+     system_prompt="You are a helpful assistant.",
+     user_prompt="Hello, how can you help me?",
+     add_date_and_time=True
+ )
+
+ # Add assistant response
+ messages.add_assistant_message("I can help you with various tasks!")
+
+ # Add tool calls
+ tool_call = {
+     "id": "call_123",
+     "type": "function",
+     "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
+ }
+ messages.add_tool_call(tool_call)
+
+ # Add tool response
+ messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
+
+ # Get messages for LLM API
+ conversation = messages.get_messages()
+
+ # View readable format
+ print(messages)
+ ```
+
+ ## Complete Example - Agent with MCP Tools
+
+ ```python
+ from agentic_blocks import MCPClient, Messages
+
+ def simple_agent():
+     # Initialize MCP client and conversation
+     client = MCPClient("https://example.com/mcp/server/sse")
+     messages = Messages(
+         system_prompt="You are a helpful research assistant.",
+         add_date_and_time=True
+     )
+
+     # Get available tools
+     tools = client.list_tools()
+     print(f"Connected to MCP server with {len(tools)} tools")
+
+     # Simulate user query
+     user_query = "What's the latest news about AI?"
+     messages.add_user_message(user_query)
+
+     # Agent decides to use a search tool
+     if tools:
+         search_tool = next((t for t in tools if "search" in t["function"]["name"]), None)
+         if search_tool:
+             # Add tool call to messages
+             tool_call = {
+                 "id": "search_001",
+                 "type": "function",
+                 "function": {
+                     "name": search_tool["function"]["name"],
+                     "arguments": '{"query": "latest AI news"}'
+                 }
+             }
+             messages.add_tool_call(tool_call)
+
+             # Execute the tool
+             result = client.call_tool(
+                 search_tool["function"]["name"],
+                 {"query": "latest AI news"}
+             )
+
+             # Add tool response
+             if result["content"]:
+                 response_text = result["content"][0]["text"]
+                 messages.add_tool_response("search_001", response_text)
+
+     # Add final assistant response
+     messages.add_assistant_message(
+         "Based on my search, here's what I found about the latest AI news..."
+     )
+
+     # Print conversation
+     print("\nConversation:")
+     print(messages)
+
+     return messages.get_messages()
+
+ if __name__ == "__main__":
+     simple_agent()
+ ```
+
+ ## Development Principles
+
+ This project follows these core principles:
+
+ - **Simplicity First**: Keep code simple, readable, and focused on core functionality
+ - **Sync-by-Default**: Primary methods are synchronous for ease of use, with optional async versions
+ - **Minimal Dependencies**: Avoid over-engineering and complex error handling unless necessary
+ - **Clean APIs**: Prefer straightforward method names and clear parameter expectations
+ - **Maintainable Code**: Favor fewer lines of clear code over comprehensive edge-case handling
+
+ ## API Reference
+
+ ### MCPClient
+
+ ```python
+ MCPClient(endpoint: str, timeout: int = 30)
+ ```
+
+ **Methods:**
+ - `list_tools() -> List[Dict]`: Get available tools (sync)
+ - `call_tool(name: str, args: Dict) -> Dict`: Call a tool (sync)
+ - `list_tools_async() -> List[Dict]`: Async version of `list_tools`
+ - `call_tool_async(name: str, args: Dict) -> Dict`: Async version of `call_tool`
+
+ ### Messages
+
+ ```python
+ Messages(system_prompt=None, user_prompt=None, add_date_and_time=False)
+ ```
+
+ **Methods:**
+ - `add_system_message(content: str)`: Add a system message
+ - `add_user_message(content: str)`: Add a user message
+ - `add_assistant_message(content: str)`: Add an assistant message
+ - `add_tool_call(tool_call: Dict)`: Add a tool call to the assistant message
+ - `add_tool_response(call_id: str, content: str)`: Add a tool response
+ - `get_messages() -> List[Dict]`: Get all messages
+ - `has_pending_tool_calls() -> bool`: Check for pending tool calls
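The pending-call check drives a typical agent loop: after each assistant turn, execute tools until nothing is pending. The bookkeeping it performs can be sketched over plain message dicts (a hypothetical standalone version for illustration, not the package's code):

```python
def has_pending_tool_calls(messages: list) -> bool:
    """True if the latest assistant tool calls lack matching tool responses."""
    if not messages:
        return False
    # Find the most recent non-tool message; only an assistant message
    # carrying tool_calls can leave work pending.
    last = next((m for m in reversed(messages) if m.get("role") != "tool"), None)
    if not last or last.get("role") != "assistant" or "tool_calls" not in last:
        return False
    # A call is answered once a role="tool" message echoes its tool_call_id.
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    return any(tc["id"] not in answered for tc in last["tool_calls"])

history = [
    {"role": "user", "content": "Weather in Paris?"},
    {"role": "assistant", "content": "", "tool_calls": [
        {"id": "call_123", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}}]},
]
print(has_pending_tool_calls(history))  # True: call_123 has no response yet
history.append({"role": "tool", "tool_call_id": "call_123", "content": "Sunny, 22°C"})
print(has_pending_tool_calls(history))  # False: every call is answered
```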
+
+ ## Requirements
+
+ - Python >= 3.11
+ - Dependencies: `mcp`, `requests`, `python-dotenv`, `openai`
+
+ ## License
+
+ MIT
@@ -0,0 +1,64 @@
+ [build-system]
+ requires = [
+     "setuptools>=45",
+     "build",
+ ]
+ build-backend = "setuptools.build_meta"
+
+ [tool.setuptools.packages.find]
+ where = ["src"]
+ include = ["agentic_blocks*"]
+
+ [tool.setuptools.package-data]
+ agentic_blocks = []
+
+ [project]
+ name = "agentic-blocks"
+ version = "0.1.0"
+ description = "Simple building blocks for agentic AI systems with MCP client and conversation management"
+ readme = "README.md"
+ requires-python = ">=3.11"
+ license = {text = "MIT"}
+ keywords = ["ai", "mcp", "model-control-protocol", "agent", "llm", "openai", "conversation"]
+ authors = [
+     { name = "Magnus Bjelkenhed", email = "bjelkenhed@gmail.com" }
+ ]
+ classifiers = [
+     "Development Status :: 3 - Alpha",
+     "Intended Audience :: Developers",
+     "License :: OSI Approved :: MIT License",
+     "Programming Language :: Python :: 3",
+     "Programming Language :: Python :: 3.11",
+     "Programming Language :: Python :: 3.12",
+     "Topic :: Software Development :: Libraries :: Python Modules",
+     "Topic :: Scientific/Engineering :: Artificial Intelligence",
+ ]
+
+ dependencies = [
+     "mcp",
+     "requests",
+     "python-dotenv",
+     "openai",
+ ]
+
+ [project.urls]
+ Homepage = "https://github.com/bjelkenhed/agentic-blocks"
+ Repository = "https://github.com/bjelkenhed/agentic-blocks"
+ Issues = "https://github.com/bjelkenhed/agentic-blocks/issues"
+
+ [project.optional-dependencies]
+ test = [
+     "pytest",
+ ]
+ dev = [
+     "pytest",
+     "build",
+     "twine",
+ ]
+
+ [dependency-groups]
+ dev = [
+     "build>=1.3.0",
+     "twine>=6.1.0",
+ ]
+
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,8 @@
+ """Agentic Blocks - Building blocks for agentic systems."""
+
+ from .mcp_client import MCPClient, MCPEndpointError
+ from .messages import Messages
+
+ __version__ = "0.1.0"
+
+ __all__ = ["MCPClient", "MCPEndpointError", "Messages"]
@@ -0,0 +1,262 @@
+ """
+ Simplified MCP Client for connecting to MCP endpoints with a sync-by-default API.
+ """
+
+ import asyncio
+ import logging
+ import os
+ from typing import List, Dict, Any
+ from urllib.parse import urlparse
+
+ from mcp import ClientSession, StdioServerParameters
+ from mcp.client.sse import sse_client
+ from mcp.client.stdio import stdio_client
+ from mcp.client.streamable_http import streamablehttp_client
+
+ logger = logging.getLogger(__name__)
+
+
+ class MCPEndpointError(Exception):
+     """Raised when connecting to or using an MCP endpoint fails."""
+
+     pass
+
+
+ class MCPClient:
+     """
+     A simplified MCP client that connects to MCP endpoints with a sync-by-default API.
+
+     Supports:
+     - SSE endpoints (e.g., 'https://example.com/mcp/server/sse')
+     - Streamable HTTP endpoints (e.g., 'https://example.com/mcp/server')
+     - Local StdioServer scripts (e.g., 'path/to/server.py')
+     """
+
+     def __init__(self, endpoint: str, timeout: int = 30):
+         """
+         Initialize the MCP client.
+
+         Args:
+             endpoint: Either a URL (for SSE/HTTP) or a file path (for a StdioServer)
+             timeout: Connection timeout in seconds
+         """
+         self.endpoint = endpoint
+         self.timeout = timeout
+         self.transport_type = self._detect_transport_type(endpoint)
+
+     def _detect_transport_type(self, endpoint: str) -> str:
+         """
+         Detect the transport type based on the endpoint.
+
+         Args:
+             endpoint: The endpoint URL or file path
+
+         Returns:
+             Transport type: 'sse', 'streamable-http', or 'stdio'
+         """
+         if endpoint.startswith(("http://", "https://")):
+             path = urlparse(endpoint).path.lower()
+             if "/sse" in path:
+                 return "sse"
+             # URLs without /sse (including /mcp paths) default to streamable HTTP
+             return "streamable-http"
+         return "stdio"
+
+     def list_tools(self) -> List[Dict[str, Any]]:
+         """
+         List all available tools from the MCP endpoint in OpenAI standard format.
+
+         Returns:
+             List of tools in OpenAI function calling format
+
+         Raises:
+             MCPEndpointError: If connection or listing fails
+         """
+         return asyncio.run(self.list_tools_async())
+
+     async def list_tools_async(self) -> List[Dict[str, Any]]:
+         """
+         Async version of list_tools for advanced users.
+
+         Returns:
+             List of tools in OpenAI function calling format
+
+         Raises:
+             MCPEndpointError: If connection or listing fails
+         """
+         try:
+             if self.transport_type == "sse":
+                 async with sse_client(url=self.endpoint, timeout=self.timeout) as (
+                     read_stream,
+                     write_stream,
+                 ):
+                     return await self._get_tools_from_session(read_stream, write_stream)
+             elif self.transport_type == "streamable-http":
+                 async with streamablehttp_client(
+                     url=self.endpoint, timeout=self.timeout
+                 ) as (read_stream, write_stream, session_id_getter):
+                     return await self._get_tools_from_session(read_stream, write_stream)
+             elif self.transport_type == "stdio":
+                 if not os.path.exists(self.endpoint):
+                     raise MCPEndpointError(
+                         f"StdioServer script not found: {self.endpoint}"
+                     )
+
+                 server_params = StdioServerParameters(
+                     command="python", args=[self.endpoint]
+                 )
+                 async with stdio_client(server_params) as (read_stream, write_stream):
+                     return await self._get_tools_from_session(read_stream, write_stream)
+             else:
+                 raise MCPEndpointError(
+                     f"Unsupported transport type: {self.transport_type}"
+                 )
+         except Exception as e:
+             logger.error(f"Failed to list tools from {self.endpoint}: {e}")
+             raise MCPEndpointError(f"Failed to list tools: {e}") from e
+
+     async def _get_tools_from_session(
+         self, read_stream, write_stream
+     ) -> List[Dict[str, Any]]:
+         """Get tools from an MCP session in OpenAI standard format."""
+         async with ClientSession(read_stream, write_stream) as session:
+             await session.initialize()
+             tools_response = await session.list_tools()
+
+             tools = []
+             for tool in tools_response.tools:
+                 function_dict = {
+                     "name": tool.name,
+                     "description": tool.description or "",
+                     "parameters": tool.inputSchema or {},
+                 }
+
+                 openai_tool = {
+                     "type": "function",
+                     "function": function_dict,
+                 }
+                 tools.append(openai_tool)
+
+             return tools
+
+     def call_tool(self, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
+         """
+         Call a specific tool on the MCP endpoint.
+
+         Args:
+             tool_name: Name of the tool to call
+             arguments: Arguments to pass to the tool
+
+         Returns:
+             Tool call result dictionary
+
+         Raises:
+             MCPEndpointError: If connection or tool call fails
+         """
+         return asyncio.run(self.call_tool_async(tool_name, arguments))
+
+     async def call_tool_async(
+         self, tool_name: str, arguments: Dict[str, Any]
+     ) -> Dict[str, Any]:
+         """
+         Async version of call_tool for advanced users.
+
+         Args:
+             tool_name: Name of the tool to call
+             arguments: Arguments to pass to the tool
+
+         Returns:
+             Tool call result dictionary
+
+         Raises:
+             MCPEndpointError: If connection or tool call fails
+         """
+         try:
+             if self.transport_type == "sse":
+                 async with sse_client(url=self.endpoint, timeout=self.timeout) as (
+                     read_stream,
+                     write_stream,
+                 ):
+                     return await self._call_tool_from_session(
+                         read_stream, write_stream, tool_name, arguments
+                     )
+             elif self.transport_type == "streamable-http":
+                 async with streamablehttp_client(
+                     url=self.endpoint, timeout=self.timeout
+                 ) as (read_stream, write_stream, session_id_getter):
+                     return await self._call_tool_from_session(
+                         read_stream, write_stream, tool_name, arguments
+                     )
+             elif self.transport_type == "stdio":
+                 if not os.path.exists(self.endpoint):
+                     raise MCPEndpointError(
+                         f"StdioServer script not found: {self.endpoint}"
+                     )
+
+                 server_params = StdioServerParameters(
+                     command="python", args=[self.endpoint]
+                 )
+                 async with stdio_client(server_params) as (read_stream, write_stream):
+                     return await self._call_tool_from_session(
+                         read_stream, write_stream, tool_name, arguments
+                     )
+             else:
+                 raise MCPEndpointError(
+                     f"Unsupported transport type: {self.transport_type}"
+                 )
+         except Exception as e:
+             logger.error(f"Failed to call tool {tool_name} on {self.endpoint}: {e}")
+             raise MCPEndpointError(f"Failed to call tool {tool_name}: {e}") from e
+
+     async def _call_tool_from_session(
+         self, read_stream, write_stream, tool_name: str, arguments: Dict[str, Any]
+     ) -> Dict[str, Any]:
+         """Call a tool from an MCP session."""
+         async with ClientSession(read_stream, write_stream) as session:
+             await session.initialize()
+             result = await session.call_tool(tool_name, arguments)
+
+             result_dict = {
+                 "content": [],
+                 "is_error": result.isError,
+             }
+
+             for content in result.content:
+                 if hasattr(content, "type") and content.type == "text":
+                     result_dict["content"].append(
+                         {"type": "text", "text": content.text}
+                     )
+                 else:
+                     result_dict["content"].append(str(content))
+
+             return result_dict
+
+
+ # Example usage
+ def example_usage():
+     """Example of how to use the simplified MCPClient."""
+     # Simple usage with the sync API
+     client = MCPClient("https://ai-center.se/mcp/think-mcp-server/sse")
+
+     try:
+         # List available tools
+         tools = client.list_tools()
+         print(f"Found {len(tools)} tools")
+
+         # Call a tool if any are available
+         if tools:
+             result = client.call_tool(
+                 tools[0]["function"]["name"], {"query": "What is MCP"}
+             )
+             print(f"Tool result: {result}")
+     except MCPEndpointError as e:
+         print(f"Error: {e}")
+
+
+ if __name__ == "__main__":
+     example_usage()
@@ -0,0 +1,225 @@
+ """
+ Simplified Messages class for managing LLM conversation history.
+ """
+
+ from typing import List, Dict, Any, Optional
+ from datetime import datetime
+
+
+ class Messages:
+     """A simplified class for managing LLM conversation messages."""
+
+     def __init__(
+         self,
+         system_prompt: Optional[str] = None,
+         user_prompt: Optional[str] = None,
+         add_date_and_time: bool = False,
+     ):
+         """
+         Initialize the Messages instance.
+
+         Args:
+             system_prompt: Optional system prompt to add to the messages list
+             user_prompt: Optional initial user prompt to add to the messages list
+             add_date_and_time: If True, adds a message with the current date and time
+         """
+         self.messages: List[Dict[str, Any]] = []
+
+         if system_prompt:
+             self.add_system_message(system_prompt)
+
+         if add_date_and_time:
+             self._add_date_time_message()
+
+         if user_prompt:
+             self.add_user_message(user_prompt)
+
+     def _add_date_time_message(self):
+         """Add a message with the current date and time."""
+         now = datetime.now()
+         day = now.day
+         if 4 <= day <= 20 or 24 <= day <= 30:
+             suffix = "th"
+         else:
+             suffix = ["st", "nd", "rd"][day % 10 - 1]
+
+         # Use the unpadded day so e.g. the 5th renders as "5th", not "05th"
+         date_str = f"{day}{suffix} of " + now.strftime("%B %Y")
+         time_str = now.strftime("%H:%M")
+         date_time_message = f"Today is {date_str} and the current time is {time_str}."
+         self.messages.append({"role": "system", "content": date_time_message})
+
+     def add_system_message(self, content: str):
+         """Add a system message to the messages list."""
+         self.messages.append({"role": "system", "content": content})
+
+     def add_user_message(self, content: str):
+         """Add a user message to the messages list."""
+         self.messages.append({"role": "user", "content": content})
+
+     def add_assistant_message(self, content: str):
+         """Add an assistant message to the messages list."""
+         self.messages.append({"role": "assistant", "content": content})
+
+     def add_tool_call(self, tool_call: Dict[str, Any]):
+         """
+         Add a tool call to the latest assistant message or create a new one.
+
+         Args:
+             tool_call: The tool call dictionary with id, type, function, etc.
+         """
+         # Check if the latest message is an assistant message with tool_calls
+         if (
+             self.messages
+             and self.messages[-1].get("role") == "assistant"
+             and "tool_calls" in self.messages[-1]
+         ):
+             # Append to the existing assistant message
+             self.messages[-1]["tool_calls"].append(tool_call)
+         else:
+             # Create a new assistant message with the tool call
+             assistant_message = {
+                 "role": "assistant",
+                 "content": "",
+                 "tool_calls": [tool_call],
+             }
+             self.messages.append(assistant_message)
+
+     def add_tool_response(self, tool_call_id: str, content: str):
+         """
+         Add a tool response message.
+
+         Args:
+             tool_call_id: The ID of the tool call this response belongs to
+             content: The response content
+         """
+         tool_message = {
+             "role": "tool",
+             "tool_call_id": tool_call_id,
+             "content": content,
+         }
+         self.messages.append(tool_message)
+
+     def add_tool_responses(self, tool_responses: List[Dict[str, Any]]):
+         """
+         Add multiple tool responses to the conversation history.
+
+         Args:
+             tool_responses: List of tool response dictionaries with tool_call_id,
+                 tool_response, and is_error fields
+         """
+         for response in tool_responses:
+             tool_call_id = response.get("tool_call_id", "unknown")
+             is_error = response.get("is_error", False)
+
+             if is_error:
+                 content = f"Error: {response.get('error', 'Unknown error')}"
+             else:
+                 tool_response = response.get("tool_response", {})
+                 # Simple content extraction
+                 if isinstance(tool_response, dict) and "content" in tool_response:
+                     content_list = tool_response["content"]
+                     if content_list and isinstance(content_list[0], dict):
+                         content = content_list[0].get("text", str(tool_response))
+                     else:
+                         content = str(tool_response)
+                 else:
+                     content = str(tool_response)
+
+             self.add_tool_response(tool_call_id, content)
+
+     def get_messages(self) -> List[Dict[str, Any]]:
+         """Get the current messages list."""
+         return self.messages
+
+     def has_pending_tool_calls(self) -> bool:
+         """
+         Check if the last message has tool calls that need execution.
+
+         Returns:
+             True if there are tool calls waiting for responses
+         """
+         if not self.messages:
+             return False
+
+         last_message = self.messages[-1]
+
+         # Check if the last message is an assistant message with tool calls
+         if last_message.get("role") == "assistant" and "tool_calls" in last_message:
+             # Collect the IDs of this message's tool calls
+             tool_call_ids = {tc.get("id") for tc in last_message["tool_calls"]}
+
+             # Look for tool responses after this message
+             for msg in reversed(self.messages):
+                 if msg.get("role") == "tool" and msg.get("tool_call_id") in tool_call_ids:
+                     tool_call_ids.remove(msg.get("tool_call_id"))
+
+             # Any IDs still unanswered mean there are pending calls
+             return len(tool_call_ids) > 0
+
+         return False
+
+     def __str__(self) -> str:
+         """Return messages in a simple, readable format."""
+         if not self.messages:
+             return "No messages"
+
+         lines = []
+         for i, message in enumerate(self.messages, 1):
+             role = message.get("role", "unknown")
+             content = message.get("content", "")
+
+             # Handle tool calls in assistant messages
+             if role == "assistant" and message.get("tool_calls"):
+                 lines.append(f"{i}. {role}: {content}")
+                 for j, tool_call in enumerate(message["tool_calls"], 1):
+                     function_name = tool_call.get("function", {}).get("name", "unknown")
+                     lines.append(f"   └─ Tool Call {j}: {function_name}")
+
+             # Handle tool messages
+             elif role == "tool":
+                 tool_call_id = message.get("tool_call_id", "unknown")
+                 # Truncate long content for readability
+                 if len(content) > 200:
+                     content = content[:197] + "..."
+                 lines.append(f"{i}. {role} [{tool_call_id[:8]}...]: {content}")
+
+             # Handle other message types
+             else:
+                 # Truncate long content for readability
+                 if len(content) > 100:
+                     content = content[:97] + "..."
+                 lines.append(f"{i}. {role}: {content}")
+
+         return "\n".join(lines)
+
+
+ # Example usage
+ def example_usage():
+     """Example of how to use the simplified Messages class."""
+     # Create messages with a system prompt
+     messages = Messages(
+         system_prompt="You are a helpful assistant.",
+         user_prompt="Hello, how are you?",
+         add_date_and_time=True
+     )
+
+     # Add assistant response
+     messages.add_assistant_message("I'm doing well, thank you!")
+
+     # Add a tool call
+     tool_call = {
+         "id": "call_123",
+         "type": "function",
+         "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
+     }
+     messages.add_tool_call(tool_call)
+
+     # Add tool response
+     messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
+
+     print("Conversation:")
+     print(messages)
+
+     print(f"\nHas pending tool calls: {messages.has_pending_tool_calls()}")
+
+
+ if __name__ == "__main__":
+     example_usage()
@@ -0,0 +1,234 @@
+ Metadata-Version: 2.4
+ Name: agentic-blocks
+ Version: 0.1.0
+ Summary: Simple building blocks for agentic AI systems with MCP client and conversation management
+ Author-email: Magnus Bjelkenhed <bjelkenhed@gmail.com>
+ License: MIT
+ Project-URL: Homepage, https://github.com/bjelkenhed/agentic-blocks
+ Project-URL: Repository, https://github.com/bjelkenhed/agentic-blocks
+ Project-URL: Issues, https://github.com/bjelkenhed/agentic-blocks/issues
+ Keywords: ai,mcp,model-control-protocol,agent,llm,openai,conversation
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Requires-Python: >=3.11
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: mcp
+ Requires-Dist: requests
+ Requires-Dist: python-dotenv
+ Requires-Dist: openai
+ Provides-Extra: test
+ Requires-Dist: pytest; extra == "test"
+ Provides-Extra: dev
+ Requires-Dist: pytest; extra == "dev"
+ Requires-Dist: build; extra == "dev"
+ Requires-Dist: twine; extra == "dev"
+ Dynamic: license-file
+
+ # Agentic Blocks
+
+ Building blocks for agentic systems with a focus on simplicity and ease of use.
+
+ ## Overview
+
+ Agentic Blocks provides clean, simple components for building AI agent systems, specifically focused on:
+
+ - **MCP Client**: Connect to Model Context Protocol (MCP) endpoints with a sync-by-default API
+ - **Messages**: Manage LLM conversation history in an OpenAI-compatible format
+
+ Both components follow principles of simplicity, maintainability, and ease of use.
+
+ ## Installation
+
+ ```bash
+ pip install -e .
+ ```
+
+ For development:
+ ```bash
+ pip install -e ".[dev]"
+ ```
+
+ ## Quick Start
+
+ ### MCPClient - Connect to MCP Endpoints
+
+ The MCPClient provides a unified interface for connecting to different types of MCP endpoints:
+
+ ```python
+ from agentic_blocks import MCPClient
+
+ # Connect to an SSE endpoint (sync by default)
+ client = MCPClient("https://example.com/mcp/server/sse")
+
+ # List available tools
+ tools = client.list_tools()
+ print(f"Available tools: {len(tools)}")
+
+ # Call a tool
+ result = client.call_tool("search", {"query": "What is MCP?"})
+ print(result)
+ ```
+
+ **Supported endpoint types:**
+ - **SSE endpoints**: URLs with `/sse` in the path
+ - **HTTP endpoints**: URLs with `/mcp` in the path
+ - **Local scripts**: File paths to Python MCP servers
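How an endpoint string maps to a transport can be pictured with a small dispatch function. This is an illustrative sketch only — the function name `classify_endpoint` and the exact precedence rules are assumptions based on the list above, not the library's actual internals (note that an SSE URL may also contain `/mcp`, so `/sse` is checked first):

```python
def classify_endpoint(endpoint: str) -> str:
    """Guess the MCP transport for an endpoint string (illustrative sketch)."""
    if endpoint.startswith(("http://", "https://")):
        if "/sse" in endpoint:
            return "sse"   # Server-Sent Events transport
        return "http"      # plain HTTP endpoint (e.g. URLs containing /mcp)
    return "stdio"         # anything else: treat as a local server script

print(classify_endpoint("https://example.com/mcp/server/sse"))  # sse
```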
+
+ **Async support for advanced users:**
+ ```python
+ # Async versions available
+ tools = await client.list_tools_async()
+ result = await client.call_tool_async("search", {"query": "async example"})
+ ```
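One common way to build such a sync-by-default API on top of async internals is a thin `asyncio.run` wrapper. The class below is a hypothetical stand-in to show the pattern, not the library's actual code:

```python
import asyncio


class SyncByDefault:
    """Minimal sketch of the sync-wrapper-over-async pattern (hypothetical)."""

    async def list_tools_async(self) -> list[dict]:
        # Placeholder for the real async MCP round-trip.
        return [{"function": {"name": "search"}}]

    def list_tools(self) -> list[dict]:
        # Sync wrapper: drive the async version to completion.
        return asyncio.run(self.list_tools_async())


tools = SyncByDefault().list_tools()
print(tools[0]["function"]["name"])  # search
```

The trade-off: `asyncio.run` cannot be called from inside an already-running event loop, which is why the `*_async` variants remain available for callers that are themselves async.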
+
+ ### Messages - Manage Conversation History
+
+ The Messages class helps build and manage LLM conversations in an OpenAI-compatible format:
+
+ ```python
+ from agentic_blocks import Messages
+
+ # Initialize with system prompt
+ messages = Messages(
+     system_prompt="You are a helpful assistant.",
+     user_prompt="Hello, how can you help me?",
+     add_date_and_time=True
+ )
+
+ # Add assistant response
+ messages.add_assistant_message("I can help you with various tasks!")
+
+ # Add tool calls
+ tool_call = {
+     "id": "call_123",
+     "type": "function",
+     "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}
+ }
+ messages.add_tool_call(tool_call)
+
+ # Add tool response
+ messages.add_tool_response("call_123", "The weather in Paris is sunny, 22°C")
+
+ # Get messages for LLM API
+ conversation = messages.get_messages()
+
+ # View readable format
+ print(messages)
+ ```
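For reference, `get_messages()` returns plain OpenAI-style dicts, so the conversation built above corresponds roughly to a list like the following. The exact shape of the tool-call entry (e.g. whether it forms its own assistant message with `content: None`) is an assumption inferred from this README, not confirmed library output:

```python
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how can you help me?"},
    {"role": "assistant", "content": "I can help you with various tasks!"},
    # Assistant turn carrying the tool call (shape assumed)
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"location": "Paris"}'},
    }]},
    # Tool response, linked back by tool_call_id
    {"role": "tool", "tool_call_id": "call_123",
     "content": "The weather in Paris is sunny, 22°C"},
]

print([m["role"] for m in conversation])
# ['system', 'user', 'assistant', 'assistant', 'tool']
```

A list in this shape can be passed directly as the `messages` argument of an OpenAI-style chat-completions call.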
+
+ ## Complete Example - Agent with MCP Tools
+
+ ```python
+ from agentic_blocks import MCPClient, Messages
+
+ def simple_agent():
+     # Initialize MCP client and conversation
+     client = MCPClient("https://example.com/mcp/server/sse")
+     messages = Messages(
+         system_prompt="You are a helpful research assistant.",
+         add_date_and_time=True
+     )
+
+     # Get available tools
+     tools = client.list_tools()
+     print(f"Connected to MCP server with {len(tools)} tools")
+
+     # Simulate user query
+     user_query = "What's the latest news about AI?"
+     messages.add_user_message(user_query)
+
+     # Agent decides to use a search tool
+     if tools:
+         search_tool = next((t for t in tools if "search" in t["function"]["name"]), None)
+         if search_tool:
+             # Add tool call to messages
+             tool_call = {
+                 "id": "search_001",
+                 "type": "function",
+                 "function": {
+                     "name": search_tool["function"]["name"],
+                     "arguments": '{"query": "latest AI news"}'
+                 }
+             }
+             messages.add_tool_call(tool_call)
+
+             # Execute the tool
+             result = client.call_tool(
+                 search_tool["function"]["name"],
+                 {"query": "latest AI news"}
+             )
+
+             # Add tool response
+             if result["content"]:
+                 response_text = result["content"][0]["text"]
+                 messages.add_tool_response("search_001", response_text)
+
+     # Add final assistant response
+     messages.add_assistant_message(
+         "Based on my search, here's what I found about the latest AI news..."
+     )
+
+     # Print conversation
+     print("\nConversation:")
+     print(messages)
+
+     return messages.get_messages()
+
+ if __name__ == "__main__":
+     simple_agent()
+ ```
+
+ ## Development Principles
+
+ This project follows these core principles:
+
+ - **Simplicity First**: Keep code simple, readable, and focused on core functionality
+ - **Sync-by-Default**: Primary methods are synchronous for ease of use, with optional async versions
+ - **Minimal Dependencies**: Avoid over-engineering and complex error handling unless necessary
+ - **Clean APIs**: Prefer straightforward method names and clear parameter expectations
+ - **Maintainable Code**: Favor fewer lines of clear code over comprehensive edge case handling
+
+ ## API Reference
+
+ ### MCPClient
+
+ ```python
+ MCPClient(endpoint: str, timeout: int = 30)
+ ```
+
+ **Methods:**
+ - `list_tools() -> List[Dict]`: Get available tools (sync)
+ - `call_tool(name: str, args: Dict) -> Dict`: Call a tool (sync)
+ - `list_tools_async() -> List[Dict]`: Async version of list_tools
+ - `call_tool_async(name: str, args: Dict) -> Dict`: Async version of call_tool
+
+ ### Messages
+
+ ```python
+ Messages(system_prompt=None, user_prompt=None, add_date_and_time=False)
+ ```
+
+ **Methods:**
+ - `add_system_message(content: str)`: Add system message
+ - `add_user_message(content: str)`: Add user message
+ - `add_assistant_message(content: str)`: Add assistant message
+ - `add_tool_call(tool_call: Dict)`: Add tool call to assistant message
+ - `add_tool_response(call_id: str, content: str)`: Add tool response
+ - `get_messages() -> List[Dict]`: Get all messages
+ - `has_pending_tool_calls() -> bool`: Check for pending tool calls
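`has_pending_tool_calls()` is useful for driving a tool-execution loop: it reports whether the last assistant message carries tool calls that have no matching `tool` response yet. The check amounts to set bookkeeping over the OpenAI-style message dicts — a standalone re-sketch of the idea on plain dicts (illustrative, not the library's exact implementation):

```python
def pending_tool_calls(messages: list[dict]) -> bool:
    """True if the last assistant message has tool calls without tool responses."""
    if not messages:
        return False
    last = messages[-1]
    if last.get("role") != "assistant" or "tool_calls" not in last:
        return False
    # IDs requested by the assistant, minus IDs already answered by tool messages
    requested = {tc["id"] for tc in last["tool_calls"]}
    answered = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    return bool(requested - answered)

msgs = [{"role": "assistant", "content": None,
         "tool_calls": [{"id": "call_123", "type": "function",
                         "function": {"name": "get_weather", "arguments": "{}"}}]}]
print(pending_tool_calls(msgs))  # True
```

Once a `{"role": "tool", "tool_call_id": "call_123", ...}` response is appended, the last message is no longer an assistant message, so the check returns `False` and the loop can hand the conversation back to the LLM.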
+
+ ## Requirements
+
+ - Python >= 3.11
+ - Dependencies: `mcp`, `requests`, `python-dotenv`, `openai`
+
+ ## License
+
+ MIT
@@ -0,0 +1,11 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ src/agentic_blocks/__init__.py
+ src/agentic_blocks/mcp_client.py
+ src/agentic_blocks/messages.py
+ src/agentic_blocks.egg-info/PKG-INFO
+ src/agentic_blocks.egg-info/SOURCES.txt
+ src/agentic_blocks.egg-info/dependency_links.txt
+ src/agentic_blocks.egg-info/requires.txt
+ src/agentic_blocks.egg-info/top_level.txt
@@ -0,0 +1,12 @@
+ mcp
+ requests
+ python-dotenv
+ openai
+
+ [dev]
+ pytest
+ build
+ twine
+
+ [test]
+ pytest
@@ -0,0 +1 @@
+ agentic_blocks