sokrates-mcp 0.3.0__tar.gz → 0.4.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {sokrates_mcp-0.3.0/src/sokrates_mcp.egg-info → sokrates_mcp-0.4.0}/PKG-INFO +18 -90
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/README.md +17 -89
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/pyproject.toml +1 -1
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/main.py +24 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/workflow.py +50 -14
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0/src/sokrates_mcp.egg-info}/PKG-INFO +18 -90
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/LICENSE +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/MANIFEST.in +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/config.yml.example +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/setup.cfg +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/__init__.py +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/mcp_config.py +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/utils.py +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/SOURCES.txt +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/dependency_links.txt +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/entry_points.txt +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/requires.txt +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/top_level.txt +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp_client/__init__.py +0 -0
- {sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp_client/mcp_client_example.py +0 -0
{sokrates_mcp-0.3.0/src/sokrates_mcp.egg-info → sokrates_mcp-0.4.0}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sokrates-mcp
-Version: 0.3.0
+Version: 0.4.0
 Summary: A templated MCP server for demonstration and quick start.
 Author-email: Julian Weber <julianweberdev@gmail.com>
 License: MIT License
@@ -158,12 +158,20 @@ providers:
 ### Starting the Server
 
 ```bash
+# from local git repo
 uv run sokrates-mcp
+
+# without checking out the git repo
+uvx sokrates-mcp
 ```
 
 ### Listing available command line options
 ```bash
+# from local git repo
 uv run sokrates-mcp --help
+
+# without checking out the git repo
+uvx sokrates-mcp --help
 ```
 
 ## Architecture & Technical Details
@@ -185,110 +193,30 @@ The server follows a modular design pattern:
 
 ## Available Tools
 
-
-
-- **refine_prompt**: Refines a given prompt by enriching it with additional context.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-    - `model` (str, optional): Model name for refinement. Default is 'default'
-
-- **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined and executed
-    - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-    - `execution_model` (str, optional): Model for execution. Default is 'default'
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
-- **handover_prompt**: Hands over a prompt to an external LLM for processing.
-  - Parameters:
-    - `prompt` (str): The prompt to be executed externally
-    - `model` (str, optional): Model name for execution. Default is 'default'
-
-- **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-  - Parameters:
-    - `task` (str): The full task description to break down
-    - `model` (str, optional): Model name for processing. Default is 'default'
-
-- **list_available_models**: Lists all available large language models accessible by the server.
-
-### mcp_config.py
-
-- **MCPConfig** class: Manages configuration settings for the MCP server.
-  - Parameters:
-    - `config_file_path` (str, optional): Path to YAML config file
-    - `api_endpoint` (str, optional): API endpoint URL
-    - `api_key` (str, optional): API key for authentication
-    - `model` (str, optional): Model name
-
-### workflow.py
-
-- **Workflow** class: Implements the business logic for prompt refinement and execution.
-  - e.g.:
-    - `refine_prompt`: Refines a given prompt
-    - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-    - `handover_prompt`: Hands over a prompt to an external LLM for processing
-    - `breakdown_task`: Breaks down a task into sub-tasks
-    - `list_available_models`: Lists all available models
+See the [main.py](src/sokrates_mcp/main.py) file for a list of all mcp tools in the server
 
 ## Project Structure
 
 - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
 - `src/sokrates_mcp/mcp_config.py`: Configuration management
+- `src/sokrates_mcp/utils.py`: Helper and utility methods
 - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
 - `pyproject.toml`: Dependency management
 
 
-## Script List
-
-### `main.py`
-Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
-#### Usage
-- `uv run python main.py` - Start the MCP server (default port: 8000)
-- `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
-### `mcp_config.py`
-Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
-#### Usage
-- Import and use in other scripts:
-```python
-from mcp_config import MCPConfig
-config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
-```
-
-### `workflow.py`
-Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
-#### Usage
-- Import and use in other scripts:
-```python
-from workflow import Workflow
-from mcp_config import MCPConfig
-
-config = MCPConfig()
-workflow = Workflow(config)
-result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
-```
-
-### `src/mcp_client_example.py`
-Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
-#### Usage
-- Run as a standalone script:
-```bash
-python src/mcp_client_example.py
-```
-- Or use with an ASGI server like Uvicorn:
-```bash
-uvicorn src.mcp_client_example:main --factory
-```
-
 **Common Error:**
 If you see "ModuleNotFoundError: fastmcp", ensure:
-1. Dependencies are installed (`uv
+1. Dependencies are installed (`uv sync`)
 2. Python virtual environment is activated
 
 ## Changelog
 
+**0.4.0 (Aug 2025)**
+- adds new tools:
+  - read_files_from_directory
+  - directory_tree
+- logging refactoring in workflow.py
+
 **0.3.0 (Aug 2025)**
- adds new tools:
  - roll_dice
{sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/README.md

@@ -120,12 +120,20 @@ providers:
 ### Starting the Server
 
 ```bash
+# from local git repo
 uv run sokrates-mcp
+
+# without checking out the git repo
+uvx sokrates-mcp
 ```
 
 ### Listing available command line options
 ```bash
+# from local git repo
 uv run sokrates-mcp --help
+
+# without checking out the git repo
+uvx sokrates-mcp --help
 ```
 
 ## Architecture & Technical Details
@@ -147,110 +155,30 @@ The server follows a modular design pattern:
 
 ## Available Tools
 
-
-
-- **refine_prompt**: Refines a given prompt by enriching it with additional context.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-    - `model` (str, optional): Model name for refinement. Default is 'default'
-
-- **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined and executed
-    - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-    - `execution_model` (str, optional): Model for execution. Default is 'default'
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
-- **handover_prompt**: Hands over a prompt to an external LLM for processing.
-  - Parameters:
-    - `prompt` (str): The prompt to be executed externally
-    - `model` (str, optional): Model name for execution. Default is 'default'
-
-- **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-  - Parameters:
-    - `task` (str): The full task description to break down
-    - `model` (str, optional): Model name for processing. Default is 'default'
-
-- **list_available_models**: Lists all available large language models accessible by the server.
-
-### mcp_config.py
-
-- **MCPConfig** class: Manages configuration settings for the MCP server.
-  - Parameters:
-    - `config_file_path` (str, optional): Path to YAML config file
-    - `api_endpoint` (str, optional): API endpoint URL
-    - `api_key` (str, optional): API key for authentication
-    - `model` (str, optional): Model name
-
-### workflow.py
-
-- **Workflow** class: Implements the business logic for prompt refinement and execution.
-  - e.g.:
-    - `refine_prompt`: Refines a given prompt
-    - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-    - `handover_prompt`: Hands over a prompt to an external LLM for processing
-    - `breakdown_task`: Breaks down a task into sub-tasks
-    - `list_available_models`: Lists all available models
+See the [main.py](src/sokrates_mcp/main.py) file for a list of all mcp tools in the server
 
 ## Project Structure
 
 - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
 - `src/sokrates_mcp/mcp_config.py`: Configuration management
+- `src/sokrates_mcp/utils.py`: Helper and utility methods
 - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
 - `pyproject.toml`: Dependency management
 
 
-## Script List
-
-### `main.py`
-Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
-#### Usage
-- `uv run python main.py` - Start the MCP server (default port: 8000)
-- `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
-### `mcp_config.py`
-Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
-#### Usage
-- Import and use in other scripts:
-```python
-from mcp_config import MCPConfig
-config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
-```
-
-### `workflow.py`
-Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
-#### Usage
-- Import and use in other scripts:
-```python
-from workflow import Workflow
-from mcp_config import MCPConfig
-
-config = MCPConfig()
-workflow = Workflow(config)
-result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
-```
-
-### `src/mcp_client_example.py`
-Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
-#### Usage
-- Run as a standalone script:
-```bash
-python src/mcp_client_example.py
-```
-- Or use with an ASGI server like Uvicorn:
-```bash
-uvicorn src.mcp_client_example:main --factory
-```
-
 **Common Error:**
 If you see "ModuleNotFoundError: fastmcp", ensure:
-1. Dependencies are installed (`uv
+1. Dependencies are installed (`uv sync`)
 2. Python virtual environment is activated
 
 ## Changelog
 
+**0.4.0 (Aug 2025)**
+- adds new tools:
+  - read_files_from_directory
+  - directory_tree
+- logging refactoring in workflow.py
+
 **0.3.0 (Aug 2025)**
 - adds new tools:
   - roll_dice
{sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/main.py

@@ -305,6 +305,30 @@ async def read_from_file(
 ) -> str:
     return await workflow.read_from_file(ctx=ctx, file_path=file_path)
 
+@mcp.tool(
+    name="read_files_from_directory",
+    description="Read files from the local disk from the given directory path and return the file contents. You can also provide a list of file extentsions to include optionally.",
+    tags={"directory","read","load","local"}
+)
+async def read_files_from_directory(
+    ctx: Context,
+    directory_path: Annotated[str, Field(description="The source directory path to use for reading the files. This should be an absolute file path on the disk.")],
+    file_extensions: Annotated[list[str], Field(description="A list of file extensions to include when reading the files. For markdown files you could use ['.md']", default=None)],
+) -> str:
+    return await workflow.read_files_from_directory(ctx=ctx, directory_path=directory_path, file_extensions=file_extensions)
+
+@mcp.tool(
+    name="directory_tree",
+    description="Provides a recursive directory file listing for the given directory path.",
+    tags={"directory","list","local"}
+)
+async def directory_tree(
+    ctx: Context,
+    directory_path: Annotated[str, Field(description="The source directory path to use for reading the files. This should be an absolute file path on the disk.")]
+) -> str:
+    return await workflow.directory_tree(ctx=ctx, directory_path=directory_path)
+
+
 @mcp.tool(
     name="store_to_file",
     description="Store a file with the provided content to the local drive at the provided file path.",
{sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/workflow.py

@@ -1,4 +1,5 @@
 from pathlib import Path
+import logging
 from typing import List
 
 from fastmcp import Context
@@ -19,12 +20,8 @@ class Workflow:
         Args:
             config (MCPConfig): The MCP configuration object
         """
+        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
         self.config = config
-        default_provider = self.config.get_default_provider()
-        self.default_model = default_provider['default_model']
-        self.default_api_endpoint = default_provider['api_endpoint']
-        self.default_api_key = default_provider['api_key']
-
         self.prompt_refiner = PromptRefiner()
 
     def _get_model(self, provider, model=''):
@@ -81,7 +78,8 @@ class Workflow:
         """
         refinement_prompt = self.load_refinement_prompt(refinement_type)
         workflow = self._initialize_refinement_workflow(provider_name=provider, model=model)
-
+        self.logger.info(f"Starting refinement workflow with provider: {provider} and model: {model}")
+
         await ctx.info(f"Prompt refinement and execution workflow started with refinement model: {workflow.model} . Waiting for the response from the LLM...")
         refined = workflow.refine_prompt(input_prompt=prompt, refinement_prompt=refinement_prompt)
         await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
@@ -107,6 +105,8 @@ class Workflow:
         refinement_model = self._get_model(provider=prov, model=refinement_model)
         execution_model = self._get_model(provider=prov, model=execution_model)
 
+        self.logger.info(f"Starting refinement workflow with provider: {provider} with refinement model: {refinement_model} and execution model: {execution_model}")
+
         workflow = self._initialize_refinement_workflow(provider_name=provider, model=execution_model)
         await ctx.info(f"Prompt refinement and execution workflow started with refinement model: {refinement_model} and execution model {execution_model} . Waiting for the responses from the LLMs...")
         result = workflow.refine_and_send_prompt(input_prompt=prompt, refinement_prompt=refinement_prompt, refinement_model=refinement_model, execution_model=execution_model)
@@ -130,6 +130,7 @@ class Workflow:
 
         prov = self._get_provider(provider)
         model = self._get_model(provider=prov, model=model)
+        self.logger.info(f"Handing over prompt to provider: {provider} and model: {model}")
         llm_api = LLMApi(api_endpoint=prov['api_endpoint'], api_key=prov['api_key'])
 
         result = llm_api.send(prompt,model=model, temperature=temperature)
@@ -151,6 +152,7 @@ class Workflow:
         Returns:
             str: A JSON string containing the list of sub-tasks with complexity ratings.
         """
+        self.logger.info(f"Breaking down task with provider: {provider} and model: {model}")
         workflow = self._initialize_refinement_workflow(provider_name=provider, model=model)
         await ctx.info(f"Task break-down started with model: {workflow.model} . Waiting for the response from the LLM...")
         result = workflow.breakdown_task(task=task)
@@ -172,6 +174,8 @@ class Workflow:
         """
         prov = self._get_provider(provider)
         model = self._get_model(provider=prov, model=model)
+
+        self.logger.info(f"Generating random ideas with provider: {provider} and model: {model}")
         await ctx.info(f"Task `generate random ideas` started at provider: {prov['name']} with model: {model} , idea_count: {idea_count} and temperature: {temperature}. Waiting for the response from the LLM...")
 
         idea_generation_workflow = IdeaGenerationWorkflow(api_endpoint=prov['api_endpoint'],
@@ -205,6 +209,7 @@ class Workflow:
         prov = self._get_provider(provider)
         model = self._get_model(provider=prov, model=model)
 
+        self.logger.info(f"Generating ideas on topic with provider: {provider} and model: {model}")
         await ctx.info(f"Task `generate ideas on topic` started with topic: '{topic}' , model: {model} , idea_count: {idea_count} and temperature: {temperature}. Waiting for the response from the LLM...")
         idea_generation_workflow = IdeaGenerationWorkflow(api_endpoint=prov['api_endpoint'],
                                                           api_key=prov['api_key'],
@@ -238,6 +243,7 @@ class Workflow:
         prov = self._get_provider(provider)
         model = self._get_model(provider=prov, model=model)
 
+        self.logger.info(f"Generating code review of type: {review_type} with provider: {provider} and model: {model}")
         await ctx.info(f"Generating code review of type: {review_type} - using model: {model} ...")
         run_code_review(file_paths=source_file_paths,
                         directory_path=source_directory,
@@ -261,6 +267,7 @@ class Workflow:
         Returns:
             str: Formatted list of configured providers.
         """
+        self.logger.info(f"Listing available providers")
         providers = self.config.available_providers()
         result = "# Configured providers"
         for prov in providers:
@@ -279,6 +286,7 @@ class Workflow:
         Returns:
             str: Formatted list of available models and API endpoint.
         """
+        self.logger.info(f"Listing models for provider: {provider_name}")
         await ctx.info(f"Retrieving endpoint information and list of available models for configured provider {provider_name} ...")
         if not provider_name:
             provider = self.config.get_default_provider()
@@ -302,6 +310,7 @@ class Workflow:
 
         """
         await ctx.info(f"Storing file to: {file_path} ...")
+        self.logger.info(f"Storing content to file: {file_path}")
         if not file_path:
             raise ValueError("No file_path provided.")
         if not file_content:
@@ -318,13 +327,30 @@ class Workflow:
 
         """
         await ctx.info(f"Reading file from: {file_path} ...")
-
-
-
-
-
-
-
+        templated_file = self._read_file_to_templated_format(file_path=file_path)
+        await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+        return templated_file
+
+    async def read_files_from_directory(self, ctx: Context, directory_path: str, file_extensions: List[str]) -> str:
+        file_exts_str = '.*'
+        if file_extensions:
+            file_exts_str = ','.join(file_extensions)
+        self.logger.info(f"Reading content for directory: {directory_path}")
+        await ctx.info(f"Reading files from directory: {directory_path} with file extensions: {file_exts_str} ...")
+        all_files = FileHelper.directory_tree(directory=directory_path, file_extensions=file_extensions)
+        result = ""
+        for file_path in all_files:
+            file_content = self._read_file_to_templated_format(file_path)
+            result = "\n".join([result, file_content])
+        await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+        return result
+
+    async def directory_tree(self, ctx: Context, directory_path: str) -> str:
+        self.logger.info(f"Listing directory tree for directory: {directory_path}")
+        await ctx.info(f"Listing files recursively for directory: {directory_path} ...")
+        all_file_paths = FileHelper.directory_tree(directory=directory_path)
+        result = f"Directory: {directory_path}\n{"\n- ".join(all_file_paths)}"
+
         await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
         return result
 
@@ -332,6 +358,7 @@ class Workflow:
         """Roll a dice with the provided number of sides and return the result
 
         """
+        self.logger.info(f"Rolling {number_of_dice} dice with {side_count} sides {number_of_rolls} times")
         await ctx.info(f"Throwing {number_of_dice} dice with {side_count} sides {number_of_rolls} times ...")
         result = ""
         for throw_number in range(1,number_of_rolls):
@@ -340,4 +367,13 @@ class Workflow:
             dice_result = Utils.rand_int_inclusive(1, side_count)
             result = f"- Dice {dice_number} result: {dice_result}\n"
         await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
-        return result
+        return result
+
+    def _read_file_to_templated_format(self, file_path: str) -> str:
+        if not file_path:
+            raise ValueError("No file_path provided.")
+        if not Path(file_path).is_file():
+            raise ValueError("No file exists at the given file path.")
+
+        content = FileHelper.read_file(file_path=file_path)
+        return f"<file source_file_path='{file_path}'>\n{content}\n</file>"
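The refactored `read_from_file` and the new `read_files_from_directory` both funnel through the added `_read_file_to_templated_format` helper, which wraps each file's contents in a `<file>` envelope tagged with its source path. A self-contained sketch of that format, substituting `pathlib` for the package's `FileHelper.read_file` (that substitution is an assumption; the validation and envelope shape are taken from the diff above):

```python
from pathlib import Path

def read_file_to_templated_format(file_path: str) -> str:
    # Mirrors workflow.py's private helper: validate the path, read the file,
    # then wrap the content in a <file> envelope carrying its source path.
    if not file_path:
        raise ValueError("No file_path provided.")
    if not Path(file_path).is_file():
        raise ValueError("No file exists at the given file path.")
    content = Path(file_path).read_text()
    return f"<file source_file_path='{file_path}'>\n{content}\n</file>"
```

`read_files_from_directory` then simply joins one such envelope per file with newlines, so a downstream LLM can attribute each chunk of text to the file it came from.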
{sokrates_mcp-0.3.0 → sokrates_mcp-0.4.0/src/sokrates_mcp.egg-info}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sokrates-mcp
-Version: 0.3.0
+Version: 0.4.0
 Summary: A templated MCP server for demonstration and quick start.
 Author-email: Julian Weber <julianweberdev@gmail.com>
 License: MIT License
@@ -158,12 +158,20 @@ providers:
 ### Starting the Server
 
 ```bash
+# from local git repo
 uv run sokrates-mcp
+
+# without checking out the git repo
+uvx sokrates-mcp
 ```
 
 ### Listing available command line options
 ```bash
+# from local git repo
 uv run sokrates-mcp --help
+
+# without checking out the git repo
+uvx sokrates-mcp --help
 ```
 
 ## Architecture & Technical Details
@@ -185,110 +193,30 @@ The server follows a modular design pattern:
 
 ## Available Tools
 
-
-
-- **refine_prompt**: Refines a given prompt by enriching it with additional context.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-    - `model` (str, optional): Model name for refinement. Default is 'default'
-
-- **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined and executed
-    - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-    - `execution_model` (str, optional): Model for execution. Default is 'default'
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
-- **handover_prompt**: Hands over a prompt to an external LLM for processing.
-  - Parameters:
-    - `prompt` (str): The prompt to be executed externally
-    - `model` (str, optional): Model name for execution. Default is 'default'
-
-- **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-  - Parameters:
-    - `task` (str): The full task description to break down
-    - `model` (str, optional): Model name for processing. Default is 'default'
-
-- **list_available_models**: Lists all available large language models accessible by the server.
-
-### mcp_config.py
-
-- **MCPConfig** class: Manages configuration settings for the MCP server.
-  - Parameters:
-    - `config_file_path` (str, optional): Path to YAML config file
-    - `api_endpoint` (str, optional): API endpoint URL
-    - `api_key` (str, optional): API key for authentication
-    - `model` (str, optional): Model name
-
-### workflow.py
-
-- **Workflow** class: Implements the business logic for prompt refinement and execution.
-  - e.g.:
-    - `refine_prompt`: Refines a given prompt
-    - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-    - `handover_prompt`: Hands over a prompt to an external LLM for processing
-    - `breakdown_task`: Breaks down a task into sub-tasks
-    - `list_available_models`: Lists all available models
+See the [main.py](src/sokrates_mcp/main.py) file for a list of all mcp tools in the server
 
 ## Project Structure
 
 - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
 - `src/sokrates_mcp/mcp_config.py`: Configuration management
+- `src/sokrates_mcp/utils.py`: Helper and utility methods
 - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
 - `pyproject.toml`: Dependency management
 
 
-## Script List
-
-### `main.py`
-Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
-#### Usage
-- `uv run python main.py` - Start the MCP server (default port: 8000)
-- `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
-### `mcp_config.py`
-Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
-#### Usage
-- Import and use in other scripts:
-```python
-from mcp_config import MCPConfig
-config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
-```
-
-### `workflow.py`
-Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
-#### Usage
-- Import and use in other scripts:
-```python
-from workflow import Workflow
-from mcp_config import MCPConfig
-
-config = MCPConfig()
-workflow = Workflow(config)
-result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
-```
-
-### `src/mcp_client_example.py`
-Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
-#### Usage
-- Run as a standalone script:
-```bash
-python src/mcp_client_example.py
-```
-- Or use with an ASGI server like Uvicorn:
-```bash
-uvicorn src.mcp_client_example:main --factory
-```
-
 **Common Error:**
 If you see "ModuleNotFoundError: fastmcp", ensure:
-1. Dependencies are installed (`uv
+1. Dependencies are installed (`uv sync`)
 2. Python virtual environment is activated
 
 ## Changelog
 
+**0.4.0 (Aug 2025)**
+- adds new tools:
+  - read_files_from_directory
+  - directory_tree
+- logging refactoring in workflow.py
+
 **0.3.0 (Aug 2025)**
 - adds new tools:
   - roll_dice