sokrates-mcp 0.2.0__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (20)
  1. {sokrates_mcp-0.2.0/src/sokrates_mcp.egg-info → sokrates_mcp-0.4.0}/PKG-INFO +25 -90
  2. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/README.md +24 -89
  3. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/pyproject.toml +2 -2
  4. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/main.py +61 -0
  5. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/mcp_config.py +88 -39
  6. sokrates_mcp-0.4.0/src/sokrates_mcp/utils.py +28 -0
  7. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/workflow.py +96 -10
  8. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0/src/sokrates_mcp.egg-info}/PKG-INFO +25 -90
  9. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/SOURCES.txt +1 -0
  10. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/LICENSE +0 -0
  11. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/MANIFEST.in +0 -0
  12. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/config.yml.example +0 -0
  13. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/setup.cfg +0 -0
  14. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/__init__.py +0 -0
  15. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/dependency_links.txt +0 -0
  16. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/entry_points.txt +0 -0
  17. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/requires.txt +0 -0
  18. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp.egg-info/top_level.txt +0 -0
  19. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp_client/__init__.py +0 -0
  20. {sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp_client/mcp_client_example.py +0 -0
{sokrates_mcp-0.2.0/src/sokrates_mcp.egg-info → sokrates_mcp-0.4.0}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sokrates-mcp
-Version: 0.2.0
+Version: 0.4.0
 Summary: A templated MCP server for demonstration and quick start.
 Author-email: Julian Weber <julianweberdev@gmail.com>
 License: MIT License
@@ -158,12 +158,20 @@ providers:
 ### Starting the Server
 
 ```bash
+# from local git repo
 uv run sokrates-mcp
+
+# without checking out the git repo
+uvx sokrates-mcp
 ```
 
 ### Listing available command line options
 ```bash
+# from local git repo
 uv run sokrates-mcp --help
+
+# without checking out the git repo
+uvx sokrates-mcp --help
 ```
 
 ## Architecture & Technical Details
@@ -185,110 +193,37 @@ The server follows a modular design pattern:
 
 ## Available Tools
 
-### main.py
-
-- **refine_prompt**: Refines a given prompt by enriching it with additional context.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-    - `model` (str, optional): Model name for refinement. Default is 'default'
-
-- **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined and executed
-    - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-    - `execution_model` (str, optional): Model for execution. Default is 'default'
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
-- **handover_prompt**: Hands over a prompt to an external LLM for processing.
-  - Parameters:
-    - `prompt` (str): The prompt to be executed externally
-    - `model` (str, optional): Model name for execution. Default is 'default'
-
-- **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-  - Parameters:
-    - `task` (str): The full task description to break down
-    - `model` (str, optional): Model name for processing. Default is 'default'
-
-- **list_available_models**: Lists all available large language models accessible by the server.
-
-### mcp_config.py
-
-- **MCPConfig** class: Manages configuration settings for the MCP server.
-  - Parameters:
-    - `config_file_path` (str, optional): Path to YAML config file
-    - `api_endpoint` (str, optional): API endpoint URL
-    - `api_key` (str, optional): API key for authentication
-    - `model` (str, optional): Model name
-
-### workflow.py
-
-- **Workflow** class: Implements the business logic for prompt refinement and execution.
-  - Methods:
-    - `refine_prompt`: Refines a given prompt
-    - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-    - `handover_prompt`: Hands over a prompt to an external LLM for processing
-    - `breakdown_task`: Breaks down a task into sub-tasks
-    - `list_available_models`: Lists all available models
+See the [main.py](src/sokrates_mcp/main.py) file for a list of all mcp tools in the server
 
 ## Project Structure
 
 - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
 - `src/sokrates_mcp/mcp_config.py`: Configuration management
+- `src/sokrates_mcp/utils.py`: Helper and utility methods
 - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
 - `pyproject.toml`: Dependency management
 
 
-## Script List
-
-### `main.py`
-Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
-#### Usage
-- `uv run python main.py` - Start the MCP server (default port: 8000)
-- `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
-### `mcp_config.py`
-Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
-#### Usage
-- Import and use in other scripts:
-```python
-from mcp_config import MCPConfig
-config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
-```
-
-### `workflow.py`
-Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
-#### Usage
-- Import and use in other scripts:
-```python
-from workflow import Workflow
-from mcp_config import MCPConfig
-
-config = MCPConfig()
-workflow = Workflow(config)
-result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
-```
-
-### `src/mcp_client_example.py`
-Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
-#### Usage
-- Run as a standalone script:
-```bash
-python src/mcp_client_example.py
-```
-- Or use with an ASGI server like Uvicorn:
-```bash
-uvicorn src.mcp_client_example:main --factory
-```
-
 **Common Error:**
 If you see "ModuleNotFoundError: fastmcp", ensure:
-1. Dependencies are installed (`uv pip install .`)
+1. Dependencies are installed (`uv sync`)
 2. Python virtual environment is activated
 
 ## Changelog
 
+**0.4.0 (Aug 2025)**
+- adds new tools:
+  - read_files_from_directory
+  - directory_tree
+- logging refactoring in workflow.py
+
+**0.3.0 (Aug 2025)**
+- adds new tools:
+  - roll_dice
+  - read_from_file
+  - store_to_file
+- refactorings - code quality - still ongoing
+
 **0.2.0 (Aug 2025)**
 - First published version
 - Update to latest sokrates library version
{sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/README.md

@@ -120,12 +120,20 @@ providers:
 ### Starting the Server
 
 ```bash
+# from local git repo
 uv run sokrates-mcp
+
+# without checking out the git repo
+uvx sokrates-mcp
 ```
 
 ### Listing available command line options
 ```bash
+# from local git repo
 uv run sokrates-mcp --help
+
+# without checking out the git repo
+uvx sokrates-mcp --help
 ```
 
 ## Architecture & Technical Details
@@ -147,110 +155,37 @@ The server follows a modular design pattern:
 
 ## Available Tools
 
-### main.py
-
-- **refine_prompt**: Refines a given prompt by enriching it with additional context.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-    - `model` (str, optional): Model name for refinement. Default is 'default'
-
-- **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-  - Parameters:
-    - `prompt` (str): The input prompt to be refined and executed
-    - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-    - `execution_model` (str, optional): Model for execution. Default is 'default'
-    - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
-- **handover_prompt**: Hands over a prompt to an external LLM for processing.
-  - Parameters:
-    - `prompt` (str): The prompt to be executed externally
-    - `model` (str, optional): Model name for execution. Default is 'default'
-
-- **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-  - Parameters:
-    - `task` (str): The full task description to break down
-    - `model` (str, optional): Model name for processing. Default is 'default'
-
-- **list_available_models**: Lists all available large language models accessible by the server.
-
-### mcp_config.py
-
-- **MCPConfig** class: Manages configuration settings for the MCP server.
-  - Parameters:
-    - `config_file_path` (str, optional): Path to YAML config file
-    - `api_endpoint` (str, optional): API endpoint URL
-    - `api_key` (str, optional): API key for authentication
-    - `model` (str, optional): Model name
-
-### workflow.py
-
-- **Workflow** class: Implements the business logic for prompt refinement and execution.
-  - Methods:
-    - `refine_prompt`: Refines a given prompt
-    - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-    - `handover_prompt`: Hands over a prompt to an external LLM for processing
-    - `breakdown_task`: Breaks down a task into sub-tasks
-    - `list_available_models`: Lists all available models
+See the [main.py](src/sokrates_mcp/main.py) file for a list of all mcp tools in the server
 
 ## Project Structure
 
 - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
 - `src/sokrates_mcp/mcp_config.py`: Configuration management
+- `src/sokrates_mcp/utils.py`: Helper and utility methods
 - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
 - `pyproject.toml`: Dependency management
 
 
-## Script List
-
-### `main.py`
-Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
-#### Usage
-- `uv run python main.py` - Start the MCP server (default port: 8000)
-- `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
-### `mcp_config.py`
-Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
-#### Usage
-- Import and use in other scripts:
-```python
-from mcp_config import MCPConfig
-config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
-```
-
-### `workflow.py`
-Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
-#### Usage
-- Import and use in other scripts:
-```python
-from workflow import Workflow
-from mcp_config import MCPConfig
-
-config = MCPConfig()
-workflow = Workflow(config)
-result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
-```
-
-### `src/mcp_client_example.py`
-Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
-#### Usage
-- Run as a standalone script:
-```bash
-python src/mcp_client_example.py
-```
-- Or use with an ASGI server like Uvicorn:
-```bash
-uvicorn src.mcp_client_example:main --factory
-```
-
 **Common Error:**
 If you see "ModuleNotFoundError: fastmcp", ensure:
-1. Dependencies are installed (`uv pip install .`)
+1. Dependencies are installed (`uv sync`)
 2. Python virtual environment is activated
 
 ## Changelog
 
+**0.4.0 (Aug 2025)**
+- adds new tools:
+  - read_files_from_directory
+  - directory_tree
+- logging refactoring in workflow.py
+
+**0.3.0 (Aug 2025)**
+- adds new tools:
+  - roll_dice
+  - read_from_file
+  - store_to_file
+- refactorings - code quality - still ongoing
+
 **0.2.0 (Aug 2025)**
 - First published version
 - Update to latest sokrates library version
{sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/pyproject.toml

@@ -1,6 +1,6 @@
 [project]
 name = "sokrates-mcp"
-version = "0.2.0"
+version = "0.4.0"
 description = "A templated MCP server for demonstration and quick start."
 readme = "README.md"
 requires-python = ">=3.10"
@@ -41,4 +41,4 @@ explicit = true
 name = "testpypi"
 url = "https://test.pypi.org/simple/"
 publish-url = "https://test.pypi.org/legacy/"
-explicit = true
+explicit = true
{sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/main.py

@@ -294,6 +294,67 @@ async def generate_code_review(
 ) -> str:
     return await workflow.generate_code_review(ctx=ctx, provider=provider, model=model, review_type=review_type, source_directory=source_directory, source_file_paths=source_file_paths, target_directory=target_directory)
 
+@mcp.tool(
+    name="read_from_file",
+    description="Read a file from the local disk at the given file path and return it's contents.",
+    tags={"file","read","load","local"}
+)
+async def read_from_file(
+    ctx: Context,
+    file_path: Annotated[str, Field(description="The source file path to use for reading the file. This should be an absolute file path on the disk.")],
+) -> str:
+    return await workflow.read_from_file(ctx=ctx, file_path=file_path)
+
+@mcp.tool(
+    name="read_files_from_directory",
+    description="Read files from the local disk from the given directory path and return the file contents. You can also provide a list of file extentsions to include optionally.",
+    tags={"directory","read","load","local"}
+)
+async def read_files_from_directory(
+    ctx: Context,
+    directory_path: Annotated[str, Field(description="The source directory path to use for reading the files. This should be an absolute file path on the disk.")],
+    file_extensions: Annotated[list[str], Field(description="A list of file extensions to include when reading the files. For markdown files you could use ['.md']", default=None)],
+) -> str:
+    return await workflow.read_files_from_directory(ctx=ctx, directory_path=directory_path, file_extensions=file_extensions)
+
+@mcp.tool(
+    name="directory_tree",
+    description="Provides a recursive directory file listing for the given directory path.",
+    tags={"directory","list","local"}
+)
+async def directory_tree(
+    ctx: Context,
+    directory_path: Annotated[str, Field(description="The source directory path to use for reading the files. This should be an absolute file path on the disk.")]
+) -> str:
+    return await workflow.directory_tree(ctx=ctx, directory_path=directory_path)
+
+
+@mcp.tool(
+    name="store_to_file",
+    description="Store a file with the provided content to the local drive at the provided file path.",
+    tags={"file","store","save","local"}
+)
+async def store_to_file(
+    ctx: Context,
+    file_path: Annotated[str, Field(description="The target file path to use for storing the file. This should be an absolute file path on the disk.")],
+    file_content: Annotated[str, Field(description="The content that should be written to the target file.")],
+) -> str:
+    return await workflow.store_to_file(ctx=ctx, file_path=file_path, file_content=file_content)
+
+@mcp.tool(
+    name="roll_dice",
+    description="Rolls the given number of dice with the specified number of sides for the given number of times and returns the result. For example you can also instruct to throw a W12, which should then set the side_count to 12.",
+    tags={"dice","roll","random"}
+)
+async def roll_dice(
+    ctx: Context,
+    number_of_dice: Annotated[int, Field(description="The number of dice to to use for rolling.", default=1)],
+    side_count: Annotated[int, Field(description="The number of sides of the dice to use for rolling.", default=6)],
+    number_of_rolls: Annotated[int, Field(description="The count of dice rolls to execute.", default=1)]
+) -> str:
+    return await workflow.roll_dice(ctx=ctx, number_of_dice=number_of_dice, side_count=side_count, number_of_rolls=number_of_rolls)
+
+
 @mcp.tool(
     name="list_available_models_for_provider",
     description="Lists all available large language models and the target api endpoint configured as provider for the sokrates-mcp server.",
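The `roll_dice` tool's parameters map naturally onto the `secrets`-based helper added in `utils.py` (below). The actual `workflow.roll_dice` implementation is not part of this diff, so the following is only a hypothetical sketch of the semantics the tool description implies: `number_of_dice` dice with `side_count` sides, rolled `number_of_rolls` times (a "W12" being `side_count=12`).

```python
import secrets

def roll_dice_sketch(number_of_dice: int = 1,
                     side_count: int = 6,
                     number_of_rolls: int = 1) -> list[list[int]]:
    # One inner list per roll; each entry is the face shown by one die.
    # secrets.randbelow(n) yields 0..n-1, so +1 shifts into 1..side_count.
    return [
        [secrets.randbelow(side_count) + 1 for _ in range(number_of_dice)]
        for _ in range(number_of_rolls)
    ]

# Roll two twelve-sided dice three times.
rolls = roll_dice_sketch(number_of_dice=2, side_count=12, number_of_rolls=3)
```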
{sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/mcp_config.py

@@ -18,6 +18,7 @@ import logging
 from urllib.parse import urlparse
 from pathlib import Path
 from sokrates import Config
+from typing import Dict, List, Optional, Any
 
 DEFAULT_API_ENDPOINT = "http://localhost:1234/v1"
 DEFAULT_API_KEY = "mykey"
@@ -53,7 +54,7 @@ class MCPConfig:
         "openai"
     ]
 
-    def __init__(self, config_file_path=CONFIG_FILE_PATH, api_endpoint = DEFAULT_API_ENDPOINT, api_key = DEFAULT_API_KEY, model= DEFAULT_MODEL, verbose=False):
+    def __init__(self, config_file_path: str = CONFIG_FILE_PATH, api_endpoint: str = DEFAULT_API_ENDPOINT, api_key: str = DEFAULT_API_KEY, model: str = DEFAULT_MODEL, verbose: bool = False):
         """Initialize MCP configuration.
 
         Args:
@@ -64,15 +65,15 @@ class MCPConfig:
             model (str): Model name to use. Defaults to DEFAULT_MODEL.
             verbose (bool): Enable verbose logging. Defaults to False.
 
-        Returns:
-            None
-
         Side Effects:
             Initializes instance attributes with values from config file or defaults
             Sets up logging based on verbose parameter
         """
         self.logger = logging.getLogger(__name__)
         self.config_file_path = config_file_path
+        # Validate config file path
+        if not self._validate_config_file_path(config_file_path):
+            raise ValueError(f"Invalid config file path: {config_file_path}")
         config_data = self._load_config_from_file(self.config_file_path)
 
         prompts_directory = config_data.get("prompts_directory", self.DEFAULT_PROMPTS_DIRECTORY)
@@ -80,14 +81,13 @@ class MCPConfig:
             raise ValueError(f"Invalid prompts directory: {prompts_directory}")
         self.prompts_directory = prompts_directory
 
+        # Validate prompt files using helper method
         refinement_prompt_filename = config_data.get("refinement_prompt_filename", self.DEFAULT_REFINEMENT_PROMPT_FILENAME)
-        if not os.path.exists(os.path.join(prompts_directory, refinement_prompt_filename)):
-            raise FileNotFoundError(f"Refinement prompt file not found: {refinement_prompt_filename}")
+        self._validate_prompt_file_exists(prompts_directory, refinement_prompt_filename)
         self.refinement_prompt_filename = refinement_prompt_filename
 
         refinement_coding_prompt_filename = config_data.get("refinement_coding_prompt_filename", self.DEFAULT_REFINEMENT_CODING_PROMPT_FILENAME)
-        if not os.path.exists(os.path.join(prompts_directory, refinement_coding_prompt_filename)):
-            raise FileNotFoundError(f"Refinement coding prompt file not found: {refinement_coding_prompt_filename}")
+        self._validate_prompt_file_exists(prompts_directory, refinement_coding_prompt_filename)
         self.refinement_coding_prompt_filename = refinement_coding_prompt_filename
 
 
@@ -98,25 +98,74 @@ class MCPConfig:
         self.logger.info(f" Refinement Coding Prompt Filename: {self.refinement_coding_prompt_filename}")
         self.logger.info(f" Default Provider: {self.default_provider}")
         for prov in self.providers:
-            self.logger.info(f"Configured provider name: {prov["name"]} , api_endpoint: {prov["api_endpoint"]} , default_model: {prov["default_model"]}")
+            self.logger.info(f"Configured provider name: {prov['name']} , api_endpoint: {prov['api_endpoint']} , default_model: {prov['default_model']}")
+
+    def _validate_prompt_file_exists(self, prompts_directory: str, filename: str) -> None:
+        """Validate that a prompt file exists in the specified directory.
+
+        Args:
+            prompts_directory (str): Directory where prompt files are located
+            filename (str): Name of the prompt file to check
+
+        Raises:
+            FileNotFoundError: If the prompt file does not exist
+        """
+        if not os.path.exists(os.path.join(prompts_directory, filename)):
+            raise FileNotFoundError(f"Prompt file not found: (unknown)")
+
+    def _validate_config_file_path(self, config_file_path: str) -> bool:
+        """Validate that the configuration file path is valid and accessible.
+
+        Args:
+            config_file_path (str): Path to the configuration file
+
+        Returns:
+            bool: True if path is valid and accessible, False otherwise
+        """
+        try:
+            # Check if we can write to the directory
+            dir_path = os.path.dirname(config_file_path) or "."
+            if not os.path.exists(dir_path):
+                os.makedirs(dir_path, exist_ok=True)
+            # Test that we can actually access the file path
+            Path(config_file_path).touch(exist_ok=True)
+            return True
+        except (OSError, IOError):
+            return False
 
-    def available_providers(self):
-        return list(map(lambda prov: {'name': prov['name'], 'api_endpoint': prov['api_endpoint'], 'type': prov['type']}, self.providers))
+    def available_providers(self) -> List[Dict[str, Any]]:
+        return [{'name': p['name'], 'api_endpoint': p['api_endpoint'], 'type': p['type']} for p in self.providers]
 
-    def get_provider_by_name(self, provider_name):
-        providers = list(filter(lambda x: x['name'] == provider_name, self.providers))
-        return providers[0]
+    def get_provider_by_name(self, provider_name: str) -> Dict[str, Any]:
+        """Get a provider by its name.
+
+        Args:
+            provider_name (str): Name of the provider to find
+
+        Returns:
+            dict: Provider configuration dictionary
+
+        Raises:
+            IndexError: If no provider with the given name is found
+        """
+        for provider in self.providers:
+            if provider['name'] == provider_name:
+                return provider
+        raise IndexError(f"Provider '{provider_name}' not found")
 
-    def get_default_provider(self):
+    def get_default_provider(self) -> Dict[str, Any]:
         return self.get_provider_by_name(self.default_provider)
 
-    def _configure_providers(self, config_data):
+    def _configure_providers(self, config_data: Dict[str, Any]) -> None:
         # configure defaults if not config_data could be loaded
-        self.providers = config_data.get("providers", {})
+        providers = config_data.get("providers", [])
+        if not isinstance(providers, list):
+            raise ValueError("'providers' must be a list in the configuration file")
+        self.providers = providers
         if len(self.providers) < 1:
-            self.providers = [
-                DEFAULT_PROVIDER_CONFIGURATION
-            ]
+            # Validate defaults before use
+            self._validate_provider(DEFAULT_PROVIDER_CONFIGURATION)
+            self.providers = [DEFAULT_PROVIDER_CONFIGURATION]
            self.default_provider = DEFAULT_PROVIDER_NAME
            return
 
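The rewritten `get_provider_by_name` trades the old `filter`/`[0]` construct, which raised a bare `IndexError` on a missing name, for an explicit loop with a descriptive error. A stdlib-only sketch of the same lookup logic (the sample provider entries are illustrative, not taken from the package's config):

```python
from typing import Any

# Illustrative provider list in the shape the config file uses.
providers: list[dict[str, Any]] = [
    {"name": "local", "api_endpoint": "http://localhost:1234/v1", "type": "openai"},
    {"name": "cloud", "api_endpoint": "https://api.example.com/v1", "type": "openai"},
]

def get_provider_by_name(provider_name: str) -> dict[str, Any]:
    # Linear scan; raises with a readable message instead of a bare IndexError.
    for provider in providers:
        if provider["name"] == provider_name:
            return provider
    raise IndexError(f"Provider '{provider_name}' not found")
```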
@@ -127,41 +176,42 @@ class MCPConfig:
             self._validate_provider(provider)
             provider_names.append(provider['name'])
 
-        if not config_data['default_provider']:
+        if not config_data.get('default_provider'):
             raise ValueError(f"No default_provider was configured at the root level of the config file in {self.config_file_path}")
         self.default_provider = config_data['default_provider']
 
-    def _validate_provider(self, provider):
+    def _validate_provider(self, provider: Dict[str, Any]) -> None:
         self._validate_provider_name(provider.get("name", ""))
         self._validate_provider_type(provider.get("type", ""))
         self._validate_url(provider.get("api_endpoint", ""))
         self._validate_api_key(provider.get("api_key", ""))
         self._validate_model_name(provider.get("default_model", ""))
 
-    def _validate_provider_name(self, provider_name):
+    def _validate_provider_name(self, provider_name: str) -> None:
         if len(provider_name) < 1:
             raise ValueError(f"The provider name: {provider_name} is not a valid provider name")
 
-    def _validate_provider_type(self, provider_type):
+    def _validate_provider_type(self, provider_type: str) -> None:
         if not provider_type in self.PROVIDER_TYPES:
             raise ValueError(f"The provider type: {provider_type} is not supported by sokrates-mcp")
 
-    def _validate_url(self, url):
+    def _validate_url(self, url: str) -> None:
         """Validate URL format.
 
         Args:
             url (str): URL to validate
 
-        Returns:
-            bool: True if valid URL, False otherwise
+        Raises:
+            ValueError: If the URL is invalid
         """
         try:
             result = urlparse(url)
-            return all([result.scheme in ['http', 'https'], result.netloc])
-        except:
-            raise ValueError(f"The api_endpoint: {url} is not a valid llm API endpoint")
+            if not (result.scheme in ['http', 'https'] and result.netloc):
+                raise ValueError(f"Invalid API endpoint: {url}")
+        except Exception as e:
+            raise ValueError(f"Invalid API endpoint format: {url}") from e
 
-    def _validate_api_key(self, api_key):
+    def _validate_api_key(self, api_key: str) -> None:
         """Validate API key format.
 
         Args:
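The reworked `_validate_url` now raises instead of returning a bool; the old body both returned a truth value and documented an exception, which was contradictory. A self-contained sketch of the same scheme/netloc check:

```python
from urllib.parse import urlparse

def validate_url(url: str) -> None:
    # urlparse does not raise for ordinary strings; the explicit check
    # below is what actually rejects malformed endpoints.
    result = urlparse(url)
    if result.scheme not in ("http", "https") or not result.netloc:
        raise ValueError(f"Invalid API endpoint: {url}")
```

`validate_url("http://localhost:1234/v1")` passes silently, while `validate_url("not-a-url")` raises `ValueError` because both scheme and netloc are empty.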
@@ -173,7 +223,7 @@ class MCPConfig:
         if len(api_key) < 1:
             raise ValueError("The api key is empty")
 
-    def _validate_model_name(self, model):
+    def _validate_model_name(self, model: str) -> None:
         """Validate model name format.
 
         Args:
@@ -185,7 +235,7 @@ class MCPConfig:
         if len(model) < 1:
             raise ValueError("The model is empty")
 
-    def _ensure_directory_exists(self, directory_path):
+    def _ensure_directory_exists(self, directory_path: str) -> bool:
         """Ensure directory exists and is valid.
 
         Args:
@@ -203,7 +253,7 @@ class MCPConfig:
             self.logger.error(f"Error ensuring directory exists: {e}")
             return False
 
-    def _load_config_from_file(self, config_file_path):
+    def _load_config_from_file(self, config_file_path: str) -> Dict[str, Any]:
         """Load configuration data from a YAML file.
 
         Args:
@@ -224,13 +274,12 @@ class MCPConfig:
                 with open(config_file_path, 'r') as f:
                     return yaml.safe_load(f) or {}
             else:
-                self.logger.warning(f"Config file not found at {config_file_path}. Using defaults.")
-                # Create empty config file
-                with open(config_file_path, 'w') as f:
-                    yaml.dump({}, f)
+                self.logger.warning(f"Config file not found at {config_file_path}. Using defaults (no config created).")
                 return {}
         except yaml.YAMLError as e:
             self.logger.error(f"Error parsing YAML config file {config_file_path}: {e}")
+        except OSError as e:
+            self.logger.error(f"OS error reading config file {config_file_path}: {e}")
         except Exception as e:
-            self.logger.error(f"Error reading config file {config_file_path}: {e}")
+            self.logger.error(f"Unexpected error reading config file {config_file_path}: {e}")
         return {}
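The change to `_load_config_from_file` narrows the exception handling: the most specific parse error is caught first, then filesystem errors, then a last-resort catch-all, and every path falls back to an empty dict. The sketch below mirrors that structure, substituting `json` for YAML so it needs only the standard library (the function name and logger are illustrative):

```python
import json
import logging
import os

def load_config(path: str) -> dict:
    # Mirrors 0.4.0's narrowed exception ordering: parse error first,
    # then OS-level errors, then a catch-all; defaults on every failure.
    logger = logging.getLogger("config_sketch")
    try:
        if os.path.exists(path):
            with open(path, "r") as f:
                return json.load(f) or {}
        else:
            logger.warning(f"Config file not found at {path}. Using defaults.")
            return {}
    except json.JSONDecodeError as e:   # most specific first
        logger.error(f"Error parsing config file {path}: {e}")
    except OSError as e:                # filesystem problems
        logger.error(f"OS error reading config file {path}: {e}")
    except Exception as e:              # last resort
        logger.error(f"Unexpected error reading config file {path}: {e}")
    return {}
```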
sokrates_mcp-0.4.0/src/sokrates_mcp/utils.py

@@ -0,0 +1,28 @@
+import secrets
+
+class Utils:
+
+    @staticmethod
+    def rand_int_inclusive(min_val: int, max_val: int) -> int:
+        """
+        Return a random integer N such that min_val <= N <= max_val.
+        Uses `secrets.randbelow` which is cryptographically secure.
+
+        Parameters
+        ----------
+        min_val : int
+            Lower bound (inclusive).
+        max_val : int
+            Upper bound (inclusive).
+
+        Returns
+        -------
+        int
+            Random integer in the specified range.
+        """
+        if min_val > max_val:
+            raise ValueError("min_val must be <= max_val")
+
+        # randbelow(n) returns 0 .. n-1. We need a window of size (max-min+1)
+        range_size = max_val - min_val + 1
+        return secrets.randbelow(range_size) + min_val
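The new helper can be exercised directly; the sketch below reproduces the function from the diff so it runs stand-alone:

```python
import secrets

def rand_int_inclusive(min_val: int, max_val: int) -> int:
    # secrets.randbelow(n) returns 0..n-1; the +min_val shift maps that
    # window onto the inclusive range [min_val, max_val].
    if min_val > max_val:
        raise ValueError("min_val must be <= max_val")
    range_size = max_val - min_val + 1
    return secrets.randbelow(range_size) + min_val

samples = [rand_int_inclusive(1, 6) for _ in range(1000)]
```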
{sokrates_mcp-0.2.0 → sokrates_mcp-0.4.0}/src/sokrates_mcp/workflow.py

@@ -1,9 +1,15 @@
+from pathlib import Path
+import logging
+from typing import List
+
 from fastmcp import Context
-from .mcp_config import MCPConfig
+
 from sokrates import FileHelper, RefinementWorkflow, LLMApi, PromptRefiner, IdeaGenerationWorkflow
 from sokrates.coding.code_review_workflow import run_code_review
-from pathlib import Path
-from typing import List
+
+from .mcp_config import MCPConfig
+from .utils import Utils
+
 class Workflow:
 
     WORKFLOW_COMPLETION_MESSAGE = "Workflow completed."
@@ -14,12 +20,8 @@ class Workflow:
         Args:
             config (MCPConfig): The MCP configuration object
         """
+        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
         self.config = config
-        default_provider = self.config.get_default_provider()
-        self.default_model = default_provider['default_model']
-        self.default_api_endpoint = default_provider['api_endpoint']
-        self.default_api_key = default_provider['api_key']
-
         self.prompt_refiner = PromptRefiner()
 
     def _get_model(self, provider, model=''):
@@ -76,7 +78,8 @@ class Workflow:
         """
         refinement_prompt = self.load_refinement_prompt(refinement_type)
         workflow = self._initialize_refinement_workflow(provider_name=provider, model=model)
-
+        self.logger.info(f"Starting refinement workflow with provider: {provider} and model: {model}")
+
         await ctx.info(f"Prompt refinement and execution workflow started with refinement model: {workflow.model} . Waiting for the response from the LLM...")
         refined = workflow.refine_prompt(input_prompt=prompt, refinement_prompt=refinement_prompt)
         await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
@@ -102,6 +105,8 @@ class Workflow:
  refinement_model = self._get_model(provider=prov, model=refinement_model)
  execution_model = self._get_model(provider=prov, model=execution_model)
 
+ self.logger.info(f"Starting refinement workflow with provider: {provider} with refinement model: {refinement_model} and execution model: {execution_model}")
+
  workflow = self._initialize_refinement_workflow(provider_name=provider, model=execution_model)
  await ctx.info(f"Prompt refinement and execution workflow started with refinement model: {refinement_model} and execution model {execution_model} . Waiting for the responses from the LLMs...")
  result = workflow.refine_and_send_prompt(input_prompt=prompt, refinement_prompt=refinement_prompt, refinement_model=refinement_model, execution_model=execution_model)
@@ -125,6 +130,7 @@ class Workflow:
 
  prov = self._get_provider(provider)
  model = self._get_model(provider=prov, model=model)
+ self.logger.info(f"Handing over prompt to provider: {provider} and model: {model}")
  llm_api = LLMApi(api_endpoint=prov['api_endpoint'], api_key=prov['api_key'])
 
  result = llm_api.send(prompt, model=model, temperature=temperature)
@@ -146,6 +152,7 @@ class Workflow:
  Returns:
  str: A JSON string containing the list of sub-tasks with complexity ratings.
  """
+ self.logger.info(f"Breaking down task with provider: {provider} and model: {model}")
  workflow = self._initialize_refinement_workflow(provider_name=provider, model=model)
  await ctx.info(f"Task break-down started with model: {workflow.model} . Waiting for the response from the LLM...")
  result = workflow.breakdown_task(task=task)
@@ -167,6 +174,8 @@ class Workflow:
  """
  prov = self._get_provider(provider)
  model = self._get_model(provider=prov, model=model)
+
+ self.logger.info(f"Generating random ideas with provider: {provider} and model: {model}")
  await ctx.info(f"Task `generate random ideas` started at provider: {prov['name']} with model: {model} , idea_count: {idea_count} and temperature: {temperature}. Waiting for the response from the LLM...")
 
  idea_generation_workflow = IdeaGenerationWorkflow(api_endpoint=prov['api_endpoint'],
@@ -200,6 +209,7 @@ class Workflow:
  prov = self._get_provider(provider)
  model = self._get_model(provider=prov, model=model)
 
+ self.logger.info(f"Generating ideas on topic with provider: {provider} and model: {model}")
  await ctx.info(f"Task `generate ideas on topic` started with topic: '{topic}' , model: {model} , idea_count: {idea_count} and temperature: {temperature}. Waiting for the response from the LLM...")
  idea_generation_workflow = IdeaGenerationWorkflow(api_endpoint=prov['api_endpoint'],
  api_key=prov['api_key'],
@@ -233,6 +243,7 @@ class Workflow:
  prov = self._get_provider(provider)
  model = self._get_model(provider=prov, model=model)
 
+ self.logger.info(f"Generating code review of type: {review_type} with provider: {provider} and model: {model}")
  await ctx.info(f"Generating code review of type: {review_type} - using model: {model} ...")
  run_code_review(file_paths=source_file_paths,
  directory_path=source_directory,
@@ -256,6 +267,7 @@ class Workflow:
  Returns:
  str: Formatted list of configured providers.
  """
+ self.logger.info("Listing available providers")
  providers = self.config.available_providers()
  result = "# Configured providers"
  for prov in providers:
@@ -274,6 +286,7 @@ class Workflow:
  Returns:
  str: Formatted list of available models and API endpoint.
  """
+ self.logger.info(f"Listing models for provider: {provider_name}")
  await ctx.info(f"Retrieving endpoint information and list of available models for configured provider {provider_name} ...")
  if not provider_name:
  provider = self.config.get_default_provider()
@@ -290,4 +303,77 @@ class Workflow:
  model_list = "\n".join([f"- {model}" for model in models])
  result = f"{api_headline}\n# List of available models\n{model_list}"
  await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
- return result
+ return result
+
+ async def store_to_file(self, ctx: Context, file_path: str, file_content: str) -> str:
+ """Store the provided content to a file on disk"""
+ await ctx.info(f"Storing file to: {file_path} ...")
+ self.logger.info(f"Storing content to file: {file_path}")
+ if not file_path:
+ raise ValueError("No file_path provided.")
+ if not file_content:
+ raise ValueError("No file_content provided.")
+
+ FileHelper.write_to_file(file_path=file_path, content=file_content)
+
+ result = f"The file has been stored to {file_path}"
+ await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+ return result
+
+ async def read_from_file(self, ctx: Context, file_path: str) -> str:
+ """Read the content of the provided file path"""
+ await ctx.info(f"Reading file from: {file_path} ...")
+ templated_file = self._read_file_to_templated_format(file_path=file_path)
+ await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+ return templated_file
+
+ async def read_files_from_directory(self, ctx: Context, directory_path: str, file_extensions: List[str]) -> str:
+ file_exts_str = '.*'
+ if file_extensions:
+ file_exts_str = ','.join(file_extensions)
+ self.logger.info(f"Reading content for directory: {directory_path}")
+ await ctx.info(f"Reading files from directory: {directory_path} with file extensions: {file_exts_str} ...")
+ all_files = FileHelper.directory_tree(directory=directory_path, file_extensions=file_extensions)
+ result = ""
+ for file_path in all_files:
+ file_content = self._read_file_to_templated_format(file_path)
+ result = "\n".join([result, file_content])
+ await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+ return result
+
+ async def directory_tree(self, ctx: Context, directory_path: str) -> str:
+ self.logger.info(f"Listing directory tree for directory: {directory_path}")
+ await ctx.info(f"Listing files recursively for directory: {directory_path} ...")
+ all_file_paths = FileHelper.directory_tree(directory=directory_path)
+ # join outside the f-string: nesting the same quote type inside an f-string requires Python 3.12+
+ file_list = "\n- ".join(all_file_paths)
+ result = f"Directory: {directory_path}\n{file_list}"
+
+ await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+ return result
+
+ async def roll_dice(self, ctx: Context, number_of_dice: int=1, side_count: int=6, number_of_rolls: int=1) -> str:
+ """Roll the given number of dice with the provided number of sides and return the results"""
+ self.logger.info(f"Rolling {number_of_dice} dice with {side_count} sides {number_of_rolls} times")
+ await ctx.info(f"Throwing {number_of_dice} dice with {side_count} sides {number_of_rolls} times ...")
+ result = ""
+ # range() upper bounds are exclusive, so add 1 to include the last roll and the last dice
+ for throw_number in range(1, number_of_rolls + 1):
+ result = f"{result}# Roll {throw_number}\n"
+ for dice_number in range(1, number_of_dice + 1):
+ dice_result = Utils.rand_int_inclusive(1, side_count)
+ # append instead of overwriting, so every dice result is kept
+ result = f"{result}- Dice {dice_number} result: {dice_result}\n"
+ await ctx.info(self.WORKFLOW_COMPLETION_MESSAGE)
+ return result
+
+ def _read_file_to_templated_format(self, file_path: str) -> str:
+ if not file_path:
+ raise ValueError("No file_path provided.")
+ if not Path(file_path).is_file():
+ raise ValueError("No file exists at the given file path.")
+
+ content = FileHelper.read_file(file_path=file_path)
+ return f"<file source_file_path='{file_path}'>\n{content}\n</file>"
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: sokrates-mcp
- Version: 0.2.0
+ Version: 0.4.0
  Summary: A templated MCP server for demonstration and quick start.
  Author-email: Julian Weber <julianweberdev@gmail.com>
  License: MIT License
@@ -158,12 +158,20 @@ providers:
  ### Starting the Server
 
  ```bash
+ # from local git repo
  uv run sokrates-mcp
+
+ # without checking out the git repo
+ uvx sokrates-mcp
  ```
 
  ### Listing available command line options
  ```bash
+ # from local git repo
  uv run sokrates-mcp --help
+
+ # without checking out the git repo
+ uvx sokrates-mcp --help
  ```
 
  ## Architecture & Technical Details
@@ -185,110 +193,37 @@ The server follows a modular design pattern:
 
  ## Available Tools
 
- ### main.py
-
- - **refine_prompt**: Refines a given prompt by enriching it with additional context.
-   - Parameters:
-     - `prompt` (str): The input prompt to be refined
-     - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-     - `model` (str, optional): Model name for refinement. Default is 'default'
-
- - **refine_and_execute_external_prompt**: Refines a prompt and executes it with an external LLM.
-   - Parameters:
-     - `prompt` (str): The input prompt to be refined and executed
-     - `refinement_model` (str, optional): Model for refinement. Default is 'default'
-     - `execution_model` (str, optional): Model for execution. Default is 'default'
-     - `refinement_type` (str, optional): Type of refinement ('code' or 'default'). Default is 'default'
-
- - **handover_prompt**: Hands over a prompt to an external LLM for processing.
-   - Parameters:
-     - `prompt` (str): The prompt to be executed externally
-     - `model` (str, optional): Model name for execution. Default is 'default'
-
- - **breakdown_task**: Breaks down a task into sub-tasks with complexity ratings.
-   - Parameters:
-     - `task` (str): The full task description to break down
-     - `model` (str, optional): Model name for processing. Default is 'default'
-
- - **list_available_models**: Lists all available large language models accessible by the server.
-
- ### mcp_config.py
-
- - **MCPConfig** class: Manages configuration settings for the MCP server.
-   - Parameters:
-     - `config_file_path` (str, optional): Path to YAML config file
-     - `api_endpoint` (str, optional): API endpoint URL
-     - `api_key` (str, optional): API key for authentication
-     - `model` (str, optional): Model name
-
- ### workflow.py
-
- - **Workflow** class: Implements the business logic for prompt refinement and execution.
-   - Methods:
-     - `refine_prompt`: Refines a given prompt
-     - `refine_and_execute_external_prompt`: Refines and executes a prompt with an external LLM
-     - `handover_prompt`: Hands over a prompt to an external LLM for processing
-     - `breakdown_task`: Breaks down a task into sub-tasks
-     - `list_available_models`: Lists all available models
+ See [main.py](src/sokrates_mcp/main.py) for a list of all MCP tools provided by the server.
 
  ## Project Structure
 
  - `src/sokrates_mcp/main.py`: Sets up the MCP server and registers tools
  - `src/sokrates_mcp/mcp_config.py`: Configuration management
+ - `src/sokrates_mcp/utils.py`: Helper and utility methods
  - `src/sokrates_mcp/workflow.py`: Business logic for prompt refinement and execution
  - `pyproject.toml`: Dependency management
 
- ## Script List
-
- ### `main.py`
- Sets up an MCP server using the FastMCP framework to provide tools for prompt refinement and execution workflows.
- #### Usage
- - `uv run python main.py` - Start the MCP server (default port: 8000)
- - `uv run fastmcp dev main.py` - Run in development mode with auto-reload
-
- ### `mcp_config.py`
- Provides configuration management for the MCP server. Loads configuration from a YAML file and sets default values if needed.
- #### Usage
- - Import and use in other scripts:
- ```python
- from mcp_config import MCPConfig
- config = MCPConfig(api_endpoint="https://api.example.com", model="my-model")
- ```
-
- ### `workflow.py`
- Implements the business logic for prompt refinement and execution workflows. Contains methods to refine prompts, execute them with external LLMs, break down tasks, etc.
- #### Usage
- - Import and use in other scripts:
- ```python
- from workflow import Workflow
- from mcp_config import MCPConfig
-
- config = MCPConfig()
- workflow = Workflow(config)
- result = await workflow.refine_prompt("Write a Python function to sort a list", refinement_type="code")
- ```
-
- ### `src/mcp_client_example.py`
- Demonstrates a basic Model Context Protocol (MCP) client using the fastmcp library. Defines a simple model and registers it with the client.
-
- #### Usage
- - Run as a standalone script:
- ```bash
- python src/mcp_client_example.py
- ```
- - Or use with an ASGI server like Uvicorn:
- ```bash
- uvicorn src.mcp_client_example:main --factory
- ```
-
  **Common Error:**
  If you see "ModuleNotFoundError: fastmcp", ensure:
- 1. Dependencies are installed (`uv pip install .`)
+ 1. Dependencies are installed (`uv sync`)
  2. Python virtual environment is activated
 
  ## Changelog
 
+ **0.4.0 (Aug 2025)**
+ - Adds new tools:
+   - read_files_from_directory
+   - directory_tree
+ - Refactors logging in workflow.py
+
+ **0.3.0 (Aug 2025)**
+ - Adds new tools:
+   - roll_dice
+   - read_from_file
+   - store_to_file
+ - Code quality refactorings (ongoing)
+
  **0.2.0 (Aug 2025)**
  - First published version
  - Update to latest sokrates library version
@@ -6,6 +6,7 @@ pyproject.toml
  src/sokrates_mcp/__init__.py
  src/sokrates_mcp/main.py
  src/sokrates_mcp/mcp_config.py
+ src/sokrates_mcp/utils.py
  src/sokrates_mcp/workflow.py
  src/sokrates_mcp.egg-info/PKG-INFO
  src/sokrates_mcp.egg-info/SOURCES.txt