skydeckai-code 0.1.39__py3-none-any.whl → 0.1.41__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/METADATA +113 -245
- {skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/RECORD +9 -7
- src/aidd/tools/__init__.py +16 -0
- src/aidd/tools/file_tools.py +90 -21
- src/aidd/tools/todo_store.py +257 -0
- src/aidd/tools/todo_tools.py +157 -0
- {skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/WHEEL +0 -0
- {skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/entry_points.txt +0 -0
- {skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/licenses/LICENSE +0 -0
{skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/METADATA
CHANGED
@@ -1,7 +1,7 @@
 Metadata-Version: 2.4
 Name: skydeckai-code
-Version: 0.1.
-Summary: This MCP server provides a comprehensive set of tools for AI-driven Development workflows including file operations, code analysis, multi-language execution, web content fetching with HTML-to-markdown conversion, multi-engine web search, code content searching, and system information retrieval.
+Version: 0.1.41
+Summary: This MCP server provides a comprehensive set of tools for AI-driven Development workflows including file operations, code analysis, multi-language execution, web content fetching with HTML-to-markdown conversion, multi-engine web search, code content searching, persistent task management, and system information retrieval.
 Project-URL: Homepage, https://github.com/skydeckai/skydeckai-code
 Project-URL: Repository, https://github.com/skydeckai/skydeckai-code
 Project-URL: Documentation, https://github.com/skydeckai/skydeckai-code/blob/main/README.md
@@ -43,13 +43,13 @@ An MCP server that provides a comprehensive set of tools for AI-driven developme
 
 This mcp server was formerly known as `mcp-server-aidd`. It was renamed to `skydeckai-code` to credit the team at [SkyDeck.ai](https://skydeck.ai) with creating this application along with [East Agile](https://eastagile.com). But more importantly we realized that the term AI Driven Development (AIDD) was just not catching on. People did not understand at a glance what it was about. And nor did LLMs. "Code" was far more intuitive. And linguistically intuitive is important in the world of agentic AI.
 
-
+[](https://mseep.ai/app/fe7a40fd-30c1-4767-84f9-d33bf997497e)
 
 ## Installation
 
 ```bash
-# Using
-
+# Using uvx
+uvx skydeckai-code
 ```
 
 ## Claude Desktop Setup
@@ -69,9 +69,9 @@ Add to your `claude_desktop_config.json`:
 
 ## SkyDeck AI Helper App
 
-If you're using
+If you're using MseeP AI Helper app, you can search for "SkyDeckAI Code" and install it.
 
-
 
 ## Key Features
 
@@ -87,7 +87,7 @@ If you're using SkyDeck AI Helper app, you can search for "SkyDeckAI Code" and i
 - Screenshot and screen context tools
 - Image handling tools
 
-## Available Tools (
+## Available Tools (29)
 
 | Category | Tool Name | Description |
 | ---------------- | -------------------------- | -------------------------------------------- |
@@ -117,6 +117,9 @@ If you're using SkyDeck AI Helper app, you can search for "SkyDeckAI Code" and i
 | **System** | `get_system_info` | Get detailed system information |
 | **Utility** | `batch_tools` | Run multiple tool operations together |
 | | `think` | Document reasoning without making changes |
+| **Todo** | `todo_read` | Read current workspace todo list |
+| | `todo_write` | Replace entire todo list with validation |
+| | `todo_update` | Update specific todo item by ID |
 
 ## Detailed Tool Documentation
 
@@ -131,34 +134,6 @@ If you're using SkyDeck AI Helper app, you can search for "SkyDeckAI Code" and i
 | delete_file | path: string | Success confirmation |
 | get_file_info | path: string | File metadata (size, timestamps, permissions) |
 
-**CLI Usage:**
-
-```bash
-# Read entire file
-skydeckai-code-cli --tool read_file --args '{"files": [{"path": "src/main.py"}]}'
-
-# Read 10 lines starting from line 20
-skydeckai-code-cli --tool read_file --args '{"files": [{"path": "src/main.py", "offset": 20, "limit": 10}]}'
-
-# Read from line 50 to the end of the file
-skydeckai-code-cli --tool read_file --args '{"files": [{"path": "src/main.py", "offset": 50}]}'
-
-# Read multiple files with different line ranges
-skydeckai-code-cli --tool read_file --args '{"files": [
-  {"path": "src/main.py", "offset": 1, "limit": 10},
-  {"path": "README.md"}
-]}'
-
-# Write file
-skydeckai-code-cli --tool write_file --args '{"path": "output.txt", "content": "Hello World"}'
-
-# Copy file or directory
-skydeckai-code-cli --tool copy_file --args '{"source": "config.json", "destination": "config.backup.json"}'
-
-# Get file info
-skydeckai-code-cli --tool get_file_info --args '{"path": "src/main.py"}'
-```
-
 ### Complex File Operations
 
 #### edit_file
@@ -208,16 +183,6 @@ Generates complete directory structure:
 
 Returns: JSON tree structure of directory contents.
 
-**CLI Usage:**
-
-```bash
-# List directory
-skydeckai-code-cli --tool list_directory --args '{"path": "."}'
-
-# Search for Python files
-skydeckai-code-cli --tool search_files --args '{"pattern": ".py", "path": "src"}'
-```
-
 ### Code Analysis
 
 #### codebase_mapper
@@ -252,19 +217,6 @@ Supported Languages:
 - C# (.cs)
 - Kotlin (.kt, .kts)
 
-**CLI Usage:**
-
-```bash
-# Map the entire codebase structure
-skydeckai-code-cli --tool codebase_mapper --args '{"path": "."}'
-
-# Map only the source directory
-skydeckai-code-cli --tool codebase_mapper --args '{"path": "src"}'
-
-# Map a specific component or module
-skydeckai-code-cli --tool codebase_mapper --args '{"path": "src/components"}'
-```
-
 #### search_code
 
 Fast content search tool using regular expressions:
@@ -295,29 +247,6 @@ Matching lines grouped by file with line numbers, sorted by file modification ti
 
 This tool uses ripgrep when available for optimal performance, with a Python fallback implementation. It's ideal for finding specific code patterns like function declarations, imports, variable usages, or error handling.
 
-**CLI Usage:**
-
-```bash
-# Find function and class declarations in JavaScript files
-skydeckai-code-cli --tool search_code --args '{
-  "patterns": ["function\\s+\\w+", "class\\s+\\w+"],
-  "include": "*.js"
-}'
-
-# Find all console.log statements with errors or warnings
-skydeckai-code-cli --tool search_code --args '{
-  "patterns": ["console\\.log.*[eE]rror", "console\\.log.*[wW]arning"],
-  "path": "src"
-}'
-
-# Find import and export statements in TypeScript files
-skydeckai-code-cli --tool search_code --args '{
-  "patterns": ["import.*from", "export.*"],
-  "include": "*.{ts,tsx}",
-  "exclude": "node_modules/**"
-}'
-```
-
 ### System Information
 
 | Tool | Parameters | Returns |
@@ -348,13 +277,6 @@ Returns:
 
 Provides essential system information in a clean, readable format.
 
-**CLI Usage:**
-
-```bash
-# Get system information
-skydeckai-code-cli --tool get_system_info
-```
-
 ### Screen Context and Image Tools
 
 #### get_active_apps
@@ -518,28 +440,6 @@ Response content as text with HTTP status code and size information. For binary
 
 This tool can be used to access web APIs, fetch documentation, or download content from the web while respecting size limits (10MB max) and security constraints.
 
-**CLI Usage:**
-
-```bash
-# Fetch JSON from an API
-skydeckai-code-cli --tool web_fetch --args '{
-  "url": "https://api.github.com/users/octocat",
-  "headers": {"Accept": "application/json"}
-}'
-
-# Download content to a file
-skydeckai-code-cli --tool web_fetch --args '{
-  "url": "https://github.com/github/github-mcp-server/blob/main/README.md",
-  "save_to_file": "downloads/readme.md"
-}'
-
-# Fetch a webpage and convert to markdown for better readability
-skydeckai-code-cli --tool web_fetch --args '{
-  "url": "https://example.com",
-  "convert_html_to_markdown": true
-}'
-```
-
 #### web_search
 
 Performs a robust web search using multiple search engines and returns concise, relevant results.
@@ -566,27 +466,6 @@ A list of search results formatted in markdown, including titles, URLs, and snip
 
 This tool uses a multi-engine approach that tries different search engines with various parsing strategies to ensure reliable results. You can specify a preferred engine, but some engines may block automated access, in which case the tool will fall back to alternative engines when "auto" is selected.
 
-**CLI Usage:**
-
-```bash
-# Search with default settings (auto engine selection)
-skydeckai-code-cli --tool web_search --args '{
-  "query": "latest python release features"
-}'
-
-# Try DuckDuckGo if you want alternative results
-skydeckai-code-cli --tool web_search --args '{
-  "query": "machine learning frameworks comparison",
-  "search_engine": "duckduckgo"
-}'
-
-# Use Bing for reliable results
-skydeckai-code-cli --tool web_search --args '{
-  "query": "best programming practices 2023",
-  "search_engine": "bing"
-}'
-```
-
 ### Utility Tools
 
 #### batch_tools
@@ -640,47 +519,6 @@ This tool provides efficient execution of multiple operations in a single reques
 1. Use paths relative to the current working directory (e.g., "project/src" rather than just "src"), or
 2. Include an explicit tool invocation to change directories using `update_allowed_directory`
 
-**CLI Usage:**
-
-```bash
-# Setup a new project with multiple steps in sequential order (using proper paths)
-skydeckai-code-cli --tool batch_tools --args '{
-  "description": "Setup new project",
-  "sequential": true,
-  "invocations": [
-    {"tool": "create_directory", "arguments": {"path": "project"}},
-    {"tool": "create_directory", "arguments": {"path": "project/src"}},
-    {"tool": "write_file", "arguments": {"path": "project/README.md", "content": "# Project\n\nA new project."}}
-  ]
-}'
-
-# Create nested structure using relative paths (without changing directory)
-skydeckai-code-cli --tool batch_tools --args '{
-  "description": "Create project structure",
-  "sequential": true,
-  "invocations": [
-    {"tool": "create_directory", "arguments": {"path": "project/src"}},
-    {"tool": "create_directory", "arguments": {"path": "project/docs"}},
-    {"tool": "write_file", "arguments": {"path": "project/README.md", "content": "# Project"}}
-  ]
-}'
-
-# Gather system information and take a screenshot (tasks can run in parallel)
-skydeckai-code-cli --tool batch_tools --args '{
-  "description": "System diagnostics",
-  "sequential": false,
-  "invocations": [
-    {"tool": "get_system_info", "arguments": {}},
-    {"tool": "capture_screenshot", "arguments": {
-      "output_path": "diagnostics/screen.png",
-      "capture_mode": {
-        "type": "full"
-      }
-    }}
-  ]
-}'
-```
-
 #### think
 
 A tool for complex reasoning and brainstorming without making changes to the repository.
@@ -701,20 +539,6 @@ Your thoughts formatted as markdown, with a note indicating this was a thinking
 
 This tool is useful for thinking through complex problems, brainstorming solutions, or laying out implementation plans without making any actual changes. It's a great way to document your reasoning process, evaluate different approaches, or plan out a multi-step strategy before taking action.
 
-**CLI Usage:**
-
-```bash
-# Analyze a bug and plan a fix
-skydeckai-code-cli --tool think --args '{
-  "thought": "# Bug Analysis\n\n## Observed Behavior\nThe login endpoint returns a 500 error when email contains Unicode characters.\n\n## Root Cause\nThe database adapter is not properly encoding Unicode strings before constructing the SQL query.\n\n## Potential Fixes\n1. Update the database adapter to use parameterized queries\n2. Add input validation to reject Unicode in emails\n3. Encode email input manually before database operations\n\nFix #1 is the best approach as it solves the core issue and improves security."
-}'
-
-# Evaluate design alternatives
-skydeckai-code-cli --tool think --args '{
-  "thought": "# API Design Options\n\n## REST vs GraphQL\nFor this use case, GraphQL would provide more flexible data fetching but adds complexity. REST is simpler and sufficient for our current needs.\n\n## Authentication Methods\nJWT-based authentication offers stateless operation and better scalability compared to session-based auth.\n\nRecommendation: Use REST with JWT authentication for the initial implementation."
-}'
-```
-
 ### Code Execution
 
 #### execute_code
@@ -745,34 +569,6 @@ Executes code in various programming languages with safety measures and restrict
 | code | string | Yes | Code to execute |
 | timeout | integer | No | Maximum execution time (default: 5s) |
 
-**CLI Usage:**
-
-```bash
-# Python example
-skydeckai-code-cli --tool execute_code --args '{
-  "language": "python",
-  "code": "print(sum(range(10)))"
-}'
-
-# JavaScript example
-skydeckai-code-cli --tool execute_code --args '{
-  "language": "javascript",
-  "code": "console.log(Array.from({length: 5}, (_, i) => i*2))"
-}'
-
-# Ruby example
-skydeckai-code-cli --tool execute_code --args '{
-  "language": "ruby",
-  "code": "puts (1..5).reduce(:+)"
-}'
-
-# Go example
-skydeckai-code-cli --tool execute_code --args '{
-  "language": "go",
-  "code": "fmt.Println(\"Hello, Go!\")"
-}'
-```
-
 **Requirements:**
 
 - Respective language runtimes must be installed
@@ -805,25 +601,6 @@ Executes shell scripts (bash/sh) with safety measures and restrictions.
 | script | string | Yes | Shell script to execute |
 | timeout | integer | No | Maximum execution time (default: 300s, max: 600s) |
 
-**CLI Usage:**
-
-```bash
-# List directory contents with details
-skydeckai-code-cli --tool execute_shell_script --args '{
-  "script": "ls -la"
-}'
-
-# Find all Python files recursively
-skydeckai-code-cli --tool execute_shell_script --args '{
-  "script": "find . -name \"*.py\" -type f"
-}'
-
-# Complex script with multiple commands
-skydeckai-code-cli --tool execute_shell_script --args '{
-  "script": "echo \"System Info:\" && uname -a && echo \"\nDisk Usage:\" && df -h"
-}'
-```
-
 **Features:**
 
 - Uses /bin/sh for maximum compatibility across systems
@@ -840,28 +617,119 @@ This tool executes arbitrary shell commands on your system. Always:
 4. Be aware of potential system impacts
 5. Monitor execution output
 
-
+### Todo Tools
 
-
+The todo tools provide sequential task management capabilities for workspace-first development workflows. Tasks are executed in order without priority systems, ensuring structured progress through development phases.
 
+#### todo_read
+
+Read the current todo list for the workspace.
+
+```json
+{}
+```
+
+**Returns:**
 ```json
 {
-
+  "todos": [
+    {
+      "id": "abc123",
+      "content": "Implement user authentication",
+      "status": "in_progress",
+      "metadata": {
+        "custom_key": "custom_value"
+      },
+      "created_at": "2023-10-01T10:00:00Z",
+      "updated_at": "2023-10-01T11:30:00Z"
+    }
+  ],
+  "count": 1,
+  "workspace": "/path/to/workspace"
 }
 ```
 
-
+#### todo_write
 
-
+Replace the entire todo list for sequential execution workflow. Tasks are executed in array order, building upon previous work.
 
-```
-
+```json
+{
+  "todos": [
+    {
+      "id": "task1",
+      "content": "Set up database schema",
+      "status": "pending"
+    },
+    {
+      "id": "task2",
+      "content": "Create API endpoints",
+      "status": "pending",
+      "metadata": {
+        "custom_key": "custom_value"
+      }
+    }
+  ]
+}
+```
+
+**Sequential Workflow Rules:**
+- Each todo must have unique ID
+- Only one task can be "in_progress" at a time (sequential execution)
+- Tasks execute in array order - no priority system
+- Required fields: id, content, status
+- Status values: "pending", "in_progress", "completed"
+- Workspace-first: Todo management is mandatory for all workspace operations
+
+#### todo_update
 
-
-skydeckai-code-cli --list-tools
+Update a specific todo item by ID for sequential workflow progression.
 
-
-
+```json
+{
+  "todo_id": "task1",
+  "updates": {
+    "status": "in_progress",
+    "metadata": {
+      "new_key": "new_value"
+    }
+  }
+}
+```
+
+**Returns:**
+```json
+{
+  "success": true,
+  "updated_todo": {
+    "id": "task1",
+    "content": "Set up database schema",
+    "status": "in_progress",
+    "updated_at": "2023-10-01T12:00:00Z",
+    "metadata": {
+      "new_key": "new_value"
+    }
+  },
+  "counts": {
+    "pending": 1,
+    "in_progress": 1,
+    "completed": 0,
+    "total": 2
+  },
+  "workspace": "/path/to/workspace"
+}
+```
+
+The todo system maintains separate sequential task lists for each workspace, enforcing mandatory usage for all workspace operations. Tasks execute in order, building upon previous work without priority-based scheduling.
+
+## Configuration
+
+Configuration file: `~/.skydeckai_code/config.json`
+
+```json
+{
+  "allowed_directory": "/path/to/workspace"
+}
 ```
 
 ## Debugging
{skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/RECORD
CHANGED
@@ -2,13 +2,13 @@ src/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
 src/aidd/__init__.py,sha256=c9HBWxWruCxoAqLCJqltylAwz_7xmaK3g8DKViJZs0Q,222
 src/aidd/cli.py,sha256=cLtaQJmMBfr7fHkd0dyJqpDrVTIwybL48PotniWGrFM,5031
 src/aidd/server.py,sha256=kPRyWeWkMCZjabelC65XTmzZG7yw8htMJKSfnUcKnb0,1575
-src/aidd/tools/__init__.py,sha256=
+src/aidd/tools/__init__.py,sha256=LzDxyOEf_Shp_4W3_DAwTpBP9W4CFW6dfyqvU8mOnMM,4108
 src/aidd/tools/base.py,sha256=wHSAaGGYWM8ECmoYd7KEcmjsZRWesNQFf3zMjCKGMcc,380
 src/aidd/tools/code_analysis.py,sha256=fDpm2o_If5PsngXzHN2-ezSkPVT0ZxivLuzmHrOAmVU,33188
 src/aidd/tools/code_execution.py,sha256=7HKstQ-LTjGEUn87LhowOJbd4Pq_zG0xkO-K0JJ-EFs,15513
 src/aidd/tools/code_tools.py,sha256=rJx_CMq0mB7aBJ6YcNB_6geFnjHU4OaGcXyuu909xhM,16010
 src/aidd/tools/directory_tools.py,sha256=GMG4-9iO5RfTkbhlWaW40GPKa1qujMPTN32pwxjUU4E,18052
-src/aidd/tools/file_tools.py,sha256=
+src/aidd/tools/file_tools.py,sha256=8Z38Tva8Qe0Fd2d6vU-DW-weHyExfUrIiRsxqrBlw6A,46720
 src/aidd/tools/get_active_apps_tool.py,sha256=BjLF7iXSDgyAmm_gfFgAul2Gn3iX-CNVYHM7Sh4jTAI,19427
 src/aidd/tools/get_available_windows_tool.py,sha256=OVIYhItTn9u_DftOr3vPCT-R0DOFvMEEJXA6tD6gqWQ,15952
 src/aidd/tools/image_tools.py,sha256=wT3EcJAfZWcM0IsXdDfbTNjgFhKZM9nu2wHN6Mk_TTQ,5970
@@ -17,9 +17,11 @@ src/aidd/tools/path_tools.py,sha256=RGoOhqP69eHJzM8tEgn_5-GRaR0gp25fd0XZIJ_RnQE,
 src/aidd/tools/screenshot_tool.py,sha256=NMO5B4UG8qfMEOMRd2YoOjtwz_oQ2y1UAGU22jV1yGU,46337
 src/aidd/tools/state.py,sha256=RWSw0Jfsui8FqC0xsI7Ik07tAg35hRwLHa5xGBVbiI4,1493
 src/aidd/tools/system_tools.py,sha256=XgdIgKeqePZx5pj59zH7Jhs2Abn55XUf0tvKbKMVtPo,7400
+src/aidd/tools/todo_store.py,sha256=cE2yASBm03Ns8c0UL5pm95tdE25FX7QowjDJ_z-Qhog,9624
+src/aidd/tools/todo_tools.py,sha256=ownbYPBB41GeRNxo2S3YGUwKUe2PeiI5ypy7kzWQlrs,9325
 src/aidd/tools/web_tools.py,sha256=gdsj2DEVYb_oYChItK5I1ugt2w25U7IAa5kEw9q6MVg,35534
-skydeckai_code-0.1.
-skydeckai_code-0.1.
-skydeckai_code-0.1.
-skydeckai_code-0.1.
-skydeckai_code-0.1.
+skydeckai_code-0.1.41.dist-info/METADATA,sha256=oRQ0r_BB6oTzL3-jInu3h_2XvWMKiLnrZyqOanJlgz4,27242
+skydeckai_code-0.1.41.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+skydeckai_code-0.1.41.dist-info/entry_points.txt,sha256=ZkU1spOhLEnz5MpUn4dDihVcE0DMUC6ejzbsF-eNth4,88
+skydeckai_code-0.1.41.dist-info/licenses/LICENSE,sha256=uHse04vmI6ZjW7TblegFl30X-sDyyF0-QvH8ItPca3c,10865
+skydeckai_code-0.1.41.dist-info/RECORD,,
src/aidd/tools/__init__.py
CHANGED
@@ -47,6 +47,14 @@ from .screenshot_tool import (
     handle_capture_screenshot,
 )
 from .system_tools import get_system_info_tool, handle_get_system_info
+from .todo_tools import (
+    todo_read_tool,
+    todo_write_tool,
+    todo_update_tool,
+    handle_todo_read,
+    handle_todo_write,
+    handle_todo_update,
+)
 from .web_tools import web_fetch_tool, handle_web_fetch, web_search_tool, handle_web_search
 
 # Export all tools definitions
@@ -82,6 +90,10 @@ TOOL_DEFINITIONS = [
     web_search_tool(),
     # System tools
     get_system_info_tool(),
+    # Todo tools
+    todo_read_tool(),
+    todo_write_tool(),
+    todo_update_tool(),
 ]
 
 # Export all handlers
@@ -116,4 +128,8 @@ TOOL_HANDLERS = {
     # Web handlers
     "web_fetch": handle_web_fetch,
     "web_search": handle_web_search,
+    # Todo handlers
+    "todo_read": handle_todo_read,
+    "todo_write": handle_todo_write,
+    "todo_update": handle_todo_update,
 }
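The registry changes above only add entries; routing still goes through the existing dispatch tables. The following is a minimal sketch, not part of the published diff: the import path and the `call_tool` helper are assumptions (the real wiring lives in src/aidd/server.py), but the dictionary lookup mirrors how `TOOL_HANDLERS` is populated in this release.

```python
# Illustrative sketch only; assumes the package is importable as "aidd".
from aidd.tools import TOOL_HANDLERS  # assumed import path


async def call_tool(name: str, arguments: dict):
    # After this change, "todo_read"/"todo_write"/"todo_update" resolve to the new handlers.
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return await handler(arguments)
```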
src/aidd/tools/file_tools.py
CHANGED
@@ -232,11 +232,15 @@ def edit_file_tool():
         "description": "Make line-based edits to a text file. "
         "WHEN TO USE: When you need to make selective changes to specific parts of a file while preserving the rest of the content. "
         "Useful for modifying configuration values, updating text while maintaining file structure, or making targeted code changes. "
+        "IMPORTANT: For multiple edits to the same file, use a single tool call with multiple edits in the 'edits' array rather than multiple tool calls. "
+        "This is more efficient and ensures all edits are applied atomically. "
         "WHEN NOT TO USE: When you want to completely replace a file's contents (use write_file instead), when you need to create a new file (use write_file instead), "
         "or when you want to apply highly complex edits with context. "
         "RETURNS: A git-style diff showing the changes made, along with information about any failed matches. "
         "The response includes sections for failed matches (if any) and the unified diff output. "
-        "
+        "Only works within the allowed directory. "
+        "EXAMPLES: For a single edit: {\"path\": \"config.js\", \"edits\": [{\"oldText\": \"port: 3000\", \"newText\": \"port: 8080\"}]}. "
+        "For multiple edits: {\"path\": \"app.py\", \"edits\": [{\"oldText\": \"debug=False\", \"newText\": \"debug=True\"}, {\"oldText\": \"version='1.0'\", \"newText\": \"version='2.0'\"}]}",
         "inputSchema": {
             "type": "object",
             "properties": {
@@ -260,20 +264,26 @@ def edit_file_tool():
                         },
                         "required": ["oldText", "newText"]
                     },
-                    "description": "
-
-
-
-
-                    "default": False
+                    "description": "MUST be an array of edit objects, NOT a string. Each edit object must contain 'oldText' and 'newText' properties. "
+                    "For multiple edits, use: [{\"oldText\": \"text1\", \"newText\": \"replacement1\"}, {\"oldText\": \"text2\", \"newText\": \"replacement2\"}]. "
+                    "For single edit, still use array: [{\"oldText\": \"text\", \"newText\": \"replacement\"}]. "
+                    "The edits are applied in sequence, and each one can modify the result of previous edits. "
+                    "AVOID multiple tool calls for the same file - instead, group all edits into a single call."
                 },
                 "options": {
                     "type": "object",
                     "properties": {
                         "partialMatch": {
                             "type": "boolean",
-                            "description": "Enable fuzzy matching for finding text. When true, the tool will try to find the best match even if it's not an exact match, using
+                            "description": "Enable fuzzy matching for finding text. When true, the tool will try to find the best match even if it's not an exact match, using the confidenceThreshold (default 80%).",
                             "default": True
+                        },
+                        "confidenceThreshold": {
+                            "type": "number",
+                            "description": "Minimum confidence threshold for fuzzy matching (0.0 to 1.0). Higher values require more exact matches. Default is 0.8 (80% confidence).",
+                            "minimum": 0.0,
+                            "maximum": 1.0,
+                            "default": 0.8
                         }
                     }
                 }
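For orientation, a hypothetical arguments payload under the 0.1.41 schema shown above; the file path and edit texts are invented, only the field names (`path`, `edits`, `options.partialMatch`, `options.confidenceThreshold`) come from the diff.

```python
# Hypothetical edit_file invocation grouping several edits into one call,
# with a stricter-than-default fuzzy-match threshold.
edit_file_arguments = {
    "path": "config.js",  # made-up target file
    "edits": [
        {"oldText": "port: 3000", "newText": "port: 8080"},
        {"oldText": "debug: false", "newText": "debug: true"},
    ],
    "options": {
        "partialMatch": True,        # fuzzy matching stays on by default
        "confidenceThreshold": 0.9,  # stricter than the 0.8 default
    },
}
```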
@@ -784,11 +794,17 @@ def find_best_match(content: str, pattern: str, partial_match: bool = True) -> t
 
     return best_start, best_end, best_score
 
-async def apply_file_edits(file_path: str, edits: List[dict],
-    """Apply edits to a file with optional formatting and return diff.
+async def apply_file_edits(file_path: str, edits: List[dict], options: dict = None) -> tuple[str, bool, int, int]:
+    """Apply edits to a file with optional formatting and return diff.
+
+    Returns:
+        tuple: (result_text, has_changes, successful_edits, failed_edits)
+    """
     # Set default options
     options = options or {}
     partial_match = options.get('partialMatch', True)
+    # Use 0.8 confidence threshold to prevent false positives while allowing reasonable fuzzy matches
+    confidence_threshold = options.get('confidenceThreshold', 0.8)
 
     # Read file content
     with open(file_path, 'r', encoding='utf-8') as f:
@@ -797,9 +813,10 @@ async def apply_file_edits(file_path: str, edits: List[dict], dry_run: bool = Fa
     # Track modifications
     modified_content = content
     failed_matches = []
+    successful_edits = []
 
     # Apply each edit
-    for edit in edits:
+    for edit_idx, edit in enumerate(edits):
         old_text = edit['oldText']
         new_text = edit['newText']
 
@@ -810,7 +827,7 @@ async def apply_file_edits(file_path: str, edits: List[dict], dry_run: bool = Fa
         # Find best match
         start, end, confidence = find_best_match(working_content, search_text, partial_match)
 
-        if confidence >=
+        if confidence >= confidence_threshold:
             # Fix indentation while preserving relative structure
             if start >= 0:
                 # Get the indentation of the first line of the matched text
@@ -851,31 +868,77 @@ async def apply_file_edits(file_path: str, edits: List[dict], dry_run: bool = Fa
 
             # Apply the edit
             modified_content = modified_content[:start] + replacement + modified_content[end:]
+            successful_edits.append({
+                'index': edit_idx,
+                'oldText': old_text,
+                'newText': new_text,
+                'confidence': confidence
+            })
         else:
             failed_matches.append({
+                'index': edit_idx,
                 'oldText': old_text,
+                'newText': new_text,
                 'confidence': confidence,
                 'bestMatch': working_content[start:end] if start >= 0 and end > start else None
             })
 
     # Create diff
     diff = create_unified_diff(content, modified_content, os.path.basename(file_path))
+    has_changes = modified_content != content
 
-    # Write changes if
-
+    # CRITICAL FIX: Write changes even if some edits failed (partial success)
+    # This prevents the infinite retry loop
+    if has_changes:
         with open(file_path, 'w', encoding='utf-8') as f:
             f.write(modified_content)
 
-    #
-
-
-
+    # Build comprehensive result message
+    result_parts = []
+
+    # Summary
+    total_edits = len(edits)
+    successful_count = len(successful_edits)
+    failed_count = len(failed_matches)
+
+    result_parts.append(f'=== Edit Summary ===')
+    result_parts.append(f'Total edits: {total_edits}')
+    result_parts.append(f'Successful: {successful_count}')
+    result_parts.append(f'Failed: {failed_count}')
+    result_parts.append(f'File modified: {has_changes}')
+    result_parts.append('')
+
+    # Failed matches details
+    if failed_matches:
+        result_parts.append('=== Failed Matches ===')
+        for failed in failed_matches:
+            result_parts.append(f"Edit #{failed['index'] + 1}: Confidence {failed['confidence']:.2f}")
+            result_parts.append(f"  Searched for: {repr(failed['oldText'][:100])}...")
+            if failed['bestMatch']:
+                result_parts.append(f"  Best match: {repr(failed['bestMatch'][:100])}...")
+            result_parts.append('')
+
+    # Successful edits
+    if successful_edits:
+        result_parts.append('=== Successful Edits ===')
+        for success in successful_edits:
+            result_parts.append(f"Edit #{success['index'] + 1}: Confidence {success['confidence']:.2f}")
+        result_parts.append('')
+
+    # Diff
+    if diff.strip():
+        result_parts.append('=== Diff ===')
+        result_parts.append(diff)
+    else:
+        result_parts.append('=== No Changes ===')
+        result_parts.append('No modifications were made to the file.')
+
+    return '\n'.join(result_parts), has_changes, successful_count, failed_count
 
 async def handle_edit_file(arguments: dict):
     """Handle editing a file with pattern matching and formatting."""
     path = arguments.get("path")
     edits = arguments.get("edits")
-    dry_run = arguments.get("dryRun", False)
     options = arguments.get("options", {})
 
     if not path:
@@ -900,8 +963,14 @@ async def handle_edit_file(arguments: dict):
         raise ValueError(f"Access denied: Path ({full_path}) must be within allowed directory ({state.allowed_directory})")
 
     try:
-
-
+        result_text, has_changes, successful_count, failed_count = await apply_file_edits(full_path, edits, options)
+
+        # CRITICAL FIX: Raise an exception only if ALL edits failed AND no changes were made
+        # This prevents silent failures that cause infinite retry loops
+        if failed_count > 0 and successful_count == 0:
+            raise ValueError(f"All {failed_count} edits failed to match. No changes were made to the file. Check the 'oldText' patterns and ensure they match the file content exactly.")
+
+        return [TextContent(type="text", text=result_text)]
     except Exception as e:
         raise ValueError(f"Error editing file: {str(e)}")
 
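A rough sketch of the partial-success behavior introduced above, not part of the diff: the import path and the target file are assumptions, but the four-value return and the "write what matched, report what didn't" behavior follow the code shown.

```python
# Sketch only; assumes the package is importable as "aidd" and config.js exists
# inside the allowed directory.
import asyncio

from aidd.tools.file_tools import apply_file_edits  # assumed import path


async def demo():
    result_text, has_changes, ok, failed = await apply_file_edits(
        "config.js",
        [
            {"oldText": "port: 3000", "newText": "port: 8080"},
            {"oldText": "this text does not exist", "newText": "ignored"},
        ],
        {"confidenceThreshold": 0.8},
    )
    # Under 0.1.41 the first edit is still written even though the second fails,
    # so has_changes is True, ok == 1, failed == 1, and result_text carries the
    # summary plus the unified diff.
    print(result_text)


asyncio.run(demo())
```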
src/aidd/tools/todo_store.py
ADDED
@@ -0,0 +1,257 @@
+from .state import state
+from datetime import datetime
+from pathlib import Path
+from typing import Any, Dict, List
+import json
+
+
+class TodoStore:
+    """Manages todo persistence and operations."""
+
+    def __init__(self):
+        self._cached_store = None
+        self._last_workspace = None
+
+    @property
+    def workspace_path(self) -> Path:
+        """Get the current workspace directory."""
+        return Path(state.allowed_directory)
+
+    @property
+    def todos_file_path(self) -> Path:
+        """Get the path to the global todos file."""
+        return state.config_dir / "todos.json"
+
+    def _detect_workspace_change(self) -> bool:
+        """Check if workspace has changed since last access."""
+        current_workspace = str(self.workspace_path)
+        if self._last_workspace != current_workspace:
+            self._last_workspace = current_workspace
+            self._cached_store = None
+            return True
+        return False
+
+    def _load_store(self) -> Dict[str, Any]:
+        """Load todos from file with caching."""
+        self._detect_workspace_change()
+
+        if self._cached_store is not None:
+            return self._cached_store
+
+        workspace_key = str(self.workspace_path)
+
+        if not self.todos_file_path.exists():
+            self._cached_store = {"lastModified": datetime.now().isoformat(), "todos": []}
+            return self._cached_store
+
+        try:
+            with open(self.todos_file_path, "r", encoding="utf-8") as f:
+                global_data = json.load(f)
+            workspace_data = global_data.get(workspace_key, {})
+            self._cached_store = {"lastModified": workspace_data.get("lastModified", datetime.now().isoformat()), "todos": workspace_data.get("todos", [])}
+            return self._cached_store
+        except (json.JSONDecodeError, IOError, OSError):
+            # Return empty store if file is corrupted
+            self._cached_store = {"lastModified": datetime.now().isoformat(), "todos": []}
+            return self._cached_store
+
+    def _save_store(self, store: Dict[str, Any]) -> None:
+        """Save todos to file atomically."""
+        # Ensure the ~/.skydeckai-code directory exists
+        self.todos_file_path.parent.mkdir(exist_ok=True)
+
+        workspace_key = str(self.workspace_path)
+
+        # Load existing global data
+        global_data = {}
+        if self.todos_file_path.exists():
+            try:
+                with open(self.todos_file_path, "r", encoding="utf-8") as f:
+                    global_data = json.load(f)
+            except (json.JSONDecodeError, IOError, OSError):
+                global_data = {}
+
+        # Update the workspace data
+        global_data[workspace_key] = store
+
+        # Write to temporary file first (atomic write)
+        temp_path = self.todos_file_path.with_suffix(".json.tmp")
+
+        try:
+            with open(temp_path, "w", encoding="utf-8") as f:
+                json.dump(global_data, f, indent=2, ensure_ascii=False)
+
+            # Atomic rename
+            temp_path.replace(self.todos_file_path)
+
+            # Update cache
+            self._cached_store = store
+
+        except Exception:
+            # Clean up temp file if something went wrong
+            if temp_path.exists():
+                temp_path.unlink()
+            raise
+
+    def _add_to_gitignore(self) -> None:
+        """No longer needed since todos are stored in ~/.skydeckai-code/todo.json"""
+        pass
+
+    def read_todos(self) -> List[Dict[str, Any]]:
+        """Read all todos from storage."""
+        store = self._load_store()
+        return store["todos"]
+
+    def write_todos(self, todos: List[Dict[str, Any]]) -> int:
+        """Write todos to storage with validation."""
+        # Validate todos
+        self._validate_todos(todos)
+
+        # Process todos (add timestamps, etc.)
+        processed_todos = []
+        current_time = datetime.now().isoformat()
+
+        for todo in todos:
+            processed_todo = dict(todo)
+
+            # Ensure required fields have defaults
+            processed_todo.setdefault("id", self._generate_id())
+            processed_todo.setdefault("status", "pending")
+            processed_todo.setdefault("created_at", current_time)
+            processed_todo["updated_at"] = current_time
+
+            processed_todos.append(processed_todo)
+
+        # Create new store
+        new_store = {"lastModified": current_time, "todos": processed_todos}
+
+        # Save to file
+        self._save_store(new_store)
+
+        return len(processed_todos)
+
+    def update_todo(self, todo_id: str, updates: Dict[str, Any]) -> Dict[str, Any]:
+        """Update a specific todo by ID."""
+        store = self._load_store()
+        todos = store["todos"]
+
+        # Find the todo to update
+        todo_index = None
+        original_todo = None
+        for i, todo in enumerate(todos):
+            if todo["id"] == todo_id:
+                todo_index = i
+                original_todo = todo
+                break
+
+        if todo_index is None or original_todo is None:
+            raise ValueError(f"Todo with ID '{todo_id}' not found")
+
+        # Check if status is changing to completed
+        original_status = original_todo["status"]
+        new_status = updates.get("status", original_status)
+        is_completing = original_status != "completed" and new_status == "completed"
+
+        # Create updated todo
+        updated_todo = dict(todos[todo_index])
+        updated_todo.update(updates)
+        updated_todo["updated_at"] = datetime.now().isoformat()
+
+        # Replace the todo in the list
+        updated_todos = todos.copy()
+        updated_todos[todo_index] = updated_todo
+
+        # Validate the entire list with the update
+        self._validate_todos(updated_todos)
+
+        # Save updated list
+        new_store = {"lastModified": datetime.now().isoformat(), "todos": updated_todos}
+        self._save_store(new_store)
+
+        # Return status counts
+        pending_count = sum(1 for t in updated_todos if t["status"] == "pending")
+        in_progress_count = sum(1 for t in updated_todos if t["status"] == "in_progress")
+        completed_count = sum(1 for t in updated_todos if t["status"] == "completed")
+
+        result = {"updated_todo": updated_todo, "counts": {"pending": pending_count, "in_progress": in_progress_count, "completed": completed_count, "total": len(updated_todos)}}
+
+        # If a todo was just completed, find and include the next pending todo
+        if is_completing:
+            next_todo = self._find_next_pending_todo(updated_todos, todo_index)
+            if next_todo:
+                result["next_todo"] = next_todo
+            else:
+                result["next_todo"] = None
+                result["message"] = "All todos completed! No more pending tasks."
+
+        return result
+
+    def _find_next_pending_todo(self, todos: List[Dict[str, Any]], completed_index: int) -> Dict[str, Any] | None:
+        """Find the next pending todo after the completed one in sequential order."""
+        # Look for the next pending todo starting from the position after the completed one
+        for i in range(completed_index + 1, len(todos)):
+            if todos[i]["status"] == "pending":
+                return todos[i]
+
+        # If no pending todo found after the completed one, look from the beginning
+        # This handles cases where todos might be reordered or the completed one wasn't the first in-progress
+        for i in range(completed_index):
+            if todos[i]["status"] == "pending":
+                return todos[i]
+
+        # No pending todos found
+        return None
+
+    def _validate_todos(self, todos: List[Dict[str, Any]]) -> None:
+        """Validate todos according to business rules."""
+        if not isinstance(todos, list):
+            raise ValueError("Todos must be a list")
+
+        # Check for required fields and collect IDs
+        required_fields = {"id", "content", "status"}
+        seen_ids = set()
+        in_progress_count = 0
+
+        for i, todo in enumerate(todos):
+            if not isinstance(todo, dict):
+                raise ValueError(f"Todo at index {i} must be a dictionary")
+
+            # Check required fields
+            missing_fields = required_fields - set(todo.keys())
+            if missing_fields:
+                raise ValueError(f"Todo at index {i} missing required fields: {missing_fields}")
+
+            # Validate ID uniqueness
+            todo_id = todo["id"]
+            if not isinstance(todo_id, str) or not todo_id.strip():
+                raise ValueError(f"Todo at index {i} must have a non-empty string ID")
+
+            if todo_id in seen_ids:
+                raise ValueError(f"Duplicate todo ID found: {todo_id}")
+            seen_ids.add(todo_id)
+
+            # Validate status
+            if todo["status"] not in ["pending", "in_progress", "completed"]:
+                raise ValueError(f"Todo at index {i} has invalid status: {todo['status']}")
+
+            if todo["status"] == "in_progress":
+                in_progress_count += 1
+
+            # Validate content
+            if not isinstance(todo["content"], str) or not todo["content"].strip():
+                raise ValueError(f"Todo at index {i} must have non-empty content")
+
+        # Business rule: only one task can be in progress
+        if in_progress_count > 1:
+            raise ValueError("Only one task can be in_progress at a time")
+
+    def _generate_id(self) -> str:
+        """Generate a unique ID for a todo."""
+        import uuid
+
+        return str(uuid.uuid4())[:8]
+
+
+# Global todo store instance
+todo_store = TodoStore()
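A rough usage sketch for the new TodoStore, not taken from the diff; it assumes `state.allowed_directory` and `state.config_dir` are already configured (normally done by the server or the `update_allowed_directory` tool), and the import path is an assumption.

```python
# Sketch only: exercises write_todos/update_todo as defined above.
from aidd.tools.todo_store import todo_store  # assumed import path

todo_store.write_todos([
    {"id": "task1", "content": "Set up database schema", "status": "in_progress"},
    {"id": "task2", "content": "Create API endpoints", "status": "pending"},
])

# Completing task1 returns updated counts and, because order drives execution,
# the next pending todo (task2) under the "next_todo" key.
result = todo_store.update_todo("task1", {"status": "completed"})
print(result["counts"])           # {'pending': 1, 'in_progress': 0, 'completed': 1, 'total': 2}
print(result["next_todo"]["id"])  # 'task2'
```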
src/aidd/tools/todo_tools.py
ADDED
@@ -0,0 +1,157 @@
+import json
+from .todo_store import todo_store
+from mcp.types import TextContent
+
+
+def todo_read_tool():
+    """Define the todo_read tool."""
+    return {
+        "name": "todo_read",
+        "description": "Read the current task list from the workspace-specific todo storage. "
+        "MANDATORY: This tool MUST be called FIRST before ANY workspace operations including file reads, code analysis, development tasks, project creation, or directory manipulation. "
+        "WORKSPACE-FIRST RULE: Every LLM interaction with workspaces, directories, projects, or repositories requires todo management. "
+        "WHEN TO USE: Always when starting workspace operations - check task status, review pending/completed work, understand project context, pick up previous progress. "
+        "Essential for: Project initialization, code development, file operations, debugging, testing, deployment, refactoring. "
+        "WHEN NOT TO USE: Pure informational queries unrelated to workspace development. "
+        "RETURNS: Sequential task list with status and metadata. Tasks execute in order - no priority system. "
+        "Each workspace maintains isolated todo lists for project organization. "
+        "CRITICAL: Bypassing this tool for workspace operations violates mandatory protocol. "
+        "EXAMPLES: \n- Creating new projects: Read todos to check existing project structure tasks\n- Code development: Check current development phase and next sequential steps\n- File modifications: Understand task context before making changes\n- Project analysis: Review completed analysis and next investigation steps",
+        "inputSchema": {
+            "type": "object",
+            "properties": {},
+            "required": [],
+        },
+    }
+
+
+def todo_write_tool():
+    """Define the todo_write tool."""
+    return {
+        "name": "todo_write",
+        "description": "Update the entire task list (complete replacement) for the current workspace. "
+        "MANDATORY: This tool MUST be called when planning, adding, or reorganizing tasks during ANY workspace operations. "
+        "WORKSPACE-FIRST RULE: All workspace development requires structured task management through sequential execution. "
+        "WHEN TO USE: Task planning for new projects, adding development phases, reorganizing workflow, batch status updates. "
+        "Sequential execution model: Tasks are completed in order, building upon previous work. No priority system - order determines execution. "
+        "Essential for: Project planning, development workflows, feature implementation, debugging sequences, deployment phases. "
+        "WHEN NOT TO USE: Single task updates (use todo_update), pure reading (use todo_read). "
+        "RETURNS: Success status and task count. Enforces sequential execution (only one in-progress task). "
+        "CRITICAL: Sequential task management is mandatory for all workspace development activities. "
+        "EXAMPLES: \n- New project setup: Create sequential tasks for initialization, structure, dependencies\n- Feature development: Plan design, implementation, testing, documentation phases\n- Bug fixing: Create investigation, fix, test, validation sequence\n- Code refactoring: Plan analysis, changes, testing, cleanup steps",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "todos": {
+                    "type": "array",
+                    "description": "Complete list of todo items to replace the current list for sequential execution. Each todo must contain id, content, and status fields. Tasks execute in array order.",
+                    "items": {
+                        "type": "object",
+                        "properties": {
+                            "id": {"type": "string", "description": "Unique identifier for the task. Must be unique across all todos."},
+                            "content": {"type": "string", "description": "Task description or content. Cannot be empty."},
+                            "status": {"type": "string", "enum": ["pending", "in_progress", "completed"], "description": "Current status of the task. Only one task can be 'in_progress' at a time."},
+                            "metadata": {"type": "object", "description": "Optional additional data for the task.", "additionalProperties": True},
+                        },
+                        "required": ["id", "content", "status"],
+                        "additionalProperties": True,
+                    },
+                }
+            },
+            "required": ["todos"],
+        },
+    }
+
+
+def todo_update_tool():
+    """Define the todo_update tool."""
+    return {
+        "name": "todo_update",
+        "description": "Update a specific todo item by ID for sequential workflow management. "
+        "MANDATORY: This tool MUST be called when progressing through tasks during workspace operations. "
+        "WORKSPACE-FIRST RULE: Task progress updates are required for all workspace development activities. "
+        "WHEN TO USE: Mark tasks in-progress when starting, completed when finished, update content for clarification. "
+        "Sequential workflow: Progress through tasks in order, maintaining single active task constraint. "
+        "Essential for: Task status transitions, progress tracking, workflow advancement, content updates. "
+        "WHEN NOT TO USE: Multiple task updates (use todo_write), adding new tasks (use todo_write). "
+        "RETURNS: Updated todo with status counts showing workflow progress. "
+        "Enforces sequential execution - only one task can be in-progress at any time. "
+        "CRITICAL: Sequential progress tracking is mandatory for workspace development workflows. "
+        "EXAMPLES: \n- Starting work: Update task from 'pending' to 'in_progress'\n- Completing work: Update task from 'in_progress' to 'completed'\n- Task refinement: Update content for better clarity\n- Workflow progression: Move to next sequential task",
+        "inputSchema": {
+            "type": "object",
+            "properties": {
+                "todo_id": {"type": "string", "description": "The unique ID of the todo to update."},
+                "updates": {
+                    "type": "object",
+                    "description": "Fields to update in the todo for sequential workflow. Can include content, status, or metadata.",
+                    "properties": {
+                        "content": {"type": "string", "description": "New task description or content."},
+                        "status": {"type": "string", "enum": ["pending", "in_progress", "completed"], "description": "New status of the task."},
+                        "metadata": {"type": "object", "description": "Additional data for the task.", "additionalProperties": True},
+                    },
+                    "additionalProperties": True,
+                },
+            },
+            "required": ["todo_id", "updates"],
+        },
+    }
+
+
+async def handle_todo_read(arguments: dict) -> list[TextContent]:
+    """Handle reading todos from storage."""
+    try:
+        todos = todo_store.read_todos()
+
+        result = {"todos": todos, "count": len(todos), "workspace": str(todo_store.workspace_path)}
+
+        return [TextContent(type="text", text=json.dumps(result, indent=2))]
+
+    except Exception as e:
+        error_result = {"error": {"code": "READ_ERROR", "message": f"Failed to read todos: {str(e)}"}}
+        return [TextContent(type="text", text=json.dumps(error_result, indent=2))]
+
+
+async def handle_todo_write(arguments: dict) -> list[TextContent]:
+    """Handle writing todos to storage."""
+    try:
+        todos = arguments.get("todos", [])
+
+        if not isinstance(todos, list):
+            raise ValueError("Todos must be provided as a list")
+
+        count = todo_store.write_todos(todos)
+
+        result = {"success": True, "count": count, "workspace": str(todo_store.workspace_path)}
+
+        return [TextContent(type="text", text=json.dumps(result, indent=2))]
+
+    except Exception as e:
+        error_result = {"error": {"code": "VALIDATION_ERROR" if "validation" in str(e).lower() or "invalid" in str(e).lower() or "duplicate" in str(e).lower() else "WRITE_ERROR", "message": str(e)}}
+        return [TextContent(type="text", text=json.dumps(error_result, indent=2))]
+
+
+async def handle_todo_update(arguments: dict) -> list[TextContent]:
+    """Handle updating a specific todo."""
+    try:
+        todo_id = arguments.get("todo_id")
+        updates = arguments.get("updates", {})
+
+        if not todo_id:
+            raise ValueError("todo_id is required")
+
+        if not isinstance(updates, dict):
+            raise ValueError("Updates must be provided as a dictionary")
+
+        if not updates:
+            raise ValueError("Updates cannot be empty")
+
+        result = todo_store.update_todo(todo_id, updates)
+        result["success"] = True
+        result["workspace"] = str(todo_store.workspace_path)
+
+        return [TextContent(type="text", text=json.dumps(result, indent=2))]
+
+    except Exception as e:
+        error_result = {"error": {"code": "VALIDATION_ERROR" if "validation" in str(e).lower() or "invalid" in str(e).lower() or "not found" in str(e).lower() else "UPDATE_ERROR", "message": str(e)}}
+        return [TextContent(type="text", text=json.dumps(error_result, indent=2))]
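A minimal end-to-end sketch of the new handlers, not part of the diff; the import path is an assumption, and the task contents are invented.

```python
# Sketch only: drives the async handlers defined in todo_tools.py directly.
import asyncio

from aidd.tools.todo_tools import (  # assumed import path
    handle_todo_read,
    handle_todo_update,
    handle_todo_write,
)


async def demo():
    await handle_todo_write({"todos": [
        {"id": "task1", "content": "Set up database schema", "status": "pending"},
    ]})
    await handle_todo_update({"todo_id": "task1", "updates": {"status": "in_progress"}})
    contents = await handle_todo_read({})
    print(contents[0].text)  # JSON with "todos", "count" and "workspace"


asyncio.run(demo())
```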
{skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/WHEEL
File without changes
{skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/entry_points.txt
File without changes
{skydeckai_code-0.1.39.dist-info → skydeckai_code-0.1.41.dist-info}/licenses/LICENSE
File without changes