ctroy-code-analysis 0.1.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- ctroy_code_analysis-0.1.0/PKG-INFO +6 -0
- ctroy_code_analysis-0.1.0/ai-coding-tools/context/2026-02-10-request-interrupted-by-user-for-tool.md +203 -0
- ctroy_code_analysis-0.1.0/pyproject.toml +18 -0
- ctroy_code_analysis-0.1.0/src/ctroy_code_analysis/__init__.py +0 -0
- ctroy_code_analysis-0.1.0/src/ctroy_code_analysis/server.py +327 -0
ctroy_code_analysis-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,6 @@
+Metadata-Version: 2.4
+Name: ctroy-code-analysis
+Version: 0.1.0
+Summary: MCP server exposing code review tools: comment review, name review, cohesion review, performance review, test coverage enforcement, and gridmat diagrams.
+Requires-Python: >=3.10
+Requires-Dist: mcp[cli]
ctroy_code_analysis-0.1.0/ai-coding-tools/context/2026-02-10-request-interrupted-by-user-for-tool.md
ADDED
@@ -0,0 +1,203 @@
+# Session Summary — 2026-02-10 15:02
+Session ID: `9a24fbac-96bd-42a7-beb2-2fc295a7c22c`
+
+## User
+[Request interrupted by user for tool use]
+
+## User
+Implement the following plan:
+
+# MCP Server for Shared Code Reviewers
+
+## Context
+
+You have 5 code review agent patterns (comment-reviewer, name-reviewer, cohesion-reviewer, speed-performance-reviewer, test-coverage-enforcer) plus a gridmat diagram tool that you want other developers to be able to use. Currently these exist as Claude Code built-in agents and slash commands local to your project. Building an MCP server packages them as a standalone, distributable service that any Claude Code (or
+[...truncated]
+
+## Assistant
+I'll implement this step by step. Let me start by creating the repository structure and all the files.
+
+## Assistant
+Now let me check the plan transcript to get the exact review instruction text for each tool, since those were likely discussed during planning.
+
+## Assistant
+Now I have all the detailed review instructions. Let me build the server.
+
+## Assistant
+I have a Python name collision issue — the `@mcp.tool()` and `@mcp.prompt()` decorators use the same function names. Let me fix that by giving prompt functions distinct Python names while keeping their MCP names clean.
+
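The collision the session describes can be reproduced with a stripped-down registry. This is a sketch: `tool` and `prompt` below are stand-in decorators, not the SDK's, but like FastMCP they key registrations on `fn.__name__`:

```python
# Hypothetical mini-registries standing in for @mcp.tool() / @mcp.prompt().
tools, prompts = {}, {}

def tool(fn):
    tools[fn.__name__] = fn  # registered name derives from the Python name
    return fn

def prompt(fn):
    prompts[fn.__name__] = fn
    return fn

@tool
def review_comments(filepath: str) -> str:
    return f"tool output for {filepath}"

# Reusing `review_comments` as the prompt's Python name would rebind the
# module attribute, leaving the tool function unreachable by that name.
# A distinct Python name sidesteps the collision:
@prompt
def prompt_review_comments(filepath: str) -> str:
    return f"prompt template for {filepath}"

assert "review_comments" in tools
assert "prompt_review_comments" in prompts
```

The same trick is what the session settles on: one Python identifier per registration, even though both registries would otherwise happily accept the same string key.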
+## Assistant
+Good, I can see the latest MCP SDK uses `MCPServer` from `mcp.server.mcpserver`, not `FastMCP`. Let me also check if there's a `name` parameter to handle the tool/prompt naming.
+
+## Assistant
+The docs don't show a `name` parameter on the decorators — tool/prompt names come from the function name. I'll use a `_prompt` suffix for prompt functions to avoid Python name collisions. Let me rewrite `server.py` with the correct `MCPServer` import.
+
+## Assistant
+Now let me rewrite with the correct import and fix the name collisions.
+
+## Assistant
+Now let me initialize git and install the package to test it.
+
+## Assistant
+Package installed. Now let me test that the server starts without import errors.
+
+## Assistant
+The installed version (1.26.0) doesn't have `MCPServer` yet — that's likely in a newer release. Let me check what's available.
+
+## Assistant
+`FastMCP` is the correct class for the installed version. Let me fix the import.
+
+## Assistant
+Import works. Now let me verify the tools and prompts are registered, and test that the server starts on stdio.
+
+## Assistant
+All 6 tools and 6 prompts are registered. Let me test that one of the tools actually works by pointing it at a real file.
+
+## Assistant
+Tools work correctly. Now let me test that the server actually starts on stdio transport. I'll send an MCP initialize request to it.
+
+## Assistant
+The server responded with a valid `initialize` result — I can see `protocolVersion` and `capabilities` with `prompts`, `resources`, and `tools` in the response. The errors are just from my test sending HTTP-style headers that the raw stdio transport doesn't expect. The server works.
+
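The handshake above can be reproduced by hand: the SDK's stdio transport exchanges newline-delimited JSON-RPC messages, so a minimal `initialize` request can be built with the standard library. A sketch (the protocol version string and client fields are illustrative values following the MCP spec, not output captured from this session):

```python
import json

# A minimal JSON-RPC `initialize` request as an MCP client would send it
# over stdio: one JSON object per line, newline-delimited, no HTTP headers.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "handshake-test", "version": "0.1.0"},
    },
}
line = json.dumps(request) + "\n"

# e.g. pipe it to the server:  printf '%s' "$line" | ctroy-code-analysis
# A healthy server replies with a result carrying `protocolVersion` and a
# `capabilities` object listing `tools`, `prompts`, and `resources`.
parsed = json.loads(line)
assert parsed["method"] == "initialize"
assert parsed["params"]["protocolVersion"] == "2024-11-05"
```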
+Let me verify the console script entry point also works.
+
+## Assistant
+Now let me verify the final file is clean and complete.
+
+## Assistant
+Everything looks good. Let me verify the complete repository structure and confirm the final state.
+
+## Assistant
+Repository structure is exactly as planned. Here's a summary of what was built:
+
+## Created: `/Users/chelseatroy/teaching/ctroy-code-analysis/`
+
+**Files:**
+- `pyproject.toml` — Package config with `mcp[cli]` dependency, `ctroy-code-analysis` console script entry point
+- `src/ctroy_code_analysis/__init__.py` — Empty package init
+- `src/ctroy_code_analysis/server.py` — All 6 tools + 6 prompts
+
+**6 Tools** (read file/directory, return content + review instructions):
+
+| Tool | Input | Purpose |
+|------|-------|---------|
+| `review_comments` | `filepath` | Identifies superfluous and inaccurate comments |
+| `review_names` | `filepath` | Identifies unclear, mismatched, shadowed names |
+| `review_cohesion` | `filepath` | Identifies scattered logic that should be colocated |
+| `review_performance` | `filepath` | Identifies inefficient algorithms and unnecessary work |
+| `review_test_coverage` | `filepath` | Identifies untested code paths |
+| `draw_gridmat` | `directory` | Lists structure for ASCII execution-path diagram |
+
+**6 Prompts** (same names with `prompt_` prefix — return instruction template only, no file reading):
+`prompt_review_comments`, `prompt_review_names`, `prompt_review_cohesion`, `prompt_review_performance`, `prompt_review_test_coverage`, `prompt_draw_gridmat`
+
+**Verified:**
+- Package installs cleanly with `pip install -e .`
+- All 12 registrations (6 tools + 6 prompts) confirmed
+- Tools correctly read files and return content + instructions
+- Server starts on stdio and responds to MCP `initialize` handshake
+- `ctroy-code-analysis` console script is on PATH
+
+**To connect from another Claude Code project**, add to `.mcp.json`:
+
+```json
+{
+  "mcpServers": {
+    "ctroy-code-analysis": {
+      "command": "ctroy-code-analysis"
+    }
+  }
+}
+```
+
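For readers skimming the summary: each review tool returns one string consisting of a `File:` header, the file contents in a backtick fence, then the instruction template. The shape can be sketched with the standard library (`INSTRUCTIONS` here is a placeholder, not the real template text):

```python
import tempfile
from pathlib import Path

INSTRUCTIONS = "Review the code for comment quality."  # placeholder template

def review_comments(filepath: str) -> str:
    # Same shape as the server's tools: path header, fenced content, instructions.
    content = Path(filepath).read_text()
    return f"File: {filepath}\n```\n{content}\n```\n\n{INSTRUCTIONS}"

# Exercise it against a throwaway file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1  # assign one to x\n")
    path = f.name

result = review_comments(path)
assert result.startswith(f"File: {path}")
assert "x = 1" in result
assert result.endswith(INSTRUCTIONS)
Path(path).unlink()
```

The tools do no analysis themselves; the returned string is a prompt for the calling LLM, which is why the instruction template travels with the file contents.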
+## User
+Nice! Okay, so now how do I get this on the pip server so people can install it from there?
+
+## Assistant
+Here's the process:
+
+## 1. Create a PyPI account
+- Go to https://pypi.org/account/register/
+- Enable 2FA (required for publishing)
+
+## 2. Create an API token
+- Go to https://pypi.org/manage/account/token/
+- Create a token scoped to "Entire account" (for the first upload; you can scope it to the project after)
+
+## 3. Install the build/upload tools
+
+```bash
+pip install build twine
+```
+
+## 4. Build the distribution
+
+```bash
+cd /Users/chelseatroy/teaching/ctroy-code-analysis
+python -m build
+```
+
+This creates `dist/ctroy_code_analysis-0.1.0.tar.gz` and `dist/ctroy_code_analysis-0.1.0-py3-none-any.whl`.
+
+## 5. (Optional) Test on TestPyPI first
+
+```bash
+twine upload --repository testpypi dist/*
+```
+
+Then verify with `pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ ctroy-code-analysis`. The `--extra-index-url` matters: dependencies like `mcp` aren't mirrored on TestPyPI, so pip needs the real index to resolve them.
+
+## 6. Upload to real PyPI
+
+```bash
+twine upload dist/*
+```
+
+It will prompt for a username (`__token__`) and password (your API token, including the `pypi-` prefix).
+
+## 7. After first upload
+
+You can configure `~/.pypirc` to avoid re-entering credentials:
+
+```ini
+[pypi]
+username = __token__
+password = pypi-YOUR_TOKEN_HERE
+```
+
+---
+
+**One thing to check first:** the name `ctroy-code-analysis` needs to be available on PyPI. You can search https://pypi.org/search/?q=ctroy-code-analysis to verify. Given the `ctroy` prefix, it's almost certainly unclaimed.
+
+After that, anyone can run:
+
+```bash
+pip install ctroy-code-analysis
+```
+
+Want me to run the `python -m build` step now so the distributions are ready to upload?
+
+## User
+Please switch to the mcp directory, make a venv, install twine, and give me the filepath where I should put the API token.
+
+## Assistant
+All set. Put your API token in:
+
+```
+~/.pypirc
+```
+
+That's `/Users/chelseatroy/.pypirc`. Create it with this format:
+
+```ini
+[pypi]
+username = __token__
+password = pypi-YOUR_TOKEN_HERE
+```
+
+The username is literally `__token__` — that's not a placeholder. The password is the full token string starting with `pypi-` that you get from https://pypi.org/manage/account/token/. Since this file holds a live credential, restrict it with `chmod 600 ~/.pypirc`.
+
+Once that file exists, you can publish with:
+
+```bash
+cd /Users/chelseatroy/teaching/ctroy-code-analysis
+source .venv/bin/activate
+python -m build
+twine upload dist/*
+```
ctroy_code_analysis-0.1.0/pyproject.toml
ADDED
@@ -0,0 +1,18 @@
+[project]
+name = "ctroy-code-analysis"
+version = "0.1.0"
+description = "MCP server exposing code review tools: comment review, name review, cohesion review, performance review, test coverage enforcement, and gridmat diagrams."
+requires-python = ">=3.10"
+dependencies = [
+    "mcp[cli]",
+]
+
+[project.scripts]
+ctroy-code-analysis = "ctroy_code_analysis.server:main"
+
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[tool.hatch.build.targets.wheel]
+packages = ["src/ctroy_code_analysis"]
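The `[project.scripts]` entry uses the standard `module:attribute` convention: installing the package generates a `ctroy-code-analysis` executable that imports `ctroy_code_analysis.server` and calls `main()`. A sketch of what the generated wrapper effectively does (`run` is a hypothetical name; real installers emit their own wrapper code):

```python
from importlib import import_module

# Entry-point value from [project.scripts]: "importable.module:callable".
spec = "ctroy_code_analysis.server:main"
module_name, _, attr = spec.partition(":")

def run():
    # Equivalent of the installed console script: import the module,
    # look up the attribute, call it (which starts the stdio server).
    module = import_module(module_name)
    return getattr(module, attr)()

assert module_name == "ctroy_code_analysis.server"
assert attr == "main"
```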
ctroy_code_analysis-0.1.0/src/ctroy_code_analysis/__init__.py
File without changes
ctroy_code_analysis-0.1.0/src/ctroy_code_analysis/server.py
ADDED
@@ -0,0 +1,327 @@
+"""MCP server exposing code review tools and prompts."""
+
+import os
+from pathlib import Path
+
+from mcp.server.fastmcp import FastMCP
+
+mcp = FastMCP("ctroy-code-analysis")
+
+# ---------------------------------------------------------------------------
+# Review instruction templates
+# ---------------------------------------------------------------------------
+
+COMMENT_REVIEW_INSTRUCTIONS = """\
+Review the code for comment quality. Identify two categories of problems:
+
+1. **Superfluous comments** — the code beneath clearly states what the \
+comment says.
+2. **Inaccurate comments** — the comment does NOT match what the code \
+actually does.
+
+List problems in descending order of severity (inaccurate before superfluous).
+Consider historical/legal context and platform quirks before flagging.
+Do NOT make any changes — suggestions only.
+
+For each problematic comment, output:
+
+### Comment: `"the problematic comment text"`
+**Location:** [File and line number]
+**Problem:** One sentence explaining why this comment is problematic.
+**Recommendation:** Remove or Replace
+**Suggested alternatives** (if Replace):
+- `// alternative comment 1`
+- `// alternative comment 2`
+"""
+
+NAME_REVIEW_INSTRUCTIONS = """\
+Review the code for naming quality in classes, functions, variables, \
+parameters, and constants. Identify these categories of problems:
+
+1. **Unclear names** — don't provide a clear label for what they represent.
+2. **Inaccurate names** — don't match what the code actually does.
+3. **Shadowed variables** — local names that shadow outer-scope names.
+4. **Overloaded temporaries** — generic names like `tmp`, `i`, `x`, `val` \
+reused across unrelated blocks.
+5. **Low-confidence blocks** — names that reduce confidence in understanding \
+code functionality.
+
+List in descending order of severity. Prioritize misleading names over vague \
+ones. Respect codebase and language conventions. Do NOT perform renames — \
+suggestions only.
+
+For each problematic name, output:
+
+### [Name]: `the_problematic_name`
+**Location:** [File and line number]
+**Problem:** One sentence explaining why this name is problematic.
+**Suggestions:**
+- `alternative_name_1`
+- `alternative_name_2`
+- `alternative_name_3`
+"""
+
+COHESION_REVIEW_INSTRUCTIONS = """\
+Review the code for cohesion issues — when related pieces of information, \
+logic, or configuration are scattered across the codebase instead of \
+colocated. A developer should find everything related to a single concept \
+in one place or clearly related nearby files.
+
+Signs of poor cohesion to look for:
+1. Distant Configuration — settings far from usage
+2. Split Validation — validation rules scattered across files
+3. Fragmented Type Definitions — related types in separate locations
+4. Separated Tests — tests organized by type rather than colocated with code
+5. Dispersed Error Handling — error types/messages spread across codebase
+6. Remote Utilities — helper functions in generic utility folders
+7. Disconnected Documentation — docs far from code
+8. Scattered State Management — state/reducers/actions in different trees
+9. Split Domain Logic — business rules across multiple modules
+10. Orphaned Dependencies — imports from many distant locations
+
+Review process:
+1. Identify Core Concepts
+2. Trace Dependencies (config, types, validation, tests, utilities, docs)
+3. Measure Scatter (how many locations must a developer visit?)
+4. Assess Change Impact (how many files need modification for one change?)
+5. Propose Colocation (specific reorganization recommendations)
+
+For each issue, output:
+
+### Issue: [Brief Description]
+**Scattered Elements:**
+- List the files/locations where related pieces are spread
+
+**Why This Matters:**
+- Explain cognitive load or maintenance burden
+
+**Suggested Colocation:**
+- Specific recommendation for reorganization
+- Include proposed file/folder structure
+
+**Example Change:**
+- Brief before/after
+
+Prioritize scattering that genuinely hurts understandability. Respect \
+existing architectural boundaries. Suggest incremental improvements, not \
+wholesale rewrites.
+"""
+
+PERFORMANCE_REVIEW_INSTRUCTIONS = """\
+Review the code for performance optimization opportunities. Analyze across \
+five categories:
+
+1. **Algorithmic Complexity** — identify Big O complexity, nested loops that \
+could be flattened, O(n²) or worse patterns, memoization opportunities, \
+unnecessary sorting/searching.
+
+2. **Data Structure Efficiency** — evaluate data structure choices for access \
+patterns, find O(1) lookup opportunities (sets/dicts vs lists), identify \
+generator opportunities, check for unnecessary copying/conversion.
+
+3. **Unnecessary Work** — find computations that could be hoisted outside \
+loops, redundant operations, early-exit opportunities, over-fetching or \
+over-processing.
+
+4. **I/O and External Operations** — identify N+1 query patterns, batching \
+opportunities, missing caching, sync operations that could be parallelized.
+
+5. **Language-Specific Optimizations** — recommend faster built-in functions, \
+identify anti-patterns, suggest more efficient idioms.
+
+Report genuine optimization opportunities, not micro-optimizations. Prioritize \
+by expected impact (high/medium/low). Be quantitative where possible.
+
+For each issue, output:
+
+### Issue: [Brief description]
+**Location:** [File and line numbers or code snippet]
+**Current Complexity:** [Time/space complexity if relevant]
+**Problem:** [Clear explanation of inefficiency]
+**Recommendation:** [Specific code changes]
+**Expected Improvement:** [Estimated performance gain]
+**Tradeoffs:** [Memory, CPU, code complexity, or other impacts]
+"""
+
+TEST_COVERAGE_INSTRUCTIONS = """\
+Review the code and identify untested code paths. Check for:
+
+- Happy paths (normal/expected inputs and outputs)
+- Branch coverage (every if/elif/else, case in match)
+- Edge cases (empty, None, boundary values, min/max)
+- Error paths (exception handling, invalid inputs)
+- Boundary conditions (off-by-one, empty collections)
+- Integration points
+- Return values
+- State changes and side effects
+- Concurrency concerns
+
+For each gap found, suggest a test. Use descriptive method names like \
+`test_validate_email_rejects_missing_at_sign`. Include both positive and \
+negative cases.
+
+Output:
+
+### Coverage Analysis
+| Code Path / Construct | File:Line | Existing Test? | Gap Description |
+|---|---|---|---|
+| ... | ... | ... | ... |
+
+### Suggested Tests
+For each missing test, provide:
+- The test code
+- What code path it covers
+- Why this coverage was missing
+"""
+
+GRIDMAT_INSTRUCTIONS = """\
+Using the directory listing, draw an ASCII execution-path diagram:
+
+1. Choose up to 5 example entry points from this codebase.
+2. Pick an emoji with a distinctive color to represent each entry point.
+3. Place the entry points at the top of the diagram.
+4. Trace execution paths downward into implementation files.
+5. Include fully qualified names and ~20 word descriptions of what happens \
+at each level.
+6. Use box-drawing characters to connect the paths.
+
+The result should give a reader a quick mental model of how the major \
+code paths flow through this codebase.
+"""
+
+
+# ---------------------------------------------------------------------------
+# Tools — accept a filepath (or directory), read content, return it with
+# review instructions for the LLM to act on.
+# ---------------------------------------------------------------------------
+
+@mcp.tool()
+def review_comments(filepath: str) -> str:
+    """Read a source file and return its contents with instructions to identify superfluous and misleading comments."""
+    content = Path(filepath).read_text()
+    return f"File: {filepath}\n```\n{content}\n```\n\n{COMMENT_REVIEW_INSTRUCTIONS}"
+
+
+@mcp.tool()
+def review_names(filepath: str) -> str:
+    """Read a source file and return its contents with instructions to identify unclear, mismatched, or shadowed names."""
+    content = Path(filepath).read_text()
+    return f"File: {filepath}\n```\n{content}\n```\n\n{NAME_REVIEW_INSTRUCTIONS}"
+
+
+@mcp.tool()
+def review_cohesion(filepath: str) -> str:
+    """Read a source file and return its contents with instructions to identify scattered related logic that should be colocated."""
+    content = Path(filepath).read_text()
+    return f"File: {filepath}\n```\n{content}\n```\n\n{COHESION_REVIEW_INSTRUCTIONS}"
+
+
+@mcp.tool()
+def review_performance(filepath: str) -> str:
+    """Read a source file and return its contents with instructions to identify unnecessary computation and inefficient algorithms."""
+    content = Path(filepath).read_text()
+    return f"File: {filepath}\n```\n{content}\n```\n\n{PERFORMANCE_REVIEW_INSTRUCTIONS}"
+
+
+@mcp.tool()
+def review_test_coverage(filepath: str) -> str:
+    """Read a source file and return its contents with instructions to identify untested code paths and suggest missing tests."""
+    content = Path(filepath).read_text()
+    return f"File: {filepath}\n```\n{content}\n```\n\n{TEST_COVERAGE_INSTRUCTIONS}"
+
+
+@mcp.tool()
+def draw_gridmat(directory: str) -> str:
+    """List a directory's file structure and return it with instructions to draw an ASCII execution-path diagram with emoji-coded entry points."""
+    lines = []
+    root = Path(directory)
+    for dirpath, dirnames, filenames in os.walk(root):
+        dirnames[:] = sorted(
+            d for d in dirnames
+            if not d.startswith(".") and d != "__pycache__"
+        )
+        depth = Path(dirpath).relative_to(root).parts
+        indent = " " * len(depth)
+        lines.append(f"{indent}{Path(dirpath).name}/")
+        for fname in sorted(filenames):
+            if not fname.startswith("."):
+                lines.append(f"{indent} {fname}")
+
+    listing = "\n".join(lines)
+    return (
+        f"Directory structure of `{directory}`:\n"
+        f"```\n{listing}\n```\n\n"
+        f"{GRIDMAT_INSTRUCTIONS}"
+    )
+
+
+# ---------------------------------------------------------------------------
+# Prompts — return just the review instruction template referencing the
+# filepath, so the LLM applies it to code already in context.
+# ---------------------------------------------------------------------------
+
+@mcp.prompt()
+def prompt_review_comments(filepath: str) -> str:
+    """Prompt template for reviewing comments in a source file."""
+    return (
+        f"Please review the file at `{filepath}` for comment quality.\n\n"
+        f"{COMMENT_REVIEW_INSTRUCTIONS}"
+    )
+
+
+@mcp.prompt()
+def prompt_review_names(filepath: str) -> str:
+    """Prompt template for reviewing names in a source file."""
+    return (
+        f"Please review the file at `{filepath}` for naming quality.\n\n"
+        f"{NAME_REVIEW_INSTRUCTIONS}"
+    )
+
+
+@mcp.prompt()
+def prompt_review_cohesion(filepath: str) -> str:
+    """Prompt template for reviewing cohesion in a source file."""
+    return (
+        f"Please review the file at `{filepath}` for cohesion issues.\n\n"
+        f"{COHESION_REVIEW_INSTRUCTIONS}"
+    )
+
+
+@mcp.prompt()
+def prompt_review_performance(filepath: str) -> str:
+    """Prompt template for reviewing performance in a source file."""
+    return (
+        f"Please review the file at `{filepath}` for performance issues.\n\n"
+        f"{PERFORMANCE_REVIEW_INSTRUCTIONS}"
+    )
+
+
+@mcp.prompt()
+def prompt_review_test_coverage(filepath: str) -> str:
+    """Prompt template for reviewing test coverage of a source file."""
+    return (
+        f"Please review the file at `{filepath}` for test coverage gaps.\n\n"
+        f"{TEST_COVERAGE_INSTRUCTIONS}"
+    )
+
+
+@mcp.prompt()
+def prompt_draw_gridmat(directory: str) -> str:
+    """Prompt template for drawing a gridmat diagram of a directory."""
+    return (
+        f"Please draw an execution-path diagram for the codebase at "
+        f"`{directory}`.\n\n"
+        f"{GRIDMAT_INSTRUCTIONS}"
+    )
+
+
+# ---------------------------------------------------------------------------
+# Entry point
+# ---------------------------------------------------------------------------
+
+def main():
+    mcp.run(transport="stdio")
+
+
+if __name__ == "__main__":
+    main()
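The directory walk inside `draw_gridmat` is plain standard library and can be exercised on its own. A sketch that mirrors it (`list_tree` is a hypothetical standalone name), run against a throwaway tree:

```python
import os
import tempfile
from pathlib import Path

def list_tree(directory: str) -> str:
    """Mirror of draw_gridmat's walk: prune dot-dirs and __pycache__,
    skip dotfiles, indent one space per directory level."""
    root = Path(directory)
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place prunes os.walk's descent.
        dirnames[:] = sorted(
            d for d in dirnames
            if not d.startswith(".") and d != "__pycache__"
        )
        depth = Path(dirpath).relative_to(root).parts
        indent = " " * len(depth)
        lines.append(f"{indent}{Path(dirpath).name}/")
        for fname in sorted(filenames):
            if not fname.startswith("."):
                lines.append(f"{indent} {fname}")
    return "\n".join(lines)

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "src" / "pkg"
    pkg.mkdir(parents=True)
    (pkg / "server.py").write_text("")
    (pkg / "__pycache__").mkdir()           # should be pruned
    (Path(tmp) / ".hidden").write_text("")  # should be skipped
    listing = list_tree(tmp)

assert "server.py" in listing
assert "__pycache__" not in listing
assert ".hidden" not in listing
```

The `dirnames[:] = ...` assignment is the load-bearing detail: replacing the list's contents in place (rather than rebinding the name) is what tells `os.walk` not to descend into the filtered directories.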