@igor-olikh/openspec-mcp-server 1.0.4 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -116,3 +116,14 @@ If you want to modify this server's code:
  1. `npm install` (Installs dependencies)
  2. `npm run build` (Compiles the code)
  3. `npm run start` (Runs the server to test standard input/output)
+
+ ---
+
+ ## Upcoming Features (Planned via OpenSpec)
+ This server is actively evolving! The following features are currently being designed using OpenSpec and will be implemented soon:
+
+ - ⏳ **Built-in MCP Prompts**: A ready-made "cheat sheet" standard prompt (e.g., `openspec_kickoff`) that automatically injects a comprehensive set of system instructions into your AI, steering it to behave consistently when generating structured features.
+ - ⏳ **Structured JSON Outputs**: Replacing raw, colorful terminal output with parsed JSON objects, preventing the AI from misreading states and reducing hallucinations.
+ - ⏳ **Direct File Readers**: Highly targeted tools (e.g., `openspec_read_active_proposal`) that let the AI skip hunting through your directory structure to read active designs.
+ - ⏳ **Smart Error Handling**: Coaching the LLM when OpenSpec validations fail (e.g., intercepting terminal errors to output: *"Hey, you forgot the 'Tasks' header in design.md"*).
+
@@ -0,0 +1,2 @@
+ schema: spec-driven
+ created: 2026-04-02
@@ -0,0 +1,8 @@
+ # Design: Direct File Readers
+
+ ## Architecture
+ - We will resolve the active change path dynamically inside Node.js and use the standard filesystem (`fs`) module to read it in full.
+
+ ## Technical Details
+ - Read `.openspec` internal pointers or execute a silent status check to determine which feature is active.
+ - Target `openspec/changes/{feature_name}/proposal|design.md`.
@@ -0,0 +1,12 @@
+ # Proposal: Direct File Reading Tools
+
+ ## What we are going to do
+ We are going to add specific, highly targeted tools to the MCP server, such as `openspec_read_active_proposal` and `openspec_read_active_design`.
+
+ ## Why we need this
+ Currently, if the AI wants to read the active proposal, it has to first figure out what the active change is, guess the directory structure (`openspec/changes/feature/proposal.md`), and use a generic system file reader to fetch it. This wastes tokens, takes multiple chat turns, and invites errors. By giving the AI a direct tool, it gets exactly the context it needs in a single action.
+
+ ## How we will do it
+ 1. Intercept the standard `openspec status` output to dynamically determine which feature is currently marked as "active".
+ 2. Add a new tool called `openspec_read_active_spec` in `src/tools.ts`.
+ 3. When the AI calls this tool, use Node's `fs.readFileSync` to grab the content of the `proposal.md` or `design.md` for that active feature and return the raw text to the AI.
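A minimal sketch of what such a reader could look like, assuming the active change name has already been resolved (e.g. via `openspec status`); the helper names here mirror the proposal's directory layout but are illustrative, not the package's actual API:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical helper: build the path to the active change's proposal or
// design file, following the layout openspec/changes/{feature}/{doc}.md.
function activeDocPath(
  root: string,
  feature: string,
  doc: "proposal" | "design"
): string {
  return path.join(root, "openspec", "changes", feature, `${doc}.md`);
}

// Read the document and return its raw text for the MCP response.
function readActiveDoc(
  root: string,
  feature: string,
  doc: "proposal" | "design"
): string {
  return fs.readFileSync(activeDocPath(root, feature, doc), "utf8");
}
```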
@@ -0,0 +1,5 @@
+ # Tasks
+
+ - [ ] Add `openspec_read_active_proposal` to the tool configuration array.
+ - [ ] Add the `openspec_read_active_design` configuration.
+ - [ ] Write the targeted `fs.readFileSync` wrapper in `src/tools.ts`.
@@ -0,0 +1,2 @@
+ schema: spec-driven
+ created: 2026-04-02
@@ -0,0 +1,8 @@
+ # Design: Structured JSON Outputs
+
+ ## Architecture
+ - The MCP server needs to receive clean data so it doesn't simply pipe ANSI-colored terminal strings to the AI.
+
+ ## Technical Details
+ - Modify the `execAsync` execution calls.
+ - If the OpenSpec CLI supports JSON natively, inject the CLI flag. Otherwise, sanitize stdout to extract structured objects.
@@ -0,0 +1,12 @@
+ # Proposal: Structured JSON Outputs
+
+ ## What we are going to do
+ We are going to change our tool wrappers (`openspec_list`, `openspec_status`, etc.) to request structured JSON output from the underlying OpenSpec CLI rather than taking the raw colorful terminal text.
+
+ ## Why we need this
+ When the AI (like Codex) asks for the status of a project, handing it raw terminal text (which often includes ANSI color codes or irregular spacing) makes it harder for the AI to understand. AI models reason far more reliably when they are fed clean, predictable JSON objects representing the exact status of files and tasks. This minimizes hallucinations and mistakes.
+
+ ## How we will do it
+ 1. Modify `src/tools.ts` to append a `--json` or `--format=json` flag when calling `execAsync` for the Fission-AI OpenSpec package.
+ 2. Parse that JSON output natively in our server (`JSON.parse(stdout)`).
+ 3. Format and return it cleanly in the MCP protocol response. If OpenSpec doesn't yet support `--json`, we will write a small parser to clean the terminal text into a structured response.
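As a sketch of the fallback path in step 3, the server could strip ANSI escape sequences before attempting `JSON.parse`; the regex and the fallback shape below are assumptions, since the actual OpenSpec CLI output format (and whether it accepts a `--json` flag) isn't confirmed here:

```typescript
// Matches ANSI SGR color/style sequences like "\x1b[32m".
const ANSI_PATTERN = /\x1b\[[0-9;]*m/g;

// Prefer native JSON (e.g. from an assumed --json flag); otherwise fall
// back to the color-stripped plain text so no information is lost.
function parseCliOutput(stdout: string): unknown {
  const cleaned = stdout.replace(ANSI_PATTERN, "").trim();
  try {
    return JSON.parse(cleaned);
  } catch {
    return { raw: cleaned };
  }
}
```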
@@ -0,0 +1,5 @@
+ # Tasks
+
+ - [ ] Inspect the underlying OpenSpec CLI for JSON output flags.
+ - [ ] Update the tool handlers in `src/tools.ts`.
+ - [ ] Update the MCP response format to return the structured JSON payload properly.
@@ -0,0 +1,2 @@
+ schema: spec-driven
+ created: 2026-04-02
@@ -0,0 +1,8 @@
+ # Design: Built-in MCP Prompts
+
+ ## Architecture
+ - Use `@modelcontextprotocol/sdk` to hook into `server.setRequestHandler(ListPromptsRequestSchema, ...)` and `server.setRequestHandler(GetPromptRequestSchema, ...)`.
+ - Define an `openspec_kickoff` prompt containing the rules of engagement.
+
+ ## Technical Details
+ - The prompt text must be robust enough to instruct the LLM on exactly which tools to call first (e.g., `openspec_init` if the project is missing, then `openspec_list`).
@@ -0,0 +1,13 @@
+ # Proposal: Built-in MCP Prompts
+
+ ## What we are going to do
+ We want to add a feature called "MCP Prompts" directly to the server. This gives the AI a ready-made "cheat sheet" prompt (like `openspec_kickoff`) that provides a comprehensive set of built-in rules on how it should behave when using OpenSpec.
+
+ ## Why we need this
+ Right now, every time you chat with Codex or Claude, you have to manually tell it: "Please use OpenSpec, remember to create a proposal first, don't write code until the design is done."
+ If you forget to say that, the AI may stray from the workflow. With MCP Prompts, all the necessary system instructions and guardrails are injected automatically when you click the prompt.
+
+ ## How we will do it
+ 1. Use the `@modelcontextprotocol/sdk` to define `ListPromptsRequest` and `GetPromptRequest` handlers in our `src/server.ts`.
+ 2. Write a robust instructional prompt text (the "cheat sheet").
+ 3. Expose it so that Codex users can see it as a clickable Quick Action in their chat UI.
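The handler payloads could be sketched as plain functions (the wiring through `server.setRequestHandler(...)` from the SDK is omitted); `openspec_kickoff` and the short instruction string here are placeholders for the real "cheat sheet", not the published prompt:

```typescript
// Illustrative kickoff text; the real prompt would be far more detailed.
const KICKOFF_TEXT =
  "Use OpenSpec: create a proposal first, finish the design, then write code.";

// Shape of the response to a ListPromptsRequest.
function listPrompts() {
  return {
    prompts: [
      { name: "openspec_kickoff", description: "OpenSpec workflow rules" },
    ],
  };
}

// Shape of the response to a GetPromptRequest for a given prompt name.
function getPrompt(name: string) {
  if (name !== "openspec_kickoff") {
    throw new Error(`Unknown prompt: ${name}`);
  }
  return {
    messages: [
      { role: "user", content: { type: "text", text: KICKOFF_TEXT } },
    ],
  };
}
```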
@@ -0,0 +1,5 @@
+ # Tasks
+
+ - [ ] Implement the `ListPromptsRequestSchema` handler in `src/server.ts`.
+ - [ ] Implement the `GetPromptRequestSchema` handler returning the full instruction text.
+ - [ ] Test in the Codex UI using the Quick Action.
@@ -0,0 +1,2 @@
+ schema: spec-driven
+ created: 2026-04-02
@@ -0,0 +1,8 @@
+ # Design: Smart Error Handling
+
+ ## Architecture
+ - Intercept CLI crash errors natively in TypeScript and return standard JSON-RPC text responses to the LLM, masking the raw crash.
+
+ ## Technical Details
+ - Parse `stderr`.
+ - If a regex matches a validation failure regarding markdown structure, rewrite the error.
@@ -0,0 +1,12 @@
+ # Proposal: Smart Error Handling
+
+ ## What we are going to do
+ We want to dramatically improve the feedback the AI gets when it runs an invalid command. If `openspec_validate` fails, rather than just returning "Command Failed: exit code 1", we will intercept the exact error and give the AI a helpful, coaching reply.
+
+ ## Why we need this
+ If the AI writes a spec incorrectly (e.g., it forgets to include the "Tasks" header in the design file), OpenSpec validation fails. The AI sometimes gets confused by raw CLI errors. If we coach the AI ("Hey AI, your validation failed specifically because you forgot the 'Tasks' header in design.md"), it can immediately fix the problem without human intervention. This creates a self-healing loop.
+
+ ## How we will do it
+ 1. In `src/tools.ts`, inside the `catch (error)` block for `execAsync`, read `error.stderr` instead of just failing.
+ 2. Analyze the `stderr` string. If it contains "missing section", map it to a friendly message.
+ 3. Return this friendly, parsed diagnostic string through the standard MCP response payload back to the assistant.
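Steps 1–3 could be sketched as a small stderr-to-hint mapper; the patterns and hint wording below are illustrative, not the actual OpenSpec validator messages:

```typescript
// Hypothetical heuristics mapping raw validator errors to coaching hints.
const ERROR_HINTS: Array<[RegExp, string]> = [
  [
    /missing section.*tasks/i,
    "Validation failed: you forgot the 'Tasks' header in design.md. Add a '# Tasks' section and re-run validation.",
  ],
  [
    /missing section/i,
    "Validation failed: a required section header is missing from your spec file.",
  ],
];

// Return a friendly diagnostic, falling back to the raw stderr so that
// no information is lost when no heuristic matches.
function coachError(stderr: string): string {
  for (const [pattern, hint] of ERROR_HINTS) {
    if (pattern.test(stderr)) return hint;
  }
  return `Command failed. Raw output:\n${stderr}`;
}
```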
@@ -0,0 +1,5 @@
+ # Tasks
+
+ - [ ] Open `src/tools.ts`.
+ - [ ] Expand the `catch` block around the primary execution.
+ - [ ] Programmatically map `stderr` patterns to friendly messages.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@igor-olikh/openspec-mcp-server",
- "version": "1.0.4",
+ "version": "1.1.0",
  "description": "An MCP server that connects OpenSpec to AI assistants like Codex, Claude, and Cursor.",
  "main": "dist/index.js",
  "type": "module",
@@ -29,6 +29,7 @@
  "dev": "tsx src/index.ts"
  },
  "dependencies": {
+ "@igor-olikh/openspec-mcp-server": "^1.0.4",
  "@modelcontextprotocol/sdk": "^1.24.3",
  "zod": "^3.23.8"
  },