claude-orator-mcp 0.2.0-beta.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,38 @@
1
+ ---
2
+ name: claude-orator
3
+ description: Use when dispatching subagents with non-trivial prompts, writing system prompts or skill descriptions, or when any prompt feels vague — scores 7 quality dimensions, auto-selects from 11 techniques, and rewrites with before/after scores.
4
+ ---
5
+
6
+ # Claude Orator
7
+
8
+ Make prompts measurably better before sending them.
9
+
10
+ ## When to Use
11
+
12
+ **Dispatching subagents** → Run the prompt through `orator_optimize` first. Better prompt = less back-and-forth, more accurate results.
13
+
14
+ **Writing system prompts** → SKILL.md files, agent instructions, tool descriptions. Small improvements compound over many invocations.
15
+
16
+ **Prompt feels vague or under-specified** → Score it. Orator identifies weak dimensions and applies targeted techniques.
17
+
18
+ ## Quick Reference
19
+
20
+ ```
21
+ orator_optimize(prompt: "...", intent?: "code|analysis|creative|extraction|conversation|system", techniques?: ["xml-tags", "few-shot"])
22
+ ```
23
+
24
+ **Output:** Before score → techniques applied → optimized prompt → after score.
25
+
26
+ **Already good?** One-line confirmation: `🪶 ━━ already well-structured (8.4)`
27
+
28
+ ## When to Skip
29
+
30
+ Skip for trivial prompts, simple questions, or prompts already scoring above 7.0. The overhead isn't worth it for single-step instructions.
31
+
32
+ ## Common Mistakes
33
+
34
+ | Mistake | Fix |
35
+ |---------|-----|
36
+ | Optimizing everything | Focus on high-leverage: subagent prompts, system prompts |
37
+ | Ignoring the score delta | Close before/after scores mean the prompt was already good |
38
+ | Not using `techniques` override | When you know which techniques apply, force them |
package/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Vvkmnn
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,375 @@
1
+ <img align="right" src="claude-orator.svg" alt="claude-orator-mcp" width="220">
2
+
3
+ # claude-orator-mcp
4
+
5
+ A [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) server that **optimizes prompts** for [Claude Code](https://docs.anthropic.com/en/docs/claude-code). Heuristic analysis, Anthropic technique selection, and structural rewriting — zero external dependencies, fully deterministic.
6
+
7
+ <br clear="right">
8
+
9
+ [![npm version](https://img.shields.io/npm/v/claude-orator-mcp.svg)](https://www.npmjs.com/package/claude-orator-mcp) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![TypeScript](https://img.shields.io/badge/TypeScript-007ACC?logo=typescript&logoColor=white)](https://www.typescriptlang.org/) [![Node.js](https://img.shields.io/badge/node-%3E%3D20-brightgreen)](https://nodejs.org/) [![Claude](https://img.shields.io/badge/Claude-D97757?logo=claude&logoColor=fff)](#) [![GitHub stars](https://img.shields.io/github/stars/Vvkmnn/claude-orator-mcp?style=social)](https://github.com/Vvkmnn/claude-orator-mcp)
10
+
11
+ ---
12
+
13
+ Orator is the rhetoric coach — Claude is the orator. The MCP provides deterministic heuristic analysis and technique selection; Claude does the actual rewriting with full context. Built on [Anthropic's prompt engineering best practices](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview): XML tags, multishot examples, chain-of-thought, structured output, role assignment, prefill, prompt chaining, and uncertainty permission.
14
+
15
+ ## what's new in 0.2.0
16
+
17
+ - **Intent disambiguation** — `"You are an expert Rust dev... build me an app"` now correctly resolves to `code`, not `system`. Fallback heuristics catch code blocks, "build me", and debugging language.
18
+ - **Claude 4.6 anti-patterns** — 4 new detections: thoroughness backfire, imperative tool instructions, plan-sharing penalties, and suggest-framing traps.
19
+ - **Context-first assembly** — template now front-loads `<context>` before `<task>`, matching Codex research on grounding data ordering.
20
+ - **Scorer overhaul** — recalibrated dimension heuristics produce meaningful score jumps (avg +2.6, up from ~0.9).
21
+ - **Structured output format** — replaces the old prefill technique for Claude 4.6+ compatibility.
22
+ - **25 regression tests** — comprehensive self-test suite covering all intent categories, anti-patterns, and edge cases.
23
+
24
+ ## install
25
+
26
+ **Requirements:**
27
+
28
+ [![Claude Code](https://img.shields.io/badge/Claude_Code-555?logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxOCAxMCIgc2hhcGUtcmVuZGVyaW5nPSJjcmlzcEVkZ2VzIj4KICAKICAKICAKICAKICAKICAKICAKICAKICAKICAKPC9zdmc+Cg==)](https://claude.ai/code)
29
+
30
+ **From shell:**
31
+
32
+ ```bash
33
+ claude mcp add claude-orator-mcp -- npx claude-orator-mcp
34
+ ```
35
+
36
+ **From inside Claude** (restart required):
37
+
38
+ ```
39
+ Add this to our global mcp config: npx claude-orator-mcp
40
+
41
+ Install this mcp: https://github.com/Vvkmnn/claude-orator-mcp
42
+ ```
43
+
44
+ **From any manually configurable `mcp.json`:** (Cursor, Windsurf, etc.)
45
+
46
+ ```json
47
+ {
48
+ "mcpServers": {
49
+ "claude-orator-mcp": {
50
+ "command": "npx",
51
+ "args": ["claude-orator-mcp"],
52
+ "env": {}
53
+ }
54
+ }
55
+ }
56
+ ```
57
+
58
+ **No `npm install` is required** — there are no external dependencies or databases, only deterministic heuristics.
59
+
60
+ However, if `npx` resolves the wrong package, you can force resolution with:
61
+
62
+ ```bash
63
+ npm install -g claude-orator-mcp
64
+ ```
65
+
66
+ ## [skill](.claude/skills/claude-orator)
67
+
68
+ Optionally, install the skill to teach Claude when to proactively optimize prompts:
69
+
70
+ ```bash
71
+ npx skills add Vvkmnn/claude-orator-mcp --skill claude-orator --global
72
+ # Optional: add --yes to skip interactive prompt and install to all agents
73
+ ```
74
+
75
+ This makes Claude automatically optimize prompts before dispatching subagents, writing system prompts, or crafting any prompt worth improving. The MCP works without the skill, but the skill improves discoverability.
76
+
77
+ ## [plugin](https://github.com/Vvkmnn/claude-emporium)
78
+
79
+ For automatic prompt optimization hooks and commands, install from the [claude-emporium](https://github.com/Vvkmnn/claude-emporium) marketplace:
80
+
81
+ ```bash
82
+ /plugin marketplace add Vvkmnn/claude-emporium
83
+ /plugin install claude-orator@claude-emporium
84
+ ```
85
+
86
+ The **claude-orator** plugin provides:
87
+
88
+ **Hooks** (targeted, zero overhead on good prompts):
89
+
90
+ - `PreToolUse (Task)` — suggest optimization for under-specified subagent prompts
91
+ - Before dispatching any subagent → quick heuristic score, suggest `orator_optimize` if < 5.0
92
+
93
+ **Command:** `/reprompt-orator <prompt>` — manual prompt optimization
94
+
95
+ Requires the MCP server installed first. See the emporium for other Claude Code plugins and MCPs.
96
+
97
+ ## features
98
+
99
+ [MCP server](https://modelcontextprotocol.io/) with a single tool. Prompt in, optimized prompt out.
100
+
101
+ #### `orator_optimize`
102
+
103
+ Analyze a prompt across 7 quality dimensions, auto-select from 11 Anthropic techniques, and return a structurally optimized scaffold with before/after scores.
104
+
105
+ ```
106
+ orator_optimize prompt="Write a function that sorts users"
107
+ > Returns optimized scaffold with XML tags, output format, examples section
108
+
109
+ orator_optimize prompt="You are a helpful assistant" intent="system"
110
+ > Returns role-assigned system prompt with structure and constraints
111
+
112
+ orator_optimize prompt="Extract all emails from this text" techniques=["xml-tags", "few-shot"]
113
+ > Force-applies specific techniques regardless of auto-selection
114
+ ```
115
+
116
+ **Score meter** (unique notification format — gradient fill bar):
117
+
118
+ ```
119
+ 🪶 3.2 ░░░▓▓▓▓▓▓▓▓ 7.8
120
+ +xml-tags +few-shot +structured-output · 3 issues
121
+ Wrapped in XML tags, added examples, specified output format
122
+ ```
123
+
124
+ Three-zone bar: `░░░` (baseline) `▓▓▓▓▓` (improvement) `░░` (headroom to 10).
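
The bar itself is easy to reproduce downstream. A minimal sketch, assuming a 10-character bar and simple rounding (the shipped renderer may use a different width):

```typescript
// Render the three-zone meter: ░ baseline, ▓ improvement, ░ headroom.
// Assumes scores in 0-10 and a 10-character bar; illustrative only.
function scoreMeter(before: number, after: number, width = 10): string {
  const base = Math.round((before / 10) * width);  // ░ baseline
  const total = Math.round((after / 10) * width);  // ▓ ends here
  const gain = Math.max(total - base, 0);          // ▓ improvement
  return "░".repeat(base) + "▓".repeat(gain) + "░".repeat(width - base - gain);
}
```

With `before = 3.2` and `after = 7.8` this yields the three zones described above: 3 baseline cells, 5 improvement cells, 2 headroom cells.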
125
+
126
+ **Minimal case** (already well-structured):
127
+
128
+ ```
129
+ 🪶 ━━ already well-structured (8.4)
130
+ ```
131
+
132
+ **Input:**
133
+
134
+ | Parameter | Type | Required | Description |
135
+ |-----------|------|----------|-------------|
136
+ | `prompt` | string | Yes | The raw prompt to optimize |
137
+ | `intent` | enum | No | `code \| analysis \| creative \| extraction \| conversation \| system` (auto-detected) |
138
+ | `target` | enum | No | `claude-code \| claude-api \| claude-desktop \| generic` (default: `claude-code`) |
139
+ | `techniques` | string[] | No | Force-apply specific technique IDs |
140
+
141
+ **Output:**
142
+
143
+ | Field | Type | Description |
144
+ |-------|------|-------------|
145
+ | `optimized_prompt` | string | Rewritten prompt scaffold (primary output) |
146
+ | `score_before` | number | Quality score of original (0-10) |
147
+ | `score_after` | number | Quality score after optimization (0-10) |
148
+ | `summary` | string | 1-line explanation of improvements |
149
+ | `detected_intent` | string | Auto-detected intent category |
150
+ | `applied_techniques` | string[] | Technique IDs applied |
151
+ | `issues` | string[] | Detected problems |
152
+ | `suggestions` | string[] | Actionable fixes |
153
+
154
+ The `optimized_prompt` is a structural scaffold. Claude refines it with domain knowledge, codebase context, and conversation history.
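
For downstream scripting, the result can be modeled as a plain object. A sketch mirroring the output table above (field names are taken from the table; the package's actual exported types may differ), plus a hypothetical helper that rebuilds the one-line digest shown in the score meter:

```typescript
// Illustrative model of an orator_optimize result, per the output table.
interface OratorResult {
  optimized_prompt: string;
  score_before: number; // 0-10
  score_after: number;  // 0-10
  summary: string;
  detected_intent: string;
  applied_techniques: string[];
  issues: string[];
  suggestions: string[];
}

// Hypothetical helper: digest line like "+xml-tags +few-shot · 3 issues".
function digest(r: OratorResult): string {
  const techs = r.applied_techniques.map((t) => `+${t}`).join(" ");
  return `${techs} · ${r.issues.length} issues`;
}
```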
155
+
156
+ ## methodology
157
+
158
+ How [claude-orator-mcp](https://github.com/Vvkmnn/claude-orator-mcp) [works](https://github.com/Vvkmnn/claude-orator-mcp/tree/main/src):
159
+
160
+ ```
161
+ 🪶 claude-orator-mcp
162
+ ════════════════════
163
+
164
+
165
+ orator_optimize
166
+ ──────────────
167
+
168
+ PROMPT
169
+
170
+ ┌────────────┴────────────┐
171
+ ▼ ▼
172
+ ┌───────────┐ ┌────────────┐
173
+ │ Detect │ │ Measure │
174
+ │ Intent │ │ Complexity │
175
+ └─────┬─────┘ └──────┬─────┘
176
+ │ │
177
+ system > code > word count +
178
+ extraction > clause depth
179
+ analysis > │
180
+ creative > │
181
+ conversation │
182
+ + disambiguation │
183
+ + fallback heuristics │
184
+ │ │
185
+ └────────────┬────────────┘
186
+
187
+
188
+ ┌───────────────────┐
189
+ │ Score Before │
190
+ │ │
191
+ │ clarity 20% │ strong verbs, single task
192
+ │ specificity 20% │ named tech, constraints
193
+ │ structure 15% │ XML tags, headers, lists
194
+ │ examples 15% │ input/output pairs
195
+ │ constraints 10% │ scope, edge cases
196
+ │ output_fmt 10% │ format specification
197
+ │ efficiency 10% │ no filler, no redundancy
198
+ │ │
199
+ │ ░░░░░░░░░░ 3.2 │
200
+ └────────┬──────────┘
201
+
202
+
203
+ ┌───────────────────┐ techniques?
204
+ │ Select Techniques │◄──── (force override)
205
+ │ │
206
+ │ when_to_use() × │ 11 predicates
207
+ │ intent match × │ filtered
208
+ │ score gaps × │ sorted by impact
209
+ │ cap at 4 │
210
+ └────────┬──────────┘
211
+
212
+
213
+ ┌───────────────────┐
214
+ │ Template Assembly │
215
+ │ │
216
+ │ role preamble │ expert identity
217
+ │ → <context> │ grounding data first
218
+ │ → <task> │ XML-wrapped prompt
219
+ │ → <requirements> │ constraints + gaps
220
+ │ → <examples> │ multishot I/O pairs
221
+ │ → output format │ format specification
222
+ └────────┬──────────┘
223
+
224
+
225
+ ┌───────────────────┐
226
+ │ Score After │
227
+ │ │
228
+ │ ░░░▓▓▓▓▓▓▓░░ 7.8│
229
+ └────────┬──────────┘
230
+
231
+
232
+ OUTPUT
233
+ optimized_prompt
234
+ + scores + techniques
235
+ + issues + suggestions
236
+
237
+
238
+ score meter (gradient fill bar):
239
+ ─────────────────────────────────
240
+
241
+ 🪶 3.2 ░░░▓▓▓▓▓▓▓▓ 7.8
242
+ +xml-tags +few-shot +structured-output
243
+ Wrapped in XML, added examples, format
244
+
245
+ ░░░ baseline ▓▓▓ improvement ░░ headroom
246
+ ```
247
+
248
+ **7 quality dimensions** (weighted scoring, deterministic):
249
+
250
+ | Dimension | Weight | Measures |
251
+ |-----------|--------|----------|
252
+ | Clarity | 20% | Strong verbs, single task, no hedging |
253
+ | Specificity | 20% | Named tech, numbers, constraints |
254
+ | Structure | 15% | XML tags, headers, lists |
255
+ | Examples | 15% | Input/output pairs, demonstrations |
256
+ | Constraints | 10% | Negative constraints, scope, edge cases |
257
+ | Output Format | 10% | Format spec, structure definition |
258
+ | Token Efficiency | 10% | No filler, no redundancy |
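
The weighted sum reduces to a few lines. A sketch using the weights from the table above (the `Scores` field names here are illustrative; the per-dimension heuristics that produce each 0-10 value live in `heuristics.ts`):

```typescript
// Weighted overall score. Weights copied from the dimension table;
// each dimension is 0-10 and the weights sum to 1, so overall stays 0-10.
type Scores = {
  clarity: number;
  specificity: number;
  structure: number;
  examples: number;
  constraints: number;
  output_format: number;
  efficiency: number;
};

const WEIGHTS: Record<keyof Scores, number> = {
  clarity: 0.2,
  specificity: 0.2,
  structure: 0.15,
  examples: 0.15,
  constraints: 0.1,
  output_format: 0.1,
  efficiency: 0.1,
};

function overallScore(scores: Scores): number {
  const sum = (Object.keys(WEIGHTS) as (keyof Scores)[]).reduce(
    (acc, k) => acc + scores[k] * WEIGHTS[k],
    0
  );
  return Math.round(sum * 10) / 10; // one decimal, absorbs float noise
}
```

Because clarity and specificity carry 40% of the weight between them, a vague prompt caps out quickly no matter how well it is formatted.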
259
+
260
+ **11 Anthropic techniques** (auto-selected based on intent, scores, and complexity):
261
+
262
+ | ID | Name | Auto-selected when |
263
+ |----|------|--------------------|
264
+ | `chain-of-thought` | [Let Claude Think](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/chain-of-thought) | Analysis intent, complex tasks |
265
+ | `xml-tags` | [Use XML Tags](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/use-xml-tags) | Long prompt + low structure score |
266
+ | `few-shot` | [Multishot Examples](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/multishot-prompting) | Low example score + extraction/code |
267
+ | `role-assignment` | [System Prompts & Roles](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts) | System intent or low specificity |
268
+ | `structured-output` | [Control Output Format](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/fill-in-the-blank) | Low output format score |
269
+ | `prefill` | [Prefill Claude's Response](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response) | API target + extraction/code |
270
+ | `prompt-chaining` | [Chain Complex Tasks](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/chain-prompts) | Complex + multiple subtasks |
271
+ | `uncertainty-permission` | [Say "I Don't Know"](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/ask-claude-for-rewrites) | Analysis or extraction intent |
272
+ | `extended-thinking` | [Extended Thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) | Complex + analysis/code intent |
273
+ | `long-context-tips` | [Long Context](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/long-context-tips) | Long prompt (>2000 chars or >50 lines) |
274
+ | `tool-use` | [Tool Use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview) | Prompt mentions tool/function calling |
275
+
276
+ **Core algorithms:**
277
+
278
+ - **[Intent detection](https://github.com/Vvkmnn/claude-orator-mcp/blob/main/src/analysis/detector.ts)** (`detectIntent`): Priority-ordered regex patterns across 6 categories — `system > code > extraction > analysis > creative > conversation`. Includes disambiguation (e.g., `system` + `code` signals → resolves to `code`) and fallback heuristics for code blocks, "build me" patterns, and debugging language.
279
+ - **[Heuristic scoring](https://github.com/Vvkmnn/claude-orator-mcp/blob/main/src/analysis/heuristics.ts)** (`scorePrompt`): 7-dimension weighted analysis. Each dimension 0-10, overall is weighted sum. Also generates flat `issues[]` and `suggestions[]` arrays.
280
+ - **[Technique selection](https://github.com/Vvkmnn/claude-orator-mcp/blob/main/src/techniques/index.ts)** (`selectTechniques`): Each technique has a `when_to_use()` predicate. Auto-selected based on intent + scores + complexity. Sorted by impact, capped at 4.
281
+ - **[Template assembly](https://github.com/Vvkmnn/claude-orator-mcp/blob/main/src/optimize.ts)** (`optimize`): Builds structural scaffold from selected techniques. Context-first ordering: role → `<context>` → `<task>` → `<requirements>` → `<examples>` → output format.
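
The selection step above can be sketched as a predicate filter plus an impact sort. The `Technique` shape here is hypothetical; the real predicates in `src/techniques` also receive intent, dimension scores, and complexity:

```typescript
// Sketch of selectTechniques: filter by predicate, sort by expected impact,
// cap at 4 (diminishing returns beyond that, per the design principles).
interface Technique {
  id: string;
  impact: number;           // relative expected score gain (illustrative)
  whenToUse: () => boolean; // stand-in for the real when_to_use() predicate
}

function selectTechniques(all: Technique[], cap = 4): string[] {
  return all
    .filter((t) => t.whenToUse())
    .sort((a, b) => b.impact - a.impact)
    .slice(0, cap)
    .map((t) => t.id);
}
```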
282
+
283
+ **Design principles:**
284
+
285
+ - **Single tool** — one entry point, minimal cognitive overhead
286
+ - **Deterministic** — same input = same output, no LLM calls, no network
287
+ - **Scaffold, not final** — the optimized prompt is structural; Claude adds substance
288
+ - **Lean output** — flat string arrays for issues/suggestions, no nested objects
289
+ - **Weighted dimensions** — clarity and specificity matter most (20% each)
290
+ - **Technique cap** — max 4 techniques per optimization (diminishing returns beyond)
291
+ - **Anti-pattern detection** — 10 Claude-specific anti-patterns including 4 for Claude 4.6 (thoroughness backfire, tool over-triggering, plan-sharing penalty, suggest framing)
292
+ - **Minimal dependencies** — nothing beyond `@modelcontextprotocol/sdk` and `zod`
293
+
294
+ ## alternatives
295
+
296
+ The prompt optimization tools below all require LLM calls, labeled datasets, or evaluation infrastructure. When you need structural improvement at zero latency — during CI/CD, before subagent dispatch, or offline — they cannot help.
297
+
298
+ | Feature | **orator** | DSPy | promptfoo | TextGrad | OPRO | LLMLingua | Anthropic Generator |
299
+ |---|---|---|---|---|---|---|---|
300
+ | **Zero latency** | **Yes (<1ms)** | No (LLM calls) | No (eval runs) | No (LLM calls) | No (LLM calls) | No (LLM calls) | No (LLM call) |
301
+ | **Offline/airgapped** | **Yes** | No | Partial | No | No | No | No |
302
+ | **Deterministic** | **Yes** | No | No | No | No | Partial | No |
303
+ | **No labeled data** | **Yes** | No (examples) | No (test cases) | No (feedback) | No (examples) | Yes | Yes |
304
+ | **Claude-specific** | **Yes (anti-patterns)** | No | No | No | No | No | Yes |
305
+ | **MCP native** | **Yes** | No | No | No | No | No | No |
306
+ | **Structural scoring** | **7 dimensions** | None | Custom metrics | None | None | None | None |
307
+ | **Dependencies** | **0 (pure TS)** | PyTorch + LLM | Node + LLM | PyTorch + LLM | LLM | PyTorch + LLM | LLM API |
308
+
309
+ **[DSPy](https://github.com/stanfordnlp/dspy)** — Stanford's framework for compiling LM programs with automatic prompt optimization. Requires labeled examples, LLM calls for optimization, and PyTorch. Optimizes for task accuracy, not structural quality. Latency: seconds to minutes per optimization. Use DSPy when you have labeled data and want to tune for a specific metric.
310
+
311
+ **[promptfoo](https://github.com/promptfoo/promptfoo)** — Test-driven prompt evaluation framework. Requires test cases, LLM calls for evaluation, and an evaluation dataset. Measures output quality, not prompt structure. Complementary: use Orator for structural scaffolding, then promptfoo to evaluate output quality.
312
+
313
+ **[TextGrad](https://github.com/zou-group/textgrad)** — Automatic differentiation via text feedback from LLMs. Requires LLM calls for both forward and backward passes. Research-oriented, PyTorch dependency. Latency: minutes. Use when iterating on prompt wording with measurable objectives.
314
+
315
+ **[OPRO](https://github.com/google-deepmind/opro)** — DeepMind's optimization by prompting: uses an LLM to iteratively rewrite prompts. Requires examples of good/bad outputs, multiple LLM calls per iteration. Latency: minutes. Use when exploring creative prompt variations with evaluation feedback.
316
+
317
+ **[LLMLingua](https://github.com/microsoft/LLMLingua)** — Microsoft's prompt compression via perplexity-based token removal. Reduces token count by 2-20x but requires a local LLM for perplexity scoring. Different goal: compression, not structural improvement. Use when context window is the bottleneck.
318
+
319
+ **[Anthropic Prompt Generator](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prompt-generator)** — Anthropic's own tool that generates prompts via Claude. Excellent quality, but it requires an LLM call, is non-deterministic, and is not available offline or via MCP. Use it when you want Claude to write your prompt from scratch.
320
+
321
+ Orator's approach is deliberately different: structural analysis via deterministic heuristics. No LLM calls means no API keys, no latency variance, no cost per optimization, and identical results every run. The trade-off is that Orator optimizes prompt *structure* (clarity, specificity, constraints, format) rather than prompt *wording* — it can't tell you whether your prompt produces good *output*, only that it's well-formed for Claude. This makes it complementary to evaluation tools like promptfoo: scaffold with Orator, then validate with evals.
322
+
323
+ ## development
324
+
325
+ ```bash
326
+ git clone https://github.com/Vvkmnn/claude-orator-mcp && cd claude-orator-mcp
327
+ npm install && npm run build
328
+ npm test
329
+ ```
330
+
331
+ **Package requirements:**
332
+
333
+ - **Node.js**: >=20.0.0 (ES modules)
334
+ - **Runtime**: `@modelcontextprotocol/sdk`, `zod`
335
+ - **Zero external databases** — works with `npx`
336
+
337
+ **Development workflow:**
338
+
339
+ ```bash
340
+ npm run build # TypeScript compilation with executable permissions
341
+ npm run dev # Watch mode with tsc --watch
342
+ npm run start # Run the MCP server directly
343
+ npm run lint # ESLint code quality checks
344
+ npm run lint:fix # Auto-fix linting issues
345
+ npm run format # Prettier formatting (src/)
346
+ npm run format:check # Check formatting without changes
347
+ npm run typecheck # TypeScript validation without emit
348
+ npm run test # Lint + type check
349
+ npm run prepublishOnly # Pre-publish validation (build + lint + format:check)
350
+ ```
351
+
352
+ **Git hooks (via Husky):**
353
+
354
+ - **pre-commit**: Auto-formats staged `.ts` files with Prettier and ESLint
355
+
356
+ **Contributing:**
357
+
358
+ - Fork the repository and create feature branches
359
+ - Follow TypeScript strict mode and [MCP protocol](https://modelcontextprotocol.io/specification) standards
360
+
361
+ **Learn from examples:**
362
+
363
+ - [Official MCP servers](https://github.com/modelcontextprotocol/servers) for reference implementations
364
+ - [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) for best practices
365
+ - [Anthropic prompt engineering docs](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) for technique details
366
+
367
+ ## license
368
+
369
+ [MIT](LICENSE)
370
+
371
+ <hr>
372
+
373
+ <a href="https://en.wikipedia.org/wiki/Cicero_Denounces_Catiline"><img src="logo/maccari-cicero.jpg" alt="Cicero Denounces Catiline — Cesare Maccari" width="100%"></a>
374
+
375
+ _**[Cicero Denounces Catiline](https://en.wikipedia.org/wiki/Cicero_Denounces_Catiline)** by **[Cesare Maccari](https://en.wikipedia.org/wiki/Cesare_Maccari)** (1889). "Quo usque tandem abutere, Catilina, patientia nostra?" [How long, Catiline, will you abuse our patience?] [Claudius](https://en.wikipedia.org/wiki/Claudius), once dismissed for his stammer, later addressed this same Senate — proof that the right words, well-structured, can move an empire._
@@ -0,0 +1,13 @@
1
+ /**
2
+ * Intent detection via keyword pattern-matching with disambiguation.
3
+ *
4
+ * Strategy:
5
+ * 1. First-pass: priority-ordered pattern matching (system > code > ... > conversation)
6
+ * 2. Disambiguation: "You are an expert... build me X" → system match overridden to code
7
+ * 3. Fallback heuristics: code blocks, "build me", debugging language → code before conversation
8
+ */
9
+ import type { Complexity, Intent } from '../types.js';
10
+ /** Detect intent from prompt content with disambiguation and fallback heuristics. */
11
+ export declare function detectIntent(prompt: string): Intent;
12
+ /** Detect complexity based on word count and structural indicators. */
13
+ export declare function detectComplexity(prompt: string): Complexity;
@@ -0,0 +1,160 @@
1
+ /**
2
+ * Intent detection via keyword pattern-matching with disambiguation.
3
+ *
4
+ * Strategy:
5
+ * 1. First-pass: priority-ordered pattern matching (system > code > ... > conversation)
6
+ * 2. Disambiguation: "You are an expert... build me X" → system match overridden to code
7
+ * 3. Fallback heuristics: code blocks, "build me", debugging language → code before conversation
8
+ */
9
+ // Ordered by priority: most distinctive patterns first
10
+ const INTENT_PATTERNS = [
11
+ [
12
+ 'system',
13
+ [
14
+ /^you are\b/i,
15
+ /\bact as\b/i,
16
+ /\byour role\b/i,
17
+ /\bbehave as\b/i,
18
+ /\bsystem prompt\b/i,
19
+ /\byou\s+must\s+always\b/i,
20
+ /\byour\s+task\s+is\b/i,
21
+ ],
22
+ ],
23
+ [
24
+ 'code',
25
+ [
26
+ /\bwrite\s+(a\s+)?(\w+\s+)?function\b/i,
27
+ /\bimplement\b/i,
28
+ /\brefactor\b/i,
29
+ /\bdebug\b/i,
30
+ /\bfix\s+(the\s+)?(bug|error|issue|crash)\b/i,
31
+ /\bcreate\s+(a\s+)?(\w+\s+)*(class|component|api|endpoint|module|service|app|application|middleware|hook|plugin|decorator|wrapper)\b/i,
32
+ /\badd\s+(a\s+)?(method|function|handler|route|feature)\b/i,
33
+ /\bcode\s+(that|which|to)\b/i,
34
+ /\banalyze\s+(this\s+)?(code|function|class|module)\b/i,
35
+ /\breview\s+(this\s+)?(code|function|PR|pull\s+request|diff)\b/i,
36
+ /\bbuild\s+(me\s+)?(a\s+)?(\w+\s+)*(app|application|tool|script|server|client|cli|bot|crawler|fetcher|scraper|parser|service|api|site|website|page|dashboard|plugin|extension|library|package|module)\b/i,
37
+ /\bmake\s+(me\s+)?(a\s+)?(\w+\s+)*(app|application|tool|script|server|client|cli|bot)\b/i,
38
+ /\bwrite\s+(me\s+)?(a\s+)?(\w+\s+)*(script|program|app|tool|cli|bot)\b/i,
39
+ /\bwhat'?s?\s+wrong\s+with\b/i,
40
+ /\bhow\s+do\s+I\s+(fix|solve|implement|build|make|write|create)\b/i,
41
+ /\bhere'?s?\s+(my|the|some)\s+code\b/i,
42
+ ],
43
+ ],
44
+ [
45
+ 'extraction',
46
+ [
47
+ /\bextract\b/i,
48
+ /\bparse\b/i,
49
+ /\bfind\s+all\b/i,
50
+ /\blist\s+(all|the|every)\b/i,
51
+ /\bidentify\b/i,
52
+ /\bcollect\b/i,
53
+ /\bpull\s+out\b/i,
54
+ /\bscrape\b/i,
55
+ ],
56
+ ],
57
+ [
58
+ 'analysis',
59
+ [
60
+ /\banalyze\b/i,
61
+ /\breview\b/i,
62
+ /\bexplain\b/i,
63
+ /\bevaluate\b/i,
64
+ /\bcompare\b/i,
65
+ /\bassess\b/i,
66
+ /\baudit\b/i,
67
+ /\bwhy\s+(does|did|is|are|was)\b/i,
68
+ /\bwhat\s+causes?\b/i,
69
+ ],
70
+ ],
71
+ [
72
+ 'creative',
73
+ [
74
+ /\bwrite\s+(a\s+)?(story|poem|essay|blog|article|post|letter|email)\b/i,
75
+ /\bbrainstorm\b/i,
76
+ /\bdraft\b/i,
77
+ /\bgenerate\s+(a\s+)?(name|title|tagline|slogan|headline)\b/i,
78
+ /\bcreate\s+(a\s+)?(story|narrative|description)\b/i,
79
+ ],
80
+ ],
81
+ [
82
+ 'conversation',
83
+ [
84
+ /\bchat\s+(with|about)\b/i,
85
+ /\bdiscuss\b/i,
86
+ /\bhelp\s+me\s+(understand|think|decide)\b/i,
87
+ /\btalk\s+(about|through)\b/i,
88
+ /\bwhat\s+do\s+you\s+think\b/i,
89
+ ],
90
+ ],
91
+ ];
92
+ /**
93
+ * Signals that a prompt body is primarily about code, even if the opening
94
+ * matches a non-code intent (e.g., "You are an expert... implement X").
95
+ */
96
+ const CODE_BODY_SIGNALS = [
97
+ /\bimplement\b/i,
98
+ /\bbuild\b/i,
99
+ /\bwrite\s+(a\s+)?(\w+\s+)*(function|class|method|script|program|app|tool|cli|bot|server|client)\b/i,
100
+ /\brefactor\b/i,
101
+ /\bdebug\b/i,
102
+ /\bcreate\s+(a\s+)?(class|component|api|endpoint|module|service|app)\b/i,
103
+ /```[\s\S]*?```/, // fenced code blocks
104
+ /\b(async|await|function|const|let|var|import|export|class|interface|type|def|fn|pub|struct|enum)\b/,
105
+ /\breturn\s+(a|the|an)?\s*\w/i,
106
+ /\b(typescript|javascript|python|rust|go|java|ruby)\b/i,
107
+ ];
108
+ /**
109
+ * Fallback heuristics: detect code intent from prompts that didn't match
110
+ * any explicit pattern (would otherwise default to 'conversation').
111
+ */
112
+ const CODE_FALLBACK_SIGNALS = [
113
+ /```[\s\S]*?```/, // contains code blocks
114
+ /\bbuild\s+me\b/i, // "build me a ..."
115
+ /\bmake\s+(it|this)\s+(work|run|compile|pass)\b/i, // "make it work"
116
+ /\bhere'?s?\s+(my|the|some)\s+code\b/i,
117
+ /\b(TypeError|SyntaxError|ReferenceError|Error|Exception|stack\s*trace|segfault)\b/,
118
+ /\b(npm|pip|cargo|yarn|pnpm|go\s+get|brew|apt|gem)\s+(install|add|run|build|test)\b/i,
119
+ ];
120
+ /** Detect intent from prompt content with disambiguation and fallback heuristics. */
121
+ export function detectIntent(prompt) {
122
+ let matched = null;
123
+ for (const [intent, patterns] of INTENT_PATTERNS) {
124
+ if (patterns.some((p) => p.test(prompt))) {
125
+ matched = intent;
126
+ break;
127
+ }
128
+ }
129
+ // Disambiguation: "You are an expert X" + code body → code, not system
130
+ if (matched === 'system') {
131
+ const signalCount = CODE_BODY_SIGNALS.filter((p) => p.test(prompt)).length;
132
+ if (signalCount >= 2) {
133
+ return 'code';
134
+ }
135
+ }
136
+ if (matched)
137
+ return matched;
138
+ // Fallback heuristics before defaulting to 'conversation'
139
+ if (CODE_FALLBACK_SIGNALS.some((p) => p.test(prompt))) {
140
+ return 'code';
141
+ }
142
+ return 'conversation';
143
+ }
144
+ /** Detect complexity based on word count and structural indicators. */
145
+ export function detectComplexity(prompt) {
146
+ const wordCount = prompt.split(/\s+/).filter(Boolean).length;
147
+ const hasMultipleClauses = /\b(and\s+also|additionally|furthermore|moreover|then\s+also)\b/i.test(prompt);
148
+ const hasMultipleSteps = /\b(first|second|third|step\s+\d|then)\b/i.test(prompt);
149
+ const hasConditions = /\b(if\s+.+then|when\s+.+should|unless|except\s+when)\b/i.test(prompt);
150
+ if (wordCount > 200 ||
151
+ (wordCount > 100 && (hasMultipleClauses || hasMultipleSteps)) ||
152
+ (hasMultipleSteps && hasConditions)) {
153
+ return 'complex';
154
+ }
155
+ if (wordCount > 50 || hasMultipleClauses || hasMultipleSteps) {
156
+ return 'moderate';
157
+ }
158
+ return 'simple';
159
+ }
160
+ //# sourceMappingURL=detector.js.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"detector.js","sourceRoot":"","sources":["../../src/analysis/detector.ts"],"names":[],"mappings":"AAAA;;;;;;;GAOG;AAIH,uDAAuD;AACvD,MAAM,eAAe,GAAyB;IAC5C;QACE,QAAQ;QACR;YACE,aAAa;YACb,aAAa;YACb,gBAAgB;YAChB,gBAAgB;YAChB,oBAAoB;YACpB,0BAA0B;YAC1B,uBAAuB;SACxB;KACF;IACD;QACE,MAAM;QACN;YACE,uCAAuC;YACvC,gBAAgB;YAChB,eAAe;YACf,YAAY;YACZ,6CAA6C;YAC7C,sIAAsI;YACtI,2DAA2D;YAC3D,6BAA6B;YAC7B,uDAAuD;YACvD,gEAAgE;YAChE,yMAAyM;YACzM,yFAAyF;YACzF,wEAAwE;YACxE,8BAA8B;YAC9B,mEAAmE;YACnE,sCAAsC;SACvC;KACF;IACD;QACE,YAAY;QACZ;YACE,cAAc;YACd,YAAY;YACZ,iBAAiB;YACjB,6BAA6B;YAC7B,eAAe;YACf,cAAc;YACd,iBAAiB;YACjB,aAAa;SACd;KACF;IACD;QACE,UAAU;QACV;YACE,cAAc;YACd,aAAa;YACb,cAAc;YACd,eAAe;YACf,cAAc;YACd,aAAa;YACb,YAAY;YACZ,kCAAkC;YAClC,qBAAqB;SACtB;KACF;IACD;QACE,UAAU;QACV;YACE,uEAAuE;YACvE,iBAAiB;YACjB,YAAY;YACZ,6DAA6D;YAC7D,oDAAoD;SACrD;KACF;IACD;QACE,cAAc;QACd;YACE,0BAA0B;YAC1B,cAAc;YACd,4CAA4C;YAC5C,6BAA6B;YAC7B,8BAA8B;SAC/B;KACF;CACF,CAAC;AAEF;;;GAGG;AACH,MAAM,iBAAiB,GAAG;IACxB,gBAAgB;IAChB,YAAY;IACZ,oGAAoG;IACpG,eAAe;IACf,YAAY;IACZ,wEAAwE;IACxE,gBAAgB,EAAE,qBAAqB;IACvC,oGAAoG;IACpG,8BAA8B;IAC9B,uDAAuD;CACxD,CAAC;AAEF;;;GAGG;AACH,MAAM,qBAAqB,GAAG;IAC5B,gBAAgB,EAAE,uBAAuB;IACzC,iBAAiB,EAAE,mBAAmB;IACtC,iDAAiD,EAAE,iBAAiB;IACpE,sCAAsC;IACtC,mFAAmF;IACnF,qFAAqF;CACtF,CAAC;AAEF,qFAAqF;AACrF,MAAM,UAAU,YAAY,CAAC,MAAc;IACzC,IAAI,OAAO,GAAkB,IAAI,CAAC;IAElC,KAAK,MAAM,CAAC,MAAM,EAAE,QAAQ,CAAC,IAAI,eAAe,EAAE,CAAC;QACjD,IAAI,QAAQ,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC,EAAE,CAAC;YACzC,OAAO,GAAG,MAAM,CAAC;YACjB,MAAM;QACR,CAAC;IACH,CAAC;IAED,uEAAuE;IACvE,IAAI,OAAO,KAAK,QAAQ,EAAE,CAAC;QACzB,MAAM,WAAW,GAAG,iBAAiB,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC,CAAC,MAAM,CAAC;QAC3E,IAAI,WAAW,IAAI,CAAC,EAAE,CAAC;YACrB,OAAO,MAAM,CAAC;QAChB,CAAC;IACH,CAAC;IAED,IAAI,OAAO;QAAE,OAAO,OAAO,CAAC;IAE5B,0DAA0D;IAC1D,IAAI,qBAAqB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC,EAAE,CAAC;QACtD,OAAO,MAAM,C
AAC;IAChB,CAAC;IAED,OAAO,cAAc,CAAC;AACxB,CAAC;AAED,uEAAuE;AACvE,MAAM,UAAU,gBAAgB,CAAC,MAAc;IAC7C,MAAM,SAAS,GAAG,MAAM,CAAC,KAAK,CAAC,KAAK,CAAC,CAAC,MAAM,CAAC,OAAO,CAAC,CAAC,MAAM,CAAC;IAC7D,MAAM,kBAAkB,GAAG,iEAAiE,CAAC,IAAI,CAC/F,MAAM,CACP,CAAC;IACF,MAAM,gBAAgB,GAAG,0CAA0C,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC;IACjF,MAAM,aAAa,GAAG,yDAAyD,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC;IAE7F,IACE,SAAS,GAAG,GAAG;QACf,CAAC,SAAS,GAAG,GAAG,IAAI,CAAC,kBAAkB,IAAI,gBAAgB,CAAC,CAAC;QAC7D,CAAC,gBAAgB,IAAI,aAAa,CAAC,EACnC,CAAC;QACD,OAAO,SAAS,CAAC;IACnB,CAAC;IACD,IAAI,SAAS,GAAG,EAAE,IAAI,kBAAkB,IAAI,gBAAgB,EAAE,CAAC;QAC7D,OAAO,UAAU,CAAC;IACpB,CAAC;IACD,OAAO,QAAQ,CAAC;AAClB,CAAC"}
@@ -0,0 +1,14 @@
1
+ /**
2
+ * 7-dimension quality scoring for prompts.
3
+ * All scoring is deterministic: same input always produces same output.
4
+ * Each dimension is 0-10, overall is weighted sum.
5
+ */
6
+ import { type Scores } from '../types.js';
7
+ /** Score a prompt across all 7 dimensions. Returns individual scores. */
8
+ export declare function scorePrompt(prompt: string): Scores;
9
+ /** Compute weighted overall score from dimension scores. */
10
+ export declare function overallScore(scores: Scores): number;
11
+ /** Detect issues in the prompt as flat string descriptions. */
12
+ export declare function detectIssues(prompt: string, scores: Scores): string[];
13
+ /** Generate actionable suggestions as flat strings. */
14
+ export declare function generateSuggestions(prompt: string, scores: Scores): string[];