codingbuddy-rules 4.2.0 → 4.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -21,12 +21,20 @@ End users access rules **only through MCP tools**. No local rule files needed.
    "mcpServers": {
      "codingbuddy": {
        "command": "npx",
-       "args": ["-y", "codingbuddy"]
+       "args": ["-y", "codingbuddy"],
+       "env": {
+         "CODINGBUDDY_PROJECT_ROOT": "/absolute/path/to/your/project"
+       }
      }
    }
  }
  ```

+ > **Important:** Cursor may not support the `roots/list` MCP capability.
+ > Without `CODINGBUDDY_PROJECT_ROOT`, the server cannot locate your project's
+ > `codingbuddy.config.json`, causing `language` and other settings to use defaults.
+ > Always set this environment variable to your project's absolute path.
+
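The fallback described in the note above can be sketched as follows. This is a hedged illustration, not the server's actual code: `resolve_project_root` and its exact precedence (client `roots/list` first, then the env var, then the process working directory) are assumptions based on the documented behavior.

```python
import os
from pathlib import Path

def resolve_project_root(client_roots=None):
    """Hypothetical sketch: prefer MCP roots/list, then the env var, then cwd."""
    if client_roots:                       # client supports roots/list
        return Path(client_roots[0])
    env_root = os.environ.get("CODINGBUDDY_PROJECT_ROOT")
    if env_root:                           # Cursor: explicit env-var fallback
        return Path(env_root)
    return Path.cwd()                      # last resort; config lookup may miss
```

Under this sketch, a Cursor client with no `roots/list` support and no env var set would silently fall back to the server's working directory, which is exactly the misconfiguration the note warns about.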
  Optional: Create `.cursor/rules/codingbuddy.mdc` for basic integration:

  ```yaml
@@ -122,26 +130,196 @@ Available codingbuddy MCP tools in Cursor:

  | Tool | Purpose |
  |------|---------|
- | `parse_mode` | Parse mode keywords + load Agent/rules |
- | `get_agent_details` | Get specific Agent details |
- | `get_project_config` | Get project configuration |
- | `recommend_skills` | Recommend skills based on prompt |
- | `prepare_parallel_agents` | Prepare parallel Agent execution |
+ | `parse_mode` | Parse mode keywords (PLAN/ACT/EVAL/AUTO) + load Agent/rules |
+ | `search_rules` | Search rules and guidelines by query |
+ | `get_agent_details` | Get specific Agent profile and expertise |
+ | `get_project_config` | Get project configuration (language, tech stack) |
+ | `get_code_conventions` | Get project code conventions and style guide |
+ | `suggest_config_updates` | Analyze project and suggest config updates |
+ | `recommend_skills` | Recommend skills based on prompt → then call `get_skill` |
+ | `get_skill` | Load full skill content by name (e.g., `get_skill("systematic-debugging")`) |
+ | `list_skills` | List all available skills with optional filtering |
+ | `get_agent_system_prompt` | Get complete system prompt for a specialist agent |
+ | `prepare_parallel_agents` | Prepare specialist agents for sequential execution |
+ | `dispatch_agents` | Get Task tool-ready dispatch params (Claude Code optimized) |
+ | `generate_checklist` | Generate contextual checklists (security, a11y, performance) |
+ | `analyze_task` | Analyze task for risk assessment and specialist recommendations |
+ | `read_context` | Read context document (`docs/codingbuddy/context.md`) |
+ | `update_context` | Update context document with decisions, notes, progress |
+ | `cleanup_context` | Manually trigger context document cleanup |
+ | `set_project_root` | ~~Set project root directory~~ **(deprecated)** — use `CODINGBUDDY_PROJECT_ROOT` env var instead |
+
+ ## Specialist Agents Execution
+
+ Cursor does not have a `Task` tool for spawning background subagents. When `parse_mode` returns `parallelAgentsRecommendation`, execute specialists **sequentially**.
+
+ ### Auto-Detection
+
+ The MCP server automatically detects Cursor as the client and returns a sequential execution hint in `parallelAgentsRecommendation.hint`. No manual configuration is needed.
+
+ ### Sequential Workflow
+
+ ```
+ parse_mode returns parallelAgentsRecommendation
+
+ Call prepare_parallel_agents with recommended specialists
+
+ For each specialist (sequentially):
+   - Announce: "🔍 Analyzing from [icon] [specialist-name] perspective..."
+   - Apply the specialist's system prompt as analysis context
+   - Analyze the target code/design from that specialist's viewpoint
+   - Record findings
+
+ Consolidate all specialist findings into unified summary
+ ```
+
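The sequential loop above can be sketched in code. This is illustrative only: the `agents` shape mirrors the `prepare_parallel_agents` response described in this document, but the exact field names (`icon`, `name`, `systemPrompt`) and the `analyze` callback are assumptions.

```python
def run_specialists_sequentially(agents, analyze):
    """One specialist at a time — a stand-in for Claude Code's Task tool."""
    findings = []
    for agent in agents:
        # Announce the current perspective before analyzing
        print(f"🔍 Analyzing from {agent['icon']} {agent['name']} perspective...")
        # Apply the specialist's system prompt as analysis context
        findings.append({"specialist": agent["name"],
                         "result": analyze(agent["systemPrompt"])})
    # Caller consolidates findings into a unified summary afterwards
    return findings
```

The key design point is that each specialist runs to completion before the next starts, so findings never interleave the way they could with true background subagents.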
+ ### Example (EVAL mode)
+
+ ```
+ parse_mode({ prompt: "EVAL review auth implementation" })
+ → parallelAgentsRecommendation:
+     specialists: ["security-specialist", "accessibility-specialist", "performance-specialist"]
+
+ prepare_parallel_agents({
+   mode: "EVAL",
+   specialists: ["security-specialist", "accessibility-specialist", "performance-specialist"]
+ })
+ → agents[]: each has systemPrompt
+
+ Sequential analysis:
+ 1. 🔒 Security: Apply security-specialist prompt, analyze, record findings
+ 2. ♿ Accessibility: Apply accessibility-specialist prompt, analyze, record findings
+ 3. ⚡ Performance: Apply performance-specialist prompt, analyze, record findings
+
+ Present: Consolidated findings from all 3 specialists
+ ```
+
+ ### Specialist Icons
+
+ | Icon | Specialist |
+ |------|------------|
+ | 🔒 | security-specialist |
+ | ♿ | accessibility-specialist |
+ | ⚡ | performance-specialist |
+ | 📏 | code-quality-specialist |
+ | 🧪 | test-strategy-specialist |
+ | 🏛️ | architecture-specialist |
+ | 📚 | documentation-specialist |
+ | 🔍 | seo-specialist |
+ | 🎨 | design-system-specialist |
+ | 📨 | event-architecture-specialist |
+ | 🔗 | integration-specialist |
+ | 📊 | observability-specialist |
+ | 🔄 | migration-specialist |
+ | 🌐 | i18n-specialist |
 

  ## Skills

+ Cursor accesses codingbuddy skills through three patterns:
+
+ 1. **Auto-recommend** — AI calls `recommend_skills` based on intent detection
+ 2. **Browse and select** — User calls `list_skills` to discover, then `get_skill` to load
+ 3. **Slash-command** — User types `/<command>`, and the AI maps it to `get_skill`
+
  ### Using Skills in Cursor

- Load skills via file reference (monorepo only):
+ **Method 1: MCP Tool Chain (End Users — Recommended)**
+
+ The AI should follow this chain when a skill might apply:
+
+ 1. `recommend_skills({ prompt: "user's message" })` — Get skill recommendations
+ 2. `get_skill("skill-name")` — Load the recommended skill's full content
+ 3. Follow the skill instructions in the response
+
+ Example flow:
+ ```
+ User: "There is a bug in the authentication logic"
+ → AI calls recommend_skills({ prompt: "There is a bug in the authentication logic" })
+ → Response: { recommendations: [{ skillName: "systematic-debugging", ... }], nextAction: "Call get_skill..." }
+ → AI calls get_skill("systematic-debugging")
+ → AI follows the systematic-debugging skill instructions
+ ```
+
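The two-step chain can be sketched generically. This is a hedged illustration: `call_tool` stands in for whatever MCP client invocation Cursor uses, and the response fields (`recommendations`, `skillName`) follow the example flow above rather than a verified schema.

```python
def skill_chain(call_tool, user_message):
    """recommend_skills → get_skill, as described in Method 1."""
    rec = call_tool("recommend_skills", {"prompt": user_message})
    if not rec.get("recommendations"):
        return None                        # no skill applies; proceed normally
    best = rec["recommendations"][0]["skillName"]
    return call_tool("get_skill", {"name": best})
```

Returning `None` when no recommendation comes back keeps the chain optional: the AI simply answers without loading a skill.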
+ **Method 2: File Reference (Monorepo Contributors Only)**

  ```
  @packages/rules/.ai-rules/skills/test-driven-development/SKILL.md
  ```

- For end users, use `recommend_skills` MCP tool instead.
+ > **Note:** `parse_mode` already embeds matched skill content in `included_skills`; no separate `get_skill` call is needed when using mode keywords (PLAN/ACT/EVAL/AUTO).
+
+ ### Skill Discovery
+
+ Use `list_skills` to browse available skills before deciding which one to load:
+
+ ```
+ list_skills()                                    # Browse all skills
+ list_skills({ minPriority: 1, maxPriority: 3 })  # Filter by priority
+ ```
+
+ **Discovery flow:**
+
+ 1. `list_skills()` — Browse available skills and descriptions
+ 2. Identify the skill relevant to the current task
+ 3. `get_skill("skill-name")` — Load the full skill content
+ 4. Follow the skill instructions
+
+ > **Tip:** Use `recommend_skills` when you want the AI to automatically pick the best skill. Use `list_skills` when you want to manually browse and select.
+
+ ### Slash-Command Mapping
+
+ Cursor has no native slash-command skill invocation. When a user types `/<command>`, the AI must call `get_skill` — this is Cursor's equivalent of Claude Code's built-in Skill tool.
+
+ **Rule:** When user input matches `/<command>`, call `get_skill("<skill-name>")` and follow the returned instructions. This table is a curated subset — use `list_skills()` to discover all available skills.
+
+ | User Types | MCP Call |
+ |---|---|
+ | `/debug` or `/debugging` | `get_skill("systematic-debugging")` |
+ | `/tdd` | `get_skill("test-driven-development")` |
+ | `/brainstorm` | `get_skill("brainstorming")` |
+ | `/plan` or `/write-plan` | `get_skill("writing-plans")` |
+ | `/execute` or `/exec` | `get_skill("executing-plans")` |
+ | `/design` or `/frontend` | `get_skill("frontend-design")` |
+ | `/refactor` | `get_skill("refactoring")` |
+ | `/security` or `/audit` | `get_skill("security-audit")` |
+ | `/pr` | `get_skill("pr-all-in-one")` |
+ | `/review` or `/pr-review` | `get_skill("pr-review")` |
+ | `/parallel` or `/agents` | `get_skill("dispatching-parallel-agents")` |
+ | `/subagent` | `get_skill("subagent-driven-development")` |
+
+ For unrecognized slash commands, call `recommend_skills({ prompt: "<user's full message>" })` to find the closest match.
+
+ > **Disambiguation:** `/plan` (with slash prefix) triggers `get_skill("writing-plans")`. `PLAN` (without slash, at message start) triggers `parse_mode`. Similarly, `/execute` triggers `get_skill("executing-plans")` while `ACT` triggers `parse_mode`. The slash prefix is the distinguishing signal.
+
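The routing rule and the disambiguation note can be sketched as one dispatcher. The mapping below is a deliberately partial subset of the table for illustration; the function names are hypothetical, not part of the codingbuddy API.

```python
SLASH_TO_SKILL = {                      # curated subset of the table above
    "/debug": "systematic-debugging",
    "/tdd": "test-driven-development",
    "/plan": "writing-plans",
    "/execute": "executing-plans",
}
MODE_KEYWORDS = {"PLAN", "ACT", "EVAL", "AUTO"}

def route(message):
    """Return (tool, argument) for a user message, per the rules above."""
    first = message.split()[0]
    if first in MODE_KEYWORDS:                      # bare keyword → parse_mode
        return ("parse_mode", message)
    if first.startswith("/"):
        skill = SLASH_TO_SKILL.get(first.lower())
        if skill:
            return ("get_skill", skill)
        return ("recommend_skills", message)        # unknown slash command
    return None                                     # ordinary message
```

Note how `/plan ...` and `PLAN ...` take different branches: the slash prefix, checked after the mode-keyword test, is the distinguishing signal.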
+ ### Proactive Skill Activation
+
+ Unlike Claude Code, Cursor has no session hooks that automatically enforce skill invocation. The AI must detect intent patterns and call `recommend_skills` proactively — without waiting for the user to explicitly request a skill.
+
+ **Rule:** When the user's message suggests a skill would help, call `recommend_skills` at the start of the response — before any other action. The `recommend_skills` engine matches trigger patterns across multiple languages and is the authoritative source of truth.
+
+ Common trigger examples (not exhaustive):
+
+ | User Intent Signal | Likely Skill |
+ |---|---|
+ | Bug report, error, "not working", exception | `systematic-debugging` |
+ | "Brainstorm", "build", "create", "implement" | `brainstorming` |
+ | "Test first", TDD, write tests before code | `test-driven-development` |
+ | "Plan", "design", implementation approach | `writing-plans` |
+ | PR, commit, code review workflow | `pr-all-in-one` |
+
+ ```
+ User: "I need to plan the implementation for user authentication"
+ → AI calls recommend_skills({ prompt: "plan implementation for user authentication" })
+ → Loads writing-plans via get_skill
+ → Follows skill instructions to create structured plan
+ ```
+
+ > **Note:** When the user message starts with a mode keyword (`PLAN`, `ACT`, `EVAL`, `AUTO`), `parse_mode` already handles skill matching automatically via `included_skills` — no separate `recommend_skills` call is needed.

  ### Available Skills

+ Highlighted skills (use `list_skills()` for the complete list):
+
  - `brainstorming/SKILL.md` - Idea → Design
  - `test-driven-development/SKILL.md` - TDD workflow
  - `systematic-debugging/SKILL.md` - Systematic debugging
@@ -270,6 +448,72 @@ module.exports = {
  - Bug fixes needing comprehensive testing
  - Code quality improvements with measurable criteria

+ > **Cursor limitation:** AUTO mode has no enforcement mechanism in Cursor. See [Known Limitations](#known-limitations) for details.
+
+ ## Context Document Management
+
+ codingbuddy uses a fixed-path context document (`docs/codingbuddy/context.md`) to persist decisions across mode transitions.
+
+ ### How It Works
+
+ | Mode | Behavior |
+ |------|----------|
+ | PLAN / AUTO | Resets (clears) existing content and starts fresh |
+ | ACT / EVAL | Appends new section to existing content |
+
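The reset/append behavior in the table can be sketched as follows. This is an assumption-level illustration of the documented semantics, not the server's actual implementation; only the fixed path comes from this section.

```python
from pathlib import Path

def write_context(root, mode, section):
    """PLAN/AUTO reset docs/codingbuddy/context.md; ACT/EVAL append a section."""
    doc = Path(root) / "docs" / "codingbuddy" / "context.md"
    doc.parent.mkdir(parents=True, exist_ok=True)
    if mode in ("PLAN", "AUTO"):
        doc.write_text(section + "\n")             # start fresh
    else:                                          # ACT / EVAL
        existing = doc.read_text() if doc.exists() else ""
        doc.write_text(existing + section + "\n")  # append new section
    return doc
```

A practical consequence: starting a new `PLAN` discards everything `ACT` and `EVAL` appended in the previous cycle, which is why `update_context` should be called before each mode concludes.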
+ ### Required Workflow
+
+ 1. `parse_mode` automatically reads/creates the context document
+ 2. Review `contextDocument` in the response for previous decisions
+ 3. **Before completing each mode:** call `update_context` to persist current work
+
+ ### Available Tools
+
+ | Tool | Purpose |
+ |------|---------|
+ | `read_context` | Read current context document |
+ | `update_context` | Persist decisions, notes, progress, findings |
+ | `cleanup_context` | Summarize older sections to reduce document size |
+
+ ### Cursor-Specific Note
+
+ Unlike Claude Code, Cursor has no hooks to enforce `update_context` calls. You must **manually remember** to call `update_context` before concluding each mode to avoid losing context across sessions.
+
+ ## Known Limitations
+
+ The Cursor environment does not support several features available in Claude Code:
+
+ | Feature | Status | Workaround |
+ |---------|--------|------------|
+ | **Task tool** (background subagents) | ❌ Not available | Use `prepare_parallel_agents` for sequential execution |
+ | **Native Skill tool** (`/skill-name`) | ❌ Not available | Use MCP tool chain: `recommend_skills` → `get_skill` |
+ | **Session hooks** (PreToolUse, etc.) | ❌ Not available | Rely on `.cursor/rules/*.mdc` for always-on instructions |
+ | **Autonomous loop mechanism** | ❌ Not available | AUTO mode depends on Cursor AI voluntarily looping |
+ | **Context compaction hooks** | ❌ Not available | Manually call `update_context` before ending each mode |
+ | **`dispatch_agents` full usage** | ⚠️ Partial | Returns Claude Code-specific `dispatchParams`; use `prepare_parallel_agents` instead |
+ | **`restart_tui`** | ❌ Not applicable | Claude Code TUI-only tool |
+
+ ### AUTO Mode Reliability
+
+ AUTO mode is documented as an autonomous PLAN → ACT → EVAL cycle. In Cursor, this depends entirely on the AI model voluntarily continuing the loop — there is no enforcement mechanism like Claude Code's hooks. Results may vary:
+
+ - The AI may stop after one iteration instead of looping
+ - Quality exit criteria (`Critical = 0 AND High = 0`) are advisory, not enforced
+ - For reliable multi-iteration workflows, prefer manual `PLAN` → `ACT` → `EVAL` cycling
+
+ ## Verification Status
+
+ > Audit per [#609](https://github.com/JeremyDev87/codingbuddy/issues/609). Code-level analysis is complete; Cursor runtime verification is pending.
+
+ | Pattern | Status | Notes |
+ |---------|--------|-------|
+ | MCP Tools Table | ✅ Updated | All 18 tools documented (including 1 deprecated) |
+ | Mode keyword detection (imports.mdc) | ⚠️ Code-verified | `parse_mode` handler exists; Cursor AI invocation depends on model behavior |
+ | File pattern → Agent mapping (auto-agent.mdc) | ⚠️ Code-verified | Mappings reference valid agents; Cursor auto-apply depends on glob matching |
+ | AUTO mode workflow | ⚠️ Documented with caveat | No enforcement mechanism in Cursor; see Known Limitations |
+ | Context document management | ✅ Documented | New section added with Cursor-specific guidance |
+ | Known Limitations | ✅ Added | Task tool, hooks, autonomous loop, TUI limitations documented |
+
  ## Reference

  - [AGENTS.md Official Spec](https://agents.md)