codingbuddy-rules 4.3.0 → 4.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai-rules/adapters/antigravity.md +648 -160
- package/.ai-rules/adapters/codex.md +500 -10
- package/.ai-rules/adapters/cursor.md +252 -8
- package/.ai-rules/adapters/kiro.md +551 -93
- package/.ai-rules/adapters/opencode-skills.md +179 -188
- package/.ai-rules/adapters/opencode.md +245 -44
- package/.ai-rules/skills/README.md +92 -24
- package/.ai-rules/skills/agent-design/SKILL.md +269 -0
- package/.ai-rules/skills/code-explanation/SKILL.md +259 -0
- package/.ai-rules/skills/context-management/SKILL.md +244 -0
- package/.ai-rules/skills/deployment-checklist/SKILL.md +233 -0
- package/.ai-rules/skills/documentation-generation/SKILL.md +293 -0
- package/.ai-rules/skills/error-analysis/SKILL.md +250 -0
- package/.ai-rules/skills/legacy-modernization/SKILL.md +292 -0
- package/.ai-rules/skills/mcp-builder/SKILL.md +356 -0
- package/.ai-rules/skills/prompt-engineering/SKILL.md +318 -0
- package/.ai-rules/skills/rule-authoring/SKILL.md +273 -0
- package/.ai-rules/skills/security-audit/SKILL.md +241 -0
- package/.ai-rules/skills/tech-debt/SKILL.md +224 -0
- package/package.json +1 -1
@@ -0,0 +1,318 @@
---
name: prompt-engineering
description: Use when writing prompts for AI tools, optimizing agent system prompts, or designing AI-readable instructions. Covers prompt structure, meta-prompting, chain-of-thought, and tool-specific optimization.
---

# Prompt Engineering

## Overview

A prompt is an API contract with an AI system. Precision matters. Ambiguous prompts produce inconsistent results; clear prompts produce consistent, predictable behavior.

**Core principle:** Prompts are executable specifications. Write them like you write tests: with clear inputs, expected behavior, and success criteria.

**Iron Law:**
```
TEST YOUR PROMPT WITH AT LEAST 3 DIFFERENT INPUTS BEFORE USING IN PRODUCTION
One input is anecdote. Three is pattern. Ten is confidence.
```

## When to Use

- Writing system prompts for codingbuddy agents
- Creating CLAUDE.md / .cursorrules instructions
- Designing tool descriptions for MCP servers
- Optimizing prompts that produce inconsistent results
- Building prompt chains for multi-step workflows

## Prompt Anatomy

Every effective prompt has these components:

```
┌─────────────────────────────────────────┐
│ ROLE         Who/what is the AI?        │
│ CONTEXT      What situation are we in?  │
│ TASK         What specifically to do?   │
│ CONSTRAINTS  What rules must be obeyed? │
│ FORMAT       How to structure output?   │
│ EXAMPLES     Show, don't just tell      │
└─────────────────────────────────────────┘
```

Not every prompt needs all components, but most production prompts need most of them.
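
The anatomy above can be treated as data and assembled mechanically. A minimal sketch — the `PromptSpec` type and `buildPrompt` helper are illustrative names, not part of codingbuddy:

```typescript
// Minimal sketch: the prompt anatomy as a typed spec, assembled in
// ROLE → CONTEXT → TASK → CONSTRAINTS → FORMAT → EXAMPLES order.
interface PromptSpec {
  role?: string;
  context?: string;
  task: string;
  constraints?: string[];
  format?: string;
  examples?: { input: string; output: string }[];
}

function buildPrompt(spec: PromptSpec): string {
  const parts: string[] = [];
  if (spec.role) parts.push(`You are ${spec.role}.`);
  if (spec.context) parts.push(`Context: ${spec.context}`);
  parts.push(`Your task: ${spec.task}`);
  if (spec.constraints?.length) {
    parts.push(spec.constraints.map((c) => `- ${c}`).join("\n"));
  }
  if (spec.format) parts.push(`Output format: ${spec.format}`);
  for (const ex of spec.examples ?? []) {
    parts.push(`Input: ${ex.input}\nOutput: ${ex.output}`);
  }
  return parts.join("\n\n");
}
```

Omitting a field simply drops that section, matching the rule that not every prompt needs all components.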

## Prompt Patterns

### Pattern 1: Role + Task (Basic)

```
You are a [specific role].

Your task: [specific action] for [specific context].
```

**Example:**
```
You are a TypeScript code reviewer specializing in security.

Your task: Review the authentication module below for OWASP Top 10 vulnerabilities.
Output a list of findings ordered by severity (Critical → High → Medium → Low).
```

### Pattern 2: Chain-of-Thought (Complex Reasoning)

Force step-by-step reasoning before conclusions:

```
Before answering, think through:
1. [First consideration]
2. [Second consideration]
3. [Third consideration]

Then provide your conclusion.
```

**Example:**
```
Before suggesting a fix, think through:
1. What is the root cause of this bug?
2. What are the possible fix approaches?
3. What are the trade-offs of each approach?
4. Which approach has the least risk?

Then provide your recommendation with rationale.
```

### Pattern 3: Few-Shot (Examples)

Show the AI what good output looks like:

```
[Task description]

Examples:

Input: [example input 1]
Output: [example output 1]

Input: [example input 2]
Output: [example output 2]

Now complete:
Input: [actual input]
Output:
```

**Example (agent expertise):**
```
Format agent expertise as specific, actionable skills.

Examples:

Input: security
Output: "OWASP Top 10 Vulnerability Assessment", "JWT Authentication Design", "SQL Injection Prevention"

Input: databases
Output: "PostgreSQL Query Optimization", "Zero-Downtime Schema Migrations", "Connection Pool Tuning"

Now complete:
Input: frontend
Output:
```
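
The few-shot scaffold is regular enough to generate programmatically. A minimal sketch, assuming a plain string assembler — `fewShotPrompt` is an illustrative helper, not a codingbuddy API:

```typescript
// Assemble a few-shot prompt from a task description, worked
// examples, and the actual input, following the scaffold above.
interface Shot {
  input: string;
  output: string;
}

function fewShotPrompt(task: string, examples: Shot[], input: string): string {
  const shots = examples
    .map((ex) => `Input: ${ex.input}\nOutput: ${ex.output}`)
    .join("\n\n");
  return `${task}\n\nExamples:\n\n${shots}\n\nNow complete:\nInput: ${input}\nOutput:`;
}
```

Two or three examples are usually enough; keeping them in one place makes it easy to swap in new ones when a prompt drifts.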

### Pattern 4: Constraint-First

Lead with what NOT to do (especially useful for restrictive behaviors):

```
NEVER [prohibited action].
ALWAYS [required action].
If [edge case], then [specific handling].

Your task: [task description]
```

**Example:**
```
NEVER include markdown formatting in your output — plain text only.
ALWAYS include the file path in references (e.g., src/app.ts:42).
If you are uncertain, say "I'm not sure" rather than guessing.

Your task: Analyze the build error below and identify the root cause.
```

### Pattern 5: Meta-Prompting

Prompt the AI to generate or improve prompts:

```
I need a prompt for [purpose].

The prompt should:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]

Generate 3 prompt variations ranked from most to least structured.
```

## System Prompt Design (codingbuddy Agents)

Agent system prompts follow a specific structure:

```markdown
# [Agent Display Name]

You are a [role] specialist focused on [narrow domain].

## Your Expertise

You excel at:
- [Specific skill 1 — concrete, not vague]
- [Specific skill 2]
- [Specific skill 3]

## Your Approach

When activated:
1. [First action — what you always do first]
2. [Analysis approach]
3. [Output structure]

## What You Do NOT Handle

Redirect to appropriate specialists for:
- [Out-of-scope 1] → use [other-agent-name]
- [Out-of-scope 2] → use [other-agent-name]

## Output Format

Structure all responses as:
### Findings
[Severity: Critical/High/Medium/Low] — [Description]

### Recommendations
[Prioritized list]
```

## MCP Tool Description Design

Tool descriptions are prompts read by the AI to decide when and how to use a tool:

```typescript
{
  name: 'search_rules',
  // ❌ Bad: vague
  // description: 'Search rules',

  // ✅ Good: specific use case + when to use
  description: 'Search AI coding rules by keyword or topic. Use this when looking for specific guidelines about coding practices, TDD, security, or workflow modes. Returns matching rules with their content.',

  inputSchema: {
    properties: {
      query: {
        type: 'string',
        // ❌ Bad: no guidance
        // description: 'Query string',

        // ✅ Good: what makes a good query
        description: 'Search term such as "TDD", "security", "TypeScript strict", or a question like "how to handle migrations"',
      }
    }
  }
}
```

## Prompt Testing

### Test Matrix

For each prompt, test these dimensions:

| Dimension | Test Cases |
|-----------|-----------|
| Happy path | Ideal input, expected output |
| Edge cases | Empty input, very long input, unusual characters |
| Adversarial | Input designed to break the constraints |
| Ambiguous | Input that could be interpreted multiple ways |

### Evaluation Criteria

```markdown
## Prompt Evaluation Rubric

**Accuracy:** Does output match expected behavior? (1-5)
**Consistency:** Same input → same output across runs? (1-5)
**Format compliance:** Does output match requested format? (1-5)
**Boundary respect:** Does it honor constraints? (1-5)
**Efficiency:** Is it unnecessarily verbose? (1-5)

Total: 25 points. Target: ≥ 20 for production use.
```
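
The format-compliance and boundary-respect rows are mechanically checkable. A minimal sketch, assuming a JSON output format and a "no markdown" constraint as the rules under test — both are illustrative, as is the `checkOutput` helper:

```typescript
// Check one prompt output against two mechanical criteria:
// format compliance (parses as JSON) and boundary respect (no markdown chars).
interface CheckResult {
  formatOk: boolean;
  boundaryOk: boolean;
}

function checkOutput(output: string): CheckResult {
  let formatOk = true;
  try {
    JSON.parse(output);
  } catch {
    formatOk = false;
  }
  // Boundary: the (illustrative) constraint "no markdown formatting".
  const boundaryOk = !/[*#`]/.test(output);
  return { formatOk, boundaryOk };
}

// Enforce the Iron Law: at least 3 diverse inputs, all passing.
function allPass(outputs: string[]): boolean {
  return (
    outputs.length >= 3 &&
    outputs.every((o) => {
      const r = checkOutput(o);
      return r.formatOk && r.boundaryOk;
    })
  );
}
```

Accuracy and consistency still need human or model-graded review; automating the mechanical rows keeps that review focused.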

## Tool-Specific Optimization

### Claude Code (CLAUDE.md)

```markdown
## Best practices for Claude Code instructions:

- Use ## headers to organize sections
- Bold critical rules: **NEVER do X**
- Use code blocks for exact command examples
- Include trigger conditions: "When user types PLAN..."
- Reference other files: "See .claude/rules/tool-priority.md"
```

### Cursor (.cursorrules)

```markdown
## Best practices for Cursor rules:

- One rule per line for reliable parsing
- Start each rule with the trigger: "When writing TypeScript..."
- Avoid multi-paragraph rules (Cursor truncates)
- Use concrete examples, not abstract principles
```

### GitHub Copilot (.github/copilot-instructions.md)

```markdown
## Best practices for Copilot instructions:

- Lead with task-oriented rules
- Examples are more effective than descriptions
- Copilot follows "do X" more reliably than "don't do Y"
- Keep total instructions under 8000 characters
```

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Vague task ("be helpful") | Specific task ("list 3 alternatives with trade-offs") |
| No format specification | Add explicit format: "Respond as JSON: {field: value}" |
| Contradictory constraints | Test for conflicts before deploying |
| No examples for complex tasks | Add 2-3 few-shot examples |
| Testing with one input | Test with at least 3 diverse inputs |
| Long monolithic prompt | Break into focused sections with headers |

## Quick Reference

```
Prompt Length Guidelines:
──────────────────────────
Simple instruction   → 50-100 tokens
Structured prompt    → 100-500 tokens
Agent system prompt  → 500-2000 tokens
Complex workflow     → 2000-4000 tokens
(>4000 = consider splitting into sub-prompts)

Reliability Ranking (most to least reliable):
──────────────────────────────────────────────
1. Explicit format + examples (highest)
2. Explicit format, no examples
3. Implicit format with examples
4. Implicit format, no examples (lowest)
```

@@ -0,0 +1,273 @@
---
name: rule-authoring
description: Use when writing AI coding rules for codingbuddy that must work consistently across multiple AI tools (Cursor, Claude Code, Codex, GitHub Copilot, Amazon Q, Kiro). Covers rule clarity, trigger design, and multi-tool compatibility.
---

# Rule Authoring

## Overview

AI coding rules are instructions that shape how AI assistants behave. Poorly written rules are ignored, misinterpreted, or cause inconsistent behavior across tools.

**Core principle:** Rules must be unambiguous, actionable, and testable. If an AI assistant can interpret a rule two different ways, it will choose the wrong one at the worst moment.

**Iron Law:**
```
EVERY RULE MUST HAVE A TESTABLE "DID IT WORK?" CRITERION
```

## When to Use

- Writing new rules for `.ai-rules/rules/`
- Updating existing rules that produce inconsistent behavior
- Adapting rules for a new AI tool (cursor, codex, q, kiro)
- Auditing rules for ambiguity or overlap

## Rule Quality Criteria

A good rule is:

| Quality | Bad Example | Good Example |
|---------|-------------|--------------|
| **Specific** | "Write good code" | "Functions must have a single return type" |
| **Actionable** | "Be careful with auth" | "All endpoints must check authentication before executing" |
| **Testable** | "Follow best practices" | "Test coverage must be ≥ 80% for new files" |
| **Bounded** | "Always use TypeScript" | "Use TypeScript strict mode in all .ts files" |
| **Non-overlapping** | Two rules about the same thing | One rule per concern |

## Rule Structure

### Core Rule Format

```markdown
## [Rule Category]: [Rule Name]

**When:** [Trigger condition — when does this rule apply?]

**Do:** [Specific action to take]

**Don't:** [Specific anti-pattern to avoid]

**Example:**
\`\`\`typescript
// ✅ Good
function getUser(id: string): Promise<User>

// ❌ Bad
function getUser(id: any): any
\`\`\`

**Why:** [One-sentence rationale]
```

### Rule File Structure

```markdown
# [Category Name]

Brief description of what rules in this file govern.

## Rules

### Rule 1: [Name]
...

### Rule 2: [Name]
...

## Rationale

Why these rules exist for this project.

## Exceptions

Cases where these rules do not apply (keep this list short).
```

## Writing Process

### Phase 1: Identify the Rule Need

```
1. What behavior is inconsistent or incorrect?
   → "AI assistants sometimes use 'any' type in TypeScript"

2. What is the desired behavior?
   → "All variables must have explicit types"

3. What is the trigger condition?
   → "When writing TypeScript code"

4. Can an AI verify compliance?
   → "Yes: TypeScript compiler will error on 'any' in strict mode"

5. Is there already a rule covering this?
   → Check existing rules in .ai-rules/rules/
```

### Phase 2: Write the Rule

**Template:**
```markdown
### [Rule Name]

**When:** [Specific trigger condition]

**Do:** [Concrete action in imperative mood]

**Don't:** [Specific anti-pattern]

**Check:** [How to verify the rule was followed]
```

**Good examples:**
```markdown
### No `any` Type

**When:** Writing TypeScript code

**Do:** Always specify explicit types for function parameters and return values

**Don't:** Use `any` type — use `unknown` for truly unknown types, then narrow

**Check:** TypeScript compiler passes with `noImplicitAny: true` in tsconfig

---

### Test Before Implement (TDD)

**When:** Implementing a new function or feature

**Do:** Write the failing test first, then write minimal implementation to pass it

**Don't:** Write implementation first and add tests after

**Check:** Running tests shows RED (failure) before GREEN (pass)
```
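
The When/Do/Don't/Check template is easy to enforce mechanically. A minimal sketch of a structure check — the `validateRule` helper is illustrative, not part of codingbuddy:

```typescript
// Verify a rule block contains every required field of the
// **When** / **Do** / **Don't** / **Check** template; return what's missing.
const REQUIRED_FIELDS = ["**When:**", "**Do:**", "**Don't:**", "**Check:**"];

function validateRule(ruleText: string): string[] {
  return REQUIRED_FIELDS.filter((field) => !ruleText.includes(field));
}
```

An empty result means the rule is structurally complete; anything returned names the missing field, which doubles as the review comment.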

### Phase 3: Test Multi-Tool Compatibility

Different AI tools parse rules differently. Test your rules with each tool:

```
Compatibility checklist for each new rule:

Claude Code:
- [ ] Rule triggers correctly from CLAUDE.md or custom-instructions.md
- [ ] Rule doesn't conflict with default Claude behavior

Cursor:
- [ ] Rule works in .cursorrules or .cursor/rules/
- [ ] Pattern matching works as expected

GitHub Copilot / Codex:
- [ ] Rule understandable from .github/copilot-instructions.md
- [ ] No Copilot-specific syntax required

Amazon Q:
- [ ] Compatible with .q/rules/ format

Kiro:
- [ ] Compatible with .kiro/ format
```

### Phase 4: Anti-Ambiguity Review

Read each rule and ask: "Could this be interpreted two different ways?"

**Ambiguity red flags:**
```
❌ "appropriate"    → What's appropriate? Define it.
❌ "when necessary" → When is that? Specify the condition.
❌ "best practices" → Which ones? List them.
❌ "avoid"          → How strongly? Use "never" or "prefer X over Y".
❌ "clean code"     → What does clean mean? Measurable criteria only.
```

**Ambiguity fixes:**
```
❌ "Use appropriate error handling"
✅ "Catch specific error types, never catch Exception or Error base class"

❌ "Write clean functions"
✅ "Functions must be ≤ 30 lines and have a single return type"

❌ "When necessary, add comments"
✅ "Add comments only for non-obvious logic. Self-documenting code needs no comments."
```
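
The red-flag vocabulary can be scanned for automatically. A minimal sketch of an ambiguity linter — the word list mirrors the red flags above, and `lintAmbiguity` is an illustrative helper:

```typescript
// Flag vague phrases from the red-flag list in a rule's text,
// reporting each phrase found so the author can replace it.
const VAGUE_PHRASES = [
  "appropriate",
  "when necessary",
  "best practices",
  "avoid",
  "clean code",
];

function lintAmbiguity(ruleText: string): string[] {
  const lower = ruleText.toLowerCase();
  return VAGUE_PHRASES.filter((phrase) => lower.includes(phrase));
}
```

Treat hits as review prompts rather than hard failures — "avoid" may be legitimate inside a quoted example.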

## Rule Categories

| Category | File | Covers |
|----------|------|--------|
| Core workflow | `rules/core.md` | PLAN/ACT/EVAL modes, TDD |
| Project | `rules/project.md` | Tech stack, architecture |
| Augmented coding | `rules/augmented-coding.md` | Code quality, testing |

## Adapter-Specific Formatting

### Claude Code (`adapters/claude-code.md`)

```markdown
## Claude Code Specific Rules

- Use `parse_mode` for PLAN/ACT/EVAL detection
- Follow `dispatch_agents` pattern for parallel agents
- Context persists via `docs/codingbuddy/context.md`
```

### Cursor (`adapters/cursor.md`)

```markdown
## Cursor-Specific Rules

Rules in `.cursorrules` are parsed line-by-line.
Keep rules to one line each for Cursor compatibility.
```

### Codex (`adapters/codex.md`)

```markdown
## GitHub Copilot / Codex Rules

Place in `.github/copilot-instructions.md`.
Copilot prefers explicit examples over abstract rules.
```

## Rule Maintenance

### Auditing Existing Rules

```bash
# List rule changes from the last 6 months (rules absent here are audit candidates)
git log --since="6 months ago" -- packages/rules/.ai-rules/rules/

# Find duplicate rule concepts
grep -h "^###" packages/rules/.ai-rules/rules/*.md | sort | uniq -d
```

**Quarterly audit questions:**
1. Is this rule still relevant to our current stack?
2. Is this rule being followed consistently?
3. Does this rule conflict with any new tool defaults?
4. Are there new patterns that need new rules?

## Quick Reference

```
Rule Strength Vocabulary:
─────────────────────────
MUST / ALWAYS      → Required, no exceptions
SHOULD / PREFER    → Default behavior, exceptions allowed
AVOID / PREFER NOT → Discouraged, explain if used
NEVER / MUST NOT   → Prohibited
```

## Red Flags — STOP

| Thought | Reality |
|---------|---------|
| "This rule is obvious" | Write it anyway — different AI tools need explicit guidance |
| "The existing rule covers this" | Check carefully — overlap causes conflicts |
| "Rules don't need testing" | Test with each target AI tool |
| "Abstract rules are more flexible" | Abstract rules are ignored or misapplied |