rubocop-prompt 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.rspec +3 -0
- data/.rubocop.yml +18 -0
- data/AGENTS.md +37 -0
- data/CHANGELOG.md +23 -0
- data/CODE_OF_CONDUCT.md +132 -0
- data/LICENSE.txt +21 -0
- data/README.md +604 -0
- data/Rakefile +12 -0
- data/config/default.yml +31 -0
- data/docs/development-guidelines.md +31 -0
- data/docs/project-overview.md +32 -0
- data/docs/rubocop-integration.md +26 -0
- data/lib/rubocop/cop/prompt/critical_first_last.rb +146 -0
- data/lib/rubocop/cop/prompt/invalid_format.rb +95 -0
- data/lib/rubocop/cop/prompt/max_tokens.rb +111 -0
- data/lib/rubocop/cop/prompt/missing_stop.rb +145 -0
- data/lib/rubocop/cop/prompt/system_injection.rb +99 -0
- data/lib/rubocop/cop/prompt/temperature_range.rb +235 -0
- data/lib/rubocop/prompt/plugin.rb +31 -0
- data/lib/rubocop/prompt/version.rb +7 -0
- data/lib/rubocop/prompt.rb +16 -0
- data/sig/rubocop/prompt.rbs +6 -0
- metadata +138 -0
data/README.md
ADDED
@@ -0,0 +1,604 @@
# RuboCop::Prompt

A RuboCop plugin for analyzing and improving AI prompt quality in Ruby code. This gem provides cops to detect common anti-patterns in AI prompt engineering, helping developers write better prompts for LLM interactions.

## Why Use RuboCop::Prompt?

AI prompt engineering is critical for reliable LLM applications, but common mistakes can lead to:
- **Security vulnerabilities** from prompt injection
- **Unexpected costs** from runaway token generation
- **Poor AI responses** from badly structured prompts
- **Inconsistent results** from inappropriate temperature settings

This plugin helps catch these issues early in your development cycle.

## Available Cops

| Cop | Purpose | Key Benefit |
|-----|---------|-------------|
| **Prompt/InvalidFormat** | Ensures prompts start with Markdown headings | Better structure and readability |
| **Prompt/CriticalFirstLast** | Keeps important sections at beginning/end | Prevents buried critical instructions |
| **Prompt/SystemInjection** | Detects dynamic interpolation vulnerabilities | Prevents prompt injection attacks |
| **Prompt/MaxTokens** | Limits token count using tiktoken_ruby | Controls costs and context limits |
| **Prompt/MissingStop** | Requires stop/max_tokens parameters | Prevents runaway generation |
| **Prompt/TemperatureRange** | Validates temperature for task type | Ensures appropriate creativity levels |

## Quick Start

### 1. Installation

Add this line to your application's Gemfile:

```ruby
gem 'rubocop-prompt'
```

And then execute:

```bash
bundle install
```

Or install it yourself as:

```bash
gem install rubocop-prompt
```

### 2. Configuration

Add the following to your `.rubocop.yml`:

```yaml
plugins:
  - rubocop-prompt

# Enable all cops with recommended settings
Prompt/InvalidFormat:
  Enabled: true

Prompt/CriticalFirstLast:
  Enabled: true

Prompt/SystemInjection:
  Enabled: true

Prompt/MaxTokens:
  Enabled: true
  MaxTokens: 4000 # Customize for your model's context window

Prompt/MissingStop:
  Enabled: true

Prompt/TemperatureRange:
  Enabled: true
```

### 3. Run RuboCop

```bash
bundle exec rubocop
```

That's it! RuboCop will now analyze your prompt-related code and suggest improvements.

## Detailed Cop Documentation

### 🛡️ Prompt/SystemInjection

**Purpose**: Prevents prompt injection vulnerabilities by detecting dynamic variable interpolation in SYSTEM heredocs.

**Why it matters**: User input directly interpolated into system prompts can allow attackers to override your instructions.

<details>
<summary>Show examples</summary>

Detects dynamic variable interpolation in SYSTEM heredocs to prevent prompt injection vulnerabilities.

This cop identifies code in classes, modules, or methods with "prompt" in their names and ensures that SYSTEM heredocs do not contain dynamic variable interpolations like `#{user_msg}`.

**Bad:**
```ruby
class PromptHandler
  def generate_system_prompt(user_input)
    <<~SYSTEM
      You are an AI assistant. User said: #{user_input}
    SYSTEM
  end
end
```

**Good:**
```ruby
class PromptHandler
  def generate_system_prompt
    <<~SYSTEM
      You are an AI assistant.
    SYSTEM
  end

  # Handle user input separately
  def process_user_input(user_input)
    # Process and sanitize user input here
  end
end
```

</details>

---

### 📏 Prompt/MaxTokens

**Purpose**: Ensures prompt content stays within token limits using accurate tiktoken_ruby calculations.

**Why it matters**: Exceeding token limits can cause API errors or unexpected truncation of your carefully crafted prompts.

<details>
<summary>Show examples</summary>

Checks that documentation text in prompt-related code doesn't exceed the maximum token limit using tiktoken_ruby.

This cop identifies code in classes, modules, or methods with "prompt" in their names and calculates the token count for any string literals or heredoc content. By default, it warns when the content exceeds 4000 tokens, which is suitable for most LLM contexts.

**Key Features:**
- Uses `tiktoken_ruby` with `cl100k_base` encoding (GPT-3.5/GPT-4 compatible)
- Configurable token limit via `MaxTokens` setting
- Includes fallback token approximation if tiktoken_ruby fails
- Only analyzes prompt-related contexts to avoid false positives

**Bad:**
```ruby
class PromptGenerator
  def create_system_prompt
    # This example assumes a very long prompt that exceeds the token limit
    <<~PROMPT
      # System Instructions

      You are an AI assistant with extensive knowledge about many topics.
      [... thousands of lines of detailed instructions that exceed 4000 tokens ...]
      Please follow all these detailed guidelines carefully.
    PROMPT
  end
end
```

**Good:**
```ruby
class PromptGenerator
  def create_system_prompt
    <<~PROMPT
      # System Instructions

      You are a helpful AI assistant.

      ## Guidelines
      - Be concise and accurate
      - Ask for clarification when needed
      - Provide helpful responses
    PROMPT
  end

  # For complex prompts, consider breaking them into smaller, focused components
  def create_specialized_prompt(domain)
    base_prompt = create_system_prompt
    domain_specific = load_domain_instructions(domain) # Keep each part manageable
    "#{base_prompt}\n\n#{domain_specific}"
  end
end
```

**Configuration:**
```yaml
Prompt/MaxTokens:
  MaxTokens: 4000 # Default: 4000 tokens
  # MaxTokens: 8000 # For models with larger context windows
  # MaxTokens: 2000 # For more conservative token usage
```

</details>
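The counting strategy described above can be sketched as a small helper. This is a hypothetical illustration, not the cop's internal implementation: it assumes the `tiktoken_ruby` gem's `Tiktoken.get_encoding` API, and falls back to a rough characters-per-token heuristic when the gem is unavailable.

```ruby
# Hypothetical token counter mirroring the cop's approach:
# exact counting via tiktoken_ruby, with an approximate fallback.
def count_tokens(text)
  require "tiktoken_ruby"
  # cl100k_base is the encoding used by GPT-3.5/GPT-4 models
  Tiktoken.get_encoding("cl100k_base").encode(text).length
rescue LoadError, StandardError
  # Fallback heuristic: English text averages roughly 4 characters per token
  (text.length / 4.0).ceil
end
```

Either path yields a count you can compare against a `MaxTokens`-style limit; the fallback overestimates slightly for dense text, which errs on the safe side.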

---

### ⏹️ Prompt/MissingStop

**Purpose**: Ensures OpenAI API calls include proper termination parameters to prevent runaway generation.

**Why it matters**: Without stop conditions, AI responses can continue indefinitely, consuming excessive tokens and costs.

<details>
<summary>Show examples</summary>

Ensures that OpenAI::Client.chat calls include a `stop:` or `max_tokens:` parameter to prevent runaway generation.

This cop identifies OpenAI::Client.chat method calls and ensures they include either a `stop:` or `max_tokens:` parameter. These parameters are essential for controlling generation length and preventing unexpectedly long responses that could consume excessive tokens or processing time.

**Key Features:**
- Detects explicit OpenAI::Client.new.chat calls
- Checks for presence of `stop:` or `max_tokens:` parameters
- Helps prevent runaway generation and unexpected token consumption
- Only analyzes explicit OpenAI::Client calls to avoid false positives

**Bad:**
```ruby
class ChatService
  def generate_response
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }]
      }
    )
  end
end
```

**Good:**
```ruby
class ChatService
  def generate_response_with_limit
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }],
        max_tokens: 100
      }
    )
  end

  def generate_response_with_stop
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }],
        stop: ["END", "\n"]
      }
    )
  end

  def generate_response_with_both
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }],
        max_tokens: 1000,
        stop: ["END"]
      }
    )
  end
end
```

</details>

---

### 🌡️ Prompt/TemperatureRange

**Purpose**: Validates that temperature settings match the task requirements (low for precision, high for creativity).

**Why it matters**: Using high temperature (>0.7) for analytical tasks reduces accuracy, while low temperature limits creativity.

<details>
<summary>Show examples</summary>

Ensures that high temperature values (> 0.7) are not used for precision tasks requiring accuracy.

This cop identifies code in classes, modules, or methods with "prompt" in their names and ensures that when temperature > 0.7, it's not being used for tasks requiring precision, accuracy, or factual correctness. High temperature values increase randomness and creativity but can reduce accuracy for analytical tasks.

**Key Features:**
- Detects temperature values > 0.7 in chat/completion API calls
- Analyzes message content for precision-related keywords
- Helps ensure appropriate temperature settings for different task types
- Only triggers for prompt-related code contexts

**Bad:**
```ruby
class PromptGenerator
  def analysis_prompt
    OpenAI::Client.new.chat(
      parameters: {
        temperature: 0.9, # Too high for precision task
        messages: [
          { role: "system", content: "Analyze this data accurately and provide precise results" }
        ]
      }
    )
  end

  def calculation_prompt
    client.chat(
      temperature: 0.8, # Too high for calculation task
      messages: [
        { role: "user", content: "Calculate the exact total of these numbers" }
      ]
    )
  end
end
```

**Good:**
```ruby
class PromptGenerator
  # Low temperature for precision tasks
  def analysis_prompt
    OpenAI::Client.new.chat(
      parameters: {
        temperature: 0.3, # Appropriate for precision
        messages: [
          { role: "system", content: "Analyze this data accurately and provide precise results" }
        ]
      }
    )
  end

  # High temperature is fine for creative tasks
  def creative_prompt
    client.chat(
      parameters: {
        temperature: 0.9, # Fine for creative tasks
        messages: [
          { role: "user", content: "Write a creative story about space adventures" }
        ]
      }
    )
  end
end
```

**Detected precision keywords**: accurate, accuracy, precise, exact, analyze, calculate, fact, factual, classify, code, debug, technical, and others.

</details>
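The keyword-based check described above can be illustrated with a short sketch. This is a hypothetical simplification of the cop's logic, using a subset of the listed keywords: a call is flagged only when the temperature is high *and* the message content looks precision-oriented.

```ruby
# Hypothetical sketch of the temperature check: flag high temperatures
# only when the prompt text mentions precision-oriented keywords.
PRECISION_KEYWORDS = %w[accurate accuracy precise exact analyze calculate
                        fact factual classify debug technical].freeze

def risky_temperature?(temperature, content)
  temperature > 0.7 &&
    PRECISION_KEYWORDS.any? { |kw| content.downcase.include?(kw) }
end
```

Under this rule, `risky_temperature?(0.9, "Analyze this data accurately")` is flagged, while the same temperature with creative-writing content is not.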

---

### 📋 Prompt/InvalidFormat

**Purpose**: Enforces Markdown formatting conventions in system prompts for better structure and readability.

**Why it matters**: Well-structured prompts with clear headings are easier to maintain and more effective.

<details>
<summary>Show examples</summary>

Ensures system prompts follow Markdown formatting conventions for better structure and readability.

**Anti-pattern**: System prompts without clear structure
```ruby
# Bad - will trigger offense
class PromptService
  def call
    { system: "You are an AI assistant." }
  end
end
```

**Good practice**: System prompts that start with Markdown headings
```ruby
# Good - properly structured
class PromptService
  def call
    {
      system: <<~PROMPT
        # System Instructions

        You are an AI assistant that helps users with their questions.

        ## Guidelines
        - Be helpful and accurate
        - Provide clear explanations
      PROMPT
    }
  end
end
```

</details>

---

### 🎯 Prompt/CriticalFirstLast

**Purpose**: Ensures important labeled sections (### text) appear at the beginning or end, not buried in the middle.

**Why it matters**: Critical instructions in the middle of prompts are often overlooked by AI models due to attention bias.

<details>
<summary>Show examples</summary>

Ensures that labeled sections (### text) appear at the beginning or end of content, not in the middle sections.

**Anti-pattern**: Critical labeled sections buried in the middle
```ruby
# Bad - will trigger offense
class PromptHandler
  def process
    {
      system: <<~PROMPT
        # System Instructions
        You are an AI assistant.
        Please help users with their questions.
        ### Important Note
        This is a critical section.
        More instructions follow.
        Final instructions here.
      PROMPT
    }
  end
end
```

**Good practice**: Critical sections at beginning or end
```ruby
# Good - critical section at beginning
class PromptHandler
  def process
    {
      system: <<~PROMPT
        ### Important Note
        This is a critical section.
        # System Instructions
        You are an AI assistant.
        Please help users with their questions.
      PROMPT
    }
  end
end

# Good - critical section at end
class PromptHandler
  def process
    {
      system: <<~PROMPT
        # System Instructions
        You are an AI assistant.
        Please help users with their questions.
        ### Important Note
        This is a critical section.
      PROMPT
    }
  end
end
```

</details>

---

## 🔍 Scope and Detection

All cops are designed to be **smart and focused**:
- Only analyze files with "prompt" in class/module/method names (case-insensitive)
- Avoid false positives in unrelated code
- Focus on actual AI/LLM integration patterns
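
The scoping rule above amounts to a case-insensitive name match. The helper below is a hypothetical illustration of that idea, not the gem's actual implementation:

```ruby
# Illustrative scope check: a context counts as prompt-related when any
# class, module, or method name contains "prompt" (case-insensitive).
PROMPT_NAME = /prompt/i

def prompt_related?(*names)
  names.any? { |name| name.to_s.match?(PROMPT_NAME) }
end
```

So `UserPromptGenerator` or a `build_prompt` method is in scope, while a `DatabaseService#config` that happens to use a `system:` key is ignored.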

## 💡 Quick Examples

### ❌ Code That Triggers Offenses

```ruby
# Triggers Prompt/InvalidFormat - no heading
class UserPromptGenerator
  def system_message
    { system: "Help the user with their request" }
  end
end

# Triggers Prompt/SystemInjection - dangerous interpolation
class ChatPromptService
  def generate_system_prompt(user_input)
    <<~SYSTEM
      You are an AI assistant. User said: #{user_input}
    SYSTEM
  end
end

# Triggers Prompt/MissingStop - no termination control
class ChatService
  def generate_response
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }]
      }
    )
  end
end
```

### ✅ Code That Follows Best Practices

```ruby
# ✅ Proper formatting with headings
class PromptBuilder
  def build
    {
      system: <<~MARKDOWN
        # AI Assistant Instructions

        You are a helpful AI assistant.

        ## Guidelines
        - Be helpful and accurate
        - Ask for clarification when needed
      MARKDOWN
    }
  end
end

# ✅ Safe prompt handling without injection risks
class SecurePromptService
  def generate_system_prompt
    <<~SYSTEM
      # AI Assistant Instructions

      You are a helpful AI assistant.
    SYSTEM
  end

  def process_user_input(user_input)
    # Handle user input separately with proper validation
  end
end

# ✅ Proper API usage with termination controls
class ControlledChatService
  def generate_response
    OpenAI::Client.new.chat(
      parameters: {
        model: "gpt-4",
        messages: [{ role: "user", content: "Hello" }],
        max_tokens: 100,
        stop: ["END"]
      }
    )
  end
end

# ✅ Won't be flagged - not prompt-related
class DatabaseService
  def config
    { system: "production" } # Ignored by all prompt cops
  end
end
```

## 🚀 Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

### Local Development
```bash
# Setup
git clone https://github.com/[USERNAME]/rubocop-prompt
cd rubocop-prompt
bin/setup

# Run tests
rake spec

# Test with your own code
bin/console
```

### Release Process
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).

## 🤝 Contributing

We welcome contributions! Here's how you can help:

1. **Report bugs** - Found an issue? Open a GitHub issue
2. **Suggest new cops** - Have ideas for prompt anti-patterns to detect?
3. **Improve existing cops** - Better detection logic or clearer error messages
4. **Documentation** - Help make our docs even clearer

Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/rubocop-prompt. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/[USERNAME]/rubocop-prompt/blob/main/CODE_OF_CONDUCT.md).

## 📄 License

The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).

## 📋 Code of Conduct

Everyone interacting in the RuboCop::Prompt project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/[USERNAME]/rubocop-prompt/blob/main/CODE_OF_CONDUCT.md).
data/Rakefile
ADDED
data/config/default.yml
ADDED
@@ -0,0 +1,31 @@
---
Prompt/InvalidFormat:
  Description: 'Ensure that system: blocks start with a Markdown heading.'
  Enabled: true
  VersionAdded: '0.1.0'

Prompt/CriticalFirstLast:
  Description: 'Ensure that labeled sections (### text) appear at the beginning or end, not in the middle.'
  Enabled: true
  VersionAdded: '0.1.0'

Prompt/SystemInjection:
  Description: 'Avoid dynamic interpolation in SYSTEM heredocs to prevent prompt injection vulnerabilities.'
  Enabled: true
  VersionAdded: '0.1.0'

Prompt/MaxTokens:
  Description: 'Ensure that documentation text in prompt-related code does not exceed the maximum token limit.'
  Enabled: true
  VersionAdded: '0.1.0'
  MaxTokens: 4000

Prompt/MissingStop:
  Description: 'Ensure that OpenAI::Client.chat calls include stop: or max_tokens: parameter.'
  Enabled: true
  VersionAdded: '0.1.0'

Prompt/TemperatureRange:
  Description: 'Ensure that high temperature values (> 0.7) are not used for precision tasks requiring accuracy.'
  Enabled: true
  VersionAdded: '0.1.0'
data/docs/development-guidelines.md
ADDED
@@ -0,0 +1,31 @@
# Development Guidelines

## Coding Standards
- Follow Ruby community best practices
- Adhere to RuboCop style guidelines
- Use frozen string literals
- Write comprehensive tests using RSpec
- Maintain type signatures using RBS

## File Organization
- Keep main logic in `lib/rubocop/prompt.rb`
- Place version information in `lib/rubocop/prompt/version.rb`
- Write tests in `spec/` directory mirroring `lib/` structure
- Maintain RBS type definitions in `sig/` directory

## Testing
- Use RSpec for all tests
- Maintain good test coverage
- Test both positive and negative cases
- Include integration tests for RuboCop functionality

## Dependencies
- Ruby >= 3.1.0
- RuboCop (as peer dependency)
- Development dependencies managed via Gemfile

## Release Process
1. Update version in `lib/rubocop/prompt/version.rb`
2. Update CHANGELOG (if exists)
3. Run full test suite
4. Use `bundle exec rake release` for gem publication
data/docs/project-overview.md
ADDED
@@ -0,0 +1,32 @@
# Project Overview

## Project: rubocop-prompt

A Ruby gem that extends RuboCop with prompt-based functionality for enhanced code analysis and suggestions.

### Project Structure
```
lib/
  rubocop/
    prompt.rb        # Main module
    prompt/
      version.rb     # Version management
spec/
  rubocop/
    prompt_spec.rb   # Test specifications
sig/
  rubocop/
    prompt.rbs       # Type signatures (RBS)
```

### Development Workflow
- Use `bin/setup` to install dependencies
- Run `rake spec` to execute tests
- Use `bin/console` for interactive development
- Follow Ruby community best practices and RuboCop guidelines

### Architecture
- Follows standard Ruby gem structure
- Integrates with RuboCop as an extension
- Uses RBS for type definitions
- RSpec for testing