aia 0.9.11 → 0.9.12
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +66 -2
- data/README.md +133 -4
- data/docs/advanced-prompting.md +721 -0
- data/docs/cli-reference.md +582 -0
- data/docs/configuration.md +347 -0
- data/docs/contributing.md +332 -0
- data/docs/directives-reference.md +490 -0
- data/docs/examples/index.md +277 -0
- data/docs/examples/mcp/index.md +479 -0
- data/docs/examples/prompts/analysis/index.md +78 -0
- data/docs/examples/prompts/automation/index.md +108 -0
- data/docs/examples/prompts/development/index.md +125 -0
- data/docs/examples/prompts/index.md +333 -0
- data/docs/examples/prompts/learning/index.md +127 -0
- data/docs/examples/prompts/writing/index.md +62 -0
- data/docs/examples/tools/index.md +292 -0
- data/docs/faq.md +414 -0
- data/docs/guides/available-models.md +366 -0
- data/docs/guides/basic-usage.md +477 -0
- data/docs/guides/chat.md +474 -0
- data/docs/guides/executable-prompts.md +417 -0
- data/docs/guides/first-prompt.md +454 -0
- data/docs/guides/getting-started.md +455 -0
- data/docs/guides/image-generation.md +507 -0
- data/docs/guides/index.md +46 -0
- data/docs/guides/models.md +507 -0
- data/docs/guides/tools.md +856 -0
- data/docs/index.md +173 -0
- data/docs/installation.md +238 -0
- data/docs/mcp-integration.md +612 -0
- data/docs/prompt_management.md +579 -0
- data/docs/security.md +629 -0
- data/docs/tools-and-mcp-examples.md +1186 -0
- data/docs/workflows-and-pipelines.md +563 -0
- data/examples/tools/mcp/github_mcp_server.json +11 -0
- data/examples/tools/mcp/imcp.json +7 -0
- data/lib/aia/chat_processor_service.rb +19 -3
- data/lib/aia/config/base.rb +224 -0
- data/lib/aia/config/cli_parser.rb +409 -0
- data/lib/aia/config/defaults.rb +88 -0
- data/lib/aia/config/file_loader.rb +131 -0
- data/lib/aia/config/validator.rb +184 -0
- data/lib/aia/config.rb +10 -860
- data/lib/aia/directive_processor.rb +27 -372
- data/lib/aia/directives/configuration.rb +114 -0
- data/lib/aia/directives/execution.rb +37 -0
- data/lib/aia/directives/models.rb +178 -0
- data/lib/aia/directives/registry.rb +120 -0
- data/lib/aia/directives/utility.rb +70 -0
- data/lib/aia/directives/web_and_file.rb +71 -0
- data/lib/aia/prompt_handler.rb +23 -3
- data/lib/aia/ruby_llm_adapter.rb +307 -128
- data/lib/aia/session.rb +27 -14
- data/lib/aia/utility.rb +12 -8
- data/lib/aia.rb +11 -2
- data/lib/extensions/ruby_llm/.irbrc +56 -0
- data/mkdocs.yml +165 -0
- metadata +77 -20
- /data/{images → docs/assets/images}/aia.png +0 -0
@@ -0,0 +1,721 @@

# Advanced Prompting Techniques

Master sophisticated prompting strategies to get the most out of AIA's capabilities with complex workflows, dynamic content generation, and expert-level AI interactions.

## Advanced Directive Usage

### Conditional Execution
Execute directives based on runtime conditions:

```markdown
<%
environment = ENV['RAILS_ENV'] || 'development'
config_file = "config/#{environment}.yml"
if File.exist?(config_file)
%>
//include <%= config_file %>
<% else %>
//include config/default.yml
<% end %>
```

```markdown
<%
model = AIA.config.model
case model
when /gpt-4/
%>
Provide detailed, step-by-step analysis with code examples.
<% when /gpt-3.5/ %>
Provide concise, practical guidance with brief examples.
<% when /claude/ %>
Provide thorough analysis with emphasis on reasoning process.
<% end %>
```

### Dynamic Configuration
Adjust settings based on content or context:

```markdown
//ruby
<%
task_type = '<%= task_type %>'
temperature = case task_type
when 'creative' then 1.2
when 'analytical' then 0.3
when 'balanced' then 0.7
else 0.7
end
%>
//config temperature <%= temperature %>
```

```markdown
<%
content_size = File.read('<%= input_file %>').length
model = content_size > 50000 ? 'claude-3-sonnet' : 'gpt-4'
max_tokens = content_size > 50000 ? 8000 : 4000
%>
//config model <%= model %>
//config max_tokens <%= max_tokens %>
```

## Complex Workflow Patterns

### Multi-Stage Analysis Pipeline
Create sophisticated analysis workflows with intermediate processing:

```markdown
# Stage 1: Data Preparation
//config model gpt-3.5-turbo
//config temperature 0.2

# Data Analysis Pipeline - Stage 1: Preparation

## Input Data Overview
//shell file <%= input_file %>
//shell wc -l <%= input_file %>
<%= "File size: #{File.size('<%= input_file %>')} bytes" %>

## Data Quality Assessment
//include <%= input_file %>

Analyze the data structure and identify:
1. Data format and schema
2. Missing or inconsistent values
3. Potential data quality issues
4. Preprocessing requirements

Save findings to: preprocessing_notes.md

//next data_cleaning
//pipeline analysis_deep_dive,pattern_recognition,insight_generation,final_report
```

### Adaptive Decision Trees
Create prompts that adapt their approach based on intermediate results:

```markdown
<%
file_ext = File.extname('<%= code_file %>')
file_size = File.size('<%= code_file %>')

# Determine analysis approach
if file_size > 10000
analysis_type = 'comprehensive'
%>
//config model gpt-4
//config max_tokens 6000
<%
elsif file_ext == '.py'
analysis_type = 'python_specific'
%>
//config model gpt-4
Including Python-specific analysis patterns
<%
else
analysis_type = 'standard'
%>
//config model gpt-3.5-turbo
<%
end
%>
Selected <%= analysis_type %> analysis for <%= file_ext %> file (<%= file_size %> bytes)
```

## Advanced Context Management

### Hierarchical Context Building
Build context progressively through multiple layers:

```markdown
# Layer 1: Project Context
//include README.md
//include ARCHITECTURE.md

# Layer 2: Domain Context
<%
domain = '<%= domain || "general" %>'
domain_file = "docs/#{domain}_context.md"
if File.exist?(domain_file)
%>
//include <%= domain_file %>
<% end %>

# Layer 3: Task-Specific Context
<% if task_context_file %>
//include <%= task_context_file %>
<% end %>

# Layer 4: Historical Context
<%
history_file = ".aia/history/#{Date.today.strftime('%Y%m')}_context.md"
if File.exist?(history_file)
%>
//include <%= history_file %>
<% end %>

Now analyze <%= task %> using all available context layers.
```

### Context Filtering and Summarization
Manage large contexts intelligently:

```markdown
<%
max_context_size = 20000 # characters
context_files = ['docs/spec.md', 'docs/api.md', 'docs/examples.md']
total_size = 0

context_files.each do |file|
if File.exist?(file)
file_size = File.read(file).length
if total_size + file_size <= max_context_size
%>
//include <%= file %>
<%
total_size += file_size
else
%>
Summarizing <%= file %> (too large for full inclusion):

<%= AIA.summarize_file(file, max_length: 500) %>
<%
end
end
end
%>
```
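
The `AIA.summarize_file` call above assumes some summarization helper is available; AIA itself may not provide one. As a minimal stand-in (plain truncation rather than a real summary), you could define something like the following in a `//ruby` block or a support library of your own:

```ruby
# Hypothetical stand-in for AIA.summarize_file -- not part of AIA itself.
# Keeps roughly the first max_length characters, cut at a sentence boundary,
# so oversized files still contribute some context to the prompt.
def summarize_file(path, max_length: 500)
  text = File.read(path)
  return text if text.length <= max_length

  head = text[0, max_length]
  cut  = head.rindex(/[.!?]\s/) # last complete sentence, if any
  summary = cut ? head[0..cut] : head
  "#{summary.strip} [truncated from #{text.length} characters]"
end
```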

## Dynamic Content Generation

### Template-Based Generation
Create flexible templates that adapt to different scenarios:

```erb
# Multi-format document generator
//config model <%= model || "gpt-4" %>
//config temperature <%= creativity || "0.7" %>

# <%= document_type.capitalize %> Document

<% case format %>
<% when 'technical' %>
Generate a technical document with:
- Executive summary
- Detailed technical specifications
- Implementation guidelines
- Code examples and APIs
- Testing and validation procedures

<% when 'business' %>
Generate a business document with:
- Executive summary
- Market analysis
- Financial projections
- Risk assessment
- Implementation timeline

<% when 'academic' %>
Generate an academic document with:
- Abstract and keywords
- Literature review
- Methodology
- Results and analysis
- Conclusions and future work
<% end %>

## Source Material
//include <%= source_file %>

## Additional Context
<% if context_files %>
<% context_files.each do |file| %>
//include <%= file %>
<% end %>
<% end %>

Target audience: <%= audience || "general professional" %>
Document length: <%= length || "2000-3000 words" %>
```

### Recursive Prompt Generation
Generate prompts that create other prompts:

```markdown
<%
domain = '<%= domain %>'
tasks = ['analyze', 'design', 'implement', 'test', 'document']

tasks.each do |task|
prompt_content = <<~PROMPT
# #{domain.capitalize} #{task.capitalize} Prompt

//config model gpt-4
//config temperature 0.5

You are a #{domain} expert performing #{task} tasks.

Task: <%= specific_task %>
Context: //include <%= context_file %>

Provide expert-level guidance specific to #{domain} #{task}.
PROMPT

filename = "generated_#{domain}_#{task}.txt"
File.write(filename, prompt_content)
%>
Generated: <%= filename %>
<% end %>
```

## Expert-Level Model Interaction

### Multi-Model Orchestration
Coordinate multiple models for complex tasks:

```markdown
# Multi-model analysis system
//config consensus false

## Phase 1: Creative Ideation (High Temperature)
<%= "Using GPT-4 for creative brainstorming..." %>
<%
gpt4_creative = RubyLLM.chat(model: 'gpt-4', temperature: 1.3)
ideas = gpt4_creative.ask("Generate 10 innovative approaches to: <%= problem %>")
%>
<%= ideas.content %>

## Phase 2: Technical Analysis (Low Temperature)

<%= "Using Claude for technical analysis..." %>
<%
claude_technical = RubyLLM.chat(model: 'claude-3-sonnet', temperature: 0.2)
analysis = claude_technical.ask("Analyze technical feasibility of these approaches: #{ideas.content}")
%>
<%= analysis.content %>

## Phase 3: Synthesis and Recommendation

<%= "Using GPT-4 for final synthesis..." %>
<%
gpt4_synthesis = RubyLLM.chat(model: 'gpt-4', temperature: 0.7)
final_rec = gpt4_synthesis.ask("Synthesize and recommend best approach: Ideas: #{ideas.content} Analysis: #{analysis.content}")
%>
<%= final_rec.content %>
```

### Model-Specific Optimization
Tailor prompts for specific model strengths:

```markdown
<%
model = AIA.config.model
case model
when /gpt-4/
# GPT-4 excels at complex reasoning and code
instruction_style = "detailed step-by-step analysis with code examples"
context_depth = "comprehensive background and multiple perspectives"
when /claude/
# Claude excels at long-form analysis and following instructions precisely
instruction_style = "thorough systematic analysis with clear reasoning"
context_depth = "complete context with relevant documentation"
when /gemini/
# Gemini excels at structured data and mathematical reasoning
instruction_style = "structured analysis with quantitative metrics"
context_depth = "organized data with clear relationships"
end
%>
Optimizing for <%= model %>: <%= instruction_style %>

Apply <%= instruction_style %> to analyze <%= task %>.

Include <%= context_depth %> for comprehensive understanding.
```

## Advanced Tool Integration

### Custom Tool Workflows
Create sophisticated tool integration patterns:

```markdown
//tools advanced_analysis_tools.rb

<%
# Initialize analysis workflow
workflow = AnalysisWorkflow.new
workflow.add_tool('data_preprocessor', weight: 0.3)
workflow.add_tool('statistical_analyzer', weight: 0.4)
workflow.add_tool('pattern_detector', weight: 0.2)
workflow.add_tool('insight_generator', weight: 0.1)

results = workflow.execute('<%= input_data %>')
%>
Analysis complete. Confidence: <%= results[:confidence] %>
<%= results[:summary] %>

Based on multi-tool analysis, provide expert interpretation of:
<%= results[:detailed_findings] %>
```

### Dynamic Tool Selection
Select tools based on content analysis:

```markdown
<%
content = File.read('<%= input_file %>')

# Analyze content to determine best tools
tools = []
tools << 'text_analyzer' if content.match?(/[a-zA-Z]{100,}/)
tools << 'code_analyzer' if content.match?(/def\s+\w+|function\s+\w+|class\s+\w+/)
tools << 'data_analyzer' if content.match?(/\d+[,.]?\d*\s*[%$]?/)
tools << 'web_scraper' if content.match?(/https?:\/\//)
%>
Selected tools: <%= tools.join(', ') %>
<% tools.each do |tool| %>
//tools <%= tool %>.rb
<% end %>
```

## Sophisticated Output Formatting

### Multi-Format Output Generation
Generate output in multiple formats simultaneously:

```markdown
//config model gpt-4

# Multi-Format Report Generator

Generate analysis in multiple formats:

## 1. Executive Summary (Business Format)
Provide a 200-word executive summary suitable for C-level presentation.

## 2. Technical Detail (Developer Format)
Provide detailed technical analysis with:
- Architecture diagrams (textual description)
- Code examples
- Implementation steps
- Testing strategies

## 3. Academic Format (Research Paper Style)
Provide structured analysis with:
- Abstract and keywords
- Methodology description
- Results and discussion
- References and citations

## 4. Action Items (Project Management Format)
Extract concrete action items with:
- Priority levels (High/Medium/Low)
- Estimated effort
- Dependencies
- Assigned roles
- Success criteria

Source: //include <%= source_file %>
```

### Structured Data Extraction
Extract structured data from unstructured content:

```ruby
# Structured data extraction
//config model gpt-4
//config temperature 0.1

Extract structured information from the following content and format as JSON:

Required fields:
- entities: List of people, organizations, locations
- dates: Important dates and deadlines
- metrics: Numerical data and KPIs
- actions: Required actions and decisions
- risks: Identified risks and concerns
- opportunities: Growth and improvement opportunities

Content:
//include <%= unstructured_content %>

Output valid JSON only, no explanatory text.

# Post-process extracted JSON
json_output = response.content
begin
data = JSON.parse(json_output)
puts "Successfully extracted #{data.keys.length} data categories"

# Save to structured file
File.write('extracted_data.json', JSON.pretty_generate(data))
puts "Data saved to extracted_data.json"
rescue JSON::ParserError => e
puts "JSON parsing failed: #{e.message}"
end
```

## Advanced Error Handling and Recovery

### Graceful Degradation
Handle errors and provide fallback options:

```ruby
# Robust prompt with fallbacks
<%=
begin
primary_content = File.read('<%= primary_source %>')
puts "//include <%= primary_source %>"
rescue => e
puts "Primary source unavailable (#{e.message})"

# Try fallback sources
fallback_sources = ['backup.txt', 'default_context.md', 'minimal_info.txt']
fallback_found = false

fallback_sources.each do |source|
if File.exist?(source)
puts "Using fallback source: #{source}"
puts "//include #{source}"
fallback_found = true
break
end
end

unless fallback_found
puts "No sources available. Proceeding with minimal context."
puts "Please provide basic information about: <%= topic %>"
end
end
%>
```

### Validation and Quality Assurance
Implement quality checks for AI outputs:

```ruby
# Output validation system
<%=
class OutputValidator
def self.validate_code_review(output)
required_sections = ['Summary', 'Issues Found', 'Recommendations']
severity_levels = ['Critical', 'Major', 'Minor']

issues = []
required_sections.each do |section|
issues << "Missing section: #{section}" unless output.include?(section)
end

has_severity = severity_levels.any? { |level| output.include?(level) }
issues << "No severity levels found" unless has_severity

issues.empty? ? "✓ Validation passed" : "⚠ Issues: #{issues.join(', ')}"
end
end

# This will be used to validate the AI response
puts "Response will be validated for: <%= validation_criteria %>"
%>
```
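
To make the check bite, the validator has to be run against the model's output after the session completes, for example against whatever file the response was saved to. A minimal sketch; the output path is an assumption, and `OutputValidator` must be loaded from wherever you keep it:

```ruby
# Post-run validation sketch. Assumes OutputValidator (defined above) has been
# loaded here, and that the AI response was saved to review_output.md -- both
# of these are assumptions, not AIA defaults.
output = File.read('review_output.md')
result = OutputValidator.validate_code_review(output)
puts result

# Fail a CI job when the review is missing required sections or severities.
exit(1) unless result.start_with?('✓')
```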

## Performance Optimization Techniques

### Intelligent Caching
Implement smart caching for expensive operations:

```ruby
# Smart caching system
<%=
require 'digest'

cache_key = Digest::MD5.hexdigest('<%= input_data %>' + AIA.config.model)
cache_file = "/tmp/aia_cache_#{cache_key}.json"
cache_duration = 3600 # 1 hour

if File.exist?(cache_file) && (Time.now - File.mtime(cache_file)) < cache_duration
puts "Using cached result for similar query..."
cached_result = JSON.parse(File.read(cache_file))
puts cached_result['content']
exit # Skip AI processing
else
puts "Processing fresh query (no valid cache found)..."
# Continue with normal processing
# Result will be cached by post-processing script
end
%>
```
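
The writing side of the cache is only hinted at above ("cached by post-processing script"). A rough sketch of what that step might look like, reusing the same cache key; the paths and the hook that runs it are assumptions:

```ruby
# Hypothetical post-processing step -- AIA does not define this hook for you.
# Re-derives the cache key used in the prompt and stores the response text.
require 'digest'
require 'json'

def write_cache(input_data, model, response_text)
  cache_key  = Digest::MD5.hexdigest(input_data + model)
  cache_file = "/tmp/aia_cache_#{cache_key}.json"
  File.write(cache_file, JSON.generate('content' => response_text))
end

# Example: cache the response that the run wrote to its output file
# (both paths here are assumptions).
write_cache(File.read('input_data.txt'), 'gpt-4', File.read('analysis_output.md'))
```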

### Batch Processing Strategies
Optimize for processing multiple items:

```ruby
# Intelligent batch processing
<%=
files = Dir.glob('<%= pattern %>')
batch_size = 5
model_switching_threshold = 10

puts "Processing #{files.length} files in batches of #{batch_size}"

# Switch to faster model for large batches
if files.length > model_switching_threshold
puts "//config model gpt-3.5-turbo # Using faster model for large batch"
else
puts "//config model gpt-4 # Using quality model for small batch"
end

files.each_slice(batch_size).with_index do |batch, index|
puts "\n## Batch #{index + 1}: #{batch.map { |f| File.basename(f) }.join(', ')}"
batch.each { |file| puts "//include #{file}" }
puts "\nAnalyze this batch focusing on common patterns and unique aspects."
end
%>
```

## Best Practices for Advanced Prompting

### Modular Design Principles
1. **Separation of Concerns**: Keep configuration, data, and instructions separate
2. **Reusable Components**: Create modular prompt components
3. **Clear Dependencies**: Document and manage prompt dependencies
4. **Version Control**: Track changes and maintain prompt versioning

### Performance Considerations
1. **Model Selection**: Choose appropriate models for task complexity
2. **Context Management**: Balance completeness with efficiency
3. **Caching Strategies**: Cache expensive computations and API calls
4. **Batch Processing**: Optimize for multiple similar tasks

### Error Prevention
1. **Validation**: Validate inputs and outputs
2. **Fallbacks**: Provide graceful degradation options
3. **Testing**: Test prompts with various inputs (a quick harness is sketched below)
4. **Monitoring**: Track performance and error rates
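
As a sketch of the "Testing" point above, the harness below renders each prompt's ERB against sample values and checks that expected directives survive the render. It uses only the Ruby standard library; the `~/.prompts` location, the sample values, and the directive list are assumptions to adapt to your own setup:

```ruby
# Minimal prompt-testing harness (illustrative; paths and samples are assumptions).
require 'erb'
require 'ostruct'

SAMPLES = {
  input_file: 'spec/fixtures/sample.csv',
  task_type:  'analytical'
}.freeze

def render_prompt(path, samples)
  context = OpenStruct.new(samples) # unknown variables render as nil
  ERB.new(File.read(path)).result(context.instance_eval { binding })
end

Dir.glob(File.expand_path('~/.prompts/*.txt')).each do |prompt|
  begin
    rendered = render_prompt(prompt, SAMPLES)
    missing  = ['//config'].reject { |d| rendered.include?(d) } # directives to expect
    status   = missing.empty? ? 'OK' : "missing #{missing.join(', ')}"
  rescue StandardError, SyntaxError => e
    status = "render failed (#{e.class}: #{e.message})"
  end
  puts "#{File.basename(prompt)}: #{status}"
end
```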

### Security and Privacy
1. **Input Sanitization**: Clean and validate user inputs (see the sketch after this list)
2. **Access Control**: Limit file and system access
3. **Data Privacy**: Avoid exposing sensitive information
4. **Audit Trails**: Log usage and maintain accountability
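
Input sanitization matters most where user-supplied values are interpolated into `//shell` directives or file paths, as several examples above do. A minimal sketch of the idea; the helper names are illustrative, not part of AIA:

```ruby
# Illustrative sanitizers for values headed into shell commands or file paths.
require 'shellwords'

def safe_shell_arg(value)
  Shellwords.escape(value.to_s)
end

def safe_path(value, base_dir: Dir.pwd)
  candidate = File.expand_path(value.to_s, base_dir)
  unless candidate.start_with?(File.expand_path(base_dir) + File::SEPARATOR)
    raise ArgumentError, "path escapes #{base_dir}: #{value}"
  end
  candidate
end

# Then interpolate only the sanitized values into directives, e.g.:
#   //shell wc -l <%= safe_shell_arg(input_file) %>
#   //include <%= safe_path(input_file) %>
```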

## Real-World Advanced Examples

### Automated Code Review System
A comprehensive code review system using multiple models and tools:

```markdown
# enterprise_code_review.txt
//config model gpt-4
//tools security_scanner.rb,performance_analyzer.rb,style_checker.rb

# Enterprise Code Review System

<%=
# Multi-phase review process
phases = {
security: { model: 'gpt-4', temperature: 0.1, tools: ['security_scanner'] },
performance: { model: 'claude-3-sonnet', temperature: 0.2, tools: ['performance_analyzer'] },
style: { model: 'gpt-3.5-turbo', temperature: 0.3, tools: ['style_checker'] },
architecture: { model: 'gpt-4', temperature: 0.5, tools: [] }
}

current_phase = '<%= phase || "security" %>'
config = phases[current_phase.to_sym]

puts "//config model #{config[:model]}"
puts "//config temperature #{config[:temperature]}"
config[:tools].each { |tool| puts "//tools #{tool}.rb" }

puts "\n## Phase: #{current_phase.capitalize} Review"
%>

### Code to Analyze:
//include <%= code_file %>

Perform comprehensive <%= current_phase %> analysis following enterprise standards.

<%=
next_phase = phases.keys[phases.keys.index(current_phase.to_sym) + 1]
puts next_phase ? "//next enterprise_code_review --phase #{next_phase}" : "# Review complete"
%>
```

### Intelligent Research Assistant
A research system that adapts its approach based on query complexity:

```markdown
<%=
# adaptive_research_assistant.txt
query = research_query
complexity = query.split.length > 10 ? 'complex' : 'simple'
domain = domain || "general"

if complexity == 'complex'
puts "//config model claude-3-sonnet"
puts "//config max_tokens 6000"
research_depth = 'comprehensive'
else
puts "//config model gpt-4"
puts "//config max_tokens 3000"
research_depth = 'focused'
end

puts "Research mode: #{research_depth} analysis for #{domain} domain"
%>

# Adaptive Research Analysis

## Query: <%= research_query %>

# Dynamic source inclusion based on domain
<%=
source_map = {
'technology' => ['tech_sources.md', 'industry_reports/', 'patent_db.txt'],
'business' => ['market_data.csv', 'financial_reports/', 'competitor_analysis.md'],
'academic' => ['literature_db.txt', 'citation_index.md', 'peer_reviews/'],
'general' => ['general_sources.md', 'news_feeds.txt', 'reference_materials/']
}

sources = source_map[domain] || source_map['general']
sources.each do |source|
if File.exist?(source) || Dir.exist?(source)
puts "//include #{source}"
end
end
%>

Provide <%= research_depth %> research analysis addressing:
1. Current state of knowledge
2. Key findings and insights
3. Research gaps and limitations
4. Future research directions
5. Practical implications

<%=
if research_depth == 'comprehensive'
puts "//next citation_generator"
puts "//pipeline fact_checker,source_validator,bibliography_creator"
end
%>
```

## Related Documentation

- [Prompt Management](prompt_management.md) - Organizing and managing prompts
- [Directives Reference](directives-reference.md) - All available directives
- [Working with Models](guides/models.md) - Model selection and optimization
- [Tools Integration](guides/tools.md) - Advanced tool usage
- [Examples](examples/index.md) - Real-world advanced examples

---

Advanced prompting is where AIA truly shines. These techniques enable you to create sophisticated, intelligent workflows that adapt to complex requirements and deliver expert-level results. Experiment with these patterns and develop your own advanced techniques!