tsikol 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (75)
  1. checksums.yaml +7 -0
  2. data/CHANGELOG.md +22 -0
  3. data/CONTRIBUTING.md +84 -0
  4. data/LICENSE +21 -0
  5. data/README.md +579 -0
  6. data/Rakefile +12 -0
  7. data/docs/README.md +69 -0
  8. data/docs/api/middleware.md +721 -0
  9. data/docs/api/prompt.md +858 -0
  10. data/docs/api/resource.md +651 -0
  11. data/docs/api/server.md +509 -0
  12. data/docs/api/test-helpers.md +591 -0
  13. data/docs/api/tool.md +527 -0
  14. data/docs/cookbook/authentication.md +651 -0
  15. data/docs/cookbook/caching.md +877 -0
  16. data/docs/cookbook/dynamic-tools.md +970 -0
  17. data/docs/cookbook/error-handling.md +887 -0
  18. data/docs/cookbook/logging.md +1044 -0
  19. data/docs/cookbook/rate-limiting.md +717 -0
  20. data/docs/examples/code-assistant.md +922 -0
  21. data/docs/examples/complete-server.md +726 -0
  22. data/docs/examples/database-manager.md +1198 -0
  23. data/docs/examples/devops-tools.md +1382 -0
  24. data/docs/examples/echo-server.md +501 -0
  25. data/docs/examples/weather-service.md +822 -0
  26. data/docs/guides/completion.md +472 -0
  27. data/docs/guides/getting-started.md +462 -0
  28. data/docs/guides/middleware.md +823 -0
  29. data/docs/guides/project-structure.md +434 -0
  30. data/docs/guides/prompts.md +920 -0
  31. data/docs/guides/resources.md +720 -0
  32. data/docs/guides/sampling.md +804 -0
  33. data/docs/guides/testing.md +863 -0
  34. data/docs/guides/tools.md +627 -0
  35. data/examples/README.md +92 -0
  36. data/examples/advanced_features.rb +129 -0
  37. data/examples/basic-migrated/app/prompts/weather_chat.rb +44 -0
  38. data/examples/basic-migrated/app/resources/weather_alerts.rb +18 -0
  39. data/examples/basic-migrated/app/tools/get_current_weather.rb +34 -0
  40. data/examples/basic-migrated/app/tools/get_forecast.rb +30 -0
  41. data/examples/basic-migrated/app/tools/get_weather_by_coords.rb +48 -0
  42. data/examples/basic-migrated/server.rb +25 -0
  43. data/examples/basic.rb +73 -0
  44. data/examples/full_featured.rb +175 -0
  45. data/examples/middleware_example.rb +112 -0
  46. data/examples/sampling_example.rb +104 -0
  47. data/examples/weather-service/app/prompts/weather/chat.rb +90 -0
  48. data/examples/weather-service/app/resources/weather/alerts.rb +59 -0
  49. data/examples/weather-service/app/tools/weather/get_current.rb +82 -0
  50. data/examples/weather-service/app/tools/weather/get_forecast.rb +90 -0
  51. data/examples/weather-service/server.rb +28 -0
  52. data/exe/tsikol +6 -0
  53. data/lib/tsikol/cli/templates/Gemfile.erb +10 -0
  54. data/lib/tsikol/cli/templates/README.md.erb +38 -0
  55. data/lib/tsikol/cli/templates/gitignore.erb +49 -0
  56. data/lib/tsikol/cli/templates/prompt.rb.erb +53 -0
  57. data/lib/tsikol/cli/templates/resource.rb.erb +29 -0
  58. data/lib/tsikol/cli/templates/server.rb.erb +24 -0
  59. data/lib/tsikol/cli/templates/tool.rb.erb +60 -0
  60. data/lib/tsikol/cli.rb +203 -0
  61. data/lib/tsikol/error_handler.rb +141 -0
  62. data/lib/tsikol/health.rb +198 -0
  63. data/lib/tsikol/http_transport.rb +72 -0
  64. data/lib/tsikol/lifecycle.rb +149 -0
  65. data/lib/tsikol/middleware.rb +168 -0
  66. data/lib/tsikol/prompt.rb +101 -0
  67. data/lib/tsikol/resource.rb +53 -0
  68. data/lib/tsikol/router.rb +190 -0
  69. data/lib/tsikol/server.rb +660 -0
  70. data/lib/tsikol/stdio_transport.rb +108 -0
  71. data/lib/tsikol/test_helpers.rb +261 -0
  72. data/lib/tsikol/tool.rb +111 -0
  73. data/lib/tsikol/version.rb +5 -0
  74. data/lib/tsikol.rb +72 -0
  75. metadata +219 -0
@@ -0,0 +1,804 @@
+ # Sampling Guide

Sampling enables MCP servers to request text generation from LLMs through the client. It lets your tools use AI capabilities for content generation, analysis, and decision-making.

## Table of Contents

1. [What is Sampling?](#what-is-sampling)
2. [Enabling Sampling](#enabling-sampling)
3. [Basic Usage](#basic-usage)
4. [Sampling Parameters](#sampling-parameters)
5. [Advanced Patterns](#advanced-patterns)
6. [Error Handling](#error-handling)
7. [Testing Sampling](#testing-sampling)

## What is Sampling?

Sampling allows MCP servers to:
- Request text generation from the client's LLM
- Create AI-powered tools and features
- Implement intelligent assistants
- Generate dynamic content
- Make context-aware decisions

The client handles the actual LLM interaction, while your server provides the prompts.
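
The round trip can be pictured with plain Ruby values. A minimal sketch of the message and result shapes the examples in this guide use (a `{ text: ... }` hash on success, `{ error: ... }` on failure); `unwrap` is a hypothetical helper, not part of the Tsikol API:

```ruby
# Message shape sent to the client (MCP-style text content part).
message = {
  role: "user",
  content: { type: "text", text: "Summarize this changelog." }
}

# Unwrap the result hash that sample_text returns in the examples below.
def unwrap(response)
  response[:error] ? "sampling failed: #{response[:error]}" : response[:text]
end

puts unwrap({ text: "A concise summary." })        # success path
puts unwrap({ error: "client declined request" })  # error path
```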

## Enabling Sampling

### Server Configuration

Enable sampling in your server:

```ruby
Tsikol.server "my-server" do
  # Enable sampling capability
  sampling true

  # Register tools that use sampling
  tool AiWriter
  tool CodeGenerator
  tool SmartAnalyzer
end
```

### Checking Sampling Availability

Tools should check if sampling is available:

```ruby
class AiPoweredTool < Tsikol::Tool
  def execute(input:)
    unless @server.sampling_enabled?
      return "Sampling is not enabled. Please enable it in the client."
    end

    # Use sampling
    result = @server.sample_text(
      messages: build_messages(input),
      temperature: 0.7
    )

    result
  end
end
```

## Basic Usage

### Simple Text Generation

```ruby
class StoryWriter < Tsikol::Tool
  description "Generate creative stories"

  parameter :topic do
    type :string
    required
    description "Story topic or theme"
  end

  parameter :style do
    type :string
    optional
    default "fantasy"
    enum ["fantasy", "sci-fi", "mystery", "romance", "horror"]
  end

  parameter :length do
    type :string
    optional
    default "short"
    enum ["micro", "short", "medium", "long"]
  end

  def execute(topic:, style: "fantasy", length: "short")
    messages = [
      {
        role: "system",
        content: {
          type: "text",
          text: "You are a creative story writer specializing in #{style} stories."
        }
      },
      {
        role: "user",
        content: {
          type: "text",
          text: build_prompt(topic, style, length)
        }
      }
    ]

    # Request generation from the LLM
    response = @server.sample_text(
      messages: messages,
      temperature: 0.8,
      max_tokens: length_to_tokens(length)
    )

    if response[:error]
      "Error generating story: #{response[:error]}"
    else
      response[:text]
    end
  end

  private

  def build_prompt(topic, style, length)
    "Write a #{length} #{style} story about: #{topic}"
  end

  def length_to_tokens(length)
    case length
    when "micro" then 200
    when "short" then 500
    when "medium" then 1500
    when "long" then 3000
    else 500
    end
  end
end
```

### Code Generation

```ruby
class CodeGenerator < Tsikol::Tool
  description "Generate code with AI assistance"

  parameter :description do
    type :string
    required
    description "What the code should do"
  end

  parameter :language do
    type :string
    required
    description "Programming language"

    complete do |partial|
      languages = ["ruby", "python", "javascript", "go", "rust", "java"]
      languages.select { |l| l.start_with?(partial.downcase) }
    end
  end

  parameter :include_tests do
    type :boolean
    optional
    default false
    description "Include unit tests"
  end

  def execute(description:, language:, include_tests: false)
    messages = build_code_messages(description, language, include_tests)

    response = @server.sample_text(
      messages: messages,
      temperature: 0.3, # Lower temperature for code
      max_tokens: 2000
    )

    if response[:error]
      return "Error generating code: #{response[:error]}"
    end

    format_code_response(response[:text], language)
  end

  private

  def build_code_messages(description, language, include_tests)
    system_prompt = <<~PROMPT
      You are an expert #{language} programmer.
      Generate clean, efficient, well-commented code.
      Follow #{language} best practices and idioms.
      #{include_tests ? "Include comprehensive unit tests." : ""}
    PROMPT

    [
      {
        role: "system",
        content: { type: "text", text: system_prompt }
      },
      {
        role: "user",
        content: {
          type: "text",
          text: "Generate #{language} code to: #{description}"
        }
      }
    ]
  end

  def format_code_response(code, language)
    # Extract code blocks if the LLM included markdown; the language tag
    # is grouped so the whole tag is optional, not just its last character.
    if code.include?("```")
      code = code.scan(/```(?:#{language})?\n(.*?)```/m).flatten.join("\n\n")
    end

    code
  end
end
```
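
The fence-stripping step in `format_code_response` is easy to test in isolation. A standalone sketch (hypothetical helper, not part of the Tsikol API) using the same `scan` approach:

```ruby
# Pull code out of a markdown-fenced reply; fall back to the raw text
# when the model returned bare code or an unexpected layout.
def extract_code(reply, language)
  return reply unless reply.include?("```")

  # (?:...)? makes the whole language tag optional; Regexp.escape guards
  # language names containing regex metacharacters, e.g. "c++".
  blocks = reply.scan(/```(?:#{Regexp.escape(language)})?\n(.*?)```/m).flatten
  blocks.empty? ? reply : blocks.join("\n\n")
end

puts extract_code("Here you go:\n```ruby\nputs 'hi'\n```", "ruby")
```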

## Sampling Parameters

### Temperature

Controls randomness (0.0 - 1.0):

```ruby
# Deterministic (good for code, analysis)
@server.sample_text(
  messages: messages,
  temperature: 0.0
)

# Creative (good for stories, brainstorming)
@server.sample_text(
  messages: messages,
  temperature: 0.9
)
```

### Max Tokens

Limit response length:

```ruby
# Short response
@server.sample_text(
  messages: messages,
  max_tokens: 100
)

# Detailed response
@server.sample_text(
  messages: messages,
  max_tokens: 2000
)
```

### Stop Sequences

Stop generation at specific strings:

```ruby
# Stop at newline for single-line responses
@server.sample_text(
  messages: messages,
  stop_sequences: ["\n"]
)

# Stop at specific markers
@server.sample_text(
  messages: messages,
  stop_sequences: ["END", "---", "###"]
)
```

### Model Hints

Suggest preferred models:

```ruby
@server.sample_text(
  messages: messages,
  model_hint: "claude-3-opus", # Hint, not requirement
  temperature: 0.5
)
```
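
These knobs often come from user input, so it is worth clamping them before the request. A sketch of a normalizing guard (hypothetical helper; the name and the 4000-token budget are assumptions):

```ruby
# Clamp temperature into the documented 0.0..1.0 range and cap
# max_tokens at a server-side budget before calling sample_text.
def normalize_sampling_options(temperature:, max_tokens:, budget: 4000)
  {
    temperature: temperature.to_f.clamp(0.0, 1.0),
    max_tokens: [max_tokens.to_i, budget].min
  }
end

puts normalize_sampling_options(temperature: 1.7, max_tokens: 9000).inspect
```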

## Advanced Patterns

### Multi-Step Generation

Build complex outputs iteratively:

```ruby
class ReportGenerator < Tsikol::Tool
  description "Generate comprehensive reports"

  parameter :data do
    type :object
    required
    description "Data to analyze"
  end

  parameter :sections do
    type :array
    optional
    default ["summary", "analysis", "recommendations"]
  end

  def execute(data:, sections: ["summary", "analysis", "recommendations"])
    report = {}
    context = [initial_context_message(data)]

    sections.each do |section|
      log :info, "Generating section: #{section}"

      # Add request for specific section
      context << {
        role: "user",
        content: {
          type: "text",
          text: "Generate the #{section} section based on the data."
        }
      }

      # Generate section
      response = @server.sample_text(
        messages: context,
        temperature: 0.5,
        max_tokens: 1000
      )

      if response[:error]
        report[section] = "Error generating #{section}: #{response[:error]}"
      else
        report[section] = response[:text]

        # Add response to context for next section
        context << {
          role: "assistant",
          content: {
            type: "text",
            text: response[:text]
          }
        }
      end
    end

    format_report(report)
  end

  private

  def initial_context_message(data)
    {
      role: "system",
      content: {
        type: "text",
        text: "You are a professional analyst. Analyze this data: #{data.to_json}"
      }
    }
  end

  def format_report(sections)
    sections.map { |title, content|
      "## #{title.capitalize}\n\n#{content}"
    }.join("\n\n")
  end
end
```
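
The feedback loop is the key move here: each generated section is appended to the context as an assistant message so later sections can build on it. The same pattern with `sample_text` stubbed out, so it runs anywhere:

```ruby
# Stand-in for @server.sample_text; replies with a counter so the
# growth of the context is easy to see.
def fake_sample(messages)
  { text: "draft for request ##{messages.count { |m| m[:role] == "user" }}" }
end

context = [{ role: "system", content: { type: "text", text: "You are an analyst." } }]

%w[summary analysis].each do |section|
  context << { role: "user", content: { type: "text", text: "Generate the #{section} section." } }
  reply = fake_sample(context)
  # Feed the answer back in so the next section can build on it.
  context << { role: "assistant", content: { type: "text", text: reply[:text] } }
end

puts context.map { |m| m[:role] }.join(" -> ")
# => system -> user -> assistant -> user -> assistant
```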

### Structured Output Generation

Generate parseable output:

```ruby
class DataExtractor < Tsikol::Tool
  description "Extract structured data from text"

  parameter :text do
    type :string
    required
    description "Text to analyze"
  end

  parameter :schema do
    type :object
    required
    description "Expected data structure"
  end

  def execute(text:, schema:)
    messages = [
      {
        role: "system",
        content: {
          type: "text",
          text: <<~PROMPT
            Extract information from text and return ONLY valid JSON.
            Match this schema: #{schema.to_json}
            Do not include any explanation or markdown.
          PROMPT
        }
      },
      {
        role: "user",
        content: {
          type: "text",
          text: text
        }
      }
    ]

    response = @server.sample_text(
      messages: messages,
      temperature: 0.0, # Deterministic for structured output
      max_tokens: 1000
    )

    if response[:error]
      return { error: response[:error] }
    end

    # Parse JSON response
    begin
      JSON.parse(response[:text])
    rescue JSON::ParserError
      {
        error: "Failed to parse response as JSON",
        raw_response: response[:text]
      }
    end
  end
end
```
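
Even with a "JSON only" instruction, models sometimes wrap the payload in a markdown fence anyway. A tolerant parsing sketch (hypothetical helper) that strips one fence before giving up:

```ruby
require "json"

# Strip an optional ```json fence, then parse; return nil on failure
# so the caller can surface the raw text instead of crashing.
def parse_llm_json(text)
  cleaned = text.strip
                .sub(/\A```(?:json)?\s*/, "")
                .sub(/```\z/, "")
                .strip
  JSON.parse(cleaned)
rescue JSON::ParserError
  nil
end

puts parse_llm_json(%(```json\n{"name": "Ada"}\n```)).inspect
```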

### Streaming Generation

For long-running generations, approximate streaming by generating in chunks and logging progress:

```ruby
class StreamingWriter < Tsikol::Tool
  description "Generate content with progress updates"

  parameter :prompt do
    type :string
    required
  end

  parameter :chunks do
    type :integer
    optional
    default 5
    description "Number of sections to generate"
  end

  def execute(prompt:, chunks: 5)
    output = []

    chunks.times do |i|
      chunk_prompt = "#{prompt} (Part #{i + 1}/#{chunks})"

      response = @server.sample_text(
        messages: [
          {
            role: "user",
            content: { type: "text", text: chunk_prompt }
          }
        ],
        temperature: 0.7,
        max_tokens: 500
      )

      if response[:error]
        output << "Error in chunk #{i + 1}: #{response[:error]}"
      else
        output << response[:text]

        # Could notify progress here if supported
        log :info, "Generated chunk #{i + 1}/#{chunks}"
      end
    end

    output.join("\n\n")
  end
end
```

### Decision Making

Use sampling for intelligent decisions:

```ruby
class SmartRouter < Tsikol::Tool
  description "Route requests intelligently"

  parameter :request do
    type :string
    required
    description "User request"
  end

  parameter :available_tools do
    type :array
    optional
    default ["file_manager", "database_query", "web_search"]
  end

  def execute(request:, available_tools: ["file_manager", "database_query", "web_search"])
    messages = [
      {
        role: "system",
        content: {
          type: "text",
          text: <<~PROMPT
            You are a request router. Analyze the user request and determine
            which tool would best handle it. Available tools:
            #{format_tool_descriptions(available_tools)}

            Respond with ONLY the tool name, nothing else.
          PROMPT
        }
      },
      {
        role: "user",
        content: {
          type: "text",
          text: request
        }
      }
    ]

    response = @server.sample_text(
      messages: messages,
      temperature: 0.0,
      max_tokens: 50,
      stop_sequences: ["\n", " "]
    )

    if response[:error]
      return { error: response[:error] }
    end

    selected_tool = response[:text].strip.downcase

    if available_tools.include?(selected_tool)
      { tool: selected_tool, confidence: "high" }
    else
      { tool: available_tools.first, confidence: "low", raw: selected_tool }
    end
  end

  private

  def format_tool_descriptions(tools)
    tool_info = {
      "file_manager" => "Handles file operations",
      "database_query" => "Queries databases",
      "web_search" => "Searches the internet"
    }

    tools.map { |tool| "- #{tool}: #{tool_info[tool]}" }.join("\n")
  end
end
```
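
Models often decorate a one-word answer with quotes or trailing punctuation even when told not to. A small normalizing sketch (hypothetical helper) worth running before the allowlist check:

```ruby
# Scrub quoting/punctuation from a single-token reply, then verify it
# against the allowed tool names; nil signals "fall back to a default".
def normalize_choice(raw, allowed)
  candidate = raw.strip.downcase.gsub(/[^a-z0-9_]/, "")
  allowed.include?(candidate) ? candidate : nil
end

puts normalize_choice(%("web_search".), %w[file_manager web_search])
```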

## Error Handling

### Graceful Degradation

```ruby
class RobustSampler < Tsikol::Tool
  def execute(input:)
    # Try sampling with fallback
    response = @server.sample_text(
      messages: build_messages(input),
      temperature: 0.7,
      max_tokens: 1000
    )

    if response[:error]
      # Fall back to simpler logic
      return fallback_handler(input, response[:error])
    end

    # Validate response
    if response[:text].nil? || response[:text].empty?
      return "Received empty response from AI"
    end

    process_response(response[:text])
  end

  private

  def fallback_handler(input, error)
    log :warning, "Sampling failed, using fallback", error: error

    # Simple rule-based fallback
    case input
    when /analyze/i
      "Analysis unavailable. Sampling error: #{error}"
    when /generate/i
      "Generation unavailable. Sampling error: #{error}"
    else
      "AI assistance unavailable. Error: #{error}"
    end
  end
end
```

### Retry Logic

```ruby
class SamplingError < StandardError; end

class RetryingSampler < Tsikol::Tool
  def execute(input:)
    max_retries = 3
    retry_count = 0

    begin
      response = @server.sample_text(
        messages: build_messages(input),
        temperature: 0.7
      )

      if response[:error]
        raise SamplingError, response[:error]
      end

      response[:text]
    rescue SamplingError => e
      retry_count += 1

      if retry_count < max_retries
        log :warning, "Sampling failed, retrying",
            attempt: retry_count,
            error: e.message

        sleep(retry_count) # Linear backoff: 1s, then 2s
        retry
      else
        "Failed after #{max_retries} attempts: #{e.message}"
      end
    end
  end
end
```
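
`sleep(retry_count)` grows the delay linearly. If you want exponential backoff with a cap instead, the schedule looks like this (sketch; the base and cap values are illustrative):

```ruby
# Delay before each retry attempt: base * 2^attempt, capped.
def backoff_delays(attempts, base: 1, cap: 30)
  (0...attempts).map { |n| [base * (2**n), cap].min }
end

puts backoff_delays(5).inspect  # => [1, 2, 4, 8, 16]
```

Swapping it into the retry loop above is one line: `sleep(backoff_delays(max_retries)[retry_count - 1])`.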

## Testing Sampling

### Mock Sampling

```ruby
require 'minitest/autorun'
require 'tsikol/test_helpers'

class SamplingToolTest < Minitest::Test
  def setup
    @server = Tsikol::Server.new(name: "test")
    @server.sampling true

    # Mock the sampling method
    @server.define_singleton_method(:sample_text) do |**params|
      # Return predictable responses for testing
      messages = params[:messages]
      user_content = messages.find { |m| m[:role] == "user" }[:content][:text]

      case user_content
      when /error/i
        { error: "Simulated error" }
      when /generate.*code/i
        { text: "def example\n  puts 'Hello'\nend" }
      else
        { text: "Test response for: #{user_content}" }
      end
    end

    @server.register_tool_instance(CodeGenerator.new)
    @client = Tsikol::TestHelpers::TestClient.new(@server)
  end

  def test_successful_generation
    response = @client.call_tool("code_generator", {
      "description" => "generate hello world",
      "language" => "ruby"
    })

    assert_successful_response(response)
    result = response.dig(:result, :content, 0, :text)
    assert_match(/def example/, result)
  end

  def test_error_handling
    response = @client.call_tool("code_generator", {
      "description" => "trigger error",
      "language" => "ruby"
    })

    assert_successful_response(response)
    result = response.dig(:result, :content, 0, :text)
    assert_match(/Error generating code/, result)
  end
end
```

### Testing Sampling Parameters

```ruby
def test_sampling_parameters
  # Capture sampling calls
  sampling_calls = []

  @server.define_singleton_method(:sample_text) do |**params|
    sampling_calls << params
    { text: "Generated text" }
  end

  @client.call_tool("story_writer", {
    "topic" => "space adventure",
    "style" => "sci-fi",
    "length" => "long"
  })

  # Verify parameters
  assert_equal 1, sampling_calls.length

  params = sampling_calls.first
  assert_equal 0.8, params[:temperature]
  assert_equal 3000, params[:max_tokens]

  # Check message format
  messages = params[:messages]
  assert_equal 2, messages.length
  assert_equal "system", messages[0][:role]
  assert_match(/sci-fi/, messages[0][:content][:text])
end
```

## Best Practices

1. **Clear Prompts**: Write specific, clear prompts for better results
2. **Temperature Control**: Use appropriate temperature for the task
3. **Error Handling**: Always handle sampling failures gracefully
4. **Token Limits**: Be mindful of token limits and costs
5. **Response Validation**: Validate and sanitize AI responses
6. **User Feedback**: Show progress for long operations
7. **Fallback Logic**: Provide alternatives when sampling fails
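
For point 4, a common rough heuristic is about four characters per token of English text. A sketch (the ratio is an assumption and varies by model and tokenizer; use the model's real tokenizer when accuracy matters):

```ruby
# Rough pre-flight token estimate for budget checks.
CHARS_PER_TOKEN = 4.0

def approx_tokens(text)
  (text.length / CHARS_PER_TOKEN).ceil
end

def within_budget?(text, max_tokens)
  approx_tokens(text) <= max_tokens
end

puts approx_tokens("a" * 400)  # => 100
```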

## Security Considerations

### Input Sanitization

```ruby
def execute(user_input:)
  # Sanitize user input before sending it to the LLM
  sanitized = user_input.gsub(/[<>]/, '') # Remove potential HTML
                        .strip
                        .slice(0, 1000)   # Limit length

  messages = [
    {
      role: "user",
      content: { type: "text", text: sanitized }
    }
  ]

  @server.sample_text(messages: messages)
end
```

### Output Validation

```ruby
def process_ai_response(response)
  # Validate response format
  return "Invalid response" unless response.is_a?(String)

  # Strip inline code spans before display
  cleaned = response.gsub(/`.*?`/, '[code removed]')

  # Check for sensitive data patterns
  if contains_sensitive_data?(cleaned)
    return "Response contained sensitive information"
  end

  cleaned
end
```
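
`contains_sensitive_data?` is referenced above but not defined. A minimal sketch with illustrative patterns (the regexes are assumptions; tailor them to the data your server actually handles):

```ruby
# Illustrative patterns only: email addresses and US-SSN-shaped numbers.
SENSITIVE_PATTERNS = [
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,  # email-like strings
  /\b\d{3}-\d{2}-\d{4}\b/         # 123-45-6789 shapes
].freeze

def contains_sensitive_data?(text)
  SENSITIVE_PATTERNS.any? { |pattern| text.match?(pattern) }
end

puts contains_sensitive_data?("write to ada@example.com")  # => true
```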

## Next Steps

- Add [Middleware](middleware.md) for cross-cutting concerns
- Implement [Error Handling](../cookbook/error-handling.md)
- Learn about [Testing](testing.md)
- Explore [Advanced Patterns](../cookbook/advanced-patterns.md)