pedicab 0.3.1 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. checksums.yaml +4 -4
  2. data/API.md +401 -0
  3. data/EXAMPLES.md +884 -0
  4. data/Gemfile.lock +10 -24
  5. data/INSTALLATION.md +652 -0
  6. data/README.md +329 -10
  7. data/lib/pedicab/#city.rb# +27 -0
  8. data/lib/pedicab/ride.rb +60 -81
  9. data/lib/pedicab/version.rb +1 -1
  10. data/lib/pedicab.py +3 -8
  11. data/lib/pedicab.rb +141 -133
  12. metadata +6 -89
  13. data/#README.md# +0 -51
  14. data/books/Arnold_Bennett-How_to_Live_on_24_Hours_a_Day.txt +0 -1247
  15. data/books/Edward_L_Bernays-crystallizing_public_opinion.txt +0 -4422
  16. data/books/Emma_Goldman-Anarchism_and_Other_Essays.txt +0 -7654
  17. data/books/Office_of_Strategic_Services-Simple_Sabotage_Field_Manual.txt +0 -1057
  18. data/books/Sigmund_Freud-Group_Psychology_and_The_Analysis_of_The_Ego.txt +0 -2360
  19. data/books/Steve_Hassan-The_Bite_Model.txt +0 -130
  20. data/books/Steve_Hassan-The_Bite_Model.txt~ +0 -132
  21. data/books/Sun_Tzu-Art_of_War.txt +0 -159
  22. data/books/Sun_Tzu-Art_of_War.txt~ +0 -166
  23. data/books/US-Constitution.txt +0 -502
  24. data/books/US-Constitution.txt~ +0 -502
  25. data/books/cia-kubark.txt +0 -4637
  26. data/books/machiavelli-the_prince.txt +0 -4599
  27. data/books/sun_tzu-art_of_war.txt +0 -1017
  28. data/books/us_army-bayonette.txt +0 -843
  29. data/lib/pedicab/calc.rb~ +0 -8
  30. data/lib/pedicab/link.rb +0 -38
  31. data/lib/pedicab/link.rb~ +0 -14
  32. data/lib/pedicab/mark.rb +0 -9
  33. data/lib/pedicab/mark.rb~ +0 -5
  34. data/lib/pedicab/on.rb +0 -6
  35. data/lib/pedicab/on.rb~ +0 -6
  36. data/lib/pedicab/poke.rb +0 -14
  37. data/lib/pedicab/poke.rb~ +0 -15
  38. data/lib/pedicab/query.rb +0 -92
  39. data/lib/pedicab/query.rb~ +0 -93
  40. data/lib/pedicab/rank.rb +0 -92
  41. data/lib/pedicab/rank.rb~ +0 -89
  42. data/lib/pedicab/ride.rb~ +0 -101
  43. data/lib/pedicab.sh~ +0 -3
data/EXAMPLES.md ADDED
@@ -0,0 +1,884 @@
# Usage Examples and Tutorials

## Table of Contents

1. [Basic Conversations](#basic-conversations)
2. [Context Management](#context-management)
3. [Custom Handlers](#custom-handlers)
4. [Conditional Logic](#conditional-logic)
5. [State Management](#state-management)
6. [Performance Monitoring](#performance-monitoring)
7. [Error Handling](#error-handling)
8. [Advanced Patterns](#advanced-patterns)
9. [Real-world Examples](#real-world-examples)

---

## Basic Conversations

### Simple Q&A

```ruby
require 'pedicab'

# Create a conversation instance
ai = Pedicab['chatbot']

# Ask a question
response = ai["What is the capital of France?"]
puts response.out
# => "The capital of France is Paris."

# Get timing information
puts "Response time: #{response.took} seconds"
```

### Multi-turn Conversations

```ruby
ai = Pedicab['conversation']

# First exchange
response1 = ai["Tell me about Ruby programming"]
puts response1.out

# Continue with context - AI remembers the previous exchange
response2 = ai << "What are its main features?"
puts response2.out

# Continue again
response3 = ai << "How does it compare to Python?"
puts response3.out

# View conversation history
puts "Prompts asked: #{ai.last.inspect}"
puts "Responses received: #{ai.response.inspect}"
```

---

## Context Management

### Fresh Conversations vs. Continued Context

```ruby
ai = Pedicab['context_demo']

# Using [] - starts fresh each time
response1 = ai["What is 2+2?"]
puts response1.out # => "2+2 equals 4"

response2 = ai["What about 3+3?"] # Fresh conversation, no memory of previous
puts response2.out # => "3+3 equals 6"

# Using << - continues with context
ai.reset!
response3 = ai["What is 2+2?"]
puts response3.out # => "2+2 equals 4"

response4 = ai << "What about 3+3?" # Knows we're doing math
puts response4.out # => "Following the math pattern, 3+3 equals 6."
```

### Context Resetting

```ruby
ai = Pedicab['reset_demo']

# Build up some context
ai["I'm learning programming"]
ai << "Teach me about variables"
ai << "What about functions?"

# View current context
puts "Current conversation: #{ai.last.length} exchanges"

# Reset and start fresh
ai.reset!
puts "After reset: #{ai.last.length} exchanges"

# Start new topic
response = ai["What is machine learning?"]
puts response.out # Fresh context, no memory of programming discussion
```

---

## Custom Handlers

### Response Formatting

```ruby
ai = Pedicab['formatted_bot']

# Set up custom response handler
ai.handle do |response|
  puts "=" * 50
  puts "Question: #{response.prompt}"
  puts "Answer: #{response.out}"
  puts "Thinking: #{response.thoughts.last}" if response.thoughts.last
  puts "Time: #{response.took.round(3)}s"
  puts "=" * 50
  response
end

# Now all responses will be formatted
ai["What is photosynthesis?"]
ai << "Why is it important?"
```

### Response Transformation

```ruby
ai = Pedicab['transformer']

# Handler that modifies the response
ai.handle do |response|
  # Convert to uppercase and add emphasis
  modified = response.out.upcase + "!!!"
  response.out = modified
  response
end

response = ai["Hello there"]
puts response.out
# => "HELLO THERE!!!"
```

### Logging Handler

```ruby
ai = Pedicab['logger']

# Set up logging handler
ai.handle do |response|
  # Log to file
  File.open('conversation.log', 'a') do |f|
    f.puts "[#{Time.now}] Prompt: #{response.prompt}"
    f.puts "[#{Time.now}] Response: #{response.out}"
    f.puts "[#{Time.now}] Time: #{response.took}s"
    f.puts "---"
  end

  # Also print to console
  puts response.out
  response
end

ai["How do you work?"]
ai << "What can you do?"
```

---

## Conditional Logic

### Basic Conditions

```ruby
ai = Pedicab['conditional']

# Ask a yes/no question
is_technical = ai.ride.if?("Ruby is a programming language")
puts is_technical # => true

# Use condition in logic
if ai.ride.if?("the user is asking about technology")
  response = ai["I'd be happy to help with technology questions!"]
else
  response = ai["I can help with many topics. What interests you?"]
end
```

### Conditional Blocks

```ruby
ai = Pedicab['smart_bot']

# Use conditional blocks for different responses
ai.ride.if?("user is asking for help") do
  ai["I'm here to help! What do you need assistance with?"]
end

ai.ride.if?("user is greeting") do
  ai["Hello! How can I assist you today?"]
end

# Get result without block
wants_code = ai.ride.if?("user wants programming help")
if wants_code
  ai["Let's write some code together!"]
end
```

### Multi-Condition Logic

```ruby
ai = Pedicab['classifier']

# Classify user intent
prompt = "I want to build a website"

is_web = ai.ride.if?("user wants to build a website")
is_app = ai.ride.if?("user wants to build a mobile app")
is_code = ai.ride.if?("user wants programming help")

case
when is_web
  ai["Let's discuss web development! HTML, CSS, and JavaScript are great starting points."]
when is_app
  ai["Mobile app development is exciting! Are you thinking iOS or Android?"]
when is_code
  ai["Programming is a valuable skill! What language interests you?"]
else
  ai["I'd be happy to help! Can you tell me more about what you'd like to create?"]
end
```

---

## State Management

### Custom State Variables

```ruby
ride = Pedicab.ride['stateful']

# Set custom state
ride[:user_name] = "Alice"
ride[:user_level] = "beginner"
ride[:last_topic] = "Ruby basics"

# Use state in prompts
prompt = "Hi, I'm #{ride[:user_name]} and I'm a #{ride[:user_level]}"
response = ride.go(prompt)

# Update state based on conversation
ride[:last_topic] = "introduction"
```

### Persistent State Across Sessions

```ruby
class PersistentBot
  def initialize(id)
    @ride = Pedicab.ride[id]
    load_state
  end

  def chat(message)
    # Use stored state
    response = @ride.go(message)

    # Save important information
    if @ride.if?("user introduced themselves")
      @ride[:user_name] = extract_name(response.out)
    end

    save_state
    response
  end

  private

  def load_state
    # Load from file or database
    # Implementation depends on your needs
  end

  def save_state
    # Save to file or database
  end

  def extract_name(text)
    # Extract name from response
    # Simple regex or more sophisticated NLP
    text.match(/My name is (\w+)/)&.[](1) || "Unknown"
  end
end

# Usage
bot = PersistentBot.new('my_bot')
bot.chat("Hi, my name is John and I want to learn programming")
```
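The `load_state` / `save_state` stubs above are deliberately left open. One minimal way to fill them in, independent of Pedicab itself, is a JSON file on disk; `StateStore` and the file path here are illustrative names, not part of the gem:

```ruby
require 'json'
require 'tmpdir'

# Minimal file-backed state store: a stand-in for the load_state /
# save_state stubs above. State is a plain Hash; keys come back as
# strings after the JSON round-trip.
class StateStore
  def initialize(path)
    @path = path
  end

  # Returns the persisted state, or an empty Hash on first run.
  def load
    return {} unless File.exist?(@path)
    JSON.parse(File.read(@path))
  end

  # Serialize the state back to disk.
  def save(state)
    File.write(@path, JSON.generate(state))
  end
end

path = File.join(Dir.tmpdir, "pedicab_state_demo.json")
store = StateStore.new(path)

state = store.load
state["user_name"] = "John"
store.save(state)

# A fresh store instance sees the persisted value.
puts StateStore.new(path).load["user_name"]  # => John
```

In `PersistentBot`, `load_state` would merge `store.load` into `@ride` and `save_state` would write the relevant `@ride` keys back out.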

---

## Performance Monitoring

### Request Timing

```ruby
ai = Pedicab['monitored']

# Single request timing
response = ai["Explain quantum computing"]
puts "Last request: #{response.took} seconds"

# Cumulative timing
ai["What is machine learning?"]
ai["How does AI work?"]

puts "Total conversation time: #{ai.life} seconds"
puts "Average per request: #{ai.life / ai.response.length} seconds"

# Detailed timing breakdown
ai.time.each_with_index do |time, index|
  puts "Request #{index + 1}: #{time} seconds"
end
```

### Performance Comparison

```ruby
# Compare different models or prompts
def benchmark_prompt(prompt, model = nil)
  old_model = ENV['MODEL']
  ENV['MODEL'] = model if model

  ai = Pedicab['benchmark']
  start_time = Time.now
  response = ai[prompt]
  end_time = Time.now

  ENV['MODEL'] = old_model

  {
    model: model || 'default',
    prompt_length: prompt.length,
    response_length: response.out.length,
    time: response.took,
    total_time: end_time - start_time
  }
end

# Run benchmarks
results = [
  benchmark_prompt("What is Ruby?", 'qwen'),
  benchmark_prompt("What is Ruby?", 'llama2'),
  benchmark_prompt("What is Ruby?", 'mistral')
]

results.each do |result|
  puts "#{result[:model]}: #{result[:time]}s"
end
```

---

## Error Handling

### Robust Error Handling

```ruby
class SafeAIBot
  def initialize(id)
    @ai = Pedicab[id]
  end

  def safe_chat(prompt)
    begin
      response = @ai[prompt]

      # Check for empty or problematic responses
      if response.out.strip.empty?
        return "I apologize, but I couldn't generate a response."
      end

      # Check for error indicators in response
      if response.out.include?("Error:") || response.out.include?("I don't know")
        return "I'm having trouble processing that request. Could you rephrase it?"
      end

      response.out

    rescue Pedicab::Error => e
      puts "Pedicab error: #{e.message}"
      "I'm experiencing technical difficulties. Please try again later."

    rescue => e
      puts "Unexpected error: #{e.message}"
      "Something went wrong. Please try again."
    end
  end
end

# Usage
bot = SafeAIBot.new('safe_bot')
puts bot.safe_chat("Tell me about something")
```

### Retry Logic

```ruby
class RetryBot
  def initialize(id, max_retries: 3)
    @ai = Pedicab[id]
    @max_retries = max_retries
  end

  def chat_with_retry(prompt)
    attempts = 0

    while attempts < @max_retries
      begin
        response = @ai[prompt]

        # Check if response is satisfactory
        if satisfactory_response?(response.out)
          return response.out
        end

        attempts += 1
        puts "Retry attempt #{attempts}" if attempts < @max_retries

      rescue => e
        attempts += 1
        puts "Error: #{e.message}. Retry #{attempts}/#{@max_retries}"
        return "Failed after #{@max_retries} attempts" if attempts >= @max_retries
      end
    end

    "Unable to generate satisfactory response after #{@max_retries} attempts"
  end

  private

  def satisfactory_response?(response)
    response.length > 10 &&
      !response.include?("Error:") &&
      !response.include?("I don't know")
  end
end
```

---

## Advanced Patterns

### Conversation Templates

```ruby
class ConversationTemplate
  def initialize(id, template)
    @ai = Pedicab[id]
    @template = template
  end

  def ask(variables = {})
    prompt = apply_template(@template, variables)
    response = @ai[prompt]
    post_process(response, variables)
  end

  private

  def apply_template(template, variables)
    result = template.dup
    variables.each do |key, value|
      result.gsub!("{{#{key}}}", value.to_s)
    end
    result
  end

  def post_process(response, variables)
    # Apply any post-processing based on variables
    response.out
  end
end

# Usage
code_review = ConversationTemplate.new('review_bot', <<~TEMPLATE)
  Review the following code:

  Language: {{language}}
  Code: {{code}}

  Provide feedback on:
  1. Code quality
  2. Best practices
  3. Potential improvements
TEMPLATE

result = code_review.ask(
  language: 'Ruby',
  code: 'def hello; puts "world"; end'
)
puts result
```

### Multi-Expert System

```ruby
class ExpertSystem
  def initialize
    @experts = {
      programming: Pedicab['programming_expert'],
      writing: Pedicab['writing_expert'],
      analysis: Pedicab['analysis_expert']
    }
  end

  def consult(question, domain = nil)
    if domain && @experts[domain]
      expert_response = @experts[domain][question]

      # Have analysis expert review the response
      review_prompt = "Review this expert response for accuracy and completeness:\n\n#{expert_response.out}"
      review = @experts[:analysis][review_prompt]

      "Expert Response:\n#{expert_response.out}\n\nReview:\n#{review.out}"
    else
      # Use analysis expert to determine domain
      domain_prompt = "What domain of expertise does this question belong to: #{question}"
      suggested_domain = determine_domain(@experts[:analysis][domain_prompt].out)

      consult(question, suggested_domain)
    end
  end

  private

  def determine_domain(response_text)
    case response_text.downcase
    when /programming|code|software|development/
      :programming
    when /writing|grammar|composition|literature/
      :writing
    else
      :analysis
    end
  end
end

# Usage
system = ExpertSystem.new
puts system.consult("How do I implement a binary search tree in Ruby?")
```

---

## Real-world Examples

### Code Review Assistant

```ruby
class CodeReviewer
  def initialize
    @ai = Pedicab['code_reviewer']
    setup_handler
  end

  def review_code(code, language = 'Ruby')
    prompt = <<~PROMPT
      Please review this #{language} code for:
      1. Code quality and readability
      2. Best practices adherence
      3. Security vulnerabilities
      4. Performance considerations
      5. Potential bugs

      Code to review:
      ```#{language.downcase}
      #{code}
      ```

      Provide specific, actionable feedback.
    PROMPT

    response = @ai[prompt]
    format_review(response.out)
  end

  private

  def setup_handler
    @ai.handle do |response|
      # Add structure to the review
      response.out = add_review_sections(response.out)
      response
    end
  end

  def add_review_sections(review_text)
    sections = {
      'Summary' => extract_section(review_text, ['overall', 'summary', 'general']),
      'Issues' => extract_section(review_text, ['issue', 'problem', 'bug', 'concern']),
      'Recommendations' => extract_section(review_text, ['recommend', 'suggest', 'improve']),
      'Positive Notes' => extract_section(review_text, ['good', 'well', 'correct', 'nice'])
    }

    formatted = []
    sections.each do |title, content|
      formatted << "\n## #{title}\n#{content}" if content && !content.empty?
    end

    formatted.empty? ? review_text : formatted.join("\n")
  end

  def extract_section(text, keywords)
    lines = text.split("\n")
    relevant = []

    lines.each do |line|
      if keywords.any? { |kw| line.downcase.include?(kw) }
        relevant << line
      end
    end

    relevant.join("\n")
  end

  def format_review(review)
    "# Code Review Report\n\n#{review}\n\n---\n*Generated by AI Code Reviewer*"
  end
end

# Usage
reviewer = CodeReviewer.new
code = <<~RUBY
  def calculate_sum(numbers)
    total = 0
    numbers.each do |num|
      total = total + num
    end
    return total
  end
RUBY

puts reviewer.review_code(code)
```

### Learning Assistant

```ruby
class LearningAssistant
  def initialize(student_name)
    @student = student_name
    @ai = Pedicab['learning_assistant']
    @ride = @ai.ride

    # Initialize student state
    @ride[:student_name] = student_name
    @ride[:learning_history] = []
    @ride[:current_topic] = nil
    @ride[:proficiency_level] = 'beginner'
  end

  def learn(topic, question = nil)
    @ride[:current_topic] = topic

    if question
      ask_question(topic, question)
    else
      start_lesson(topic)
    end
  end

  def ask_question(topic, question)
    prompt = <<~PROMPT
      #{@student} is learning about #{topic} and asks: "#{question}"

      Provide a clear, educational answer that:
      1. Is appropriate for a #{@ride[:proficiency_level]} level
      2. Builds on previous learning if relevant
      3. Is encouraging and motivating
      4. Suggests follow-up questions or exercises

      Previous topics: #{@ride[:learning_history].join(', ')}
    PROMPT

    response = @ai[prompt]
    record_interaction(topic, question, response.out)
    response.out
  end

  def start_lesson(topic)
    prompt = <<~PROMPT
      #{@student} wants to learn about #{topic}.

      Create an engaging introduction that:
      1. Explains what #{topic} is
      2. Shows why it's important or interesting
      3. Sets appropriate expectations for a #{@ride[:proficiency_level]}
      4. Provides 2-3 key concepts to start with
      5. Suggests a first learning activity

      Make it conversational and encouraging.
    PROMPT

    response = @ai[prompt]
    record_interaction(topic, "Introduction to #{topic}", response.out)
    response.out
  end

  def assess_progress(topic)
    prompt = <<~PROMPT
      Based on this learning history about #{topic}, assess #{@student}'s progress:

      #{@ride[:learning_history].map { |h| "- #{h}" }.join("\n")}

      Provide:
      1. Current proficiency assessment
      2. Strengths demonstrated
      3. Areas for improvement
      4. Next learning recommendations
    PROMPT

    response = @ai[prompt]
    update_proficiency(response.out)
    response.out
  end

  private

  def record_interaction(topic, question, answer)
    interaction = "#{Time.now.strftime('%Y-%m-%d')}: #{topic} - #{question}"
    @ride[:learning_history] << interaction

    # Keep history manageable
    if @ride[:learning_history].length > 50
      @ride[:learning_history] = @ride[:learning_history].last(30)
    end
  end

  def update_proficiency(assessment)
    # Simple logic to update proficiency based on assessment
    if assessment.include?('intermediate') || assessment.include?('progressing')
      @ride[:proficiency_level] = 'intermediate'
    elsif assessment.include?('advanced') || assessment.include?('excellent')
      @ride[:proficiency_level] = 'advanced'
    end
  end
end

# Usage
assistant = LearningAssistant.new('Sarah')

puts assistant.learn('Ruby variables')
puts assistant.learn('Ruby variables', 'What is a symbol in Ruby?')
puts assistant.assess_progress('Ruby variables')
```

### Content Generator

```ruby
class ContentGenerator
  def initialize
    @ai = Pedicab['content_generator']
    @ride = @ai.ride
  end

  def blog_post(topic, tone = 'informative', length = 'medium')
    prompt = <<~PROMPT
      Write a #{length} blog post about #{topic} with a #{tone} tone.

      Structure:
      1. Catchy title
      2. Engaging introduction
      3. 3-5 main points with clear explanations
      4. Practical examples or applications
      5. Conclusion with call-to-action

      Guidelines:
      - Use clear, accessible language
      - Include relevant subheadings
      - Add formatting for readability
      - Ensure logical flow
      - Target word count: #{word_count(length)}
    PROMPT

    response = @ai[prompt]
    format_content(response.out, 'blog')
  end

  def tutorial(topic, difficulty = 'beginner')
    prompt = <<~PROMPT
      Create a step-by-step tutorial for #{difficulty} level learners about #{topic}.

      Include:
      1. Prerequisites
      2. Learning objectives
      3. Step-by-step instructions
      4. Code examples (if applicable)
      5. Practice exercises
      6. Common pitfalls and solutions
      7. Next steps for further learning

      Make it hands-on and practical.
    PROMPT

    response = @ai[prompt]
    format_content(response.out, 'tutorial')
  end

  def social_media_post(topic, platform = 'twitter')
    prompts = {
      'twitter' => "Write a tweet about #{topic}. Include relevant hashtags and keep it under 280 characters.",
      'linkedin' => "Write a LinkedIn post about #{topic}. Make it professional and insightful, include 3-5 hashtags.",
      'instagram' => "Write an Instagram caption about #{topic}. Include engaging questions and relevant hashtags."
    }

    response = @ai[prompts[platform]]
    format_content(response.out, 'social')
  end

  private

  def word_count(length)
    counts = {
      'short' => '300-500',
      'medium' => '800-1200',
      'long' => '1500-2000'
    }
    counts[length] || counts['medium']
  end

  def format_content(content, type)
    case type
    when 'blog'
      "# Blog Post\n\n#{content}\n\n*Generated by AI Content Generator*"
    when 'tutorial'
      "# Tutorial\n\n#{content}\n\n*Generated by AI Content Generator*"
    when 'social'
      "#{content}\n\n*Generated by AI Content Generator*"
    else
      content
    end
  end
end

# Usage
generator = ContentGenerator.new

puts generator.blog_post('Machine Learning Basics')
puts generator.tutorial('Getting Started with Git', 'beginner')
puts generator.social_media_post('Importance of Code Reviews', 'linkedin')
```

---
## Best Practices

### Performance Tips

1. **Choose appropriate models** - Smaller models for simple tasks, larger for complex analysis
2. **Reset context appropriately** - Use `reset!` when changing topics
3. **Monitor performance** - Track `took` and `life` metrics
4. **Cache responses** - For common questions, consider caching

### Security Considerations

1. **Validate inputs** - Sanitize user inputs before sending them to the LLM
2. **Filter outputs** - Check for sensitive information in responses
3. **Rate limiting** - Implement limits to prevent abuse
4. **Error boundaries** - Handle failures gracefully
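For tip 3, a minimal sliding-window limiter can sit in front of any of the bots above. `RateLimiter` is an illustrative name, not part of Pedicab; the injectable clock just keeps the sketch deterministic:

```ruby
# Per-user sliding-window rate limiter: at most `limit` requests
# per `window` seconds. Call allow? before forwarding to the model.
class RateLimiter
  def initialize(limit:, window:, clock: -> { Time.now.to_f })
    @limit = limit
    @window = window
    @clock = clock
    @stamps = Hash.new { |h, k| h[k] = [] }  # user => request timestamps
  end

  def allow?(user)
    now = @clock.call
    @stamps[user].reject! { |t| t <= now - @window }  # drop expired stamps
    return false if @stamps[user].size >= @limit
    @stamps[user] << now
    true
  end
end

t = 0.0
limiter = RateLimiter.new(limit: 2, window: 60, clock: -> { t })
puts limiter.allow?("alice")  # => true
puts limiter.allow?("alice")  # => true
puts limiter.allow?("alice")  # => false (limit reached)
t = 61.0
puts limiter.allow?("alice")  # => true (window expired)
```

A real deployment would back the timestamp store with something shared (e.g. Redis) so the limit holds across processes.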

### Usability Tips

1. **Provide clear prompts** - Be specific about what you want
2. **Set expectations** - Let users know about capabilities and limitations
3. **Use appropriate handlers** - Custom handlers can greatly improve user experience
4. **Handle edge cases** - Prepare for empty or unexpected responses

These examples should help you get the most out of Pedicab in various real-world scenarios!