pedicab 0.3.0 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. checksums.yaml +4 -4
  2. data/API.md +401 -0
  3. data/EXAMPLES.md +884 -0
  4. data/Gemfile.lock +10 -24
  5. data/INSTALLATION.md +652 -0
  6. data/README.md +329 -10
  7. data/lib/pedicab/#city.rb# +27 -0
  8. data/lib/pedicab/ride.rb +60 -81
  9. data/lib/pedicab/version.rb +1 -1
  10. data/lib/pedicab.py +3 -8
  11. data/lib/pedicab.rb +141 -133
  12. metadata +6 -89
  13. data/#README.md# +0 -51
  14. data/books/Arnold_Bennett-How_to_Live_on_24_Hours_a_Day.txt +0 -1247
  15. data/books/Edward_L_Bernays-crystallizing_public_opinion.txt +0 -4422
  16. data/books/Emma_Goldman-Anarchism_and_Other_Essays.txt +0 -7654
  17. data/books/Office_of_Strategic_Services-Simple_Sabotage_Field_Manual.txt +0 -1057
  18. data/books/Sigmund_Freud-Group_Psychology_and_The_Analysis_of_The_Ego.txt +0 -2360
  19. data/books/Steve_Hassan-The_Bite_Model.txt +0 -130
  20. data/books/Steve_Hassan-The_Bite_Model.txt~ +0 -132
  21. data/books/Sun_Tzu-Art_of_War.txt +0 -159
  22. data/books/Sun_Tzu-Art_of_War.txt~ +0 -166
  23. data/books/US-Constitution.txt +0 -502
  24. data/books/US-Constitution.txt~ +0 -502
  25. data/books/cia-kubark.txt +0 -4637
  26. data/books/machiavelli-the_prince.txt +0 -4599
  27. data/books/sun_tzu-art_of_war.txt +0 -1017
  28. data/books/us_army-bayonette.txt +0 -843
  29. data/lib/pedicab/calc.rb~ +0 -8
  30. data/lib/pedicab/link.rb +0 -38
  31. data/lib/pedicab/link.rb~ +0 -14
  32. data/lib/pedicab/mark.rb +0 -9
  33. data/lib/pedicab/mark.rb~ +0 -5
  34. data/lib/pedicab/on.rb +0 -6
  35. data/lib/pedicab/on.rb~ +0 -6
  36. data/lib/pedicab/poke.rb +0 -14
  37. data/lib/pedicab/poke.rb~ +0 -15
  38. data/lib/pedicab/query.rb +0 -92
  39. data/lib/pedicab/query.rb~ +0 -93
  40. data/lib/pedicab/rank.rb +0 -92
  41. data/lib/pedicab/rank.rb~ +0 -89
  42. data/lib/pedicab/ride.rb~ +0 -101
  43. data/lib/pedicab.sh~ +0 -3
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 702ea275719220820fd42a4619ff3d1bc23ec7436effb74b1c6df2e67a6aad96
- data.tar.gz: 5c1c20e35ae8d24c7011f45b1dab63d30f3306dcdc72a2d9016ed05cc413681e
+ metadata.gz: f02a2dcd924ea83d8d045d1366f1ea6043141f8887c1d5ae40e36372d49deac6
+ data.tar.gz: 5487597657aaec6b3474599a4d2d74d7b99822f4a876e10037825147603c6e2b
  SHA512:
- metadata.gz: ad3f2468798402d8f14e6f566e22ce2cd8c9c8aadced16cc882b2fff0daa0468d993788f10ee4d2edab84dffbffe9ae7e7e011c49198ae01aa08447c0c95a5c3
- data.tar.gz: 7e0ea54c87fdaf6f54f25afa1df7cf91f09f1c902c88b161d2ae178b5a9565fb3f2b54ffeb931c4e18792645acdcbf5c5a4bd51fbf3db73d85d8cf6fa263e96a
+ metadata.gz: 3bcf9fc60bf98a9bd1e864343752a01ab1971c9e071dc778623a70500e5b06eb0f1161cbe0a82e38ee4e9f555a46ba866ec4b3f69bee76a839cef57be96224dc
+ data.tar.gz: 3be66f8fcaf8bfb84905c13eb0c7ad5d476d1b521126cd014ee891caa67043ace0ce9953f6c4fbf82b056582a6259e837c52583105d31260c26ab51708ed4f3f
data/API.md ADDED
@@ -0,0 +1,401 @@
+ # API Documentation
+
+ ## Pedicab Module
+
+ ### Module Methods
+
+ #### `Pedicab.memory -> allocated context size`
+
+ Gets the number of inputs and outputs considered as context when handling a request.
+
+ **Returns:** allocated context size
+
+ **Example:**
+ ```ruby
+ Pedicab.memory
+ ```
+
+ #### `Pedicab.memory = size`
+
+ Sets the number of inputs and outputs to consider as context when handling a request.
+
+ **Parameters:**
+ - `size` (Integer) - New context size
+
+ **Example:**
+ ```ruby
+ Pedicab.memory = 5
+ ```
+
+ #### `Pedicab.[](id) -> Pedicab::P`
+
+ Creates a new conversation instance with the given identifier.
+
+ **Parameters:**
+ - `id` (String) - Unique identifier for the conversation instance
+
+ **Returns:** `Pedicab::P` instance
+
+ **Example:**
+ ```ruby
+ ai = Pedicab['my_assistant']
+ response = ai["Hello, world!"]
+ ```
+
+ #### `Pedicab.models -> Array[String]`
+
+ Lists all available GGUF models in the `/models/` directory.
+
+ **Returns:** Array of model names (without the .gguf extension)
+
+ **Example:**
+ ```ruby
+ puts Pedicab.models
+ # => ["qwen", "llama2", "mistral"]
+ ```
+
+ #### `Pedicab.ride(id) -> Pedicab::Ride`
+
+ Creates a new low-level ride instance for direct model interaction.
+
+ **Parameters:**
+ - `id` (String) - Identifier for the ride instance
+
+ **Returns:** `Pedicab::Ride` instance
+
+ **Example:**
+ ```ruby
+ ride = Pedicab.ride('direct_ride')
+ response = ride.go("What is Ruby?")
+ ```
+
+ ---
+
+ ## Pedicab::P Class
+
+ The main conversation interface that manages context, state, and interactions with the LLM.
+
+ ### Constructor
+
+ #### `initialize(id)`
+
+ Creates a new conversation instance.
+
+ **Parameters:**
+ - `id` (String) - Unique identifier for the conversation
+
+ **Attributes:**
+ - `id` - Conversation identifier
+ - `ride` - Associated Ride instance
+ - `handler` - Response handler lambda (default: returns `out`)
+ - `context` - Current conversation context hash
+
+ ### Public Methods
+
+ #### `reset! -> void`
+
+ Clears all conversation state, context, and history.
+
+ **Example:**
+ ```ruby
+ ai = Pedicab['bot']
+ ai["First question"]
+ ai.reset! # Clears everything for a fresh start
+ ```
+
+ #### `[](prompt) -> Object`
+
+ Starts a fresh conversation with the given prompt. Clears previous context.
+
+ **Parameters:**
+ - `prompt` (String) - The prompt to send to the LLM
+
+ **Returns:** Result of the handler (typically the response object)
+
+ **Example:**
+ ```ruby
+ response = ai["What is the meaning of life?"]
+ puts response.out
+ ```
+
+ #### `<<(prompt) -> Object`
+
+ Continues an existing conversation with context. Maintains previous exchanges.
+
+ **Parameters:**
+ - `prompt` (String) - The prompt to send to the LLM
+
+ **Returns:** Result of the handler (typically the response object)
+
+ **Example:**
+ ```ruby
+ ai["What is Ruby?"]        # First question
+ ai << "What about Python?" # Continues with context
+ ```
+
+ #### `handle(&block) -> Proc/Object`
+
+ Sets or executes the response handler.
+
+ **Parameters:**
+ - `block` (Proc) - Optional handler that receives the response object
+
+ **Returns:** Handler proc if setting, or handler result if executing
+
+ **Example:**
+ ```ruby
+ # Set custom handler
+ ai.handle do |response|
+   puts "AI said: #{response.out}"
+   puts "Time: #{response.took}s"
+   response
+ end
+
+ # Execute handler
+ ai.handle
+ ```
+
+ #### `life -> Float`
+
+ Returns the total processing time for all requests in the current conversation.
+
+ **Returns:** Total time in seconds
+
+ **Example:**
+ ```ruby
+ ai["Question 1"]
+ ai["Question 2"]
+ puts "Total time: #{ai.life}s"
+ ```
+
+ ### Attributes (Readers)
+
+ - `took` (Float) - Time taken for the last request in seconds
+ - `time` (Array[Float]) - Array of processing times for all requests
+ - `thoughts` (Array[String]) - Array of LLM thinking processes
+ - `prompt` (String) - The last prompt sent
+ - `response` (Array[String]) - Array of all responses received
+ - `out` (String) - The last response content
+ - `ride` (Pedicab::Ride) - The associated ride instance
+ - `last` (Array[String]) - Array of previous prompts
+
+ ### Attributes (Accessors)
+
+ - `handler` (Proc) - Response handler lambda
+ - `context` (Hash) - Current conversation context
+
+ ---
+
+ ## Pedicab::Ride Class
+
+ Low-level interface for direct communication with the LLM model.
+
+ ### Constructor
+
+ #### `initialize(id)`
+
+ Creates a new ride instance.
+
+ **Parameters:**
+ - `id` (String) - Identifier for the ride
+
+ **Attributes:**
+ - `id` - Ride identifier
+ - `model` (String) - Model name (from ENV['MODEL'] or 'qwen')
+ - `state` (Hash) - Current ride state
+
+ ### Public Methods
+
+ #### `go(prompt, &block) -> String/Nil`
+
+ Sends a prompt to the LLM and returns the response.
+
+ **Parameters:**
+ - `prompt` (String) - The prompt to send
+ - `block` (Proc, optional) - Block to execute for each line of response
+
+ **Returns:** Response content, or nil if block given
+
+ **Example:**
+ ```ruby
+ # Simple usage
+ response = ride.go("What is 2+2?")
+ puts response
+
+ # With block for line-by-line processing
+ ride.go("List 3 colors") do |line|
+   puts "Color: #{line[:for]}"
+ end
+ ```
+
+ #### `if?(condition, &block) -> Boolean/Object`
+
+ Evaluates a boolean condition using the LLM.
+
+ **Parameters:**
+ - `condition` (String) - The condition to evaluate
+ - `block` (Proc, optional) - Block to execute if condition is true
+
+ **Returns:** Boolean result, or block result if block given and true
+
+ **Example:**
+ ```ruby
+ # Boolean check
+ is_helpful = ride.if?("Ruby is a programming language")
+
+ # With block
+ ride.if?("user wants help") do
+   puts "Providing help..."
+ end
+ ```
+
+ #### `reset! -> void`
+
+ Clears the ride state.
+
+ #### `[](key) -> Object`
+
+ Accesses a state value.
+
+ **Parameters:**
+ - `key` (Symbol/String) - State key
+
+ **Returns:** State value
+
+ #### `[]=(key, value) -> Object`
+
+ Sets a state value.
+
+ **Parameters:**
+ - `key` (Symbol/String) - State key
+ - `value` (Object) - State value
+
+ **Returns:** Set value
+
+ #### `to_h -> Hash`
+
+ Returns the complete state as a hash.
+
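The three accessors above behave like a thin wrapper over the internal state hash. The sketch below mirrors that documented interface with a plain Ruby class so the semantics can be seen in isolation; `StateSketch` is illustrative, not the gem's actual implementation.

```ruby
# Illustrative stand-in for the state interface documented above.
# The real Pedicab::Ride wires these accessors to its model state;
# this sketch only shows the hash-like semantics.
class StateSketch
  def initialize
    @state = {}
  end

  # [](key) reads a state value
  def [](key)
    @state[key]
  end

  # []=(key, value) writes a state value and returns it
  def []=(key, value)
    @state[key] = value
  end

  # to_h returns the complete state as a hash
  def to_h
    @state.dup
  end

  # reset! clears the state
  def reset!
    @state.clear
  end
end

ride = StateSketch.new
ride[:content] = "4"
ride[:took] = 0.12
ride.to_h    # complete state as a hash
ride.reset!  # state is now empty
```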
+ ### State Keys
+
+ The ride maintains these state keys:
+
+ - `:content` - Model response content
+ - `:thought` - Model thinking process (if available)
+ - `:action` - Description of current action
+ - `:yes` - Boolean result for conditional queries
+ - `:took` - Processing time in seconds
+ - `:for` - Current line when iterating (internal use)
+
+ ---
+
+ ## Environment Variables
+
+ ### MODEL
+
+ Specifies the default model name to use.
+
+ **Default:** `'qwen'`
+
+ **Example:**
+ ```bash
+ export MODEL=llama2
+ ```
+
+ ### DEBUG
+
+ Controls debug output level.
+
+ **Values:**
+ - `0` - No debug output (default)
+ - `1` - Basic debugging (action descriptions)
+ - `2` - Verbose debugging (full state dumps)
+
+ **Example:**
+ ```bash
+ export DEBUG=1
+ ```
+
+ ---
+
+ ## Error Handling
+
+ ### Pedicab::Error
+
+ Base exception class for Pedicab-specific errors.
+
+ ### Common Error Scenarios
+
+ 1. **Model not found** - When the specified GGUF file doesn't exist
+ 2. **Python process error** - When the backend process fails to start
+ 3. **Invalid JSON** - When communication with Python fails
+ 4. **Model inference error** - When the LLM fails to process the prompt
+
+ **Example:**
+ ```ruby
+ begin
+   response = ai["Some prompt"]
+ rescue Pedicab::Error => e
+   puts "Pedicab error: #{e.message}"
+   # Handle the error appropriately
+ end
+ ```
+
+ ---
+
+ ## Communication Protocol
+
+ ### Ruby → Python JSON Format
+
+ ```json
+ {
+   "role": "user|assistant|system",
+   "model": "model_name",
+   "content": "prompt_content",
+   "response": "grammar_type"
+ }
+ ```
+
+ The `response` key is optional and requests a grammar-constrained output.
+
+ ### Python → Ruby JSON Format
+
+ ```json
+ {
+   "role": "assistant",
+   "content": "response_content",
+   "thought": "thinking_process"
+ }
+ ```
+
+ The `thought` key is optional.
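As a sanity check of the message shapes above, here is a minimal round-trip sketch using only Ruby's standard-library JSON. The helper names are ours; only the field names come from the formats above.

```ruby
require 'json'

# Build a request in the Ruby → Python format documented above.
# "response" is only included when a grammar constraint is requested.
def build_request(content, model: 'qwen', role: 'user', response: nil)
  msg = { 'role' => role, 'model' => model, 'content' => content }
  msg['response'] = response if response
  JSON.generate(msg)
end

# Parse a reply in the Python → Ruby format; "thought" is optional
# and comes back as nil when the model reports no thinking process.
def parse_reply(line)
  msg = JSON.parse(line)
  { content: msg['content'], thought: msg['thought'] }
end

build_request('What is 2+2?', response: 'number')
# => '{"role":"user","model":"qwen","content":"What is 2+2?","response":"number"}'

parse_reply('{"role":"assistant","content":"4"}')
# content is "4", thought is nil
```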
+
+ ### Grammar Types
+
+ The Python backend supports these grammar constraints:
+
+ - `"bool"` - Yes/No responses
+ - `"number"` - Numeric responses
+ - `"string"` - Single-line string responses
+
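To make the constraint types concrete, the sketch below shows one plausible way a caller might coerce a grammar-constrained raw response into a Ruby value. The mapping is an assumption for illustration, not code from the gem.

```ruby
# Hypothetical coercion of a grammar-constrained raw response.
# The grammar names ("bool", "number", "string") come from the list
# above; the coercion rules themselves are assumed.
def coerce(grammar, raw)
  case grammar
  when 'bool'   then raw.strip.downcase.start_with?('y')  # "Yes"/"No"
  when 'number' then raw.to_f
  when 'string' then raw.lines.first.to_s.strip           # first line only
  else raw
  end
end

coerce('bool', "Yes\n")        # => true
coerce('number', '42')         # => 42.0
coerce('string', "red\nblue")  # => "red"
```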
+ ---
+
+ ## Performance Considerations
+
+ ### Benchmarking
+
+ Both `Pedicab::P` and `Pedicab::Ride` track processing times:
+
+ - `P#took` - Time of the last request
+ - `P#life` - Total conversation time
+ - `Ride#state[:took]` - Time of the last ride request
+
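The relationship among these readers appears to be: `time` accumulates one duration per request, `took` is the most recent entry, and `life` is their sum. A tiny sketch of that assumed relationship, with illustrative values:

```ruby
# Assumed relationship between the timing readers listed above.
times = [0.42, 0.31, 0.58]   # seconds per request (illustrative values)
took  = times.last           # time of the most recent request
life  = times.sum            # total conversation time
life.round(2)                # => 1.31
```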
+ ### Memory Usage
+
+ - Conversation history grows with each exchange
+ - Use `reset!` to clear memory when starting a new topic
+ - State is held in memory for the lifetime of the instance
+
+ ### Model Selection
+
+ - Larger models give better quality but slower responses
+ - Choose model size based on your use case and hardware constraints
+ - The `MODEL` environment variable allows easy switching between models