scout-ai 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. checksums.yaml +4 -4
  2. data/.vimproject +80 -15
  3. data/README.md +296 -0
  4. data/Rakefile +2 -0
  5. data/VERSION +1 -1
  6. data/doc/Agent.md +279 -0
  7. data/doc/Chat.md +258 -0
  8. data/doc/LLM.md +446 -0
  9. data/doc/Model.md +513 -0
  10. data/doc/RAG.md +129 -0
  11. data/lib/scout/llm/agent/chat.rb +51 -1
  12. data/lib/scout/llm/agent/delegate.rb +39 -0
  13. data/lib/scout/llm/agent/iterate.rb +44 -0
  14. data/lib/scout/llm/agent.rb +42 -21
  15. data/lib/scout/llm/ask.rb +38 -6
  16. data/lib/scout/llm/backends/anthropic.rb +147 -0
  17. data/lib/scout/llm/backends/bedrock.rb +1 -1
  18. data/lib/scout/llm/backends/ollama.rb +23 -29
  19. data/lib/scout/llm/backends/openai.rb +34 -40
  20. data/lib/scout/llm/backends/responses.rb +158 -110
  21. data/lib/scout/llm/chat.rb +250 -94
  22. data/lib/scout/llm/embed.rb +4 -4
  23. data/lib/scout/llm/mcp.rb +28 -0
  24. data/lib/scout/llm/parse.rb +1 -0
  25. data/lib/scout/llm/rag.rb +9 -0
  26. data/lib/scout/llm/tools/call.rb +66 -0
  27. data/lib/scout/llm/tools/knowledge_base.rb +158 -0
  28. data/lib/scout/llm/tools/mcp.rb +59 -0
  29. data/lib/scout/llm/tools/workflow.rb +69 -0
  30. data/lib/scout/llm/tools.rb +58 -143
  31. data/lib/scout-ai.rb +1 -0
  32. data/scout-ai.gemspec +31 -18
  33. data/scout_commands/agent/ask +28 -71
  34. data/scout_commands/documenter +148 -0
  35. data/scout_commands/llm/ask +2 -2
  36. data/scout_commands/llm/server +319 -0
  37. data/share/server/chat.html +138 -0
  38. data/share/server/chat.js +468 -0
  39. data/test/scout/llm/backends/test_anthropic.rb +134 -0
  40. data/test/scout/llm/backends/test_openai.rb +45 -6
  41. data/test/scout/llm/backends/test_responses.rb +124 -0
  42. data/test/scout/llm/test_agent.rb +0 -70
  43. data/test/scout/llm/test_ask.rb +3 -1
  44. data/test/scout/llm/test_chat.rb +43 -1
  45. data/test/scout/llm/test_mcp.rb +29 -0
  46. data/test/scout/llm/tools/test_knowledge_base.rb +22 -0
  47. data/test/scout/llm/tools/test_mcp.rb +11 -0
  48. data/test/scout/llm/tools/test_workflow.rb +39 -0
  49. metadata +56 -17
  50. data/README.rdoc +0 -18
  51. data/python/scout_ai/__pycache__/__init__.cpython-310.pyc +0 -0
  52. data/python/scout_ai/__pycache__/__init__.cpython-311.pyc +0 -0
  53. data/python/scout_ai/__pycache__/huggingface.cpython-310.pyc +0 -0
  54. data/python/scout_ai/__pycache__/huggingface.cpython-311.pyc +0 -0
  55. data/python/scout_ai/__pycache__/util.cpython-310.pyc +0 -0
  56. data/python/scout_ai/__pycache__/util.cpython-311.pyc +0 -0
  57. data/python/scout_ai/atcold/plot_lib.py +0 -141
  58. data/python/scout_ai/atcold/spiral.py +0 -27
  59. data/python/scout_ai/huggingface/train/__pycache__/__init__.cpython-310.pyc +0 -0
  60. data/python/scout_ai/huggingface/train/__pycache__/next_token.cpython-310.pyc +0 -0
  61. data/python/scout_ai/language_model.py +0 -70
  62. /data/{python/scout_ai/atcold/__init__.py → test/scout/llm/tools/test_call.rb} +0 -0
data/doc/LLM.md ADDED

# LLM

LLM is a compact, extensible layer for interacting with Large Language Models (LLMs) and agentic workflows in Scout. It provides:

- A high-level ask function with pluggable backends (OpenAI-style, Responses, Ollama, OpenWebUI, AWS Bedrock, and a simple Relay).
- A chat/message DSL that parses conversational files or inline text, supporting imports, files, tasks/jobs, and tool wiring.
- First-class tool use: define tools from Workflow tasks or KnowledgeBase queries and let models call them functionally.
- An Agent wrapper that maintains chat context, injects Workflow and KnowledgeBase tools, and provides JSON/iteration helpers.
- Embedding and a tiny RAG helper.
- A set of command-line tools (scout llm … and scout agent …) for interactive and batch usage.

Sections:
- Core API overview
- Message and Chat DSL
- ask function (multi-backend)
- Backends
- Tools and function calling
- Agent (LLM::Agent)
- Embeddings and RAG
- Parse/Utils helpers
- CLI (scout llm, scout agent, documenter)
- Examples

---

## Core API overview

Top-level entry points:
- LLM.ask(question, options={}, &block) → String or messages.
  - Parses the question into a structured chat, enriches it with options from the chat (endpoint/model/format), runs the chosen backend, and returns the assistant output.
  - Tool calling is supported via the block (see the Tools section).
- LLM.embed(text, options={}) → embedding Array (per backend).
- LLM.messages(text_or_messages, role=nil) → normalized messages Array.
- LLM.chat(file_or_text_or_messages) → messages Array after applying imports, tasks/jobs, files, options, etc.
- LLM.print(messages) → human-readable chat file serialization.

Auxiliary:
- LLM.workflow_ask(workflow, question, options={}) — ask with tools exported from a Workflow; the block triggers workflow jobs.
- LLM.knowledge_base_ask(kb, question, options={}) — ask with KB traversal tool(s); the block returns associations (children).

Agent wrapper:
- LLM::Agent — maintains a current Chat, injects tools (workflow, knowledge base), and provides chat, json, and iterate helpers.
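
As a quick orientation, here is a minimal sketch of the normalization pipeline using these entry points (the prompt is illustrative; backend and model selection depend on your configuration):

```ruby
# Normalize free-form chat text into messages, ask, and print the exchange.
messages = LLM.chat <<~EOF
  system:
  You are terse.

  user:
  Name one prime number.
EOF

answer = LLM.ask messages                          # assistant content only
full   = LLM.ask messages, return_messages: true   # includes tool call outputs
puts LLM.print(full)
```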

---

## Message and Chat DSL

LLM can parse either:
- Free-form text in a “chat file” style, or
- Explicit arrays of message hashes ({role:, content:}).

Parsing (LLM.messages and LLM.parse):
- Role headers of the form “role:” mark segments (system:, user:, assistant:, etc.).
- Protected blocks:
  - Triple backticks ```…``` are preserved inside a single message.
  - [[ … ]] structured blocks support actions:
    - cmd TITLE?, then content → tag a file with the captured command output (via CMD.cmd).
    - file TITLE?, then a file path → tag with the file contents.
    - directory TITLE?, then a directory → recursively inline all files beneath it as <file> tags.
    - import TITLE? → handled at the LLM.chat level.
- XML-style tags are preserved as protected content.
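
For illustration, a hypothetical chat file combining these conventions (the file paths and the exact layout inside the [[ … ]] blocks are assumptions based on the action forms above):

```
system:
You are a code reviewer.

user:
Review the script below together with the output of running it.

[[file
scripts/analysis.rb
]]

[[cmd
ruby scripts/analysis.rb
]]
```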

Chat loader (LLM.chat):
- Accepts:
  - a path to a chat file,
  - inline text, or
  - an Array of messages.
- Expands, in order:
  1) imports — role: import / continue loads the referenced chats and inlines their messages (continue appends the last one).
  2) clear — role: clear prunes any messages before it (except previous_response_id).
  3) clean — removes empty/skip lines.
  4) tasks — role: task or inline_task turns “Workflow task input-options” into a produced job (via Workflow.produce), replacing it with job/tool messages (see the sketch after this list).
  5) jobs — role: job/inline_job transforms a saved job result into a function_call + tool output pair, or an inlined result file.
  6) files — role: file or directory inlines files as tagged content.
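
A sketch of a chat that relies on import and task expansion (the imported chat, the workflow, and the input-option syntax are illustrative):

```
import: shared/preamble

user:
Summarize the result of the job below.

task: Baking bake_muffin_tray muffins=12
```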

Options in chat (LLM.options):
- Scans the chat messages to collect transient options:
  - endpoint, model, backend, persist, and format (a JSON schema or :json) are set by role lines of the same name.
  - previous_response_id is “strong” and is kept across assistant replies.
  - option lines (role: option) allow free-form key/value pairs.
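
For example, option role lines at the head of a chat (values illustrative):

```
endpoint: openai
model: gpt-4.1
format: json

user:
List three colors as JSON.
```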

Printing (LLM.print):
- Pretty-prints a messages array back to chat-file text (handy for saving updated conversations).

Chat module (Chat):
- A lightweight builder over an Array with Annotation. Provides:
  - message(role, content), plus user/system/assistant/import/file/directory/continue/format/tool/task/inline_task/job/inline_job/association.
  - option(:key, value), endpoint(value), model(value), image(file).
  - tag(content, name=nil, tag=:file, role=:user) → wrap content as <file>…</file>.
  - ask/respond/chat/json/json_format, print/save/write/write_answer, branch/shed, answer.

Example: building a chat and printing
```ruby
a = LLM::Agent.new
a.start_chat.system 'you are a robot'
a.user "hi"
puts a.print
```

---

## ask function

LLM.ask(question, options={}, &block):
- question:
  - a String (free text or a filename) or a messages Array.
  - Internally normalized via LLM.chat (which expands imports/tasks/files).
- options:
  - endpoint — selects endpoint presets (Scout.etc.AI[endpoint].yaml is merged in as defaults).
  - backend — one of :openai, :responses, :ollama, :openwebui, :bedrock, :relay. Defaults per Scout::Config (ASK_BACKEND/LLM_BACKEND).
  - model — backend-specific model ID (e.g., gpt-4.1, mistral).
  - persist — default true; responses are cached with Persist.persist, keyed by endpoint+messages+options.
  - format — request JSON or schema-based outputs (backend dependent).
  - return_messages — when true, returns an array of messages (including tool call outputs) rather than just the content.
  - Additional backend-specific keys (e.g., tool_choice, previous_response_id, websearch).
- Tool calling:
  - If you pass a block, it is invoked for each function call the model asks to perform:
    - block.call(name, parameters_hash) → result
  - The result is serialized back into a “tool” message for the model to continue.
- workflow_ask / knowledge_base_ask:
  - Prepare tools automatically (Workflow tasks or KB traversal) and supply an appropriate block.

Example (OpenAI backend, JSON output)
```ruby
prompt = <<~EOF
system:

Respond in json format with a hash of strings as keys and string arrays as values, at most three in length

user:

What other movies have the protagonists of the original ghost busters played in? Just the top ones.
EOF

LLM.ask prompt, format: :json
```

---

## Backends

All backends share the block-based tool calling contract and honor options extracted from the chat (endpoint/model/format/etc.).

Common configuration:
- Per-endpoint defaults can be stored under Scout.etc.AI[endpoint].yaml.
- Env/config helpers are used for URLs and keys (LLM.get_url_config and Scout::Config.get).
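
A hypothetical endpoint preset stored per the convention above (the keys shown mirror documented ask options; the file name is illustrative):

```yaml
# etc/AI/mylocal.yaml: merged in as defaults when endpoint: mylocal is selected
backend: ollama
model: mistral
persist: true
```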

1) OpenAI (LLM::OpenAI)
- client: Object::OpenAI::Client (supports uri_base for Azure-compatible setups).
- process_input filters out the unsupported image role (use :responses for images).
- tools: converted via LLM.tools_to_openai.
- format: response_format {type: 'json_object'} for JSON, or other formats.
- Embedding: LLM::OpenAI.embed(text, model: …).
- Example tool calls (from the tests):
  - Provide a tools array (OpenAI schema). The block returns a result string; ask loops until the tool calls are exhausted.

2) Responses (LLM::Responses)
- Designed for the OpenAI Responses API (multimodal):
  - Supports images (role: image file → base64) and PDFs (role: pdf file).
  - The websearch option adds a 'web_search_preview' tool.
  - Tool message marshalling via tools_to_responses/process_input/process_response.
  - Handles previous_response_id to continue a session.
- Image creation: Responses.image with messages → client.images.generate.

3) Ollama (LLM::OLlama)
- Local models via an Ollama server (defaults to http://localhost:11434).
- Converts tools via tools_to_ollama.
- ask returns the last message content, or the messages when return_messages is set.
- embed uses /api/embed.

4) OpenWebUI (LLM::OpenWebUI)
- REST wrapper for the chat/completions endpoint on OpenWebUI-compatible servers.
- Note: focuses on completions; the tool loop example is stubbed.

5) Bedrock (LLM::Bedrock)
- AWS Bedrock Runtime client. Supports:
  - type: :messages or :prompt modes.
  - model options via model: {model: '…', …}.
- Tool call loop: inspects message['content'] for tool_calls and re-invokes with the augmented messages.
- Embeddings via the Titan embed endpoint by default.

6) Relay (LLM::Relay)
- A minimal “scp file, poll for reply” relay:
  - LLM::Relay.ask(question, server: 'host') → uploads JSON to the server’s ~/.scout/var/ask/ and waits for a reply JSON.

---

## Tools and function calling

Define tools from Workflows or a KnowledgeBase and use them with LLMs that support function calling.

- LLM.task_tool_definition(workflow, task, inputs=nil) → OpenAI-like tool schema.
- LLM.workflow_tools(workflow) → list of exportable task schemas for the workflow.
- LLM.knowledge_base_tool_definition(kb) → a single “children” tool with database/entity parameters (KB traversal).
- LLM.association_tool_definition(name) → generic association tool (source~target edges).

In chats:
- role: tool lines (e.g., “tool: Baking bake_muffin_tray”) register tasks as callable tools.
- role: association lines register KB databases dynamically (path + options like undirected/source/target) and expose KB lookups.

The block passed to LLM.ask is called as block.call(name, params) with the function name and its parameters; its return value becomes the tool output:
- String → used as the tool output content.
- Any other Object → serialized as JSON for the content.
- Exception → serialized exception + backtrace.
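
A minimal sketch of such a dispatch block (the tool names, parameter keys, and the kb variable are illustrative):

```ruby
LLM.ask question do |name, parameters|
  case name
  when 'children'
    # An Object return value is serialized as JSON for the tool message
    kb.children(parameters['database'], parameters['entities'])
  when 'bake_muffin_tray'
    # A String return value is used verbatim as the tool output content
    "Grease the tray, fill each cup two thirds, and bake for 20 minutes."
  else
    # Assuming errors raised here are serialized back per the rules above
    raise "Unknown tool: #{name}"
  end
end
```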

Examples:
```ruby
# Using a tool to bake muffins
question = <<~EOF
user:
Use the provided tool to learn the instructions of baking a tray of muffins.
tool: Baking bake_muffin_tray
EOF

LLM.ask question
```

Knowledge base traversal:
```ruby
TmpFile.with_dir do |dir|
  kb = KnowledgeBase.new dir
  kb.register :brothers, datafile_test(:person).brothers, undirected: true
  kb.register :parents, datafile_test(:person).parents
  LLM.knowledge_base_ask(kb, "Who is Miki's brother in law?")
end
```

---

## Agent (LLM::Agent)

An Agent is a thin orchestrator that:
- keeps a current chat (Chat.setup []),
- injects system content (including a KnowledgeBase markdown overview),
- automatically exports Workflow tasks and KnowledgeBase traversal as tools, and
- forwards prompts to LLM.ask with the configured defaults.

Constructor:
- LLM::Agent.new(workflow: nil|Module or String name, knowledge_base: kb=nil, start_chat: messages=nil, **other_options)
  - If workflow is a String, Workflow.require_workflow is called.
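
For example, a sketch of an agent wired to a workflow and a knowledge base (the workflow name and the kb variable are placeholders):

```ruby
agent = LLM::Agent.new workflow: "Baking", knowledge_base: kb
agent.user "What do I need to bake a tray of muffins?"
puts agent.chat   # asks, appends the assistant reply to the chat, returns the content
```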

Methods:
- start_chat → initializes an empty chat (Chat.setup []).
- start(chat=nil) → starts a new branch, or adopts the provided chat/messages.
- current_chat → returns the active chat; method_missing forwards Chat DSL methods (user/system/tool/task/etc.).
- ask(messages, model=nil, options={}) → calls LLM.ask with tools+options; the internal block dispatches:
  - 'children' → KB.children(db, entities)
  - other function names → run/exec of the corresponding Workflow job, based on workflow.exec_exports.
- chat(model=nil, options={}) → ask with return_messages: true, append the assistant reply to the chat, and return the content.
- json, json_format(format, …) → set the format and parse JSON outputs into Ruby objects.
- iterate(prompt=nil) { |item| … } → ask with a JSON schema requesting an array 'content' and iterate through the items; the format is then reset to text.
- iterate_dictionary(prompt=nil) { |k,v| … } → ask with an object schema with arbitrary string values.
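
A sketch of the iteration helpers, continuing with the agent above (prompts and items illustrative):

```ruby
agent.iterate "List three fruits" do |item|
  puts item             # yielded once per element of the returned array
end

agent.iterate_dictionary "Give English-Spanish pairs for three fruits" do |k, v|
  puts "#{k} => #{v}"   # yielded once per key/value pair
end
```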

Agent loader:
- LLM::Agent.load_from_path(path) expects:
  - path/workflow.rb → a workflow to require.
  - path/knowledge_base → a KB initialized from that path.
  - path/start_chat → a chat file to bootstrap the conversation.

Example (from the tests)
```ruby
m = Module.new do
  extend Workflow
  self.name = "Registration"
  input :name, :string
  input :age, :integer
  input :gender, :select, nil, :select_options => %w(male female)
  task :person => :yaml do inputs.to_hash end
end

LLM.workflow_ask(m, "Register Eduard Smith, a 25 yo male, using a tool call",
                 backend: 'ollama', model: 'llama3')
```

---

## Embeddings and RAG

Embeddings:
- LLM.embed(text, options={}) → vector
  - Selects the backend via options[:backend] or ENV/Config.
  - Supports openai, ollama, openwebui, and relay (bedrock embeds via LLM::Bedrock.embed directly).
  - If text is an Array, returns an array of vectors per backend convention (Ollama returns arrays).

Simple RAG index:
- LLM::RAG.index(data) → Hnswlib::HierarchicalNSW index
  - data: an array of embedding vectors.
- Example:
```ruby
data = [ LLM.embed("Crime, Killing and Theft."),
         LLM.embed("Murder, felony and violence"),
         LLM.embed("Puppies, cats and flowers") ]
i = LLM::RAG.index(data)
nodes, _ = i.search_knn LLM.embed('I love the zoo'), 1
# => nodes.first == 2 (closest to “Puppies, cats and flowers”)
```

---

## Parse/Utils helpers

- LLM.parse(question, role=nil) — split text into role messages, preserving protected blocks.
- LLM.tag(tag, content, name=nil) — build an XML-like tagged snippet (used by Chat#tag).
- LLM.get_url_server_tokens(url, prefix=nil) — token expansion helper for config.
- LLM.get_url_config(key, url=nil, *tokens) — layered config lookup using URL tokens and namespaces.
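
A sketch of LLM.tag in use (the exact attribute layout of the emitted tag is an assumption):

```ruby
snippet = LLM.tag(:file, "puts 'hi'", "hello.rb")
# Roughly:
# <file name="hello.rb">
# puts 'hi'
# </file>
```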

---

## Command Line Interface

The scout command locates scripts by path fragments (scout_commands/…), showing directory listings when a fragment maps to a directory. Scripts use SOPT for option parsing.

LLM-related commands installed by scout-ai:

- Ask an LLM
  - scout llm ask [options] [question]
  - Options:
    - -t|--template <file_or_key> — load a prompt template; if it contains “???”, the question is substituted there.
    - -c|--chat <chat_file> — load/extend a chat; the model reply is appended to the file.
    - -i|--inline <file> — answer questions embedded as “# ask: …” comments inside a file; responses are injected inline between “# Response start/end”.
    - -f|--file <file> — prepend the file contents, or use “...” in the question to insert STDIN/file content in place.
    - -m|--model, -e|--endpoint, -b|--backend — backend selection; options are merged with the per-endpoint config (Scout.etc.AI).
    - -d|--dry_run — print the fully expanded conversation (LLM.print) and exit without asking.
  - Behavior:
    - If chat is given, the conversation is expanded via LLM.chat(chat) and the new messages are appended to the file.
    - If inline is given, the file is scanned for ask directives and responses are written inline.
    - Otherwise the model’s answer is printed.
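
Illustrative invocations (model names and file paths are placeholders):

```
scout llm ask -m gpt-4.1 "What is a monad?"
scout llm ask -c notes.chat "Continue the analysis"
scout llm ask -d -c notes.chat
```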

- Agent ask (agent awareness: workflow + knowledge base)
  - scout agent ask [options] [agent_name] [question]
  - agent_name resolves via Scout.workflows[agent_name] or Scout.chats[agent_name] (Path subsystem).
  - Same flags as llm ask; additionally:
    - -wt|--workflow_tasks <tasks> — export the selected tasks to the agent as tools.
  - The script loads the agent from a workflow.rb or from a whole agent directory (workflow.rb, knowledge_base, start_chat), uses it to ask the question, and appends to the chat if requested.

- Agent KnowledgeBase passthrough
  - scout agent kb <agent_name> <kb_subcommand...>
  - Utility to run the knowledge base CLI with the agent’s preconfigured KB (sets --knowledge_base to agent_dir/knowledge_base).

- LLM relay processor (for the Relay backend)
  - scout llm process [directory]
  - Watches a directory (defaults to ~/.scout/var/ask) for queued ask JSON files, runs them, and writes replies under reply/.

- LLM server (simple chat web UI)
  - scout llm server
  - Starts a small Sinatra server serving a static chat UI and a JSON API to list/load/save/run chat files under ./chats.
  - Endpoints: GET /, /chat.js, /list, /load, POST /save, POST /run, GET /ping.

- List templates
  - scout llm template
  - Lists the templates found under Scout.questions.

- Documentation builder (meta)
  - scout-ai documenter <topic>
  - An internal documentation tool that scans the source/tests for topic modules and uses an LLM agent to generate markdown docs.

CLI discovery:
- The scout command resolves fragments to files under scout_commands (including those of other packages/workflows).
- E.g., “scout llm” shows the available subcommands; “scout agent” lists the agent commands.

---

## Examples

Ask with OpenAI, tool calls:
```ruby
prompt = <<~EOF
user:
What is the weather in London. Should I take my umbrella?
EOF

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_temperature",
      "description": "Get the current temperature and raining conditions for a specific location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string"},
          "unit": {"type": "string", "enum": ["Celsius", "Fahrenheit"]}
        },
        "required": ["location", "unit"]
      }
    }
  },
]

LLM::OpenAI.ask prompt, tool_choice: 'required', tools: tools, model: "gpt-4.1-mini" do |name, args|
  "It's 15 degrees and raining."
end
```

Ask with Responses (images, PDF):
```ruby
prompt = <<~EOF
image: #{datafile_test 'cat.jpg'}

user:
What animal is represented in the image?
EOF

LLM::Responses.ask prompt
```

Use a Workflow tool from chat:
```ruby
question = <<~EOF
user:

Use the provided tool to learn the instructions of baking a tray of muffins.
tool: Baking bake_muffin_tray
EOF

LLM.ask question
```

Agent with Workflow tool-calling:
```ruby
m = Module.new do
  extend Workflow
  self.name = "Registration"
  input :name, :string
  input :age, :integer
  input :gender, :select, nil, select_options: %w(male female)
  task :person => :yaml do inputs.to_hash end
end

LLM.workflow_ask(m, "Register Eduard Smith, a 25 yo male",
                 backend: 'ollama', model: 'llama3')
```

RAG:
```ruby
data = [ LLM.embed("Crime, Killing and Theft."),
         LLM.embed("Murder, felony and violence"),
         LLM.embed("Puppies, cats and flowers") ]
idx = LLM::RAG.index(data)
nodes, scores = idx.search_knn LLM.embed('I love the zoo'), 1
# => nodes.first == 2
```

---

LLM integrates chat parsing, tool wiring, multiple model backends, and agentic orchestration in a compact API. Use chat files (or the Chat builder) to define conversations and tools declaratively, pick a backend with a single option, and compose model calls with Workflow/KnowledgeBase tools for powerful, reproducible automations.