scout-ai 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (62)
  1. checksums.yaml +4 -4
  2. data/.vimproject +80 -15
  3. data/README.md +296 -0
  4. data/Rakefile +2 -0
  5. data/VERSION +1 -1
  6. data/doc/Agent.md +279 -0
  7. data/doc/Chat.md +258 -0
  8. data/doc/LLM.md +446 -0
  9. data/doc/Model.md +513 -0
  10. data/doc/RAG.md +129 -0
  11. data/lib/scout/llm/agent/chat.rb +51 -1
  12. data/lib/scout/llm/agent/delegate.rb +39 -0
  13. data/lib/scout/llm/agent/iterate.rb +44 -0
  14. data/lib/scout/llm/agent.rb +42 -21
  15. data/lib/scout/llm/ask.rb +38 -6
  16. data/lib/scout/llm/backends/anthropic.rb +147 -0
  17. data/lib/scout/llm/backends/bedrock.rb +1 -1
  18. data/lib/scout/llm/backends/ollama.rb +23 -29
  19. data/lib/scout/llm/backends/openai.rb +34 -40
  20. data/lib/scout/llm/backends/responses.rb +158 -110
  21. data/lib/scout/llm/chat.rb +250 -94
  22. data/lib/scout/llm/embed.rb +4 -4
  23. data/lib/scout/llm/mcp.rb +28 -0
  24. data/lib/scout/llm/parse.rb +1 -0
  25. data/lib/scout/llm/rag.rb +9 -0
  26. data/lib/scout/llm/tools/call.rb +66 -0
  27. data/lib/scout/llm/tools/knowledge_base.rb +158 -0
  28. data/lib/scout/llm/tools/mcp.rb +59 -0
  29. data/lib/scout/llm/tools/workflow.rb +69 -0
  30. data/lib/scout/llm/tools.rb +58 -143
  31. data/lib/scout-ai.rb +1 -0
  32. data/scout-ai.gemspec +31 -18
  33. data/scout_commands/agent/ask +28 -71
  34. data/scout_commands/documenter +148 -0
  35. data/scout_commands/llm/ask +2 -2
  36. data/scout_commands/llm/server +319 -0
  37. data/share/server/chat.html +138 -0
  38. data/share/server/chat.js +468 -0
  39. data/test/scout/llm/backends/test_anthropic.rb +134 -0
  40. data/test/scout/llm/backends/test_openai.rb +45 -6
  41. data/test/scout/llm/backends/test_responses.rb +124 -0
  42. data/test/scout/llm/test_agent.rb +0 -70
  43. data/test/scout/llm/test_ask.rb +3 -1
  44. data/test/scout/llm/test_chat.rb +43 -1
  45. data/test/scout/llm/test_mcp.rb +29 -0
  46. data/test/scout/llm/tools/test_knowledge_base.rb +22 -0
  47. data/test/scout/llm/tools/test_mcp.rb +11 -0
  48. data/test/scout/llm/tools/test_workflow.rb +39 -0
  49. metadata +56 -17
  50. data/README.rdoc +0 -18
  51. data/python/scout_ai/__pycache__/__init__.cpython-310.pyc +0 -0
  52. data/python/scout_ai/__pycache__/__init__.cpython-311.pyc +0 -0
  53. data/python/scout_ai/__pycache__/huggingface.cpython-310.pyc +0 -0
  54. data/python/scout_ai/__pycache__/huggingface.cpython-311.pyc +0 -0
  55. data/python/scout_ai/__pycache__/util.cpython-310.pyc +0 -0
  56. data/python/scout_ai/__pycache__/util.cpython-311.pyc +0 -0
  57. data/python/scout_ai/atcold/plot_lib.py +0 -141
  58. data/python/scout_ai/atcold/spiral.py +0 -27
  59. data/python/scout_ai/huggingface/train/__pycache__/__init__.cpython-310.pyc +0 -0
  60. data/python/scout_ai/huggingface/train/__pycache__/next_token.cpython-310.pyc +0 -0
  61. data/python/scout_ai/language_model.py +0 -70
  62. /data/{python/scout_ai/atcold/__init__.py → test/scout/llm/tools/test_call.rb} +0 -0
data/doc/Agent.md ADDED
@@ -0,0 +1,279 @@
+ # Agent
+
+ Agent is a thin orchestrator around LLM, Chat, Workflow and KnowledgeBase that maintains a conversation state, injects tools automatically, and streamlines structured interactions (JSON lists/dictionaries) with helper methods.
+
+ Core ideas:
+ - Keep a live chat (conversation) object and forward the Chat DSL (user/system/…).
+ - Export Workflow tasks and KnowledgeBase queries as tool definitions so the model can call them functionally.
+ - Centralize backend/model/endpoint defaults for repeated asks.
+ - Provide convenience helpers to iterate over structured results (lists/dictionaries).
+
+ Sections:
+ - Quick start
+ - Construction and state
+ - Tool wiring (Workflow and KnowledgeBase)
+ - Running interactions
+ - Iterate helpers
+ - Loading an agent from a directory
+ - API reference
+ - CLI: scout agent …
+
+ ---
+
+ ## Quick start
+
+ Build a live conversation and print it
+
+ ```ruby
+ a = LLM::Agent.new
+ a.start_chat.system 'you are a robot'
+ a.user "hi"
+ puts a.print # via Chat#print, forwarded through Agent
+ ```
+
+ Run a Workflow tool via tool calling
+
+ ```ruby
+ m = Module.new do
+   extend Workflow
+   self.name = "Registration"
+
+   desc "Register a person"
+   input :name, :string, "Last, first name"
+   input :age, :integer, "Age"
+   input :gender, :select, "Gender", nil, :select_options => %w(male female)
+   task :person => :yaml do
+     inputs.to_hash
+   end
+ end
+
+ agent = LLM::Agent.new workflow: m, backend: 'ollama', model: 'llama3'
+ agent.ask "Register Eduard Smith, a 25 yo male, using a tool call to the tool provided"
+ ```
+
+ Query a KnowledgeBase (tool exported automatically)
+
+ ```ruby
+ TmpFile.with_dir do |dir|
+   kb = KnowledgeBase.new dir
+   kb.format = {"Person" => "Alias"}
+   kb.register :brothers, datafile_test(:person).brothers, undirected: true
+   kb.register :marriages, datafile_test(:person).marriages, undirected: true, source: "=>Alias", target: "=>Alias"
+   kb.register :parents, datafile_test(:person).parents
+
+   agent = LLM::Agent.new knowledge_base: kb
+   puts agent.ask "Who is Miki's brother in law?"
+ end
+ ```
+
+ ---
+
+ ## Construction and state
+
+ - LLM::Agent.new(workflow: nil, knowledge_base: nil, start_chat: nil, **kwargs)
+   - workflow: a Workflow module or a String (loaded via Workflow.require_workflow).
+   - knowledge_base: a KnowledgeBase instance (optional).
+   - start_chat: initial messages (Chat or Array) to seed new chat branches.
+   - **kwargs: stored as @other_options and merged into calls to LLM.ask (e.g., backend:, model:, endpoint:, log_errors:, etc.).
+
+ Conversation lifecycle:
+ - start_chat → returns a base Chat (Chat.setup []) lazily allocated once.
+ - start(chat=nil)
+   - With chat: adopt it (if not already a Chat, annotate and set).
+   - Without: branch the start_chat (non-destructive copy).
+ - current_chat → the active chat instance (created on demand).
+
+ Forwarding:
+ - method_missing forwards any unknown method to current_chat, so you can call:
+   - agent.user "text", agent.system "policy", agent.tool "WF", "task", agent.format :json, etc.
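+
+ A minimal sketch of this lifecycle (the backend and model names are placeholders; behavior follows the description above):
+
+ ```ruby
+ # Defaults passed to the constructor are merged into every LLM.ask call.
+ agent = LLM::Agent.new backend: 'ollama', model: 'llama3'
+ agent.start_chat.system "Answer tersely"   # seeds every new branch
+
+ agent.start            # branch the seed into a fresh current_chat
+ agent.user "hi"        # forwarded to current_chat via method_missing
+ puts agent.print       # Chat#print, forwarded through the Agent
+ ```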
+
+ ---
+
+ ## Tool wiring (Workflow and KnowledgeBase)
+
+ When you call Agent#ask:
+ - If workflow or knowledge_base is present, Agent builds a tools array:
+   - Workflow: LLM.workflow_tools(workflow) produces one tool per exported task (OpenAI/Responses-compatible function schemas).
+   - KnowledgeBase: LLM.knowledge_base_tool_definition(kb) produces a “children” tool that takes {database, entities}.
+ - Agent invokes LLM.ask(messages, tools: ..., **@other_options, **user_options) with a block that handles tool calls:
+   - 'children' → returns kb.children(database, entities)
+   - any other tool name → runs workflow.job(name, jobname?, parameters).run or .exec (exec if workflow.exec_exports includes the task).
+
+ Notes:
+ - Tool outputs are serialized back to the model as tool results. If your tool returns a Ruby object, it is JSON-encoded automatically inside the tool response.
+ - For KnowledgeBase integrations, Agent also enriches the system prompt with markdown descriptions of each registered database (system_prompt).
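+
+ In outline, the wiring described above behaves like this sketch (the tool-handler block signature is illustrative, not the exact source; see data/lib/scout/llm/agent.rb in this diff):
+
+ ```ruby
+ # Collect tool definitions from the configured components.
+ tools = []
+ tools.concat LLM.workflow_tools(workflow)                   if workflow
+ tools << LLM.knowledge_base_tool_definition(knowledge_base) if knowledge_base
+
+ LLM.ask messages, tools: tools do |name, parameters|
+   if name == 'children'
+     knowledge_base.children parameters["database"], parameters["entities"]
+   else
+     # Tasks listed in exec_exports run with exec; the rest run as jobs.
+     job = workflow.job(name, parameters)
+     workflow.exec_exports.include?(name.to_sym) ? job.exec : job.run
+   end
+ end
+ ```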
+
+ ---
+
+ ## Running interactions
+
+ - ask(messages_or_chat, model=nil, options={}) → String (assistant content) or messages (when return_messages: true)
+   - messages_or_chat can be an Array of messages, a Chat, or a simple String (Agent passes it through LLM.chat parsing).
+   - The model parameter can override the default; typically you set backend/model/endpoint in the Agent constructor.
+   - If workflow/knowledge_base is configured, tools are injected and tool calls are handled automatically.
+
+ - respond(...) → ask(current_chat, ...)
+ - chat(...) → ask with return_messages: true, then append the assistant reply to current_chat and return the reply content.
+ - json(...) → sets current_chat.format :json, runs ask, parses JSON, returns the object (if the object is {"content": ...}, returns that inner content).
+ - json_format(format, ...) → sets current_chat.format to a JSON schema Hash and parses accordingly.
+
+ Formatting helpers:
+ - format_message and prompt are internal helpers for building a system + user prompt (used by some agents). They are not required for normal use; Agent relies on LLM.chat to parse and LLM.ask to execute.
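+
+ For example, a structured reply via json_format (the schema shape is illustrative):
+
+ ```ruby
+ agent = LLM::Agent.new
+ agent.user "Name one even prime and justify it briefly"
+ info = agent.json_format({
+   type: 'object',
+   properties: {
+     number: { type: 'integer' },
+     reason: { type: 'string' }
+   },
+   required: %w(number reason)
+ })
+ info['number'] # parsed from the JSON reply
+ ```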
+
+ ---
+
+ ## Iterate helpers
+
+ For models that support JSON schema outputs (e.g., OpenAI Responses), Agent provides sugar to iterate over structured results:
+
+ - iterate(prompt=nil) { |item| … }
+   - Sets endpoint :responses (so the Responses backend is used).
+   - If a prompt is present, appends it as a user message.
+   - Requests a JSON object with an array property "content".
+   - Resets format back to :text afterwards.
+   - Yields each item in the content list.
+
+ - iterate_dictionary(prompt=nil) { |k,v| … }
+   - Similar, but requests a JSON object with string values (arbitrary properties).
+   - Yields each key/value pair.
+
+ Example:
+ ```ruby
+ agent = LLM::Agent.new
+ agent.iterate("List three steps to bake bread") { |s| puts "- #{s}" }
+
+ agent.iterate_dictionary("Give capital cities for FR, ES, IT") do |country, capital|
+   puts "#{country}: #{capital}"
+ end
+ ```
+
+ ---
+
+ ## Loading an agent from a directory
+
+ - LLM::Agent.load_from_path(path)
+   - Expects a directory containing:
+     - workflow.rb — a Workflow definition (optional),
+     - knowledge_base — a KnowledgeBase directory (optional),
+     - start_chat — a chat file (optional).
+   - Returns a configured Agent with those components.
+
+ Paths are resolved via the Path subsystem; files like workflow.rb can be located relative to the given directory.
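+
+ A sketch of the expected layout and loading (the directory name is a placeholder):
+
+ ```ruby
+ # my_agent/
+ #   workflow.rb       # optional Workflow definition
+ #   knowledge_base/   # optional KnowledgeBase directory
+ #   start_chat        # optional seed conversation
+
+ agent = LLM::Agent.load_from_path 'my_agent'
+ agent.user "hello"
+ puts agent.chat
+ ```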
+
+ ---
+
+ ## API reference
+
+ Constructor and state:
+ - Agent.new(workflow: nil|String|Module, knowledge_base: nil, start_chat: nil, **kwargs)
+ - start_chat → Chat
+ - start(chat=nil) → Chat (branch or adopt provided)
+ - current_chat → Chat
+
+ Chat DSL forwarding (method_missing):
+ - All Chat methods are available: user, system, assistant, file, directory, tool, task, inline_task, job, inline_job, association, format, option, endpoint, model, image, save/write/print, etc.
+
+ Asking and replies:
+ - ask(messages, model=nil, options={}) → String (or messages if return_messages: true)
+ - respond(...) → ask(current_chat, ...)
+ - chat(...) → append answer to current_chat, return answer String
+ - json(...), json_format(format, ...) → parse JSON outputs
+
+ Structured iteration:
+ - iterate(prompt=nil) { |item| ... } — endpoint :responses, expects {content: [String]}
+ - iterate_dictionary(prompt=nil) { |k,v| ... } — endpoint :responses, expects an arbitrary object of string values
+
+ System prompt (internal):
+ - system_prompt / prompt — build a combined system message injecting KB database descriptions if a knowledge_base is present.
+
+ Utilities:
+ - self.load_from_path(path) → Agent
+
+ ---
+
+ ## CLI: scout agent commands
+
+ The scout command resolves subcommands by scanning “scout_commands/**” paths using the Path subsystem, so packages and workflows can add their own. If you target a directory instead of a script, a listing of subcommands is shown.
+
+ Two commands are provided by scout-ai (illustrative invocations follow below):
+
+ - Agent ask
+   - scout agent ask [options] [agent_name] [question]
+   - Options:
+     - -l|--log <level> — set log severity.
+     - -t|--template <file_or_key> — use a prompt template; the positional question replaces '???' if present.
+     - -c|--chat <chat_file> — load/extend a conversation file; appends new messages to it.
+     - -m|--model, -e|--endpoint — backend/model selection (merged with per-endpoint config at Scout.etc.AI).
+     - -f|--file <path> — include file content at the start (or substitute it where “...” appears in the question).
+     - -wt|--workflow_tasks <names> — limit exported workflow tasks for this agent call.
+   - Resolution:
+     - agent_name is resolved via Scout.workflows[agent_name] (a workflow) or Scout.chats[agent_name] (an agent directory with workflow.rb/knowledge_base/start_chat). The Path subsystem handles discovery across packages.
+   - Behavior:
+     - If --chat is given, the conversation is expanded (LLM.chat) and the new model output is appended (Chat.print).
+     - Supports inline file Q&A mode (not typical for agent ask).
+
+ - Agent KnowledgeBase passthrough
+   - scout agent kb <agent_name> <kb subcommand...>
+   - Loads the agent’s knowledge base (agent_dir/knowledge_base) and forwards to “scout kb …” with --knowledge_base prefilled (and the current Log level).
+   - Useful to manage the KB tied to an agent from the CLI.
+
+ Command resolution:
+ - The bin/scout dispatcher walks nested directories (e.g., “agent/kb”) and lists available scripts when a directory is targeted.
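+
+ Illustrative invocations (agent, model and file names are placeholders):
+
+ ```
+ scout agent ask my_agent "Who is Miki's brother in law?"
+ scout agent ask -e ollama -m llama3 -c session.chat my_agent "Continue the analysis"
+ scout agent kb my_agent <kb subcommand...>
+ ```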
+
+ ---
+
+ ## Examples
+
+ Minimal conversation with an Agent (tests)
+ ```ruby
+ a = LLM::Agent.new
+ a.start_chat.system 'you are a robot'
+ a.user "hi"
+ puts a.print
+ ```
+
+ Register and run a simple workflow tool call
+ ```ruby
+ m = Module.new do
+   extend Workflow
+   self.name = "Registration"
+   input :name, :string
+   input :age, :integer
+   input :gender, :select, nil, :select_options => %w(male female)
+   task :person => :yaml do
+     inputs.to_hash
+   end
+ end
+
+ puts LLM.workflow_ask(m, "Register Eduard Smith, a 25 yo male, using a tool call",
+                       backend: 'ollama', model: 'llama3')
+ # Or equivalently through an Agent:
+ agent = LLM::Agent.new workflow: m, backend: 'ollama', model: 'llama3'
+ puts agent.ask "Register Eduard Smith, a 25 yo male, using a tool call to the tool provided"
+ ```
+
+ Knowledge base reasoning with an Agent (tests pattern)
+ ```ruby
+ TmpFile.with_dir do |dir|
+   kb = KnowledgeBase.new dir
+   kb.format = {"Person" => "Alias"}
+   kb.register :brothers, datafile_test(:person).brothers, undirected: true
+   kb.register :marriages, datafile_test(:person).marriages, undirected: true, source: "=>Alias", target: "=>Alias"
+   kb.register :parents, datafile_test(:person).parents
+
+   agent = LLM::Agent.new knowledge_base: kb
+   puts agent.ask "Who is Miki's brother in law?"
+ end
+ ```
+
+ Iterate structured results
+ ```ruby
+ agent = LLM::Agent.new
+ agent.iterate("List three steps to bake bread") do |step|
+   puts "- #{step}"
+ end
+ ```
+
+ ---
+
+ Agent gives you a stateful, tool-aware façade over LLM.ask and Chat, so you can build conversational applications that call Workflows and explore KnowledgeBases with minimal ceremony—both from Ruby APIs and via the scout command line.
data/doc/Chat.md ADDED
@@ -0,0 +1,258 @@
+ # Chat
+
+ Chat is a lightweight builder around an Array of messages that lets you construct, persist, and run LLM conversations declaratively. It integrates tightly with the LLM pipeline (LLM.chat/LLM.ask), Workflows (tool calls), and KnowledgeBase traversal.
+
+ A Chat is “just” an Array annotated with Chat behavior (via Annotation). Each element is a Hash like {role: 'user', content: '…'}.
+
+ Key capabilities:
+ - Build conversations programmatically (user/system/assistant/…).
+ - Declare tools and jobs inline (Workflow tasks, saved Step results).
+ - Inline files and directories into messages (as tagged content).
+ - Set per-turn options (format, endpoint, model), request JSON structures.
+ - Run a conversation (ask), append responses (chat), and save/write to files.
+
+ Sections:
+ - Data model and setup
+ - Adding messages
+ - Files, directories and tagging
+ - Declaring tools, tasks and jobs
+ - Options, endpoint and formats
+ - Running a chat: ask/chat/json/json_format
+ - Persistence helpers and branching
+ - Interop with LLM.chat
+ - CLI: using Chat with scout llm and scout agent
+ - Examples
+
+ ---
+
+ ## Data model and setup
+
+ A Chat is an Array annotated with Chat, so every method mutates/appends to the underlying array.
+
+ - Chat.setup([]) → an annotated empty conversation.
+ - Each message appended is a Hash with keys:
+   - role: String or Symbol (e.g., 'user', 'system', 'assistant', …).
+   - content: String (or structured content passed through).
+
+ Example
+ ```ruby
+ chat = Chat.setup []
+ chat.system "You are an assistant"
+ chat.user "Hi"
+ puts chat.print
+ ```
+
+ Chat uses Annotation, so annotated arrays can be duplicated or cloned while preserving Chat behavior.
+
+ ---
+
+ ## Adding messages
+
+ All helpers append a message:
+
+ - message(role, content) — base append.
+ - user(text), system(text), assistant(text).
+ - import(file), continue(file) — declarative import markers for LLM.chat (see Interop).
+ - file(path), directory(path) — inline content (see next section).
+ - format(value) — set the desired output format (e.g. :json, or a JSON Schema Hash).
+ - tool(workflow, task, inputs?) — register a Workflow task tool declaration (see Tools below).
+ - task(workflow, task_name, inputs={}) — declare a Workflow task to run, converted to a job (Step) via LLM.chat.
+ - inline_task(workflow, task_name, inputs={}) — like task, but with the result inlined.
+ - job(step), inline_job(step) — attach a precomputed Step’s result (file content or function call output).
+ - association(name, path, options={}) — register a KnowledgeBase association (LLM will build a tool for it).
+
+ Utilities:
+ - tag(content, name=nil, tag=:file, role=:user) — wrap content in a tagged block (e.g., <file name="…">…</file>) and append it as role (default :user).
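+
+ For instance, tag can wrap arbitrary text (the name is a placeholder):
+
+ ```ruby
+ chat = Chat.setup []
+ chat.tag "SELECT * FROM users;", "query.sql"   # appends <file name="query.sql">…</file> as a user message
+ chat.user "Explain what this query does"
+ ```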
+
+ ---
+
+ ## Files, directories and tagging
+
+ Use file/directory/tag to place content into the chat:
+
+ - file(path) — appends the contents of path tagged as <file name="…">…</file>.
+ - directory(path) — appends all files inside as a sequence of <file> tags.
+ - tag(content, name=nil, tag=:file, role=:user) — manual variant to tag any text.
+
+ Tagged content is respected by LLM.parse/LLM.chat and protected from unintended parsing/splitting.
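+
+ For example (paths are placeholders):
+
+ ```ruby
+ chat = Chat.setup []
+ chat.file 'lib/scout/llm/chat.rb'      # one <file> tag with the file contents
+ chat.directory 'lib/scout/llm/tools'   # a <file> tag per file in the directory
+ chat.user "Summarize what these sources do"
+ ```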
+
+ ---
+
+ ## Declaring tools, tasks and jobs
+
+ Chat supports wiring external tools into conversations:
+
+ - tool workflow task [input options] — declares a callable tool from a Workflow task. Example:
+   ```ruby
+   chat.tool "Baking", "bake_muffin_tray"
+   ```
+   LLM.ask will export this task as a function tool; when the model calls it, the function block (or the default runner) will run the job and feed the result back to the model.
+
+ - task workflow task [input options] — enqueues a Workflow job to be produced before the conversation proceeds; replaces itself with a job marker.
+
+ - inline_task workflow task [input options] — like task, but inlines the result (as a file-like message) for immediate context.
+
+ - job(step) / inline_job(step) — attach an existing Step result. job inserts a function_call + tool output pair so models can reason over the output; inline_job inserts the raw result file.
+
+ - association name path [options] — registers a KnowledgeBase association file as a tool (e.g., undirected=true, source/target formats). LLM will add a “children” style tool or a per-association tool definition.
+
+ These declarations are processed by LLM.chat (see Interop) to produce steps and tools before the model is queried.
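+
+ A sketch combining these declarations (the workflow, task and input names are placeholders):
+
+ ```ruby
+ chat = Chat.setup []
+ chat.task "Baking", :bake_muffin_tray, flour: 'rye'   # produced by LLM.chat before the ask
+ chat.user "Describe the result of the baking job above"
+ LLM.ask chat
+ ```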
+
+ ---
+
+ ## Options, endpoint and formats
+
+ Set transient options as messages; LLM.options will read them and merge them into the ask options:
+
+ - option(key, value) — arbitrary key/value (e.g., temperature).
+ - endpoint(value) — named endpoint (merges with Scout.etc.AI[endpoint].yaml).
+ - model(value) — backend model ID.
+ - format(value) — request an output format:
+   - :json or 'json_object' for JSON.
+   - a JSON Schema Hash for structured object/array replies (Responses/OpenAI support).
+
+ You can also insert a previous_response_id message to continue a Responses session.
+
+ Note: Most options reset after an assistant turn, except previous_response_id, which persists until overwritten.
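+
+ For example (the model id is a placeholder):
+
+ ```ruby
+ chat = Chat.setup []
+ chat.endpoint :responses
+ chat.model 'gpt-4.1'
+ chat.option :temperature, 0.2
+ chat.format :json
+ chat.user "Three facts about Ruby, as a JSON list"
+ puts chat.ask
+ ```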
+
+ ---
+
+ ## Running a chat: ask/chat/json/json_format
+
+ - ask(…): returns the assistant reply (String) by calling LLM.ask(LLM.chat(self), …).
+ - respond(…): equivalent to ask(current_chat, …) when used through an Agent.
+ - chat(…): calls ask with return_messages: true, appends the assistant reply to the conversation, and returns the reply content.
+ - json(…): sets format :json, calls ask, parses JSON, and returns:
+   - obj['content'] if the parsed object is {"content": …}, else the object.
+ - json_format(schema, …): sets format to the provided schema (Hash), calls ask, and parses the JSON accordingly. Returns obj or obj['content'] when applicable.
+
+ These helpers adapt the conversation to common usage patterns: accumulate messages, call the model, and parse outputs when needed.
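+
+ For example, accumulating turns with chat:
+
+ ```ruby
+ conversation = Chat.setup []
+ conversation.user "Pick a number between 1 and 10"
+ first = conversation.chat    # reply is appended to the conversation
+ conversation.user "Now double it"
+ second = conversation.chat   # the model sees both turns
+ ```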
+
+ ---
+
+ ## Persistence helpers and branching
+
+ - print — pretty-prints the conversation (using LLM.print of the processed chat).
+ - save(path, force=true) — writes the print output. If path is a name (a Symbol or a String without a path), it resolves via Scout.chats.
+ - write(path, force=true) — alias writing with print content.
+ - write_answer(path, force=true) — writes the last assistant answer only.
+ - branch — returns a deep-annotated dup so you can explore branches without mutating the original.
+ - shed — returns a Chat containing only the last message (useful for prompts that must include only the latest instruction).
+ - answer — returns the content of the last message.
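+
+ For example (the chat name is a placeholder):
+
+ ```ruby
+ main = Chat.setup []
+ main.system "You are terse"
+ main.user "Summarize RAG in one line"
+
+ side = main.branch           # independent copy; main stays untouched
+ side.user "Now in Spanish"
+
+ latest = main.shed           # a Chat holding only the last message
+ main.save :rag_notes         # a bare name resolves via Scout.chats
+ ```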
+
+ ---
+
+ ## Interop with LLM.chat
+
+ A Chat instance can be passed to LLM.ask by Chat#ask; internally it runs:
+
+ 1) LLM.chat(self) — expands the Array of messages into a pipeline:
+    - imports, clear, clean,
+    - tasks → produces dependencies (Workflow.produce),
+    - jobs → turns Steps into function calls or inline files,
+    - files/directories → expand into <file> tags.
+
+ 2) LLM.options to collect endpoint/model/format/etc.
+
+ 3) Backend ask with optional tool wiring (from tool/association declarations).
+
+ Thus, your Chat can be both a declarative script (like a “chat file”) and a runnable conversation object.
+
+ ---
+
+ ## CLI: using Chat with scout llm and scout agent
+
+ The CLI uses the same message DSL and processing. Useful commands:
+
+ - Ask an LLM:
+   - scout llm ask [options] [question]
+     - -t|--template <file_or_key> — load a prompt template; if it contains “???”, the trailing question replaces it; otherwise the question is appended as a new user message.
+     - -c|--chat <chat_file> — open a conversation file; the response is appended to the file (using Chat.print).
+     - -i|--inline <file> — answer comments of the form “# ask: …” inside a source file; writes answers inline between “# Response start/end”.
+     - -f|--file <file> — prepend the file contents as a tagged <file> message; or embed STDIN/file content where “...” appears in the question.
+     - -m|--model, -e|--endpoint, -b|--backend — select backend and model (merged with per-endpoint configs).
+     - -d|--dry_run — expand and print the conversation (LLM.print) without asking.
+
+ - Ask via an Agent (workflow + knowledge base):
+   - scout agent ask [options] [agent_name] [question]
+   - Loads the agent from:
+     - Scout.workflows[agent_name] (workflow.rb) or
+     - Scout.chats[agent_name] or a directory with workflow.rb, knowledge_base/, start_chat.
+   - Same flags as llm ask; adds:
+     - -wt|--workflow_tasks list — export only these tasks to the agent as callable tools.
+
+ - Auxiliary:
+   - scout llm template — list prompt templates (Scout.questions).
+   - scout llm server — minimal chat web UI over ./chats with a REST API (save/run lists).
+   - scout agent kb <agent_name> … — runs the KnowledgeBase CLI pre-wired to the agent’s KB.
+
+ This CLI uses the same LLM.chat pipeline and Chat.print/save semantics as the Ruby API.
+
+ ---
+
+ ## Examples
+
+ Create and print a minimal conversation
+ ```ruby
+ a = LLM::Agent.new
+ a.start_chat.system 'you are a robot'
+ a.user "hi"
+ puts a.print
+ ```
+
+ Compile messages with roles inline
+ ```ruby
+ text = <<~EOF
+   system:
+
+   you are a terse assistant that only writes in short sentences
+
+   assistant:
+
+   Here is some stuff
+
+   user: feedback
+
+   that continues here
+ EOF
+ LLM.chat(text) # → messages array ready to ask
+ ```
+
+ Register and use a Workflow tool
+ ```ruby
+ chat = Chat.setup []
+ chat.user "Use the provided tool to learn the instructions of baking a tray of muffins. Don't give me your own recipe."
+ chat.tool "Baking", "bake_muffin_tray"
+ LLM.ask(chat)
+ ```
+
+ Declare KnowledgeBase associations and ask
+ ```ruby
+ chat = Chat.setup []
+ chat.system "Query the knowledge base of familial relationships to answer the question"
+ chat.user "Who is Miki's brother in law?"
+ chat.message(:association, "brothers #{datafile_test(:person).brothers} undirected=true")
+ chat.message(:association, "marriages #{datafile_test(:person).marriages} undirected=true source=\"=>Alias\" target=\"=>Alias\"")
+ LLM.ask(chat)
+ ```
+
+ Request structured JSON
+ ```ruby
+ chat = Chat.setup []
+ chat.system "Respond in json format with a hash of strings as keys and string arrays as values, at most three in length"
+ chat.user "What other movies have the protagonists of the original Ghostbusters played in? Just the top ones."
+ chat.format :json
+ puts chat.ask
+ ```
+
+ Iterate structured results with an Agent
+ ```ruby
+ agent = LLM::Agent.new
+ agent.iterate("List the 3 steps to bake bread") do |step|
+   puts "- #{step}"
+ end
+ ```
+
+ ---
+
+ Chat provides an ergonomic, declarative way to build conversations in code and on disk. It composes seamlessly with LLM.chat/ask, Workflows (as tools), KnowledgeBases (as associations), and the Scout CLI, making it easy to author, run, and persist agentic interactions.