scout-ai 1.1.6 → 1.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: d5ca142f92d36e5107d70899a8556f715b9e179772015da1e52eb8275d66e25f
- data.tar.gz: 2f1ba6c8a1476cbee42ec1c0f1b0cf2fcae15044f9eeed13d6e42b7b7a5e5618
+ metadata.gz: 1e939ad5a06a271bd6379845c3c3f119c2191dcbe250482b622d83cc03c5a27a
+ data.tar.gz: '009d4893f3a473f48149db3bed025ebd8d2603295a525f99a86c07437dd1340d'
  SHA512:
- metadata.gz: 93b8db1128eb2be1f4eee9b9edc68e5b7ada4be68c5bbffefcc52f98281d91ce1e8b6337703fcbf01e2af188c355db1141a40fd2337b8bfe530a04b77b472828
- data.tar.gz: 573aa7f7423d2b5329a768baa5eb0ebb17925c9383ab8d1a3346adecaed76f680328ab29080685ce64c91d886d4eb3c3cfa538d9b0f2abe33f55b53d0915f346
+ metadata.gz: ae56ea4a63d69ae58ceaeb4dcf76196c25c1a75a68590ae36031d088181c213bbbc2ace780229e9923f3260ec0e330fbc6f7b6844201804e5daa460c865c11fb
+ data.tar.gz: 28803032d0d2666384f0dcbbbb809544e11649a460bd331807fa94df13bb9c7d391d6622cb91599dc9ad155775d0b14b8835203ac87b0fbfbce6d22aabc878d4
data/.vimproject CHANGED
@@ -6,35 +6,22 @@ scout-ai=$PWD filter="*.rb *.rake Rakefile *.rdoc *.R *.sh *.js *.haml *.sass *.
  scout-ai
  }
  chats=chats filter="*"{
- parse
+ agent_doc

- test_tool

- ask_agent
-
- test_ollama_tool
+ multi_agent.rb

- test_github
+ network

- test_stdio
+ intro

- test_claude
+ dev

- mcp_server
+ inline_task

- chat
- text_options
+ inline_task.rb

- multi_agent.rb
-
- pdf
- agent
- AGS
- documenter.rb
- genome_editing
- hello
- high_level
- test_chat
+ course
  system=system{
  scout-ai
  }
@@ -97,6 +84,11 @@ scout-ai=$PWD filter="*.rb *.rake Rakefile *.rdoc *.R *.sh *.js *.haml *.sass *.
  lib=lib {
  scout-ai.rb
  scout=scout{
+ network=network{
+ paths.rb
+ knowledge_base.rb
+ entity.rb
+ }
  llm=llm{
  utils.rb
  tools.rb
data/VERSION CHANGED
@@ -1 +1 @@
- 1.1.6
+ 1.1.7
data/doc/Agent.md CHANGED
@@ -1,279 +1,249 @@
- # Agent
+ Agent

- Agent is a thin orchestrator around LLM, Chat, Workflow and KnowledgeBase that maintains a conversation state, injects tools automatically, and streamlines structured interactions (JSON lists/dictionaries) with helper methods.
+ Agent is a thin, stateful façade over the LLM and Chat systems that:
+ - Maintains a live conversation (Chat) and forwards the Chat DSL (user/system/file/…)
+ - Automatically exposes Workflow tasks and KnowledgeBase queries as callable tools
+ - Centralizes backend/model/endpoint defaults for repeated asks
+ - Provides convenience helpers for structured outputs (JSON schema, iteration)
+ - Provides a simple way to delegate work to other Agent instances

- Core ideas:
- - Keep a live chat (conversation) object and forward the Chat DSL (user/system/…).
- - Export Workflow tasks and KnowledgeBase queries as tool definitions so the model can call them functionally.
- - Centralize backend/model/endpoint defaults for repeated asks.
- - Provide convenience helpers to iterate over structured results (lists/dictionaries).
+ The Agent API makes it convenient to build conversational applications that call registered Workflow tasks, query a KnowledgeBase, include files/images/pdfs in the conversation, and iterate over structured outputs.

  Sections:
  - Quick start
- - Construction and state
- - Tool wiring (Workflow and KnowledgeBase)
- - Running interactions
- - Iterate helpers
- - Loading an agent from a directory
+ - Conversation lifecycle (start_chat / start / current_chat)
+ - Including files, images, PDFs in a chat
+ - Tools and automatic wiring (Workflow and KnowledgeBase)
+ - Delegation (handing a message to another Agent)
+ - Asking, chatting and structured outputs (json/json_format/iterate)
+ - Loading an Agent from a directory
  - API reference
- - CLI: scout agent …

  ---

- ## Quick start
+ Quick start

- Build a live conversation and print it
+ Create an Agent and run a simple conversation

  ```ruby
- a = LLM::Agent.new
- a.start_chat.system 'you are a robot'
- a.user "hi"
- puts a.print # via Chat#print, forwarded through Agent
+ agent = LLM::Agent.new
+ agent.start_chat.system 'You are a helpful assistant'
+ agent.user 'Hi'
+ puts agent.print # forwarded to Chat#print
  ```

- Run a Workflow tool via tool calling
+ You can also use the convenience factory:

  ```ruby
- m = Module.new do
- extend Workflow
- self.name = "Registration"
+ agent = LLM.agent(endpoint: 'ollama', model: 'llama3')
+ ```

- desc "Register a person"
- input :name, :string, "Last, first name"
- input :age, :integer, "Age"
- input :gender, :select, "Gender", nil, :select_options => %w(male female)
- task :person => :yaml do
- inputs.to_hash
- end
- end
+ Conversation lifecycle and state

- agent = LLM::Agent.new workflow: m, backend: 'ollama', model: 'llama3'
- agent.ask "Register Eduard Smith, a 25 yo male, using a tool call to the tool provided"
- ```
+ - start_chat
+ - A Chat instance that is used as the immutable base for new conversation branches. Use start_chat to seed messages that should always be present (policy, examples, templates).
+ - Messages added to start_chat are preserved across calls to start (they form the base).
+
+ - start(chat = nil)
+ - With no argument: branches the start_chat and makes the branch the current conversation (non-destructive copy).
+ - With a Chat/message array: adopts the provided chat (annotating it if necessary) and sets it as current.
+
+ - current_chat
+ - Returns the active Chat (created via start when needed).

- Query a KnowledgeBase (tool exported automatically)
+ Forwarding and Chat DSL
+
+ Agent forwards unknown method calls to the current_chat (method_missing), so you can use the Chat DSL directly on an Agent:

  ```ruby
- TmpFile.with_dir do |dir|
- kb = KnowledgeBase.new dir
- kb.format = {"Person" => "Alias"}
- kb.register :brothers, datafile_test(:person).brothers, undirected: true
- kb.register :marriages, datafile_test(:person).marriages, undirected: true, source: "=>Alias", target: "=>Alias"
- kb.register :parents, datafile_test(:person).parents
-
- agent = LLM::Agent.new knowledge_base: kb
- puts agent.ask "Who is Miki's brother in law?"
- end
+ agent.user "Please evaluate this sample"
+ agent.system "You are a domain expert"
+ agent.file "paper.md" # expands the file contents into the chat (see files handling below)
+ agent.pdf "/path/to/figure.pdf"
+ agent.image "/path/to/figure.png"
  ```

- ---
+ Including files, PDFs and images in a chat

- ## Construction and state
+ The Chat DSL supports roles for file, pdf, image and directory. The behaviours are:
+ - file: the contents of the file are read and inserted into the chat wrapped in a <file>...</file> tag
+ - directory: expands to a list of files and inserts each file as above
+ - pdf / image: the message content is replaced with a Path (the file is not inlined). These message roles are left for backends that support uploading files (LLM backends may upload them when supported)

- - LLM::Agent.new(workflow: nil, knowledge_base: nil, start_chat: nil, **kwargs)
- - workflow: a Workflow module or a String (loaded via Workflow.require_workflow).
- - knowledge_base: a KnowledgeBase instance (optional).
- - start_chat: initial messages (Chat or Array) to seed new chat branches.
- - **kwargs: stored as @other_options and merged into calls to LLM.ask (e.g., backend:, model:, endpoint:, log_errors:, etc.).
+ Example (from the project examples):

- Conversation lifecycle:
- - start_chat → returns a base Chat (Chat.setup []) lazily allocated once.
- - start(chat=nil)
- - With chat: adopt it (if not already a Chat, annotate and set).
- - Without: branch the start_chat (non-destructive copy).
- - current_chat → the active chat instance (created on demand).
+ ```ruby
+ agent.start_chat.system <<-EOF
+ You are a technician working in a molecular biology laboratory...
+ EOF

- Forwarding:
- - method_missing forwards any unknown method to current_chat, so you can call:
- - agent.user "text", agent.system "policy", agent.tool "WF" "task", agent.format :json, etc.
+ agent.start_chat.user "The following files are from a bad sequence"
+ agent.start_chat.pdf bad_sequence_pdf_path
+ agent.start_chat.image bad_sequence_png_path
+ ```

- ---
+ Tool wiring (Workflow and KnowledgeBase)

- ## Tool wiring (Workflow and KnowledgeBase)
+ When you call Agent#ask / Agent#chat and the Agent has a Workflow or KnowledgeBase, the Agent will automatically expose those as "tools" to the model.

- When you call Agent#ask:
- - If workflow or knowledge_base is present, Agent builds a tools array:
- - Workflow: LLM.workflow_tools(workflow) produces one tool per exported task (OpenAI/Responses-compatible function schemas).
- - KnowledgeBase: LLM.knowledge_base_tool_definition(kb) produces a “children” tool that takes {database, entities}.
- - Agent invokes LLM.ask(messages, tools: ..., **@other_options, **user_options) with a block that handles tool calls:
- - 'children' → returns kb.children(database, entities)
- - any other tool name → runs workflow.job(name, jobname?, parameters).run or .exec (exec if workflow.exec_exports includes the task).
+ - Workflow: LLM.workflow_tools(workflow) is called and produces a tool (function) definition for each exported task. The model may call these functions and the Agent (via LLM.ask internal wiring) will execute the corresponding workflow.job(task_name, ...) via LLM.call_workflow.

- Notes:
- - Tool outputs are serialized back to the model as tool results. If your tool returns a Ruby object, it is JSON-encoded automatically inside the tool response.
- - For KnowledgeBase integrations, Agent also enriches the system prompt with markdown descriptions of each registered database (system_prompt).
+ - KnowledgeBase: LLM.knowledge_base_tool_definition(kb) produces one tool per registered database and optionally a *_association_details tool when the database has fields. Calls to those functions invoke LLM.call_knowledge_base which returns associations or association details.

- ---
+ How tools are merged:
+ - The Agent stores default call options in @other_options (constructor kwargs). If @other_options[:tools] are present they are merged with any tools injected from workflow/knowledge_base and with any tools passed explicitly to ask() via options[:tools].

- ## Running interactions
+ Agent#ask(messages, options = {})
+ - messages may be a Chat, an Array of messages, or a single string (converted to messages via the LLM.chat parsing helpers).
+ - options are merged with @other_options and passed to LLM.ask. If workflow or knowledge_base are present, their tools are merged into options[:tools].
+ - Exceptions raised during ask are routed through process_exception if set: if process_exception is a Proc it is called with the exception and may return truthy to retry.

- ask(messages_or_chat, model=nil, options={}) → String (assistant content) or messages (when return_messages: true)
- - messages_or_chat can be: Array of messages, a Chat, or a simple string (Agent will pass through LLM.chat parsing).
- - model parameter can override the default; typically you set backend/model/endpoint in the Agent constructor.
- - If workflow/knowledge_base is configured, tools are injected and tool calls are handled automatically.
+ Delegation (handing messages to other Agent instances)

- - respond(...) → ask(current_chat, ...)
- - chat(...) → ask with return_messages: true, then append the assistant reply to current_chat and return the reply content.
- - json(...) → sets current_chat.format :json, runs ask, parses JSON, returns the object (if object == {"content": ...}, returns that inner content).
- - json_format(format, ...) → sets current_chat.format to a JSON schema Hash and parses accordingly.
+ Agent#delegate(agent, name, description, &block)
+ - Adds a tool named "hand_off_to_<name>" to the Agent's tools. When the model calls that tool the provided block will be executed.
+ - If no block is given, a default block is installed which:
+ - logs the delegation
+ - if parameters[:new_conversation] is truthy, calls agent.start to create/clear the delegated agent conversation, otherwise calls agent.purge
+ - appends the message (parameters[:message]) as a user message to the delegated agent and runs agent.chat to get its response
+ - The function schema for delegation expects:
+ - message: string (required)
+ - new_conversation: boolean (default: false)

- Formatting helpers:
- - format_message and prompt are internal helpers for building a system + user prompt (used by some agents). Not required for normal use; Agent relies on LLM.chat to parse and LLM.ask to execute.
+ Example (from multi_agent.rb):

- ---
+ ```ruby
+ joker = LLM.agent endpoint: :mav
+ joker.start_chat.system 'You only answer with knock knock jokes'

- ## Iterate helpers
+ judge = LLM.agent endpoint: :nano, text_verbosity: :low, format: {judgement: :boolean}
+ judge.start_chat.system 'Tell me if a joke is funny. Be a hard audience.'

- For models that support JSON schema outputs (e.g., OpenAI Responses), Agent provides sugar to iterate over structured results:
+ supervisor = LLM.agent endpoint: :nano
+ supervisor.start_chat.system 'If you are asked a joke, send it to the joke agent. To see if it\'s funny ask the judge.'

- - iterate(prompt=nil) { |item| }
- - Sets endpoint :responses (so the Responses backend is used).
- - If prompt present, appends as user message.
- - Requests a JSON object with an array property "content".
- - Resets format back to :text afterwards.
- - Yields each item in the content list.
+ supervisor.delegate joker, :joker, 'Use this agent for jokes'
+ supervisor.delegate judge, :judge, 'Use this agent for testing if the jokes land or not'

- - iterate_dictionary(prompt=nil) { |k,v| … }
- - Similar, but requests a JSON object with string values (arbitrary properties).
- - Yields each key/value pair.
+ supervisor.user <<-EOF
+ Ask the joke agent for jokes and ask the judge to evaluate them, repeat until the judge is satisfied or 5 attempts
+ EOF
+ ```

- Example:
- ```ruby
- agent = LLM::Agent.new
- agent.iterate("List three steps to bake bread") { |s| puts "- #{s}" }
+ Asking, responding and structured outputs

- agent.iterate_dictionary("Give capital cities for FR, ES, IT") do |country, capital|
- puts "#{country}: #{capital}"
- end
- ```
+ - ask(messages, options = {})
+ - Low level: calls LLM.ask with the Agent defaults merged. Returns the assistant content string (or raw messages when return_messages: true is used).

- ---
+ - respond(...) → ask(current_chat, ...)
+ - Convenience to ask the model using the current chat.

- ## Loading an agent from a directory
+ - chat(options = {})
+ - Calls ask(current_chat, return_messages: true). If the response is an Array of messages, it concatenates them onto current_chat and returns current_chat.answer (the assistant message). If the response is a simple string it pushes that as an assistant message and returns it.

- - LLM::Agent.load_from_path(path)
- - Expects a directory containing:
- - workflow.rb — a Workflow definition (optional),
- - knowledge_base — a KnowledgeBase directory (optional),
- - start_chat — a chat file (optional).
- - Returns a configured Agent with those components.
+ - json(...)
+ - Sets the current chat format to :json, runs ask(...) and parses the returned JSON. If the top-level parsed object is a Hash with only the key "content" it returns that inner value.

- Paths are resolved via the Path subsystem; files like workflow.rb can be located relative to the given directory.
+ - json_format(format_hash, ...)
+ - Similar but sets the chat format using a provided JSON schema (format_hash) instead of the generic :json shorthand.

- ---
+ - get_previous_response_id
+ - Utility to find a prior message with role :previous_response_id and return its content (if present).

- ## API reference
+ Iterate helpers

- Constructor and state:
- - Agent.new(workflow: nil|String|Module, knowledge_base: nil, start_chat: nil, **kwargs)
- - start_chat → Chat
- - start(chat=nil) → Chat (branch or adopt provided)
- - current_chat → Chat
+ Agent provides helpers to request responses constrained by a JSON schema and iterate over the returned items:

- Chat DSL forwarding (method_missing):
- - All Chat methods available: user, system, assistant, file, directory, tool, task, inline_task, job, inline_job, association, format, option, endpoint, model, image, save/write/print, etc.
+ - iterate(prompt = nil){ |item| ... }
+ - Sets endpoint :responses (intended for the Responses backend), optionally appends the prompt as a user message, then requests a JSON object with schema {content: [string,...]}. The helper resets format back to :text and yields each item of the returned content array.

- Asking and replies:
- - ask(messages, model=nil, options={}) → String (or messages if return_messages: true)
- - respond(...) → ask(current_chat, ...)
- - chat(...) → append answer to current_chat, return answer String
- - json(...), json_format(format, ...) → parse JSON outputs
+ - iterate_dictionary(prompt = nil){ |k,v| ... }
+ - Similar but requests an arbitrary object whose values are strings (additionalProperties: {type: :string}). Yields each key/value pair.

- Structured iteration:
- - iterate(prompt=nil) { |item| ... } — endpoint :responses, expects {content: [String]}
- - iterate_dictionary(prompt=nil) { |k,v| ... } — endpoint :responses, expects arbitrary object of string values
+ These helpers are convenient for model outputs that should be returned as a structured list/dictionary and handled item-by-item in Ruby.

- System prompt (internal):
- - system_prompt / prompt — build a combined system message injecting KB database descriptions if knowledge_base present.
+ File and chat processing behaviour

- Utilities:
- - self.load_from_path(path) → Agent
+ Chat processing includes several convenient behaviours when a Chat is expanded prior to sending to a backend:
+ - import / continue / last: include the contents of other chat files (useful to compose long prompts or templates)
+ - file / directory: inline file contents in a tagged <file> block or expand directories into multiple file blocks
+ - pdf / image: keep a message whose content is the Path to the file; some backends will upload these to the model (behaviour depends on backend)

- ---
+ Loading an Agent from a directory

- ## CLI: scout agent commands
+ - LLM::Agent.load_from_path(path)
+ - path is a Path-like object representing a directory that may contain:
+ - workflow.rb — a Workflow definition (optional)
+ - knowledge_base — a KnowledgeBase directory (optional)
+ - start_chat — a Chat file to seed the agent (optional)
+ - Returns a configured Agent instance with those components loaded.

- The scout command resolves subcommands by scanning “scout_commands/**” paths using the Path subsystem, so packages and workflows can add their own. If you target a directory instead of a script, a listing of subcommands is shown.
+ API reference (high-level)

- Two commands are provided by scout-ai:
+ - LLM.agent(...) convenience factory for LLM::Agent.new(...)
+ - LLM::Agent.new(workflow: nil|String|Module, knowledge_base: nil, start_chat: nil, **kwargs)
+ - kwargs are stored under @other_options and merged into calls to LLM.ask (e.g., backend:, model:, endpoint:, log_errors:, tools: etc.)

- - Agent ask
- - scout agent ask [options] [agent_name] [question]
- - Options:
- - -l|--log <level> — set log severity.
- - -t|--template <file_or_key> — use a prompt template; positional question replaces '???' if present.
- - -c|--chat <chat_file> — load/extend a conversation file; appends new messages to it.
- - -m|--model, -e|--endpoint — backend/model selection (merged with per-endpoint config at Scout.etc.AI).
- - -f|--file <path> — include file content at the start (or substitute where “...” appears in the question).
- - -wt|--workflow_tasks <names> — limit exported workflow tasks for this agent call.
- - Resolution:
- - agent_name is resolved via Scout.workflows[agent_name] (a workflow) or Scout.chats[agent_name] (an agent directory with workflow.rb/knowledge_base/start_chat). The Path subsystem handles discovery across packages.
- - Behavior:
- - If --chat is given, the conversation is expanded (LLM.chat) and the new model output is appended (Chat.print).
- - Supports inline file Q&A mode (not typical for agent ask).
+ - start_chat → Chat (the immutable base messages for new conversations)
+ - start(chat=nil) → Chat (branch or adopt provided chat)
+ - current_chat → Chat (active conversation)

- - Agent KnowledgeBase passthrough
- - scout agent kb <agent_name> <kb subcommand...>
- - Loads the agent’s knowledge base (agent_dir/knowledge_base) and forwards to “scout kb …” with --knowledge_base prefilled (and current Log level).
- - Useful to manage the KB tied to an agent from the CLI.
+ - ask(messages, options = {}) → String or messages (if return_messages: true)
+ - respond(...) → ask(current_chat, ...)
+ - chat(options = {}) → append assistant output to current_chat and return it
+ - json(...), json_format(format_hash, ...) → parse JSON outputs and return Ruby objects
+ - iterate(prompt = nil) { |item| ... } — use Responses-like backend and expected schema {content: [string]}
+ - iterate_dictionary(prompt = nil) { |k,v| ... } — expected schema {<key>: string}
+ - delegate(agent, name, description, &block) — add a hand_off_to_* tool that forwards a message to another Agent

- Command resolution:
- - The bin/scout dispatcher walks nested directories (e.g., “agent/kb”) and lists available scripts when a directory is targeted.
+ Notes and caveats

- ---
+ - Tools are represented internally as values in @other_options[:tools] and have the form {name => [object_or_handler, function_definition]}. The Agent injects Workflow and KnowledgeBase tools automatically when present.
+ - Backends differ in how they handle file/pdf/image uploads — some backends support uploading and special message role handling, others do not. When you add pdf/image messages to the chat the Chat processing step replaces the content with a Path object; whether the endpoint uploads the file is backend dependent.
+ - Errors raised while calling LLM.ask are handled by the Agent#process_exception hook if you set it to a Proc. If the Proc returns truthy the ask is retried; otherwise the exception is raised.

- ## Examples
+ Examples

- Minimal conversation with an Agent (tests)
+ Minimal conversation
  ```ruby
- a = LLM::Agent.new
- a.start_chat.system 'you are a robot'
- a.user "hi"
- puts a.print
+ agent = LLM::Agent.new
+ agent.start_chat.system 'You are a bot'
+ agent.start
+ agent.user 'Tell me a joke'
+ puts agent.chat
  ```

- Register and run a simple workflow tool call
+ Workflow tool example
+
  ```ruby
  m = Module.new do
  extend Workflow
- self.name = "Registration"
+ self.name = 'Registration'
  input :name, :string
  input :age, :integer
- input :gender, :select, nil, :select_options => %w(male female)
+ input :gender, :select, nil, select_options: %w(male female)
  task :person => :yaml do
  inputs.to_hash
  end
  end

- puts LLM.workflow_ask(m, "Register Eduard Smith, a 25 yo male, using a tool call",
- backend: 'ollama', model: 'llama3')
- # Or equivalently through an Agent:
  agent = LLM::Agent.new workflow: m, backend: 'ollama', model: 'llama3'
- puts agent.ask "Register Eduard Smith, a 25 yo male, using a tool call to the tool provided"
+ agent.ask 'Register Eduard Smith, a 25 yo male, using a tool call'
  ```

- Knowledge base reasoning with an Agent (tests pattern)
- ```ruby
- TmpFile.with_dir do |dir|
- kb = KnowledgeBase.new dir
- kb.format = {"Person" => "Alias"}
- kb.register :brothers, datafile_test(:person).brothers, undirected: true
- kb.register :marriages, datafile_test(:person).marriages, undirected: true, source: "=>Alias", target: "=>Alias"
- kb.register :parents, datafile_test(:person).parents
-
- agent = LLM::Agent.new knowledge_base: kb
- puts agent.ask "Who is Miki's brother in law?"
- end
- ```
+ Delegation example (see multi_agent.rb in the repo)

- Iterate structured results
  ```ruby
- agent = LLM::Agent.new
- agent.iterate("List three steps to bake bread") do |step|
- puts "- #{step}"
- end
+ supervisor = LLM.agent
+ supervisor.delegate joker_agent, :joker, 'Use this agent for jokes'
+ # The model can then call the hand_off_to_joker function and the delegate block
+ # will forward the message to joker_agent.
  ```

+ Command-line integration
+
+ The scout CLI provides commands that work with Agent directories and workflows (scout agent ask, scout agent kb ...). The CLI resolves agent directories via the Path subsystem and can load workflow.rb / knowledge_base / start_chat automatically.
+
  ---

- Agent gives you a stateful, toolaware façade over LLM.ask and Chat, so you can build conversational applications that call Workflows and explore KnowledgeBases with minimal ceremony—both from Ruby APIs and via the scout command-line.
+ Agent gives you a stateful, tool-aware façade over LLM.ask and Chat so you can build conversational applications that call Workflows and explore KnowledgeBases with minimal ceremony—both from Ruby APIs and via the scout command-line.
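Editor's note: the 1.1.7 text describes json, json_format and the iterate helpers but drops the worked example the 1.1.6 page carried. Below is a minimal sketch of the structured-output helpers based only on the behaviour documented above; the schema and prompts are illustrative, and json_format is assumed to run against the current chat.

```ruby
agent = LLM::Agent.new

# json_format: constrain the reply with an explicit JSON schema and get
# back a parsed Ruby object (schema and question are illustrative).
agent.user 'What is the capital of France?'
capital = agent.json_format({type: :object,
                             properties: {city: {type: :string}}})

# iterate: request a {content: [string, ...]} object and handle each item
# in Ruby (this mirrors the example the 1.1.6 page used to include).
agent.iterate('List three steps to bake bread') do |step|
  puts "- #{step}"
end
```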
@@ -15,6 +15,21 @@ module LLM
  @start_chat = start_chat
  end

+ def workflow(&block)
+ if block_given?
+ workflow = self.workflow
+
+ workflow.instance_eval &block
+ else
+ @workflow ||= begin
+ m = Module.new
+ m = "ScoutAgent"
+ m.extend Workflow
+ m
+ end
+ end
+ end
+
  def format_message(message, prefix = "user")
  message.split(/\n\n+/).reject{|line| line.empty? }.collect do |line|
  prefix + "\t" + line.gsub("\n", ' ')
@@ -1,8 +1,6 @@
  require 'scout'
  require 'aws-sdk-bedrockruntime'
- require_relative '../parse'
- require_relative '../tools'
- require_relative '../utils'
+ require_relative '../chat'

  module LLM
  module Bedrock
@@ -77,8 +77,7 @@ module Chat
  end

  # XML-style tag handling (protected content)
- if stripped =~ /^<(\w+)(\s+[^>]*)?>/ && text =~ %r{</#{$1}>}
- tag = $1
+ if stripped =~ /^<(\w+)(\s+[^>]*)?>/ && (tag = $1) && text =~ %r{</#{$1}>}
  protected_stack.push(tag)
  in_protected_block = true
  protected_block_type = :xml
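Editor's note: the rewritten condition captures the tag name inline because every successful =~ replaces Ruby's per-match globals, and %r{</#{$1}>} contains no capture group, so the old code's tag = $1 on the following line read nil once the closing-tag match had run. A standalone demonstration (the sample strings are illustrative):

```ruby
stripped = '<file path="a.txt">'
text     = '<file path="a.txt">...</file>'

# Old order: the second match succeeds but clobbers $1.
stripped =~ /^<(\w+)(\s+[^>]*)?>/ && text =~ %r{</#{$1}>}
p $1   # => nil

# New order: (tag = $1) runs before the second match resets the globals;
# the interpolated closing-tag regex still sees the original capture.
stripped =~ /^<(\w+)(\s+[^>]*)?>/ && (tag = $1) && text =~ %r{</#{$1}>}
p tag  # => "file"
```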
@@ -38,7 +38,7 @@ module Chat
  end
  end.flatten

- Workflow.produce(jobs)
+ Workflow.produce(jobs) if jobs.any?

  new
  end
@@ -1,5 +1,5 @@
  module LLM
- @max_content_length = Scout::Config.get(:max_content_length, :llm_tools, :tools, :llm, :ask, default: 5_000)
+ @max_content_length = Scout::Config.get(:max_content_length, :llm_tools, :tools, :llm, :ask, default: 30_000)
  self.singleton_class.attr_accessor :max_content_length

  def self.call_id_name_and_arguments(tool_call)
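Editor's note: this bump raises the default cap on tool output passed back to the model from 5,000 to 30,000 characters. The attr_accessor on the singleton class in the hunk keeps the cap adjustable per process; a sketch with an illustrative value:

```ruby
# Per-process override through the generated singleton accessor:
LLM.max_content_length = 10_000 # illustrative value

# Tool outputs longer than the cap are replaced with a JSON payload of the
# form {exception: ..., stack: ...}, as the following hunk's message shows.
```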
@@ -64,7 +64,7 @@ module LLM
  Log.high "Called #{function_name}: " + Log.fingerprint(content)

  if content.length > max_content_length
- exception_msg = "Function #{function_name} called with parameters #{Log.fingerprint function_arguments} returned #{content.length} characters, which is more than the maximum set of #{max_content_length}."
+ exception_msg = "Function #{function_name} called with parameters #{Log.fingerprint function_arguments} returned #{content.length} characters, which is more than the maximum of #{max_content_length}."
  Log.high exception_msg
  content = {exception: exception_msg, stack: caller}.to_json
  end
@@ -76,8 +76,10 @@ module LLM
  }

  function_call = tool_call.dup
+ function_call = {'name' => tool_call['name']}.merge tool_call.except('name')

  function_call['id'] = function_call.delete('call_id') if function_call.dig('call_id')
+
  [
  {role: "function_call", content: function_call.to_json},
  {role: "function_call_output", content: response_message.to_json},
@@ -76,7 +76,8 @@ module LLM

  else
  tasks = workflow.all_exports if tasks.nil?
- tasks = workflow.all_tasks if tasks.empty?
+ tasks = workflow.all_tasks if tasks.empty? || workflow.tasks
+ tasks = [] if tasks.nil?

  tasks.inject({}){|tool_definitions,task_name|
  definition = self.task_tool_definition(workflow, task_name)