legion-llm 0.6.20 → 0.6.24

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 45d07a2c60a8663ba1b62165b3b489d49a2aac37ee1e1ec6abff7bd5f4357d6c
- data.tar.gz: 9ee8246c75fee6d7e690b55f4e2a91b030f6b142c91dd79acb7bf66edf4d9d05
+ metadata.gz: c7e4263174302a505c21078bbc343c36134c08e22f0ad2d2741e6e2b4b747327
+ data.tar.gz: ef99cbea7efe6f0e0c479586c73792d814328169ca1e0faaf2362da8cb140be1
  SHA512:
- metadata.gz: 92b102167bb6f346fab490787baedda2f2fa6fb528713c6b055b269f747c490d56fed5d21027616bc2c6d1f7cf4069ce14bb9b7607f7a6ad07c2c69b05ce0814
- data.tar.gz: f1fded39722bf678936df28f3bbf3ec095265bdabc28f70eaf67e64fae5519b7c58842a8432e4b4bfdc0476c9275473f75a9c83bf0c77d6f5cc2afe1fa700aeb
+ metadata.gz: bb18ac2c9d7cb8108edc71208dc97b4befaed9ab6cdbcb7c1f8662c00402c08619609e3ce5d372b9f4483175020ef2100ac5b86ae2d147216e755f4211173f41
+ data.tar.gz: e0f5b35fb908eff66faea9509f984fa141531c467db9ab1c8e2276f5e47dcd7a65f42e63e5004db4d27334867489c578a44c51b3779d9d954bd7d4028487e37e
data/CHANGELOG.md CHANGED
@@ -1,5 +1,56 @@
  # Legion LLM Changelog
 
+ ## [0.6.24] - 2026-04-08
+
+ ### Added
+ - `Legion::LLM::Patches::RubyLLMParallelTools`: monkey-patch that replaces RubyLLM's serial `handle_tool_calls` loop with concurrent thread execution so all tool calls in a batch run in parallel
+ - `ToolResultWrapper` struct exposes `tool_call_id`, `id`, `tool_name`, `result`, and `content` so bridge scripts can match results back to UI slots without falling back to name-based matching
+ - `emit_tool_result_event` in `Pipeline::Executor`: fires `tool_event_handler` with `type: :tool_result`, `duration_ms`, `started_at`, and `finished_at` after each tool completes
+ - `tool_event_handler` now also fires `type: :model_fallback` events (with `from_model`, `to_model`, `error`, `reason`) on auth-failed provider fallback in both regular and streaming paths
+ - `max_tool_rounds` setting (default `200`) in LLM settings; `install_tool_loop_guard` now reads it at call time so callers can override the cap per-session
+ - `started_at` timestamp stored in `Thread.current[:legion_current_tool_started_at]` for accurate per-call wall-clock duration even across parallel threads
+
+ ### Changed
+ - `MAX_RUBY_LLM_TOOL_ROUNDS` constant raised from `25` to `200` (now serves as a fallback default for the configurable `max_tool_rounds` setting)
+
+ ### Fixed
+ - `ConversationStore#db_append_message` now serializes non-String `content` values (e.g., tool-call arrays) to JSON before writing to the database, preventing Sequel type errors when tool-use messages are persisted
+
+ ## [0.6.23] - 2026-04-07
+
+ ### Fixed
+ - `build_response_routing` now always sets `routing[:escalated]` (defaults to `false`) instead of conditionally omitting the key
+ - Schema spec annotations updated: Thinking, Cache, Config(Generation) corrected to reflect `from_chat_args` first-class field mapping; ErrorResponse annotation updated with complete error hierarchy including `EscalationExhausted`, `PrivacyModeError`, `TokenBudgetExceeded`, `DaemonDeniedError`, `DaemonRateLimitedError`
+
+ ## [0.6.22] - 2026-04-07
+
+ ### Fixed
+ - Classification LEVELS ordering: swapped `[:public, :internal, :restricted, :confidential]` to correct `[:public, :internal, :confidential, :restricted]` so severity comparisons work properly
+ - `Response.from_ruby_llm` now extracts actual `stop_reason` from provider response instead of hardcoding `:end_turn`
+ - `Request.from_chat_args` maps 16 fields (`tool_choice`, `generation`, `thinking`, `response_format`, `context_strategy`, `cache`, `fork`, `tokens`, `stop`, `modality`, `hooks`, `idempotency_key`, `ttl`, `metadata`, `enrichments`, `predictions`) to first-class struct members instead of dumping into `extra`
+ - `build_response` populates routing details (strategy, tier, escalation chain, latency), cost estimation via `CostEstimator`, and actual stop reason instead of hardcoded defaults
+ - `response_tool_calls` merges execution data (exchange_id, source, status, duration_ms, result) from timeline events into tool call hashes
+ - `step_conversation_uuid` now auto-generates `conv_<hex>` when no conversation_id is provided (was a no-op)
+ - `step_response_normalization` now normalizes all enrichment keys to string format (was a no-op)
+ - Enrichment key `[:conversation_history]` corrected to `['context:conversation_history']` for consistent `source:type` pattern
+
+ ### Changed
+ - Schema spec (`docs/llm-schema-spec.md`) updated: ToolCall, Config(Generation), Cost, Routing(response), Stop status changed from Partial/Not-implemented to Implemented
+
+ ## [0.6.21] - 2026-04-07
+
+ ### Added
+ - Real-time tool call SSE streaming: tool-call, tool-result, and tool-error events emitted during execution, not after completion
+ - `ClientToolMethods` module extracted from inline tool class for cleaner separation
+ - Rich tool execution logging: command, path, pattern, url shown per tool type instead of just key names
+ - `summarize_tool_args` produces structured log details per tool type (sh, file_read, file_write, file_edit, grep, glob, web_fetch, list_directory)
+ - `tool_event_handler` callback on `Pipeline::Executor` for real-time tool event forwarding via `Thread.current`
+
+ ### Fixed
+ - `install_tool_loop_guard` now uses `session.on_tool_call` instead of `session.on(:tool_call)` — RubyLLM callback was never firing, tool_call_id was always nil
+ - `list_directory` tool now expands `~` via `File.expand_path` — previously failed with `ENOENT` on tilde paths
+ - SSE text-delta events logged at debug level instead of info to reduce log noise
+
  ## [0.6.20] - 2026-04-06
 
  ### Added
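The parallel tool-call patch described under 0.6.24 can be sketched roughly as follows. This is a hedged illustration, not the gem's actual `RubyLLMParallelTools` code; the `tool_calls` shape (`{ id => { tool:, arguments: } }`) and the result hash are assumptions for the sketch.

```ruby
# Rough sketch of running a batch of tool calls concurrently instead of
# serially, in the spirit of the 0.6.24 patch described above.
def run_tool_calls_in_parallel(tool_calls)
  threads = tool_calls.map do |id, call|
    Thread.new do
      started_at = Time.now
      result = call[:tool].call(call[:arguments])
      # Per-call wall-clock duration is measured inside the thread, so it
      # stays accurate even when calls overlap.
      { tool_call_id: id,
        result: result,
        duration_ms: ((Time.now - started_at) * 1000).round }
    end
  end
  threads.map(&:value) # join all threads and collect results in order
end
```

Because each call keeps its `tool_call_id`, results can be matched back to their originating slots rather than by tool name, which is the motivation the changelog gives for `ToolResultWrapper`.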
data/README.md CHANGED
@@ -2,7 +2,7 @@
 
  LLM integration for the [LegionIO](https://github.com/LegionIO/LegionIO) framework. Wraps [ruby_llm](https://github.com/crmne/ruby_llm) to provide chat, embeddings, tool use, and agent capabilities to any Legion extension.
 
- **Version**: 0.6.14
+ **Version**: 0.6.23
 
  ## Installation
 
@@ -1,6 +1,75 @@
  # Legion::LLM Schema Specification
 
- ## Status: Draft / Brainstorming
+ ## Status: Mixed Envelope Implemented, Inner Types Aspirational
+
+ **Implemented in**: `Pipeline::Request` and `Pipeline::Response` (`lib/legion/llm/pipeline/request.rb`, `response.rb`)
+ **Version**: 1.0.0 (schema_version field on all payloads)
+ **Last verified**: 2026-04-07
+
+ The outer envelope is implemented: all 32 `Request` fields and 34 `Response` fields exist as `Data.define` members. However, many inner types (Message, ContentBlock, ToolCall, Chunk, Conversation, Feedback, ErrorResponse) are **not yet implemented as dedicated structs** — they are plain hashes or strings in the current code. Several Response fields are **always nil or empty** in the pipeline today.
+
+ This document serves as both the **canonical reference** for what is implemented and the **target specification** for what inner types should look like. Sections are annotated with implementation status.
+
+ For the AMQP wire protocol (exchange topology, queue configuration, message envelope, routing keys), see the Legion Wire Protocol spec in the LegionIO docs repo.
+
+ ### Implementation Status Matrix
+
+ | Section | Status | Notes |
+ |---------|--------|-------|
+ | **Request (envelope)** | Implemented | All 32 fields exist on `Data.define`. `from_chat_args` maps all to first-class fields. |
+ | **Response (envelope)** | Partial | All 34 fields exist. 10 fields always nil/empty (see below). |
+ | **Message** | Not implemented | Plain `{ role:, content: }` hashes. No struct, no id/seq/status/version. |
+ | **ContentBlock** | Not implemented | Content is always String. Only `:text` block used (system prompt caching). |
+ | **Tool** | Partial | `ToolAdapter` has name/description/parameters. No `source` on object, no `version`. |
+ | **ToolCall** | Partial | `id`, `name`, `arguments` + `exchange_id`, `source`, `status`, `duration_ms`, `result` merged from Timeline. `error` field never populated. Timeline lookup by tool name, not call ID (breaks duplicate tool calls). |
+ | **ToolChoice** | Stub | Field exists on Request, defaults to `{ mode: :auto }`, never forwarded to provider. |
+ | **Enrichment** | Implemented | RAG/GAIA enrichments work. Value shapes vary between steps. |
+ | **Prediction** | Partial | Request-side works. Response-side actuals never filled in. |
+ | **Tracing** | Implemented | trace_id, span_id, exchange_id all generated and propagated. |
+ | **Classification** | Partial | Labels applied but routing restrictions not enforced. |
+ | **Caller** | Implemented | Identity propagated, Profile derived. |
+ | **Agent** | Not implemented | Response `agent` field always nil. |
+ | **Billing** | Partial | Per-request cap only. No cumulative budget enforcement. |
+ | **Test** | Implemented | Test mode flags propagated. |
+ | **Modality** | Not implemented | Field exists, not acted upon. |
+ | **Hooks** | Partial | Pre/post hooks on Request. Response hooks not fired. |
+ | **Feedback** | Not implemented | No struct, class, or storage. Spec only. |
+ | **Audit** | Implemented | Uses symbol keys (not string keys as spec claims). |
+ | **Timeline** | Implemented | Event recording works. Participant tracking works. |
+ | **Participants** | Implemented | Tracked via Timeline. |
+ | **Wire Capture** | Not implemented | Response `wire` field always nil. |
+ | **Retry** | Not implemented | Response `retry` field always nil. |
+ | **Safety** | Not implemented | Response `safety` field always nil. |
+ | **Rate Limit** | Not implemented | Response `rate_limit` field always nil. |
+ | **Thinking** | Partial | Request thinking config mapped to first-class field. Response thinking **never populated** by executor (always nil). |
+ | **Context Window** | Not implemented | `tokens.context_window`, `utilization`, `headroom` never populated. |
+ | **Validation** | Not implemented | Response `validation` field always nil. |
+ | **Provider Features** | Not implemented | Response `features` field always nil. |
+ | **Model Deprecation** | Not implemented | Response `deprecation` field always nil. |
+ | **Cache** | Partial | Request cache mapped to first-class field. Response `cache` always `{}`. |
+ | **Chunk (Streaming)** | Not implemented | Raw RubyLLM chunks passed through; no spec-compliant Chunk struct. |
+ | **ErrorResponse** | Not implemented | No struct; only exception classes (`LLMError` hierarchy). |
+ | **Conversation** | Partial | `ConversationStore` exists but no `Conversation` struct. Limited fields. |
+ | **Config (Generation)** | Implemented | `from_chat_args` now maps generation, thinking, response_format, etc. to first-class fields. |
+ | **Quality** | Implemented | Returns `{ score:, band:, source: }` (not `{ score:, acceptable:, checker: }` as spec says). |
+ | **Cost** | Implemented | Populated via `CostEstimator.estimate` with `estimated_usd`, `provider`, `model`. |
+ | **Routing (response)** | Implemented | `provider`, `model`, `strategy`, `tier`, `escalated`, `escalation_chain`, `latency_ms` populated. |
+ | **Stop** | Implemented | `stop.reason` extracted from provider response (`:end_turn`, `:tool_use`, etc.). |
+ | **Metering** | Not implemented | Module exists but not wired into pipeline steps. |
+
+ #### Response Fields Always Nil/Empty
+
+ These Response fields exist on the `Data.define` but are **never populated** by the executor today:
+
+ - `agent` — always nil
+ - `cache` — always `{}`
+ - `safety` — always nil
+ - `rate_limit` — always nil
+ - `features` — always nil
+ - `deprecation` — always nil
+ - `validation` — always nil
+ - `wire` — always nil
+ - `retry` — always nil
 
  ## Design Principles
 
@@ -27,6 +96,8 @@ schema_version: "1.0.0" # semver -- major.minor.patch
 
  ## Message
 
+ > **Implementation status: NOT IMPLEMENTED** — No `Message` struct exists. Messages are plain hashes with only `role` and `content` in the pipeline. `ConversationStore` persists additional fields (`id`, `seq`, `parent_id`, `agent_id`, `created_at`) in its DB rows, but these are not surfaced as a structured Message object.
+
  The atomic unit of conversation. Every exchange between user, assistant, and tools is a Message.
 
  ```
@@ -78,6 +149,8 @@ message.text # returns text content regardless of String vs Array<ContentBlock>
 
  ## Content Blocks
 
+ > **Implementation status: NOT IMPLEMENTED** — No `ContentBlock` struct exists. Content is always a plain String in the pipeline. The only place a typed block hash is constructed is for system prompt caching (`{ type: :text, content: ..., cache_control: ... }`). No image, audio, video, document, tool_use, tool_result, citation, or error block handling exists.
+
  Multimodal content. When `Message.content` is an array, each element is a ContentBlock.
 
  ### Block Types
@@ -199,6 +272,8 @@ data: Hash? # structured error data
 
  ## Tool
 
+ > **Implementation status: PARTIAL** — `ToolAdapter` wraps `RubyLLM::Tool` with `name`, `description`, `parameters`. The `source` field exists as a parallel lookup in `find_tool_source` (not on the tool object). `version` does not exist.
+
  Tool definitions available to the LLM.
 
  ```
@@ -227,6 +302,8 @@ Used by RBAC (can this caller use tools from this source?) and audit (which syst
 
  ## ToolCall
 
+ > **Implementation status: PARTIAL** — Tool calls are hashes with `id`, `name`, `arguments` and optionally `exchange_id`, `source`, `status`, `duration_ms`, `result` merged from matching Timeline events. The `error` field is never populated. Timeline lookup uses tool name (not call ID), so duplicate invocations of the same tool in one response will only have execution data for the last invocation.
+
  A tool invocation made by the assistant, with execution results.
 
  ```
@@ -251,6 +328,8 @@ Always a parsed Hash, never a JSON string. Provider adapters that receive argume
 
  ## ToolChoice
 
+ > **Implementation status: STUB** — Field exists on Request, defaults to `{ mode: :auto }`. The `:specific` mode's `name` field is not handled. The `tool_choice` value is never forwarded to the underlying RubyLLM provider call.
+
  Controls how the LLM uses available tools.
 
  ```
@@ -263,6 +342,8 @@ ToolChoice
 
  ## Enrichment
 
+ > **Implementation status: IMPLEMENTED** — RAG and GAIA enrichments work. Note: value shapes are inconsistent across pipeline steps — not all enrichments include `content:`, `data:`, `duration_ms:`, `timestamp:` as spec describes.
+
  Things that *shaped* the request during processing. Any system can contribute enrichments without schema changes. Enrichments modify or observe the request -- for decisions and outcomes, see [Audit](#audit).
 
  Enrichments are a **Hash keyed by `"source:type"`**, not an array. This enables direct lookup and clean request-vs-response comparison without looping.
@@ -319,6 +400,8 @@ Adding a new system requires zero schema changes -- just add a new key.
 
  ## Prediction
 
+ > **Implementation status: PARTIAL** — Request-side predictions work (components can contribute predictions). Response-side actuals (`actual_value`, `accurate`) are never filled in — no post-execution comparison occurs.
+
  Hypothesis recorded before execution, compared to reality after execution. Enables self-improving systems. Any component in the pipeline can contribute predictions.
 
  Predictions are a **Hash keyed by `"source:type"`**, same pattern as enrichments. Direct lookup, no looping.
@@ -395,6 +478,8 @@ response.predictions.count { |_, v| v[:correct] }.to_f / response.predictions.si
 
  ## Tracing & Correlation
 
+ > **Implementation status: IMPLEMENTED** — `trace_id`, `span_id`, `exchange_id` all generated and propagated via `Pipeline::Tracing`.
+
  OpenTelemetry-compatible distributed tracing. Groups related requests across agentic loops, forks, and multi-step tasks.
 
  ```
@@ -424,6 +509,8 @@ Tracing is present on Request, Response, ErrorResponse, and Chunk.
 
  ## Exchange (Per-Hop Tracking)
 
+ > **Implementation status: IMPLEMENTED** — `conversation_id`, `request_id` (mapped to `id`), and `exchange_id` all generated via `Pipeline::Tracing` and propagated through the pipeline.
+
  Three-level ID hierarchy inspired by SIP's Call-ID / CSeq / Branch/Via model. Tracks every hop within a single request.
 
  ```
@@ -487,6 +574,8 @@ In practice, each exchange would become a child span under the request's span in
 
  ## Data Classification & Compliance
 
+ > **Implementation status: PARTIAL** — Classification labels are applied to requests. However, routing restrictions (e.g., preventing PHI-tagged data from going to certain providers) are not enforced.
+
  Data governance for enterprise adoption. Controls where data can be processed, how long it's retained, and what it contains.
 
  ```
@@ -538,6 +627,8 @@ Provider registry includes each provider's processing jurisdiction. Router match
 
  ## Caller
 
+ > **Implementation status: IMPLEMENTED** — Caller identity propagated through the pipeline. `Profile.derive` reads `caller[:requested_by][:type]` to determine step skipping.
+
  Auth-level identity tracking. Who authenticated to make this request, and on whose behalf. Separate from `agent` (which tracks AI entity identity).
 
  ```
@@ -602,6 +693,8 @@ RBAC checks `caller.requested_by` for permission evaluation. If `requested_for`
 
  ## Agent Identity
 
+ > **Implementation status: NOT IMPLEMENTED** — The `agent` field exists on both Request and Response but is always nil. No agent identity is attached during pipeline execution.
+
  Tracks which AI entity is executing the request. Not about auth (that's `caller`) -- about the AI agent doing the work.
 
  ```
@@ -648,6 +741,8 @@ Multiple LLM requests can share a `task_id`, enabling: "Show me everything that
 
  ## Billing & Budget
 
+ > **Implementation status: PARTIAL** — Per-request cost cap works. Cumulative budget tracking (daily/monthly limits) is not implemented. Metering module exists but is not wired into pipeline steps.
+
  Cost tracking, budget enforcement, and rate limiting.
 
  ```
@@ -690,6 +785,8 @@ Checked in the pipeline before the provider call:
 
  ## Test & Evaluation Mode
 
+ > **Implementation status: IMPLEMENTED** — Test mode flags propagated through the pipeline.
+
  Controls for testing, benchmarking, replay, and experimentation.
 
  ```
@@ -743,6 +840,8 @@ Experiment results are tracked via predictions (expected: better quality with GA
 
  ## Modality
 
+ > **Implementation status: NOT IMPLEMENTED** — The `modality` field exists on Request but is not acted upon by the pipeline or provider adapters.
+
  Declares input and output modality expectations. Guides routing (not all providers support all combinations) and future-proofs for multimodal evolution.
 
  ```
@@ -799,6 +898,8 @@ Provider capabilities:
 
  ## Lifecycle Hooks
 
+ > **Implementation status: PARTIAL** — Pre/post hooks on Request are supported. Response-side hook firing is not implemented.
+
  Caller-declared injection points in the pipeline. Named hooks registered by extensions or configuration.
 
  ```
@@ -839,6 +940,8 @@ Hooks receive the full request/response context and can add enrichments, but can
 
  ## Feedback
 
+ > **Implementation status: NOT IMPLEMENTED** — No Feedback struct, class, or storage exists. No code submits, receives, or stores feedback.
+
  User or automated quality feedback on specific messages. Lives on the Conversation, not on individual requests. Closes the learning loop.
 
  ```
@@ -884,6 +987,8 @@ Quality checkers and GAIA can also submit feedback:
 
  ## Audit
 
+ > **Implementation status: IMPLEMENTED** — Audit records are populated by the pipeline. Note: uses symbol keys (`:step`, `:action`), not string keys as some examples in this spec show.
+
  Record of what *happened* during pipeline processing -- decisions, actions, outcomes. Separate from enrichments (which record what *shaped* the request). Response-only.
 
  Audit is a **Hash keyed by `"step:action"`**, same pattern as enrichments and predictions.
@@ -990,6 +1095,8 @@ response.audit[:"persistence:store"][:data][:method] # => :direct
 
  ## Pipeline Timeline
 
+ > **Implementation status: IMPLEMENTED** — `Pipeline::Timeline` records ordered events with participant tracking.
+
  Inspired by [Homer/SIPCAPTURE](https://github.com/sipcapture/homer) call flow diagrams. A unified, globally-sequenced timeline of **everything** that happened during a request. Reconstructs the full call flow across all systems -- enrichments, audit, tool calls, provider calls, connections -- in one ordered record.
 
  This is the **one place an array is correct**. Timeline is ordered data, not lookup data. You iterate it in sequence to reconstruct the call flow, like Homer's ladder diagram.
@@ -1125,6 +1232,8 @@ The timeline is built during pipeline execution and returned on the response. It
 
  ## Participants
 
+ > **Implementation status: IMPLEMENTED** — Tracked via `Pipeline::Timeline`.
+
  All systems that touched this request. Enables Homer-style column headers for call flow visualization. Response-only, populated by the pipeline.
 
  ```
@@ -1154,6 +1263,8 @@ Auto-populated: every unique `from` and `to` value in the timeline becomes a par
 
  ## Wire Capture
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `wire` field is always nil. No capture of raw provider payloads occurs.
+
  Raw request and response payloads as sent to/received from the provider. For debugging translator issues, you need both sides of the wire. Opt-in (can be expensive to store).
 
  Keyed by `exchange_id` -- one capture per provider call, not per request. A request with retries or tool loops produces multiple wire captures.
@@ -1223,6 +1334,8 @@ This lives on `response.routing.connection` since it's part of the routing outco
 
  ## Retry
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `retry` field is always nil. Retry logic exists in the executor (rate limit rescue) but results are not captured in the retry struct.
+
  Distinct from escalation. Retries are the same provider/model attempted again after a transient failure. Escalation is switching to a different provider/model.
 
  ```
@@ -1269,6 +1382,8 @@ response.retry = {
 
  ## Content Safety
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `safety` field is always nil. Provider safety results are not captured.
+
  Provider-reported content filtering results. Different from classification (which is our data governance). This is the provider saying "I evaluated this content against my safety policies."
 
  Response-only. Not all providers return this.
@@ -1322,6 +1437,8 @@ response.safety = {
 
  ## Rate Limit State
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `rate_limit` field is always nil. Provider rate limit headers are not captured (rate limit errors are rescued and retried, but quota state is not stored).
+
  Provider quota state returned in response headers. Structured and always captured (not opt-in like wire). Critical for routing decisions.
 
  ```
@@ -1361,6 +1478,8 @@ end
 
  ## Thinking & Reasoning
 
+ > **Implementation status: PARTIAL** — Request-side thinking configuration is mapped to the first-class `thinking` field by `from_chat_args`. Response-side `thinking` field exists on the Response struct but is **never populated** by the executor — it is always nil.
+
  Controls for extended thinking, chain-of-thought, and reasoning behavior. Separate from generation parameters (temperature, top_p) because reasoning is about *how deeply* the model thinks, not *how randomly* it samples.
 
  ### Request side
@@ -1395,6 +1514,8 @@ Thinking tokens are tracked separately from regular output tokens because they h
 
  ## Context Window Utilization
 
+ > **Implementation status: NOT IMPLEMENTED** — `tokens.context_window`, `tokens.utilization`, and `tokens.headroom` are never populated on the Response. Only `input_tokens` and `output_tokens` are set.
+
  Expands response-side tokens with capacity information. Drives context strategy decisions.
 
  Added to `response.tokens`:
@@ -1439,6 +1560,8 @@ end
 
  ## Structured Output Validation
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `validation` field is always nil. `StructuredOutput` module exists for enforcing schemas but does not populate this struct.
+
  When `response_format.type` is `:json` or `:json_schema`, reports whether the response actually validated.
 
  Response-only. Added to response alongside quality.
@@ -1479,6 +1602,8 @@ response.validation = {
 
  ## Provider Features
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `features` field is always nil.
+
  Post-hoc report of which provider-specific features actually activated on this request. Different from capabilities (what the provider CAN do) -- this is what it DID.
 
  Response-only. Hash-keyed by feature name.
@@ -1519,6 +1644,8 @@ end
 
  ## Model Deprecation
 
+ > **Implementation status: NOT IMPLEMENTED** — Response `deprecation` field is always nil.
+
  Structured deprecation warnings from providers. Separate from the `warnings` array because automated systems need to act on these programmatically.
 
  Response-only.
@@ -1564,6 +1691,8 @@ end
 
  ## Cache
 
+ > **Implementation status: PARTIAL** — Request-side `cache` field is mapped to the first-class field by `from_chat_args` (defaults to `{ strategy: :default, cacheable: true }`). Response-side `cache` field is always `{}`.
+
  Symmetric caching controls on request and response. Replaces a flat strategy symbol with structured metadata.
 
  ### Request side (what I want)
@@ -1631,6 +1760,8 @@ Response: cache: { hit: true, key: "sha256:abc123", tier: :local, age: 45, expir
 
  ## Request
 
+ > **Implementation status: IMPLEMENTED (envelope)** — All 32 fields exist as `Data.define` members with `.build` and `.from_chat_args` constructors. All fields including `generation`, `thinking`, `response_format`, `context_strategy`, `cache`, `fork`, `tokens`, `stop`, `modality`, `hooks`, `idempotency_key`, `ttl`, `metadata`, `enrichments`, and `predictions` are mapped to first-class struct members. Convenience accessors (`.model`, `.provider`) described in the spec are not defined.
+
  What goes into the Legion::LLM pipeline.
 
  ```
@@ -1795,6 +1926,8 @@ For queue ordering when requests go through RMQ:
 
  ## Response
 
+ > **Implementation status: PARTIAL (envelope)** — All 34 fields exist as `Data.define` members. 9 fields are always nil/empty (see status matrix above). `routing` populates `provider`, `model`, `strategy`, `tier`, `escalated`, `escalation_chain`, `latency_ms`. `stop.reason` extracted from provider response (falls back to `:end_turn`). `quality` returns `{ score:, band:, source:, signals: }` from `ConfidenceScorer` (not `{ score:, acceptable:, checker: }` as the Response struct below shows). `cost` populated via `CostEstimator.estimate` with `estimated_usd`, `provider`, `model`. Convenience accessors (`.model`, `.provider`) are not defined.
+
  What comes back from the Legion::LLM pipeline.
 
  ```
@@ -1843,7 +1976,7 @@ Response
 
  # Stop (symmetric with request)
  stop: Hash
- reason: Symbol # :end_turn, :tool_calls, :max_tokens, :safety, :stop_sequence
+ reason: Symbol # :end_turn, :tool_use, :max_tokens, :safety, :stop_sequence
  sequence: String? # which stop sequence was hit (nil if none)
 
  # Tools (symmetric with request)
@@ -1984,6 +2117,8 @@ response.participants # ["pipeline", "rbac", "provider:claude", ...]
 
  ## Chunk (Streaming)
 
+ > **Implementation status: NOT IMPLEMENTED** — No `Chunk` struct exists. Streaming (`call_stream`) yields raw RubyLLM chunk objects directly to callers with no translation to the spec format.
+
  Incremental data during a streamed response.
 
  ```
@@ -2015,6 +2150,8 @@ Chunk
 
  ## ErrorResponse
 
+ > **Implementation status: NOT IMPLEMENTED** — No `ErrorResponse` struct exists. Errors are raised as exceptions from the `Legion::LLM` error hierarchy: `LLMError` (base), `AuthError`, `RateLimitError`, `ContextOverflow`, `ProviderError`, `ProviderDown`, `UnsupportedCapability`, `PipelineError`, `TokenBudgetExceeded`, `EmbeddingUnavailableError`. Additionally, `EscalationExhausted`, `DaemonDeniedError`, `DaemonRateLimitedError`, and `PrivacyModeError` inherit from `StandardError` directly (not `LLMError`). These are Ruby exceptions, not structured response payloads.
+
  Standard error format for failed requests.
 
  ```
@@ -2059,6 +2196,8 @@ ErrorResponse
 
  ## Conversation
 
+ > **Implementation status: PARTIAL** — `ConversationStore` exists as an in-memory LRU (256 slots) with optional DB persistence. No `Conversation` struct — conversations are plain hashes (`{ messages: [], metadata: {}, lru_tick: N }`). DB persistence stores `id`, `caller_identity`, `metadata` (JSON blob), `created_at`, `updated_at`. Most spec fields (`title`, `summary`, `state`, `shared`, `participants`, `tags`, `pinned`, `usage_total`, `routing_history`) exist only as arbitrary metadata blob entries, not first-class fields.
+
  The persistent conversation object stored in the ConversationStore.
 
  ```
@@ -2119,6 +2258,8 @@ Legion::LLM.chat(
2119
2258
 
2120
2259
  ## Config (Generation Parameters)
2121
2260
 
2261
+ > **Implementation status: PARTIAL** — Generation parameters are mapped to the first-class `generation` field by `from_chat_args`. However, provider adapters only forward `model` and `provider` to RubyLLM, not temperature/top_p/etc from the `generation` hash.
2262
+
2122
2263
  Sent in `request.generation`. Provider adapters map supported parameters and ignore unsupported ones.
2123
2264
 
2124
2265
  ```
@@ -2162,6 +2303,8 @@ response_format:
2162
2303
 
2163
2304
  ## Provider Adapter Contract
2164
2305
 
2306
+ > **Implementation status: PARTIAL** — Provider LEXs (extensions-ai/) exist and work for chat/embed. The formal `ProviderAdapter` interface with `Translator` is not enforced — providers integrate via RubyLLM's native provider system.
2307
+
2165
2308
  Every provider LEX must implement `Legion::LLM::ProviderAdapter` including a `Translator`.
2166
2309
 
2167
2310
  ### Required methods
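The ErrorResponse note above says errors surface as plain Ruby exceptions rather than structured payloads, with a split hierarchy: most errors descend from `LLMError`, while a few (e.g. `PrivacyModeError`) descend from `StandardError` directly. A minimal sketch of what that means for callers; the classes below are a local mirror defined here for illustration, not the gem's actual definitions:

```ruby
# Local mirror of part of the error hierarchy described above (illustration only).
module Legion
  module LLM
    class LLMError < StandardError; end
    class RateLimitError < LLMError; end
    class ContextOverflow < LLMError; end
    # Per the note, PrivacyModeError inherits StandardError directly,
    # so a rescue on LLMError will NOT catch it.
    class PrivacyModeError < StandardError; end
  end
end

# A rescue targeting the base class catches the LLMError family only.
def classify(error)
  case error
  when Legion::LLM::LLMError then :llm_error
  else :other
  end
end

classify(Legion::LLM::RateLimitError.new)  # => :llm_error
classify(Legion::LLM::PrivacyModeError.new) # => :other (outside the hierarchy)
```

The practical consequence: code that only rescues `LLMError` silently misses the `StandardError`-derived exceptions listed in the note.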
@@ -373,11 +373,13 @@ module Legion
     end
 
     def db_append_message(conversation_id, msg)
+      content = msg[:content]
+      content = content.to_json unless content.is_a?(String) || content.nil?
       row = {
         conversation_id: conversation_id,
         seq: msg[:seq],
         role: msg[:role].to_s,
-        content: msg[:content],
+        content: content,
         provider: msg[:provider]&.to_s,
         model: msg[:model]&.to_s,
         input_tokens: msg[:input_tokens],
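The hunk above coerces non-String message content to JSON before it reaches the DB row, so structured content (tool-call arrays, hashes) fits in a text column while Strings and nil pass through untouched. A standalone sketch of the same coercion (the `coerce_content` helper name is hypothetical, introduced here only to isolate the logic):

```ruby
require 'json'

# Same coercion as db_append_message: Strings and nil pass through,
# anything else (Array, Hash, ...) is serialized to JSON text.
def coerce_content(content)
  return content if content.is_a?(String) || content.nil?
  content.to_json
end

coerce_content("hello")                          # => "hello"
coerce_content(nil)                              # => nil
coerce_content({ tool: "search", args: [1, 2] }) # => '{"tool":"search","args":[1,2]}'
```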
@@ -0,0 +1,102 @@
+# frozen_string_literal: true
+
+# Patch: RubyLLM::Chat parallel tool call execution
+#
+# RubyLLM's default `handle_tool_calls` iterates tool calls serially with
+# `.each_value`, meaning when an LLM returns N tool calls in a single response
+# they execute one at a time. This patch replaces that loop with concurrent
+# thread execution so all tool calls in a batch run in parallel, and results
+# are collected before re-prompting the model.
+#
+# Additionally, RubyLLM fires `on_tool_result` with the raw tool return value
+# (a String/Hash/etc.) which carries no `tool_call_id`. The legion-interlink
+# bridge script's `serialize_tool_result` needs a `tool_call_id` field to
+# match results back to the correct tool call slot in the UI — without it
+# every result falls back to name-based matching, which breaks when multiple
+# tools of the same name run in parallel and leaves them stuck on RUNNING.
+#
+# Fix: wrap each result in a ToolResultWrapper that exposes both the raw
+# content/result AND the originating tool_call_id / id fields.
+#
+# NOTE: This is a temporary shim. When RubyLLM is replaced this file goes away.
+#
+# Thread safety notes:
+# - Each tool call executes in its own thread.
+# - @on[:tool_call] fires per-thread (fast, just event emission — safe).
+# - @on[:tool_result] fires per-thread with the wrapper object.
+# - add_message is called serially after all threads complete to preserve
+#   message ordering and avoid races on @messages.
+# - If ANY tool returns a RubyLLM::Tool::Halt, complete() is skipped —
+#   matching the original semantics.
+
+module Legion
+  module LLM
+    module Patches
+      # Wraps a raw tool result value so that the bridge script's
+      # serialize_tool_result can read both :tool_call_id/:id (for UI matching)
+      # and :result/:content (for the result payload) off a single object.
+      ToolResultWrapper = Struct.new(:result, :content, :tool_call_id, :id, :tool_name) do
+        # Delegate is_a? checks for RubyLLM::Tool::Halt so the caller can still
+        # detect halt results transparently.
+        def is_a?(klass)
+          result.is_a?(klass) || super
+        end
+
+        alias_method :kind_of?, :is_a?
+      end
+
+      module RubyLLMParallelTools
+        def handle_tool_calls(response, &)
+          tool_calls = response.tool_calls.values
+
+          # Dispatch all tool calls concurrently, preserving original order.
+          threads = tool_calls.map do |tool_call|
+            Thread.new do
+              @on[:new_message]&.call
+              @on[:tool_call]&.call(tool_call)
+              raw = execute_tool(tool_call)
+              # Wrap so serialize_tool_result in the bridge script gets an ID.
+              wrapper = ToolResultWrapper.new(
+                raw,            # :result — raw value (String/Hash/Halt/etc.)
+                raw,            # :content — alias for bridge compat
+                tool_call.id,   # :tool_call_id
+                tool_call.id,   # :id
+                tool_call.name  # :tool_name
+              )
+              @on[:tool_result]&.call(wrapper)
+              { tool_call: tool_call, raw: raw }
+            end
+          end
+
+          begin
+            results = threads.map(&:value) # block until all complete
+
+            # Commit messages serially — preserves ordering, avoids @messages races.
+            halt_result = nil
+            results.each do |entry|
+              tool_call = entry[:tool_call]
+              raw = entry[:raw]
+              tool_payload = raw.is_a?(RubyLLM::Tool::Halt) ? raw.content : raw
+              content = content_like?(tool_payload) ? tool_payload : tool_payload.to_s
+              message = add_message(role: :tool, content: content, tool_call_id: tool_call.id)
+              @on[:end_message]&.call(message)
+              halt_result = raw if raw.is_a?(RubyLLM::Tool::Halt)
+            end
+
+            reset_tool_choice if forced_tool_choice?
+            halt_result || complete(&)
+          ensure
+            threads.each do |thread|
+              thread.kill if thread.alive?
+              thread.join
+            end
+          end
+        end
+      end
+    end
+  end
+end
+
+# Use prepend (not alias_method/override) so the patch stays clearly visible
+# in the ancestor chain and is easy to remove when RubyLLM is dropped.
+RubyLLM::Chat.prepend(Legion::LLM::Patches::RubyLLMParallelTools)
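The `is_a?` delegation in `ToolResultWrapper` can be exercised in isolation. The sketch below uses a stand-in `Halt` class (RubyLLM is not loaded here) to show that the wrapper answers `is_a?` for both the wrapped result's class and its own Struct class, so existing halt-detection code keeps working while `tool_call_id` becomes available:

```ruby
# Stand-in for RubyLLM::Tool::Halt; illustration only, RubyLLM isn't loaded here.
class Halt
  attr_reader :content
  def initialize(content)
    @content = content
  end
end

# Same shape as the patch's ToolResultWrapper.
ToolResultWrapper = Struct.new(:result, :content, :tool_call_id, :id, :tool_name) do
  # Answer is_a? for the wrapped result's class as well as our own.
  def is_a?(klass)
    result.is_a?(klass) || super
  end
  alias_method :kind_of?, :is_a?
end

halt = Halt.new("stop")
wrapper = ToolResultWrapper.new(halt, halt, "call_1", "call_1", "search")

wrapper.is_a?(Halt)              # => true  (delegates to the wrapped result)
wrapper.is_a?(ToolResultWrapper) # => true  (super still sees the struct itself)
wrapper.tool_call_id             # => "call_1"
```

This is why the patched `handle_tool_calls` can keep checking `raw.is_a?(RubyLLM::Tool::Halt)` on raw values while downstream `on_tool_result` handlers receive the wrapper transparently.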