llm.rb 4.15.0 → 4.16.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 40217f9b44b00028739994a8f6f6a278b366d7fa7f4799b86afd2b793f367084
-  data.tar.gz: cf7e6d7935cf6ab8479ac864e09ea7a4403a97345f98e30ae03de9ae941c97a0
+  metadata.gz: 793403110075dfcc650d4b0931ebcdfee74787ce1412b61318d7d749b22c3e9f
+  data.tar.gz: 469e1b635896822483e8a6ec7cebaf8c34443b90010435c4ae2d4899fd71c1b4
 SHA512:
-  metadata.gz: 3850bb93244032d3ee721c1a7bc85efd0e86f605a421960abfbab56b89b9b4c36d97efb68970759561f08f165069e4c2e213132c0bb485b3366c57bebb71e3ad
-  data.tar.gz: 2856c27c38e6d6d8d659d8a21c13b4ea514fdc0a0bd024935b2ea2f20d10eac96f324489f1c06a08c9bad6be5f97756eedc38b9f301e7a78533a8422f02cd329
+  metadata.gz: 515e571ad97704659363a764f633c9199239d0ad1e741b1239e7528cf455160a1246ba756fa16201c47f03b5185939809842eef28aaeb45123aa38c038c23232
+  data.tar.gz: 180e987e00885e15d004965e98c5b70b00481568be43c168431afaad27baed40adfb8c93a9cad10fb96210dad3e768407524422b5edf0c62b011ef57900ac742
data/CHANGELOG.md CHANGED
@@ -2,8 +2,30 @@
 
 ## Unreleased
 
+Changes since `v4.16.0`.
+
+## v4.16.0
+
 Changes since `v4.15.0`.
 
+This release expands ORM support with built-in ActiveRecord persistence
+and improves compatibility with OpenAI-compatible gateways, proxies, and
+self-hosted servers that use non-standard API root paths.
+
+### Change
+
+* **Support OpenAI-compatible base paths** <br>
+  Add `base_path:` to provider configuration so OpenAI-compatible
+  endpoints can vary both host and API prefix. This supports providers,
+  proxies, and gateways that keep OpenAI request shapes but use
+  non-standard URL layouts such as DeepInfra's `/v1/openai/...`.
+
+* **Add ActiveRecord context persistence with `acts_as_llm`** <br>
+  Add a built-in ActiveRecord wrapper that mirrors the Sequel plugin
+  API so applications can persist `LLM::Context` state on records with
+  default columns, provider/context hooks, validation-backed writes,
+  and `format: :string`, `:json`, or `:jsonb` storage.
+
 ## v4.15.0
 
 Changes since `v4.14.0`.
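
As an illustrative sketch of the `base_path:` change described in the changelog above (the constructor name and exact keyword placement are assumptions, not taken from the diff), an OpenAI-compatible gateway with a non-standard prefix such as DeepInfra's `/v1/openai` might be targeted like this:

```ruby
require "llm"

# Sketch only: point the OpenAI provider at a gateway that keeps OpenAI
# request shapes but serves them under a different host and API prefix.
# host: and base_path: are the options named in the changelog entry.
llm = LLM.openai(
  key: ENV["DEEPINFRA_SECRET"],
  host: "api.deepinfra.com",
  base_path: "/v1/openai"
)
```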
data/README.md CHANGED
@@ -4,7 +4,7 @@
 <p align="center">
 <a href="https://0x1eef.github.io/x/llm.rb?rebuild=1"><img src="https://img.shields.io/badge/docs-0x1eef.github.io-blue.svg" alt="RubyDoc"></a>
 <a href="https://opensource.org/license/0bsd"><img src="https://img.shields.io/badge/License-0BSD-orange.svg?" alt="License"></a>
-<a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-4.15.0-green.svg?" alt="Version"></a>
+<a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-4.16.0-green.svg?" alt="Version"></a>
 </p>
 
 ## About
@@ -17,9 +17,9 @@ state.
 It is built for engineers who want control over how these systems run. llm.rb
 stays close to Ruby, runs on the standard library by default, loads optional
 pieces only when needed, and remains easy to extend. It also works well in
-Rails or ActiveRecord applications, and it includes built-in Sequel plugin
-support, where a small wrapper around context persistence is enough to save
-and restore long-lived conversation state across requests, jobs, or retries.
+Rails or ActiveRecord applications, with built-in `acts_as_llm`, and includes
+built-in Sequel support through `plugin :llm`, so long-lived context state can
+be saved and restored across requests, jobs, or retries.
 
 Most LLM libraries stop at request/response APIs. Building real systems means
 stitching together streaming, tools, state, persistence, and external
@@ -87,12 +87,12 @@ same context object.
 - **MCP is built in** <br>
   Connect to MCP servers over stdio or HTTP without bolting on a separate
   integration stack.
-- **Sequel persistence is built in** <br>
-  Use `plugin :llm` to persist `LLM::Context` state on a Sequel model with
-  sensible default columns, then pass provider setup through
-  `provider:` when you need it. Use `format: :string` for text columns or
-  `format: :jsonb` when you want native PostgreSQL JSON storage with Sequel's
-  JSON typecasting support enabled.
+- **ActiveRecord and Sequel persistence are built in** <br>
+  Use `acts_as_llm` on ActiveRecord models or `plugin :llm` on Sequel models
+  to persist `LLM::Context` state with sensible default columns. Both support
+  `provider:` and `context:` hooks, plus `format: :string` for text columns
+  or `format: :jsonb` for native PostgreSQL JSON storage when ORM JSON
+  typecasting support is enabled.
 - **Persistent HTTP pooling is shared process-wide** <br>
   When enabled, separate
   [`LLM::Provider`](https://0x1eef.github.io/x/llm.rb/LLM/Provider.html)
@@ -101,6 +101,10 @@ same context object.
   [`LLM::MCP`](https://0x1eef.github.io/x/llm.rb/LLM/MCP.html)
   instances can do the same, instead of each object creating its own
   isolated per-instance transport.
+- **OpenAI-compatible gateways are supported** <br>
+  Target OpenAI-compatible services such as DeepInfra and OpenRouter, as well
+  as proxies and self-hosted servers, with `host:` and `base_path:` when they
+  preserve OpenAI request shapes but change the API root path.
 - **Provider support is broad** <br>
   Work with OpenAI, OpenAI-compatible endpoints, Anthropic, Google, DeepSeek,
   Z.ai, xAI, llama.cpp, and Ollama through the same runtime.
@@ -203,12 +207,30 @@ ctx.talk("Remember that my favorite language is Ruby")
203
207
  puts ctx.talk("What is my favorite language?").content
204
208
  ```
205
209
 
210
+ **ActiveRecord (ORM)**
211
+
212
+ See the [deepdive](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) for more examples.
213
+
214
+ ```ruby
215
+ require "llm"
216
+ require "active_record"
217
+ require "llm/active_record"
218
+
219
+ class Context < ApplicationRecord
220
+ acts_as_llm provider: -> { { key: ENV["#{provider.upcase}_SECRET"], persistent: true } }
221
+ end
222
+
223
+ ctx = Context.create!(provider: "openai", model: "gpt-5.4-mini")
224
+ ctx.talk("Remember that my favorite language is Ruby")
225
+ puts ctx.talk("What is my favorite language?").content
226
+ ```
227
+
206
228
  ## Resources
207
229
 
208
230
  - [deepdive](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) is the
209
231
  examples guide.
210
- - [_examples/relay](./_examples/relay) shows a real application built on top
211
- of llm.rb.
232
+ - [relay](https://github.com/llmrb/relay) shows a real application built on
233
+ top of llm.rb.
212
234
  - [doc site](https://0x1eef.github.io/x/llm.rb?rebuild=1) has the API docs.
213
235
 
214
236
  ## License
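
As an illustrative sketch of the persistence options named in the changelog and README changes above (only `provider:` appears in the packaged example; the `format: :jsonb` usage shown here is an assumption about how that option composes), `acts_as_llm` might be configured for native JSON storage like this:

```ruby
require "llm"
require "active_record"
require "llm/active_record"

class Context < ApplicationRecord
  # Sketch only: store the serialized LLM::Context in a jsonb column and
  # pass provider options through the provider: hook, mirroring the README
  # example above. The changelog also names a context: hook and :string /
  # :json formats, which are omitted here.
  acts_as_llm format: :jsonb,
              provider: -> { { key: ENV["OPENAI_SECRET"], persistent: true } }
end
```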