ruby_llm-contract 0.5.0 → 0.5.2
This diff reflects the changes between publicly released versions of the package as they appear in their respective public registries. It is provided for informational purposes only.
- checksums.yaml +4 -4
- data/CHANGELOG.md +6 -0
- data/Gemfile.lock +2 -2
- data/README.md +13 -0
- data/lib/ruby_llm/contract/adapters/ruby_llm.rb +4 -1
- data/lib/ruby_llm/contract/step/base.rb +2 -2
- data/lib/ruby_llm/contract/version.rb +1 -1
- metadata +1 -1
checksums.yaml CHANGED

```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 359d08f8cf1e31b84f308c47c7f93c7cee7663054de3ab538a34c1f67873554f
+  data.tar.gz: 60d8728bed042277d40ec1d231b6712e258b658fd893a73afc6ed1f8e9cff8c8
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4bd4d7cea9fde7281bf84e1283c4201f8c5e9425cb8357e40b85e5184f19f51eb57a88a35901eddf571defd93ff33ef790e24b5e2eb90add8ef6371e791d37e5
+  data.tar.gz: e68ca27fc2225224cd900b1afb2180cfd43929e0461420c7fd2987706a2ebaa282b1e659c8b5c14e69e30d1250ede547061e2d2ab74b5c9cc0bb7fdb77109f0a
```
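Digests like those above can be reproduced locally with Ruby's standard `Digest` library. A minimal sketch — the `content` string is a stand-in for the bytes of a real `metadata.gz` or `data.tar.gz`:

```ruby
# Sketch: computing registry-style checksums with Ruby's stdlib.
# "example bytes" stands in for File.binread("data.tar.gz").
require "digest"

content = "example bytes"
sha256 = Digest::SHA256.hexdigest(content)
sha512 = Digest::SHA512.hexdigest(content)

puts sha256.length # => 64 hex characters, like the SHA256 entries above
puts sha512.length # => 128 hex characters, like the SHA512 entries above
```

Comparing such a locally computed digest against the registry's value is how the checksums in this file are verified.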
data/CHANGELOG.md CHANGED

```diff
@@ -1,5 +1,11 @@
 # Changelog
 
+## 0.5.2 (2026-04-06)
+
+### Features
+
+- **`reasoning_effort` forwarded to provider** — `context: { reasoning_effort: "low" }` now passed through `with_params` to the LLM. Previously accepted as a known context key but silently ignored by the RubyLLM adapter.
+
 ## 0.5.0 (2026-03-25)
 
 Data-Driven Prompt Engineering — see ADR-0015.
```
data/Gemfile.lock CHANGED

```diff
@@ -1,7 +1,7 @@
 PATH
   remote: .
   specs:
-    ruby_llm-contract (0.5.
+    ruby_llm-contract (0.5.2)
       dry-types (~> 1.7)
       ruby_llm (~> 1.0)
       ruby_llm-schema (~> 0.3)
@@ -258,7 +258,7 @@ CHECKSUMS
   rubocop-ast (1.49.1) sha256=4412f3ee70f6fe4546cc489548e0f6fcf76cafcfa80fa03af67098ffed755035
   ruby-progressbar (1.13.0) sha256=80fc9c47a9b640d6834e0dc7b3c94c9df37f08cb072b7761e4a71e22cff29b33
   ruby_llm (1.14.0) sha256=57c6f7034fc4a44504ea137d70f853b07824f1c1cdbe774ab3ab3522e7098deb
-  ruby_llm-contract (0.5.
+  ruby_llm-contract (0.5.2)
   ruby_llm-schema (0.3.0) sha256=a591edc5ca1b7f0304f0e2261de61ba4b3bea17be09f5cf7558153adfda3dec6
   ruby_parser (3.22.0) sha256=1eb4937cd9eb220aa2d194e352a24dba90aef00751e24c8dfffdb14000f15d23
   rubycritic (4.12.0) sha256=024fed90fe656fa939f6ea80aab17569699ac3863d0b52fd72cb99892247abc8
```
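The `~>` (pessimistic) constraints in the PATH block above can be checked with RubyGems' own classes, which ship with Ruby. A quick sketch of what `~> 1.7` accepts:

```ruby
# Sketch: semantics of a "~> 1.7" lockfile constraint, using
# Gem::Requirement / Gem::Version from RubyGems (bundled with Ruby).
require "rubygems"

req = Gem::Requirement.new("~> 1.7")
req.satisfied_by?(Gem::Version.new("1.9.0")) # => true  (minor bump allowed)
req.satisfied_by?(Gem::Version.new("2.0.0")) # => false (major bump rejected)
```

So `dry-types (~> 1.7)` permits any 1.x release at or above 1.7, but never 2.0.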
data/README.md CHANGED

```diff
@@ -38,6 +38,18 @@ result.trace[:model] # => "gpt-4.1-nano"
 
 Bad JSON? Auto-retry. Wrong value? Escalate to a smarter model. Schema violated? Caught client-side even if the provider ignores it. All with cost tracking.
 
+## Start Here: Eval-First
+
+The most powerful way to use this gem is simple:
+
+- define evals before changing prompts
+- compare prompt versions on the same dataset
+- merge only when the eval stays green
+
+Read: [Eval-First](docs/guide/eval_first.md)
+
+This is the workflow that gives prompt engineering teeth. No vibes, no cherry-picked examples, no "it felt better in the playground". Just cases, regressions, baselines, and measured wins.
+
 ## Which model should I use?
 
 Define test cases. Compare models. Get data.
@@ -220,6 +232,7 @@ Works with any ruby_llm provider (OpenAI, Anthropic, Gemini, etc).
 | Guide | |
 |-------|-|
 | [Getting Started](docs/guide/getting_started.md) | Features walkthrough, model escalation, eval |
+| [Eval-First](docs/guide/eval_first.md) | Practical workflow for prompt engineering with datasets, baselines, and A/B gates |
 | [Best Practices](docs/guide/best_practices.md) | 6 patterns for bulletproof validates |
 | [Output Schema](docs/guide/output_schema.md) | Full schema reference + constraints |
 | [Pipeline](docs/guide/pipeline.md) | Multi-step composition, timeout, fail-fast |
```
data/lib/ruby_llm/contract/adapters/ruby_llm.rb CHANGED

```diff
@@ -52,7 +52,10 @@ module RubyLLM
         CHAT_OPTION_METHODS.each do |key, method_name|
           chat.public_send(method_name, options[key]) if options[key]
         end
-
+        params = {}
+        params[:max_tokens] = options[:max_tokens] if options[:max_tokens]
+        params[:reasoning_effort] = options[:reasoning_effort] if options[:reasoning_effort]
+        chat.with_params(**params) if params.any?
       end
 
       def build_response(response)
```
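The forwarding logic added above can be exercised as a stand-alone sketch. `ChatDouble` and `forward_params` are hypothetical stand-ins, not part of the gem; only `with_params` mirrors the real chat interface:

```ruby
# ChatDouble: a hypothetical stand-in for a RubyLLM chat object,
# recording whatever is forwarded via with_params.
class ChatDouble
  attr_reader :params

  def initialize
    @params = {}
  end

  def with_params(**params)
    @params.merge!(params)
    self
  end
end

# Mirrors the diff: copy a key into params only when it is set, then
# forward everything in one with_params call, skipped when empty.
def forward_params(chat, options)
  params = {}
  params[:max_tokens] = options[:max_tokens] if options[:max_tokens]
  params[:reasoning_effort] = options[:reasoning_effort] if options[:reasoning_effort]
  chat.with_params(**params) if params.any?
  chat
end

chat = forward_params(ChatDouble.new, reasoning_effort: "low")
chat.params # => { reasoning_effort: "low" }
```

The guard on `params.any?` means `with_params` is never called for requests that set none of these options, leaving provider defaults untouched.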
data/lib/ruby_llm/contract/step/base.rb CHANGED

```diff
@@ -49,7 +49,7 @@ module RubyLLM
         end
       end
 
-      KNOWN_CONTEXT_KEYS = %i[adapter model temperature max_tokens provider assume_model_exists].freeze
+      KNOWN_CONTEXT_KEYS = %i[adapter model temperature max_tokens provider assume_model_exists reasoning_effort].freeze
 
       include Concerns::ContextHelpers
 
@@ -139,7 +139,7 @@ module RubyLLM
         {
           model: context[:model] || model || RubyLLM::Contract.configuration.default_model,
           temperature: context[:temperature],
-          extra_options: context.slice(:provider, :assume_model_exists, :max_tokens),
+          extra_options: context.slice(:provider, :assume_model_exists, :max_tokens, :reasoning_effort),
           policy: retry_policy
         }
       end
```
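The `extra_options` change relies on `Hash#slice`, which copies only the keys actually present in the receiver, so an unset `:reasoning_effort` never reaches the adapter. A minimal sketch (the context values here are illustrative, not defaults from the gem):

```ruby
# Hash#slice returns a new hash containing only the requested keys
# that actually exist in the receiver; absent keys are simply omitted
# rather than filled with nil.
context = { model: "gpt-4.1-nano", temperature: 0.2, reasoning_effort: "low" }

extra = context.slice(:provider, :assume_model_exists, :max_tokens, :reasoning_effort)
extra # => { reasoning_effort: "low" }
```

This is why adding `:reasoning_effort` to the slice list is safe for callers that never set it: their `extra_options` hash is byte-for-byte identical to before.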