ruby_llm-contract 0.5.0 → 0.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 2e6861a314beadad8064d5fe08b85dc0f94032987ba41c27fc9a640788f10c28
- data.tar.gz: 7efb88142ef8ac8287ed58f6cc9bc93ca78130084e14f8310981e7f90faf9943
+ metadata.gz: 359d08f8cf1e31b84f308c47c7f93c7cee7663054de3ab538a34c1f67873554f
+ data.tar.gz: 60d8728bed042277d40ec1d231b6712e258b658fd893a73afc6ed1f8e9cff8c8
  SHA512:
- metadata.gz: 1bd259ab22d13b2e7cc9848c401b91ebfaa135dc7502b72074fd28e71e6120f7ea183c09361948bf8d68df9cd5190e9450a3f03954bbd2927f8091c987368bb7
- data.tar.gz: 904185a06d1def6513268033cbe87b30e3af9b92e0a26ec3cf89c93ad88450c4d49aa13d79c2428b1f7f6cc1cbffefeccf146b5a5c7ddf0057ab3db2f1b7dc8d
+ metadata.gz: 4bd4d7cea9fde7281bf84e1283c4201f8c5e9425cb8357e40b85e5184f19f51eb57a88a35901eddf571defd93ff33ef790e24b5e2eb90add8ef6371e791d37e5
+ data.tar.gz: e68ca27fc2225224cd900b1afb2180cfd43929e0461420c7fd2987706a2ebaa282b1e659c8b5c14e69e30d1250ede547061e2d2ab74b5c9cc0bb7fdb77109f0a
data/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # Changelog
 
+ ## 0.5.2 (2026-04-06)
+
+ ### Features
+
+ - **`reasoning_effort` forwarded to the provider**: `context: { reasoning_effort: "low" }` is now passed through `with_params` to the LLM. Previously it was accepted as a known context key but silently ignored by the RubyLLM adapter.
+
  ## 0.5.0 (2026-03-25)
 
  Data-Driven Prompt Engineering — see ADR-0015.
data/Gemfile.lock CHANGED
@@ -1,7 +1,7 @@
  PATH
  remote: .
  specs:
- ruby_llm-contract (0.5.0)
+ ruby_llm-contract (0.5.2)
  dry-types (~> 1.7)
  ruby_llm (~> 1.0)
  ruby_llm-schema (~> 0.3)
@@ -258,7 +258,7 @@ CHECKSUMS
  rubocop-ast (1.49.1) sha256=4412f3ee70f6fe4546cc489548e0f6fcf76cafcfa80fa03af67098ffed755035
  ruby-progressbar (1.13.0) sha256=80fc9c47a9b640d6834e0dc7b3c94c9df37f08cb072b7761e4a71e22cff29b33
  ruby_llm (1.14.0) sha256=57c6f7034fc4a44504ea137d70f853b07824f1c1cdbe774ab3ab3522e7098deb
- ruby_llm-contract (0.5.0)
+ ruby_llm-contract (0.5.2)
  ruby_llm-schema (0.3.0) sha256=a591edc5ca1b7f0304f0e2261de61ba4b3bea17be09f5cf7558153adfda3dec6
  ruby_parser (3.22.0) sha256=1eb4937cd9eb220aa2d194e352a24dba90aef00751e24c8dfffdb14000f15d23
  rubycritic (4.12.0) sha256=024fed90fe656fa939f6ea80aab17569699ac3863d0b52fd72cb99892247abc8
data/README.md CHANGED
@@ -38,6 +38,18 @@ result.trace[:model] # => "gpt-4.1-nano"
 
  Bad JSON? Auto-retry. Wrong value? Escalate to a smarter model. Schema violated? Caught client-side even if the provider ignores it. All with cost tracking.
 
+ ## Start Here: Eval-First
+
+ The most powerful way to use this gem is simple:
+
+ - define evals before changing prompts
+ - compare prompt versions on the same dataset
+ - merge only when the eval stays green
+
+ Read: [Eval-First](docs/guide/eval_first.md)
+
+ This is the workflow that gives prompt engineering teeth. No vibes, no cherry-picked examples, no "it felt better in the playground". Just cases, regressions, baselines, and measured wins.
+
  ## Which model should I use?
 
  Define test cases. Compare models. Get data.
@@ -220,6 +232,7 @@ Works with any ruby_llm provider (OpenAI, Anthropic, Gemini, etc).
  | Guide | |
  |-------|-|
  | [Getting Started](docs/guide/getting_started.md) | Features walkthrough, model escalation, eval |
+ | [Eval-First](docs/guide/eval_first.md) | Practical workflow for prompt engineering with datasets, baselines, and A/B gates |
  | [Best Practices](docs/guide/best_practices.md) | 6 patterns for bulletproof validates |
  | [Output Schema](docs/guide/output_schema.md) | Full schema reference + constraints |
  | [Pipeline](docs/guide/pipeline.md) | Multi-step composition, timeout, fail-fast |
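The eval-first workflow the README addition describes can be sketched without any gem-specific API. This is an illustrative, gem-agnostic gate: the dataset, the lambdas standing in for prompt versions, and the `accuracy` helper are all hypothetical, not part of ruby_llm-contract.

```ruby
# Illustrative eval gate: score two prompt versions on the same dataset
# and treat the change as mergeable only when the candidate matches or
# beats the baseline. Lambdas stand in for real model calls.
DATASET = [
  { input: "2+2",               expected: "4" },
  { input: "capital of France", expected: "Paris" }
].freeze

# Fraction of dataset cases the given prompt function answers correctly.
def accuracy(prompt_fn)
  DATASET.count { |c| prompt_fn.call(c[:input]) == c[:expected] }.fdiv(DATASET.size)
end

baseline  = ->(q) { q == "2+2" ? "4" : "unknown" }                        # old prompt: 0.5
candidate = ->(q) { { "2+2" => "4", "capital of France" => "Paris" }[q] } # new prompt: 1.0

mergeable = accuracy(candidate) >= accuracy(baseline)
# baseline scores 0.5, candidate 1.0, so the gate is green
```

The same shape scales to real evals: swap the lambdas for model-backed prompt runners and the boolean for a CI check.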
@@ -52,7 +52,10 @@ module RubyLLM
  CHAT_OPTION_METHODS.each do |key, method_name|
  chat.public_send(method_name, options[key]) if options[key]
  end
- chat.with_params(max_tokens: options[:max_tokens]) if options[:max_tokens]
+ params = {}
+ params[:max_tokens] = options[:max_tokens] if options[:max_tokens]
+ params[:reasoning_effort] = options[:reasoning_effort] if options[:reasoning_effort]
+ chat.with_params(**params) if params.any?
  end
 
  def build_response(response)
@@ -49,7 +49,7 @@ module RubyLLM
  end
  end
 
- KNOWN_CONTEXT_KEYS = %i[adapter model temperature max_tokens provider assume_model_exists].freeze
+ KNOWN_CONTEXT_KEYS = %i[adapter model temperature max_tokens provider assume_model_exists reasoning_effort].freeze
 
  include Concerns::ContextHelpers
 
@@ -139,7 +139,7 @@ module RubyLLM
  {
  model: context[:model] || model || RubyLLM::Contract.configuration.default_model,
  temperature: context[:temperature],
- extra_options: context.slice(:provider, :assume_model_exists, :max_tokens),
+ extra_options: context.slice(:provider, :assume_model_exists, :max_tokens, :reasoning_effort),
  policy: retry_policy
  }
@@ -2,6 +2,6 @@
 
  module RubyLLM
  module Contract
- VERSION = "0.5.0"
+ VERSION = "0.5.2"
  end
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ruby_llm-contract
  version: !ruby/object:Gem::Version
- version: 0.5.0
+ version: 0.5.2
  platform: ruby
  authors:
  - Justyna