llm_conductor 1.5.0 → 1.6.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 5d361f0593674c6ca79eb53b6884c200a804d104e92f59e1f6fe5a75c2138dd4
-  data.tar.gz: b8476bdaa573306af54d7f5cbb49e94357d359f357f45d14eecb4889c49e8ba0
+  metadata.gz: 80aca14904612848b82ee2f4e08d05d3b3a0cc4a44e6d754d9f785905aa8c447
+  data.tar.gz: c4496c6b595bb737583ab72a861942df62c06f3e2200e061afd07ccf08a35bde
 SHA512:
-  metadata.gz: 80cbe8b64fa36e18a8cb117ed5d72742ceeaae3336d317a3cee90fc343c2a8cc5148b595cf8c41f35000ca9d49d396253f4e7a3922155f025a90cbdd65a32e34
-  data.tar.gz: bd7f70f9bd41889e8c481c1e4f75c0c8f8ef9c1592cfe5b5f1f8584c7c087e9a3ddaa01c0397dbad6d45af53449bdb005490d5054228434b2a27650386334f8d
+  metadata.gz: f1d6e0b0d0c185c28dba1d7c26e94aceee1ddfd3f5278cea73d03eeb719c12b79cc3319e214ee8e1e5e99c861f98fce5e39b1c3c041c1ee49ea33accc2e15c36
+  data.tar.gz: 4ea5a87ba4c4870756153e71ab92fdcb935d804a915fc49fac96f40291c2d6d3e1730e7a4acea8fb8528ea816a9675f5935cb93deb6b32706e80e36659bfb23d
@@ -29,7 +29,8 @@ params: { temperature: 0.3, top_p: 0.85 }
 | Provider | Status |
 |----------|--------|
 | Ollama | ✅ Supported |
-| OpenAI, Anthropic, Gemini, etc. | 🔜 Coming soon |
+| Gemini | ✅ Supported |
+| OpenAI, Anthropic, etc. | 🔜 Coming soon |
 
 ---
 
@@ -133,6 +134,42 @@ response = LlmConductor.generate(
 )
 ```
 
+## Gemini Parameters Reference
+
+Below are common parameters supported by Google Gemini via `generationConfig`. For a complete list, see the [Gemini API docs](https://ai.google.dev/gemini-api/docs/text-generation).
+
+### Supported Parameters
+
+| Ruby Key | Gemini API Key | Type | Description |
+|----------|---------------|------|-------------|
+| `temperature` | `temperature` | Float | Controls randomness (0.0-2.0) |
+| `top_p` | `topP` | Float | Nucleus sampling threshold (0.0-1.0) |
+| `top_k` | `topK` | Integer | Top-k sampling limit |
+| `max_tokens` | `maxOutputTokens` | Integer | Maximum tokens to generate |
+| `max_output_tokens` | `maxOutputTokens` | Integer | Alias for `max_tokens` |
+| `candidate_count` | `candidateCount` | Integer | Number of candidates to return |
+| `stop_sequences` | `stopSequences` | Array | Stop sequences that end generation |
+
+### Gemini Usage Examples
+
+```ruby
+# Low temperature for focused output
+response = LlmConductor.generate(
+  model: 'gemini-2.5-flash',
+  prompt: 'Summarize this article.',
+  vendor: :gemini,
+  params: { temperature: 0.3, max_tokens: 500 }
+)
+
+# Creative writing with Gemini
+response = LlmConductor.generate(
+  model: 'gemini-2.5-flash',
+  prompt: 'Write a poem about the ocean.',
+  vendor: :gemini,
+  params: { temperature: 0.9, top_p: 0.95, top_k: 40 }
+)
+```
+
 
 ## Ollama Parameters Reference
 
 Below are common parameters supported by Ollama. For a complete list, see the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).
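The key translation described in the parameters table can be sketched in isolation. In this sketch, `PARAM_KEY_MAP` mirrors the constant added to the Gemini client in this release, while `to_generation_config` is a hypothetical helper written only to illustrate the mapping, not part of the gem's API:

```ruby
# Mapping of snake_case Ruby params to camelCase Gemini generationConfig keys,
# copied from this diff. The helper below is illustrative only.
PARAM_KEY_MAP = {
  temperature: :temperature,
  top_p: :topP,
  top_k: :topK,
  max_tokens: :maxOutputTokens,
  max_output_tokens: :maxOutputTokens,
  candidate_count: :candidateCount,
  stop_sequences: :stopSequences
}.freeze

# Translate a params hash into a Gemini-style generationConfig hash,
# silently dropping any keys the map does not know about.
def to_generation_config(params)
  params.each_with_object({}) do |(key, value), cfg|
    mapped = PARAM_KEY_MAP[key.to_sym]
    cfg[mapped] = value if mapped
  end
end

to_generation_config(temperature: 0.3, max_tokens: 500)
# => { temperature: 0.3, maxOutputTokens: 500 }
```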
@@ -311,11 +348,11 @@ Always refer to your provider's documentation for supported parameters.
 
 Currently, custom parameters are fully supported for:
 - ✅ **Ollama**
+- ✅ **Google (Gemini)** — maps snake_case Ruby keys to camelCase `generationConfig`
 
 Coming soon:
 - 🔜 OpenAI (GPT)
 - 🔜 Anthropic (Claude)
-- 🔜 Google (Gemini)
 - 🔜 Groq
 - 🔜 OpenRouter
 - 🔜 Z.ai
@@ -15,6 +15,17 @@ module LlmConductor
 
     private
 
+    # Gemini REST API uses camelCase keys in generationConfig.
+    PARAM_KEY_MAP = {
+      temperature: :temperature,
+      top_p: :topP,
+      top_k: :topK,
+      max_tokens: :maxOutputTokens,
+      max_output_tokens: :maxOutputTokens,
+      candidate_count: :candidateCount,
+      stop_sequences: :stopSequences
+    }.freeze
+
     def generate_content(prompt)
       content = format_content(prompt)
       parts = build_parts_for_gemini(content)
@@ -25,10 +36,28 @@ module LlmConductor
         ]
       }
 
+      # Inject generationConfig from params when present
+      generation_config = build_generation_config
+      payload[:generationConfig] = generation_config if generation_config
+
       response = client.generate_content(payload)
       response.dig('candidates', 0, 'content', 'parts', 0, 'text')
     end
 
+    # Build Gemini generationConfig from params hash.
+    # Maps snake_case Ruby keys to camelCase Gemini API keys.
+    # @return [Hash, nil] generationConfig hash or nil if no mapped params
+    def build_generation_config
+      return unless params.is_a?(Hash) && params.any?
+
+      gen_cfg = {}
+      params.each do |key, value|
+        mapped = PARAM_KEY_MAP[key.to_sym]
+        gen_cfg[mapped] = value if mapped
+      end
+      gen_cfg.any? ? gen_cfg : nil
+    end
+
     # Build parts array for Gemini API from formatted content
     # Converts VisionSupport format to Gemini's specific format
     # @param content [String, Array] Formatted content from VisionSupport
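The logic of the new `build_generation_config` can be exercised on its own. This sketch copies the method's body out of the client class (the free-standing function taking `params` as an argument is an assumption of the sketch, since the real method reads an instance attribute) to show the edge cases: empty params and unmapped keys both yield `nil`, so no `generationConfig` is injected into the payload.

```ruby
# Logic copied from build_generation_config in this diff for illustration;
# the free function signature is an assumption of this sketch.
PARAM_KEY_MAP = {
  temperature: :temperature, top_p: :topP, top_k: :topK,
  max_tokens: :maxOutputTokens, max_output_tokens: :maxOutputTokens,
  candidate_count: :candidateCount, stop_sequences: :stopSequences
}.freeze

def build_generation_config(params)
  return unless params.is_a?(Hash) && params.any?

  gen_cfg = {}
  params.each do |key, value|
    mapped = PARAM_KEY_MAP[key.to_sym]
    gen_cfg[mapped] = value if mapped
  end
  gen_cfg.any? ? gen_cfg : nil
end

build_generation_config({})                        # => nil (no params, nothing injected)
build_generation_config(seed: 42)                  # => nil (unmapped keys are dropped)
build_generation_config('top_p' => 0.9, seed: 42)  # => { topP: 0.9 } (string keys work too)
```

Returning `nil` rather than an empty hash lets the caller skip the `generationConfig` key entirely, so requests without params stay byte-identical to the pre-1.6.0 payload.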
@@ -1,5 +1,5 @@
1
1
  # frozen_string_literal: true
2
2
 
3
3
  module LlmConductor
4
- VERSION = '1.5.0'
4
+ VERSION = '1.6.0'
5
5
  end
metadata CHANGED
@@ -1,13 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: llm_conductor
 version: !ruby/object:Gem::Version
-  version: 1.5.0
+  version: 1.6.0
 platform: ruby
 authors:
 - Ben Zheng
 bindir: exe
 cert_chain: []
-date: 2025-12-12 00:00:00.000000000 Z
+date: 2026-02-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport