gllm-inference-binary 0.5.53__cp312-cp312-win_amd64.whl → 0.5.55__cp312-cp312-win_amd64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.


@@ -51,83 +51,82 @@ class BedrockLMInvoker(BaseLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```

- Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = BedrockLMInvoker(..., tools=[tool_1, tool_2])
- ```
+ Text output:
+ The `BedrockLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.

  Output example:
  ```python
- LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
- ]
- )
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
  ```

  Structured output:
- Structured output is a feature that allows the language model to output a structured response.
+ The `BedrockLMInvoker` can be configured to generate structured outputs.
  This feature can be enabled by providing a schema to the `response_schema` parameter.

- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
-
- Structured output is achieved by providing the schema name in the `tool_choice` parameter. This forces
- the model to call the provided schema as a tool. Thus, structured output is not compatible with tool calling,
- since the tool calling is reserved to force the model to call the provided schema as a tool.
- The language model also doesn\'t need to stream anything when structured output is enabled. Thus, standard
- invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.

- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+ The schema must either be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ JSON dictionary schema must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ The structured output will be a dictionary.

- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = BedrockLMInvoker(..., response_schema=schema)
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = BedrockLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+ lm_invoker = BedrockLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ # Using Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
  ```

- # Example 2: Using a Pydantic BaseModel class
+ Structured output is not compatible with tool calling.
+ When structured output is enabled, streaming is disabled.
+
+ Tool calling:
+ The `BedrockLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool calls outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
+
  Usage example:
  ```python
- class Animal(BaseModel):
- name: str
- color: str
-
- lm_invoker = BedrockLMInvoker(..., response_schema=Animal)
+ lm_invoker = BedrockLMInvoker(..., tools=[tool_1, tool_2])
  ```
+
  Output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="text", output="I\'m using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+ ]
+ )
  ```

  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `BedrockLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
@@ -136,7 +135,7 @@ class BedrockLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
+ outputs=[...],
  token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
  finish_details={"stop_reason": "end_turn"},
@@ -151,8 +150,6 @@ class BedrockLMInvoker(BaseLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```

@@ -160,17 +157,6 @@ class BedrockLMInvoker(BaseLMInvoker):
  ```python
  lm_invoker = BedrockLMInvoker(..., retry_config=retry_config)
  ```
-
- Output types:
- The output of the `BedrockLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. tool_calls (list[ToolCall])
- 2.3. structured_output (dict[str, Any] | BaseModel | None)
- 2.4. token_usage (TokenUsage | None)
- 2.5. duration (float | None)
- 2.6. finish_details (dict[str, Any] | None)
  '''
  session: Incomplete
  client_kwargs: Incomplete
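The hunks above replace the flat `response`/`tool_calls` attributes with a single typed `outputs` list exposed through `texts`, `text`, and `tool_calls` accessors. A minimal, hypothetical sketch of that pattern (names mirror the docstring; this is not the package's actual implementation):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class LMOutputItem:
    type: str   # e.g. "text", "structured", "tool_call"
    output: Any

@dataclass
class LMOutput:
    outputs: list[LMOutputItem] = field(default_factory=list)

    @property
    def texts(self) -> list[str]:
        # All text outputs, in order of appearance.
        return [item.output for item in self.outputs if item.type == "text"]

    @property
    def text(self) -> str:
        # First text output, or an empty string if there is none.
        texts = self.texts
        return texts[0] if texts else ""

    @property
    def tool_calls(self) -> list[Any]:
        # All tool-call outputs.
        return [item.output for item in self.outputs if item.type == "tool_call"]

result = LMOutput(outputs=[
    LMOutputItem(type="text", output="Hello, there!"),
    LMOutputItem(type="tool_call", output={"id": "123", "name": "tool_1"}),
])
print(result.text)        # Hello, there!
print(result.tool_calls)  # [{'id': '123', 'name': 'tool_1'}]
```

Filtering one list by item type is what lets a single invocation mix text, tool calls, and attachments without adding a new top-level attribute per output kind.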
@@ -44,9 +44,42 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```

+ Text output:
+ The `DatasaurLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
+
+ Output example:
+ ```python
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+ ```
+
+ Citations:
+ The `DatasaurLMInvoker` can be configured to output the citations used to generate the response.
+ This feature can be enabled by setting the `citations` parameter to `True`.
+
+ Citations outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `citations` property.
+
+ Usage example:
+ ```python
+ lm_invoker = DatasaurLMInvoker(..., citations=True)
+ ```
+
+ Output example:
+ ```python
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="citation", output=Chunk(id="123", content="...", metadata={...}, score=0.95)),
+ LMOutputItem(type="text", output="According to recent reports... ([Source](https://www.example.com))."),
+ ],
+ )
+ ```
+
  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `DatasaurLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
@@ -55,16 +88,13 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
+ outputs=[...],
  token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
- finish_details={"finish_reason": "stop"},
+ finish_details={"stop_reason": "end_turn"},
  )
  ```

- When streaming is enabled, token usage is not supported. Therefore, the `token_usage` attribute will be `None`
- regardless of the value of the `output_analytics` parameter.
-
  Retry and timeout:
  The `DatasaurLMInvoker` supports retry and timeout configuration.
  By default, the max retries is set to 0 and the timeout is set to 30.0 seconds.
@@ -73,8 +103,6 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```

@@ -82,34 +110,6 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
  ```python
  lm_invoker = DatasaurLMInvoker(..., retry_config=retry_config)
  ```
-
- Citations:
- The `DatasaurLMInvoker` can be configured to output the citations used to generate the response.
- They can be enabled by setting the `citations` parameter to `True`.
- When enabled, the citations will be stored as `Chunk` objects in the `citations` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = DatasaurLMInvoker(..., citations=True)
- ```
-
- Output example:
- ```python
- LMOutput(
- response="The winner of the match is team A ([Example title](https://www.example.com)).",
- citations=[Chunk(id="123", content="...", metadata={...}, score=0.95)],
- )
- ```
-
- Output types:
- The output of the `DatasaurLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. token_usage (TokenUsage | None)
- 2.3. duration (float | None)
- 2.4. finish_details (dict[str, Any] | None)
- 2.5. citations (list[Chunk])
  '''
  client_kwargs: Incomplete
  citations: Incomplete
@@ -82,10 +82,61 @@ class GoogleLMInvoker(BaseLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```

+ Text output:
+ The `GoogleLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
+
+ Output example:
+ ```python
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+ ```
+
+ Structured output:
+ The `GoogleLMInvoker` can be configured to generate structured outputs.
+ This feature can be enabled by providing a schema to the `response_schema` parameter.
+
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
+
+ The schema must either be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ JSON dictionary schema must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ The structured output will be a dictionary.
+
+ Usage example:
+ ```python
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = GoogleLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+ lm_invoker = GoogleLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
+ ```
+
+ Output example:
+ ```python
+ # Using Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
+ ```
+
+ Structured output is not compatible with tool calling.
+ When structured output is enabled, streaming is disabled.
+
  Image generation:
- The `GoogleLMInvoker` supports image generation. This can be done by using an image generation model,
- such as `gemini-2.5-flash-image`. Streaming is disabled for image generation models.
- The generated image will be stored in the `attachments` attribute in the output.
+ The `GoogleLMInvoker` can be configured to generate images.
+ This feature can be enabled by using an image generation model, such as `gemini-2.5-flash-image`.
+
+ Image outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `attachments` property.

  Usage example:
  ```python
@@ -97,16 +148,25 @@ class GoogleLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Let me call the tools...",
- attachments=[Attachment(filename="image.png", mime_type="image/png", data=b"...")],
+ outputs=[
+ LMOutputItem(type="text", output="Creating a picture..."),
+ LMOutputItem(
+ type="attachment",
+ output=Attachment(filename="image.png", mime_type="image/png", data=b"..."),
+ ),
+ ],
  )
  ```

+ Image generation is not compatible with tool calling and thinking.
+ When image generation is enabled, streaming is disabled.
+
  Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
+ The `GoogleLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool calls outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.

  Usage example:
  ```python
@@ -116,67 +176,60 @@ class GoogleLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
+ outputs=[
+ LMOutputItem(type="text", output="I\'m using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
  ]
  )
  ```

- Structured output:
- Structured output is a feature that allows the language model to output a structured response.
- This feature can be enabled by providing a schema to the `response_schema` parameter.
-
- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
-
- Structured output is not compatible with tool calling. The language model also doesn\'t need to stream
- anything when structured output is enabled. Thus, standard invocation will be performed regardless of
- whether the `event_emitter` parameter is provided or not.
+ Thinking:
+ The `GoogleLMInvoker` can be configured to perform step-by-step thinking process before answering.
+ This feature can be enabled by setting the `thinking` parameter to `True`.

- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+ Thinking outputs are stored in the `outputs` attribute of the `LMOutput` object
+ and can be accessed via the `thinkings` property.

- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = GoogleLMInvoker(..., response_schema=schema)
+ lm_invoker = GoogleLMInvoker(..., thinking=True, thinking_budget=1024)
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="thinking", output=Reasoning(type="thinking", reasoning="I\'m thinking...", ...)),
+ LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+ ]
+ )
  ```

- # Example 2: Using a Pydantic BaseModel class
- Usage example:
- ```python
- class Animal(BaseModel):
- name: str
- color: str
-
- lm_invoker = GoogleLMInvoker(..., response_schema=Animal)
- ```
- Output example:
+ Streaming output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ {"type": "thinking_start", "value": "", ...}
+ {"type": "thinking", "value": "I\'m ", ...}
+ {"type": "thinking", "value": "thinking...", ...}
+ {"type": "thinking_end", "value": "", ...}
+ {"type": "response", "value": "Golden retriever ", ...}
+ {"type": "response", "value": "is a good dog breed.", ...}
  ```
+ Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
+ To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+ LM invoker initialization. The legacy event format support will be removed in v0.6.
+
+ The amount of tokens allocated for the thinking process can be set via the `thinking_budget` parameter.
+ For more information, please refer to the following documentation:
+ https://ai.google.dev/gemini-api/docs/thinking
+
+ Thinking is only available for certain models, starting from Gemini 2.5 series.
+ Thinking is required for Gemini 2.5 Pro models.

  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `GoogleLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
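The removed "Example 1" hunks spell out the exact JSON schema that Pydantic's `model_json_schema` produces for the `Animal` model used throughout these docstrings. A stdlib-only sketch that reuses that schema and checks a structured output against its required keys and types (a simplification for illustration; real code would use Pydantic or a full JSON Schema validator):

```python
# JSON schema for `Animal`, as quoted verbatim in the removed docstring example.
animal_schema = {
    "title": "Animal",
    "description": "A description of an animal.",
    "properties": {
        "color": {"title": "Color", "type": "string"},
        "name": {"title": "Name", "type": "string"},
    },
    "required": ["name", "color"],
    "type": "object",
}

# Map JSON Schema primitive type names to Python types.
TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def matches_schema(output: dict, schema: dict) -> bool:
    # Every required key must be present with the declared primitive type.
    for key in schema.get("required", []):
        expected = TYPE_MAP.get(schema["properties"][key]["type"])
        if key not in output or (expected and not isinstance(output[key], expected)):
            return False
    return True

print(matches_schema({"name": "dog", "color": "white"}, animal_schema))  # True
print(matches_schema({"name": "dog"}, animal_schema))                    # False
```

This also illustrates why the docstrings recommend generating the dictionary with `model_json_schema` rather than writing it by hand: the two schema forms then describe the same structure, differing only in whether the parsed output is a model instance or a plain dict.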
@@ -185,15 +238,10 @@ class GoogleLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
- token_usage=TokenUsage(
- input_tokens=1500,
- output_tokens=200,
- input_token_details=InputTokenDetails(cached_tokens=1200, uncached_tokens=300),
- output_token_details=OutputTokenDetails(reasoning_tokens=180, response_tokens=20),
- ),
+ outputs=[...],
+ token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
- finish_details={"finish_reason": "STOP", "finish_message": None},
+ finish_details={"stop_reason": "end_turn"},
  )
  ```

@@ -205,8 +253,6 @@ class GoogleLMInvoker(BaseLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```

@@ -214,61 +260,6 @@ class GoogleLMInvoker(BaseLMInvoker):
  ```python
  lm_invoker = GoogleLMInvoker(..., retry_config=retry_config)
  ```
-
- Thinking:
- Thinking is a feature that allows the language model to have enhanced reasoning capabilities for complex tasks,
- while also providing transparency into its step-by-step thought process before it delivers its final answer.
- It can be enabled by setting the `thinking` parameter to `True`.
-
- Thinking is only available for certain models, starting from Gemini 2.5 series, and is required for
- Gemini 2.5 Pro models. Therefore, `thinking` defaults to `True` for Gemini 2.5 Pro models and `False`
- for other models. Setting `thinking` to `False` for Gemini 2.5 Pro models will raise a `ValueError`.
- When enabled, the reasoning is stored in the `reasoning` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = GoogleLMInvoker(..., thinking=True, thinking_budget=1024)
- ```
-
- Output example:
- ```python
- LMOutput(
- response="Golden retriever is a good dog breed.",
- reasoning=[Reasoning(reasoning="Let me think about it...")],
- )
- ```
-
- Streaming output example:
- ```python
- {"type": "thinking_start", "value": "", ...}
- {"type": "thinking", "value": "Let me think ", ...}
- {"type": "thinking", "value": "about it...", ...}
- {"type": "thinking_end", "value": "", ...}
- {"type": "response", "value": "Golden retriever ", ...}
- {"type": "response", "value": "is a good dog breed.", ...}
- ```
- Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
- To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
- LM invoker initialization. The legacy event format support will be removed in v0.6.
-
- When thinking is enabled, the amount of tokens allocated for the thinking process can be set via the
- `thinking_budget` parameter. The `thinking_budget`:
- 1. Defaults to -1, in which case the model will control the budget automatically.
- 2. Must be greater than the model\'s minimum thinking budget.
- For more details, please refer to https://ai.google.dev/gemini-api/docs/thinking
-
- Output types:
- The output of the `GoogleLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. attachments (list[Attachment])
- 2.3. tool_calls (list[ToolCall])
- 2.4. structured_output (dict[str, Any] | BaseModel | None)
- 2.5. token_usage (TokenUsage | None)
- 2.6. duration (float | None)
- 2.7. finish_details (dict[str, Any])
- 2.8. reasoning (list[Reasoning])
  '''
  client_params: Incomplete
  generate_image: Incomplete
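Both versions of the `GoogleLMInvoker` docstring show thinking streams as a sequence of `{"type": ..., "value": ...}` events. A minimal sketch of folding such a stream into separate reasoning and response strings (the event shape is taken from the docstring examples; the stream source here is a plain list, not the package's actual streaming API):

```python
# Events as shown in the docstring's streaming output example.
events = [
    {"type": "thinking_start", "value": ""},
    {"type": "thinking", "value": "I'm "},
    {"type": "thinking", "value": "thinking..."},
    {"type": "thinking_end", "value": ""},
    {"type": "response", "value": "Golden retriever "},
    {"type": "response", "value": "is a good dog breed."},
]

def collect(stream):
    # Concatenate thinking tokens and response tokens into separate strings;
    # start/end markers carry empty values and are skipped.
    thinking_parts, response_parts = [], []
    for event in stream:
        if event["type"] == "thinking":
            thinking_parts.append(event["value"])
        elif event["type"] == "response":
            response_parts.append(event["value"])
    return "".join(thinking_parts), "".join(response_parts)

thinking, response = collect(events)
print(thinking)  # I'm thinking...
print(response)  # Golden retriever is a good dog breed.
```

Separating the two streams is what lets a caller display the step-by-step reasoning differently from the final answer, mirroring the `thinkings` vs `texts` accessors on `LMOutput`.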