gllm-inference-binary 0.5.53__cp313-cp313-win_amd64.whl → 0.5.55__cp313-cp313-win_amd64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of gllm-inference-binary might be problematic.

@@ -5,7 +5,6 @@ DOCUMENT_MIME_TYPES: Incomplete
  EMBEDDING_ENDPOINT: str
  GOOGLE_SCOPES: Incomplete
  GRPC_ENABLE_RETRIES_KEY: str
- HEX_REPR_LENGTH: int
  INVOKER_DEFAULT_TIMEOUT: float
  INVOKER_PROPAGATED_MAX_RETRIES: int
  JINA_DEFAULT_URL: str
@@ -49,84 +49,123 @@ class AnthropicLMInvoker(BaseLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```
 
- Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
+ Text output:
+ The `AnthropicLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
+
+ Output example:
+ ```python
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+ ```
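The `texts`/`text` accessor pattern described above can be sketched with plain stand-in classes. This is a hypothetical mock, not the library's implementation: the real `LMOutput` and `LMOutputItem` in gllm-inference are Pydantic models with more fields, and the exact property semantics are assumed from the docstring.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the LMOutput/LMOutputItem described above.
@dataclass
class LMOutputItem:
    type: str
    output: object

@dataclass
class LMOutput:
    outputs: list = field(default_factory=list)

    @property
    def texts(self):
        # All text outputs, in order.
        return [item.output for item in self.outputs if item.type == "text"]

    @property
    def text(self):
        # First text output, or an empty string if there is none.
        return self.texts[0] if self.texts else ""

result = LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
print(result.text)   # Hello, there!
print(result.texts)  # ['Hello, there!']
```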
+
+ Structured output:
+ The `AnthropicLMInvoker` can be configured to generate structured outputs.
+ This feature can be enabled by providing a schema to the `response_schema` parameter.
+
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
+
+ The schema must be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ The JSON schema dictionary must be compatible with Pydantic's JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic's `model_json_schema` method.
+ The structured output will be a dictionary.
 
  Usage example:
  ```python
- lm_invoker = AnthropicLMInvoker(..., tools=[tool_1, tool_2])
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = AnthropicLMInvoker(..., response_schema=Animal) # Using a Pydantic BaseModel class
+ lm_invoker = AnthropicLMInvoker(..., response_schema=json_schema) # Using a JSON schema dictionary
  ```
 
  Output example:
  ```python
- LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
- ]
- )
- ```
+ # Using a Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
 
- Structured output:
- Structured output is a feature that allows the language model to output a structured response.
- This feature can be enabled by providing a schema to the `response_schema` parameter.
+ # Using a JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
+ ```
 
- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic's JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic's `model_json_schema` method.
+ Structured output is not compatible with tool calling or thinking.
+ When structured output is enabled, streaming is disabled.
 
- Structured output is achieved by providing the schema name in the `tool_choice` parameter. This forces
- the model to call the provided schema as a tool. Thus, structured output is not compatible with:
- 1. Tool calling, since the tool calling is reserved to force the model to call the provided schema as a tool.
- 2. Thinking, since thinking is not allowed when a tool use is forced through the `tool_choice` parameter.
- The language model also doesn't need to stream anything when structured output is enabled. Thus, standard
- invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+ Tool calling:
+ The `AnthropicLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
 
- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+ Tool call outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
 
- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = AnthropicLMInvoker(..., response_schema=schema)
+ lm_invoker = AnthropicLMInvoker(..., tools=[tool_1, tool_2])
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="text", output="I'm using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+ ]
+ )
  ```
 
- # Example 2: Using a Pydantic BaseModel class
+ Thinking:
+ The `AnthropicLMInvoker` can be configured to perform a step-by-step thinking process before answering.
+ This feature can be enabled by setting the `thinking` parameter to `True`.
+
+ Thinking outputs are stored in the `outputs` attribute of the `LMOutput` object
+ and can be accessed via the `thinkings` property.
+
  Usage example:
  ```python
- class Animal(BaseModel):
- name: str
- color: str
-
- lm_invoker = AnthropicLMInvoker(..., response_schema=Animal)
+ lm_invoker = AnthropicLMInvoker(..., thinking=True, thinking_budget=1024)
  ```
+
  Output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="thinking", output=Reasoning(type="thinking", reasoning="I'm thinking...", ...)),
+ LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+ ]
+ )
+ ```
+
+ Streaming output example:
+ ```python
+ {"type": "thinking_start", "value": "", ...}
+ {"type": "thinking", "value": "I'm ", ...}
+ {"type": "thinking", "value": "thinking...", ...}
+ {"type": "thinking_end", "value": "", ...}
+ {"type": "response", "value": "Golden retriever ", ...}
+ {"type": "response", "value": "is a good dog breed.", ...}
  ```
+ Note: By default, the thinking tokens will be streamed with the legacy `EventType.DATA` event type.
+ To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+ LM invoker initialization. The legacy event format support will be removed in v0.6.
+
+ The amount of tokens allocated for the thinking process can be set via the `thinking_budget` parameter.
+ For more information, please refer to the following documentation:
+ https://docs.claude.com/en/docs/build-with-claude/extended-thinking#working-with-thinking-budgets.
+
+ Thinking is only available for certain models, starting from Claude Sonnet 3.7.
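The streamed events shown above can be consumed by routing on their `type` field. A minimal sketch, assuming each event is a dict with `"type"` and `"value"` keys as in the example; the actual event objects emitted by the invoker may carry additional fields.

```python
# Events shaped like the streaming output example in the docstring above.
events = [
    {"type": "thinking_start", "value": ""},
    {"type": "thinking", "value": "I'm "},
    {"type": "thinking", "value": "thinking..."},
    {"type": "thinking_end", "value": ""},
    {"type": "response", "value": "Golden retriever "},
    {"type": "response", "value": "is a good dog breed."},
]

thinking_parts: list[str] = []
response_parts: list[str] = []
for event in events:
    # Accumulate thinking and response chunks separately.
    if event["type"] == "thinking":
        thinking_parts.append(event["value"])
    elif event["type"] == "response":
        response_parts.append(event["value"])

print("".join(thinking_parts))   # I'm thinking...
print("".join(response_parts))   # Golden retriever is a good dog breed.
```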
 
  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `AnthropicLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
@@ -135,7 +174,7 @@ class AnthropicLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
+ outputs=[...],
  token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
  finish_details={"stop_reason": "end_turn"},
@@ -150,8 +189,6 @@ class AnthropicLMInvoker(BaseLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```
 
@@ -160,47 +197,6 @@ class AnthropicLMInvoker(BaseLMInvoker):
  lm_invoker = AnthropicLMInvoker(..., retry_config=retry_config)
  ```
 
- Thinking:
- Thinking is a feature that allows the language model to have enhanced reasoning capabilities for complex tasks,
- while also providing transparency into its step-by-step thought process before it delivers its final answer.
- This feature is only available for certain models, starting from Claude 3.7 Sonnet.
- It can be enabled by setting the `thinking` parameter to `True`.
-
- When thinking is enabled, the amount of tokens allocated for the thinking process can be set via the
- `thinking_budget` parameter. The `thinking_budget`:
- 1. Must be greater than or equal to 1024.
- 2. Must be less than the `max_tokens` hyperparameter, as the `thinking_budget` is allocated from the
- `max_tokens`. For example, if `max_tokens=2048` and `thinking_budget=1024`, the language model will
- allocate at most 1024 tokens for thinking and the remaining 1024 tokens for generating the response.
-
- When enabled, the reasoning is stored in the `reasoning` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = AnthropicLMInvoker(..., thinking=True, thinking_budget=1024)
- ```
-
- Output example:
- ```python
- LMOutput(
- response="Golden retriever is a good dog breed.",
- reasoning=[Reasoning(type="thinking", reasoning="Let me think about it...", signature="x")],
- )
- ```
-
- Streaming output example:
- ```python
- {"type": "thinking_start", "value": "", ...}
- {"type": "thinking", "value": "Let me think ", ...}
- {"type": "thinking", "value": "about it...", ...}
- {"type": "thinking_end", "value": "", ...}
- {"type": "response", "value": "Golden retriever ", ...}
- {"type": "response", "value": "is a good dog breed.", ...}
- ```
- Note: By default, the thinking tokens will be streamed with the legacy `EventType.DATA` event type.
- To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
- LM invoker initialization. The legacy event format support will be removed in v0.6.
-
  Batch processing:
  The `AnthropicLMInvoker` supports batch processing, which allows the language model to process multiple
  requests in a single call. Batch processing is supported through the `batch` attribute.
@@ -214,7 +210,7 @@ class AnthropicLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  {
- "request_1": LMOutput(response="The sky is blue."),
+ "request_1": LMOutput(outputs=[LMOutputItem(type="text", output="The sky is blue.")]),
  "request_2": LMOutput(finish_details={"type": "error", "error": {"message": "...", ...}, ...}),
  }
  ```
@@ -240,7 +236,7 @@ class AnthropicLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  {
- "request_1": LMOutput(response="The sky is blue."),
+ "request_1": LMOutput(outputs=[LMOutputItem(type="text", output="The sky is blue.")]),
  "request_2": LMOutput(finish_details={"type": "error", "error": {"message": "...", ...}, ...}),
  }
  ```
@@ -263,18 +259,6 @@ class AnthropicLMInvoker(BaseLMInvoker):
  ```python
  await lm_invoker.batch.cancel(batch_id)
  ```
-
- Output types:
- The output of the `AnthropicLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. tool_calls (list[ToolCall])
- 2.3. structured_output (dict[str, Any] | BaseModel | None)
- 2.4. token_usage (TokenUsage | None)
- 2.5. duration (float | None)
- 2.6. finish_details (dict[str, Any])
- 2.7. reasoning (list[Reasoning])
  '''
  client: Incomplete
  thinking: Incomplete
@@ -51,11 +51,60 @@ class AzureOpenAILMInvoker(OpenAILMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```
 
+ Text output:
+ The `AzureOpenAILMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
+
+ Output example:
+ ```python
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+ ```
+
+ Structured output:
+ The `AzureOpenAILMInvoker` can be configured to generate structured outputs.
+ This feature can be enabled by providing a schema to the `response_schema` parameter.
+
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
+
+ The schema must be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ The JSON schema dictionary must be compatible with Pydantic's JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic's `model_json_schema` method.
+ The structured output will be a dictionary.
+
+ Usage example:
+ ```python
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = AzureOpenAILMInvoker(..., response_schema=Animal) # Using a Pydantic BaseModel class
+ lm_invoker = AzureOpenAILMInvoker(..., response_schema=json_schema) # Using a JSON schema dictionary
+ ```
+
+ Output example:
+ ```python
+ # Using a Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using a JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
+ ```
+
+ When structured output is enabled, streaming is disabled.
+
  Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
+ The `AzureOpenAILMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool call outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
 
  Usage example:
  ```python
@@ -65,66 +114,62 @@ class AzureOpenAILMInvoker(OpenAILMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
+ outputs=[
+ LMOutputItem(type="text", output="I'm using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
  ]
  )
  ```
 
- Structured output:
- Structured output is a feature that allows the language model to output a structured response.
- This feature can be enabled by providing a schema to the `response_schema` parameter.
+ Reasoning:
+ The `AzureOpenAILMInvoker` performs step-by-step reasoning before generating a response when reasoning
+ models are used, such as GPT-5 models and o-series models.
 
- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic's JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic's `model_json_schema` method.
+ The reasoning effort can be set via the `reasoning_effort` parameter, which guides the models on the amount
+ of reasoning tokens to generate. Available options include `minimal`, `low`, `medium`, and `high`.
 
- The language model also doesn't need to stream anything when structured output is enabled. Thus, standard
- invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+ While the raw reasoning tokens are not available, a summary of the reasoning tokens can still be generated.
+ This can be done by passing the desired summary level via the `reasoning_summary` parameter.
+ Available options include `auto` and `detailed`.
 
- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+ Reasoning summaries are stored in the `outputs` attribute of the `LMOutput` object
+ and can be accessed via the `thinkings` property.
 
- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = AzureOpenAILMInvoker(..., response_schema=schema)
+ lm_invoker = AzureOpenAILMInvoker(..., reasoning_effort="high", reasoning_summary="detailed")
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="thinking", output=Reasoning(type="thinking", reasoning="I'm thinking...", ...)),
+ LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+ ]
+ )
  ```
 
- # Example 2: Using a Pydantic BaseModel class
- Usage example:
- ```python
- class Animal(BaseModel):
- name: str
- color: str
-
- lm_invoker = AzureOpenAILMInvoker(..., response_schema=Animal)
- ```
- Output example:
+ Streaming output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ {"type": "thinking_start", "value": "", ...}
+ {"type": "thinking", "value": "I'm ", ...}
+ {"type": "thinking", "value": "thinking...", ...}
+ {"type": "thinking_end", "value": "", ...}
+ {"type": "response", "value": "Golden retriever ", ...}
+ {"type": "response", "value": "is a good dog breed.", ...}
  ```
+ Note: By default, the thinking tokens will be streamed with the legacy `EventType.DATA` event type.
+ To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+ LM invoker initialization. The legacy event format support will be removed in v0.6.
+
+ Reasoning summary is not compatible with tool calling.
 
  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `AzureOpenAILMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
@@ -133,15 +178,10 @@ class AzureOpenAILMInvoker(OpenAILMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
- token_usage=TokenUsage(
- input_tokens=1500,
- output_tokens=200,
- input_token_details=InputTokenDetails(cached_tokens=1200, uncached_tokens=300),
- output_token_details=OutputTokenDetails(reasoning_tokens=180, response_tokens=20),
- ),
+ outputs=[...],
+ token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
- finish_details={"status": "completed", "incomplete_details": {"reason": None}},
+ finish_details={"stop_reason": "end_turn"},
  )
  ```
 
@@ -153,8 +193,6 @@ class AzureOpenAILMInvoker(OpenAILMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```
 
@@ -162,61 +200,6 @@ class AzureOpenAILMInvoker(OpenAILMInvoker):
  ```python
  lm_invoker = AzureOpenAILMInvoker(..., retry_config=retry_config)
  ```
-
- Reasoning:
- Azure OpenAI's GPT-5 models and o-series models are classified as reasoning models. Reasoning models think
- before they answer, producing a long internal chain of thought before responding to the user. Reasoning models
- excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows.
-
- The reasoning effort of reasoning models can be set via the `reasoning_effort` parameter. This parameter
- will guide the models on how many reasoning tokens it should generate before creating a response.
- Available options include:
- 1. "minimal": Favors the least amount of reasoning, only supported for GPT-5 models onwards.
- 2. "low": Favors speed and economical token usage.
- 3. "medium": Favors a balance between speed and reasoning accuracy.
- 4. "high": Favors more complete reasoning at the cost of more tokens generated and slower responses.
-
- Azure OpenAI doesn't expose the raw reasoning tokens. However, the summary of the reasoning tokens can still be
- generated. The summary level can be set via the `reasoning_summary` parameter. Available options include:
- 1. "auto": The model decides the summary level automatically.
- 2. "detailed": The model will generate a detailed summary of the reasoning tokens.
- Reasoning summary is not compatible with tool calling.
- When enabled, the reasoning summary will be stored in the `reasoning` attribute in the output.
-
- Output example:
- ```python
- LMOutput(
- response="Golden retriever is a good dog breed.",
- reasoning=[Reasoning(id="x", reasoning="Let me think about it...")],
- )
- ```
-
- Streaming output example:
- ```python
- {"type": "thinking_start", "value": "", ...}
- {"type": "thinking", "value": "Let me think ", ...}
- {"type": "thinking", "value": "about it...", ...}
- {"type": "thinking_end", "value": "", ...}
- {"type": "response", "value": "Golden retriever ", ...}
- {"type": "response", "value": "is a good dog breed.", ...}
- ```
- Note: By default, the thinking tokens will be streamed with the legacy `EventType.DATA` event type.
- To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
- LM invoker initialization. The legacy event format support will be removed in v0.6.
-
- Setting reasoning-related parameters for non-reasoning models will raise an error.
-
- Output types:
- The output of the `AzureOpenAILMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. tool_calls (list[ToolCall])
- 2.3. structured_output (dict[str, Any] | BaseModel | None)
- 2.4. token_usage (TokenUsage | None)
- 2.5. duration (float | None)
- 2.6. finish_details (dict[str, Any] | None)
- 2.7. reasoning (list[Reasoning])
  '''
  client_kwargs: Incomplete
  def __init__(self, azure_endpoint: str, azure_deployment: str, api_key: str | None = None, api_version: str | None = None, model_kwargs: dict[str, Any] | None = None, default_hyperparameters: dict[str, Any] | None = None, tools: list[Tool | LangChainTool] | None = None, response_schema: ResponseSchema | None = None, output_analytics: bool = False, retry_config: RetryConfig | None = None, reasoning_effort: ReasoningEffort | None = None, reasoning_summary: ReasoningSummary | None = None, simplify_events: bool = False) -> None: