gllm-inference-binary 0.5.52-cp312-cp312-win_amd64.whl → 0.5.54-cp312-cp312-win_amd64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

@@ -72,80 +72,116 @@ class OpenAIChatCompletionsLMInvoker(BaseLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```

- Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = OpenAIChatCompletionsLMInvoker(..., tools=[tool_1, tool_2])
- ```
+ Text output:
+ The `OpenAIChatCompletionsLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.

  Output example:
  ```python
- LMOutput(
-     response="Let me call the tools...",
-     tool_calls=[
-         ToolCall(id="123", name="tool_1", args={"key": "value"}),
-         ToolCall(id="456", name="tool_2", args={"key": "value"}),
-     ]
- )
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
  ```
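+
+ A minimal access sketch (illustrative; assumes an invoker constructed as in the examples above):
+ ```python
+ result = await lm_invoker.invoke("Greet me!")
+ print(result.text)  # e.g. "Hello, there!"
+ ```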

  Structured output:
- Structured output is a feature that allows the language model to output a structured response.
+ The `OpenAIChatCompletionsLMInvoker` can be configured to generate structured outputs.
  This feature can be enabled by providing a schema to the `response_schema` parameter.

- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.

- The language model also doesn\'t need to stream anything when structured output is enabled. Thus, standard
- invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+ The schema must be one of the following:
+ 1. A Pydantic BaseModel class
+    The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+    The JSON schema must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+    Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+    The structured output will be a dictionary.

- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
-
- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
-     "title": "Animal",
-     "description": "A description of an animal.",
-     "properties": {
-         "color": {"title": "Color", "type": "string"},
-         "name": {"title": "Name", "type": "string"},
-     },
-     "required": ["name", "color"],
-     "type": "object",
- }
- lm_invoker = OpenAIChatCompletionsLMInvoker(..., response_schema=schema)
+ class Animal(BaseModel):
+     name: str
+     color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = OpenAIChatCompletionsLMInvoker(..., response_schema=Animal)  # Using a Pydantic BaseModel class
+ lm_invoker = OpenAIChatCompletionsLMInvoker(..., response_schema=json_schema)  # Using a JSON schema dictionary
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ # Using a Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using a JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
  ```
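+
+ A minimal access sketch (illustrative; assumes the `Animal` schema above):
+ ```python
+ result = await lm_invoker.invoke("Describe a dog.")
+ print(result.structured)  # e.g. Animal(name="dog", color="white")
+ ```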

- # Example 2: Using a Pydantic BaseModel class
+ When structured output is enabled, streaming is disabled.
+
+ Tool calling:
+ The `OpenAIChatCompletionsLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool call outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
+
  Usage example:
  ```python
- class Animal(BaseModel):
-     name: str
-     color: str
+ lm_invoker = OpenAIChatCompletionsLMInvoker(..., tools=[tool_1, tool_2])
+ ```

- lm_invoker = OpenAIChatCompletionsLMInvoker(..., response_schema=Animal)
+ Output example:
+ ```python
+ LMOutput(
+     outputs=[
+         LMOutputItem(type="text", output="I\'m using tools..."),
+         LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+         LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+     ]
+ )
  ```
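+
+ A minimal access sketch (illustrative; assumes `tool_1` and `tool_2` are valid `Tool` objects):
+ ```python
+ result = await lm_invoker.invoke("Use the tools.")
+ for tool_call in result.tool_calls:
+     print(tool_call.name, tool_call.args)  # e.g. tool_1 {"key": "value"}
+ ```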
+
+ Reasoning:
+ The `OpenAIChatCompletionsLMInvoker` performs step-by-step reasoning before generating a response when
+ reasoning models are used, such as GPT-5 models and o-series models.
+
+ The reasoning effort can be set via the `reasoning_effort` parameter, which guides the models on the amount
+ of reasoning tokens to generate. Available options include `minimal`, `low`, `medium`, and `high`.
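+
+ Usage example (a sketch; the model name shown is illustrative):
+ ```python
+ lm_invoker = OpenAIChatCompletionsLMInvoker(model_name="gpt-5", reasoning_effort="low")
+ ```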
+
+ Some models may also output the reasoning tokens. In this case, the reasoning tokens are stored in
+ the `outputs` attribute of the `LMOutput` object and can be accessed via the `thinkings` property.
+
  Output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ LMOutput(
+     outputs=[
+         LMOutputItem(type="thinking", output=Reasoning(reasoning="I\'m thinking...", ...)),
+         LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+     ]
+ )
+ ```
+
+ Streaming output example:
+ ```python
+ {"type": "thinking_start", "value": "", ...}
+ {"type": "thinking", "value": "I\'m ", ...}
+ {"type": "thinking", "value": "thinking...", ...}
+ {"type": "thinking_end", "value": "", ...}
+ {"type": "response", "value": "Golden retriever ", ...}
+ {"type": "response", "value": "is a good dog breed.", ...}
  ```
+ Note: By default, thinking tokens will be streamed with the legacy `EventType.DATA` event type.
+ To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+ LM invoker initialization. The legacy event format support will be removed in v0.6.
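+
+ Usage example (a sketch of opting in to the new event format):
+ ```python
+ lm_invoker = OpenAIChatCompletionsLMInvoker(..., simplify_events=True)
+ ```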
+
+ Setting reasoning-related parameters for non-reasoning models will raise an error.

  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `OpenAIChatCompletionsLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
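+ Usage example (a sketch, mirroring the examples above):
+ ```python
+ lm_invoker = OpenAIChatCompletionsLMInvoker(..., output_analytics=True)
+ ```
+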
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
@@ -154,15 +190,14 @@ class OpenAIChatCompletionsLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
-     response="Golden retriever is a good dog breed.",
+     outputs=[...],
      token_usage=TokenUsage(input_tokens=100, output_tokens=50),
      duration=0.729,
-     finish_details={"finish_reason": "stop"},
+     finish_details={"stop_reason": "end_turn"},
  )
  ```

- When streaming is enabled, token usage is not supported. Therefore, the `token_usage` attribute will be `None`
- regardless of the value of the `output_analytics` parameter.
+ When streaming is enabled, token usage is not supported.

  Retry and timeout:
  The `OpenAIChatCompletionsLMInvoker` supports retry and timeout configuration.
@@ -172,8 +207,6 @@ class OpenAIChatCompletionsLMInvoker(BaseLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```

@@ -181,58 +214,6 @@ class OpenAIChatCompletionsLMInvoker(BaseLMInvoker):
  ```python
  lm_invoker = OpenAIChatCompletionsLMInvoker(..., retry_config=retry_config)
  ```
-
- Reasoning:
- Some language models support advanced reasoning capabilities. When using such reasoning-capable models,
- you can configure how much reasoning the model should perform before generating a final response by setting
- reasoning-related parameters.
-
- The reasoning effort of reasoning models can be set via the `reasoning_effort` parameter. This parameter
- will guide the models on how many reasoning tokens it should generate before creating a response to the prompt.
- The reasoning effort is only supported by some language models.
- Available options include:
- 1. "low": Favors speed and economical token usage.
- 2. "medium": Favors a balance between speed and reasoning accuracy.
- 3. "high": Favors more complete reasoning at the cost of more tokens generated and slower responses.
- This may differ between models. When not set, the reasoning effort will be equivalent to None by default.
-
- When using reasoning models, some providers might output the reasoning summary. These will be stored in the
- `reasoning` attribute in the output.
-
- Output example:
- ```python
- LMOutput(
-     response="Golden retriever is a good dog breed.",
-     reasoning=[Reasoning(id="", reasoning="Let me think about it...")],
- )
- ```
-
- Streaming output example:
- ```python
- {"type": "thinking_start", "value": "", ...}
- {"type": "thinking", "value": "Let me think ", ...}
- {"type": "thinking", "value": "about it...", ...}
- {"type": "thinking_end", "value": "", ...}
- {"type": "response", "value": "Golden retriever ", ...}
- {"type": "response", "value": "is a good dog breed.", ...}
- ```
- Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
- To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
- LM invoker initialization. The legacy event format support will be removed in v0.6.
-
- Setting reasoning-related parameters for non-reasoning models will raise an error.
-
- Output types:
- The output of the `OpenAIChatCompletionsLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
-    2.1. response (str)
-    2.2. tool_calls (list[ToolCall])
-    2.3. structured_output (dict[str, Any] | BaseModel | None)
-    2.4. token_usage (TokenUsage | None)
-    2.5. duration (float | None)
-    2.6. finish_details (dict[str, Any])
-    2.7. reasoning (list[Reasoning])
  '''
  client_kwargs: Incomplete
  def __init__(self, model_name: str, api_key: str | None = None, base_url: str = ..., model_kwargs: dict[str, Any] | None = None, default_hyperparameters: dict[str, Any] | None = None, tools: list[Tool | LangChainTool] | None = None, response_schema: ResponseSchema | None = None, output_analytics: bool = False, retry_config: RetryConfig | None = None, reasoning_effort: ReasoningEffort | None = None, simplify_events: bool = False) -> None: