gllm-inference-binary 0.5.53-cp311-cp311-win_amd64.whl → 0.5.55-cp311-cp311-win_amd64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of gllm-inference-binary might be problematic.

@@ -78,81 +78,82 @@ class LangChainLMInvoker(BaseLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```
 
- Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = LangChainLMInvoker(..., tools=[tool_1, tool_2])
- ```
+ Text output:
+ The `LangChainLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
 
  Output example:
  ```python
- LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
- ]
- )
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
  ```
 
  Structured output:
- Structured output is a feature that allows the language model to output a structured response.
+ The `LangChainLMInvoker` can be configured to generate structured outputs.
  This feature can be enabled by providing a schema to the `response_schema` parameter.
 
- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
-
- Structured output is not compatible with tool calling. The language model also doesn\'t need to stream
- anything when structured output is enabled. Thus, standard invocation will be performed regardless of
- whether the `event_emitter` parameter is provided or not.
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
 
- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+ The schema must be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ The JSON schema dictionary must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ The structured output will be a dictionary.
 
- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = LangChainLMInvoker(..., response_schema=schema)
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = LangChainLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+ lm_invoker = LangChainLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ # Using Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
  ```
 
- # Example 2: Using a Pydantic BaseModel class
+ Structured output is not compatible with tool calling.
+ When structured output is enabled, streaming is disabled.
+
+ Tool calling:
+ The `LangChainLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool call outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
+
  Usage example:
  ```python
- class Animal(BaseModel):
- name: str
- color: str
-
- lm_invoker = LangChainLMInvoker(..., response_schema=Animal)
+ lm_invoker = LangChainLMInvoker(..., tools=[tool_1, tool_2])
  ```
+
  Output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="text", output="I\'m using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+ ]
+ )
  ```
 
  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `LangChainLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
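
Read together, the new docstrings above imply a single access pattern over `outputs`. A minimal sketch, not from the package: the import path, constructor arguments, and plain-string prompt are assumptions; only the accessor names (`text`, `texts`, `structured`, `tool_calls`) come from the docstrings.

```python
# Sketch under assumptions: import path and constructor args are guesses,
# the accessor names are taken from the 0.5.55 docstrings.
from gllm_inference.lm_invoker import LangChainLMInvoker  # assumed import path

lm_invoker = LangChainLMInvoker(model_name="gpt-4o-mini")  # assumed arguments
result = await lm_invoker.invoke("Suggest a dog breed.")   # plain prompt assumed

print(result.text)        # first text output
print(result.texts)       # all text outputs
print(result.structured)  # first structured output, when response_schema is set
for tool_call in result.tool_calls:  # ToolCall items, when tools are provided
    print(tool_call.name, tool_call.args)
```
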
@@ -161,10 +162,10 @@ class LangChainLMInvoker(BaseLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
+ outputs=[...],
  token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
- finish_details={"finish_reason": "stop"},
+ finish_details={"stop_reason": "end_turn"},
  )
  ```
 
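
For the analytics fields shown in this hunk, a usage sketch: the invoker construction is assumed, while the `token_usage`, `duration`, and `finish_details` names come straight from the docstring.

```python
# Sketch only: construction args are assumed; field names are documented above.
lm_invoker = LangChainLMInvoker(model_name="gpt-4o-mini", output_analytics=True)

result = await lm_invoker.invoke("Name a good dog breed.")
if result.token_usage is not None:
    print(result.token_usage.input_tokens + result.token_usage.output_tokens)
print(result.duration)        # e.g. 0.729 (seconds)
print(result.finish_details)  # e.g. {"stop_reason": "end_turn"} per the new docstring
```
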
@@ -176,8 +177,6 @@ class LangChainLMInvoker(BaseLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```
 
@@ -185,17 +184,6 @@ class LangChainLMInvoker(BaseLMInvoker):
  ```python
  lm_invoker = LangChainLMInvoker(..., retry_config=retry_config)
  ```
-
- Output types:
- The output of the `LangChainLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. tool_calls (list[ToolCall])
- 2.3. structured_output (dict[str, Any] | BaseModel | None)
- 2.4. token_usage (TokenUsage | None)
- 2.5. duration (float | None)
- 2.6. finish_details (dict[str, Any])
  '''
  model: Incomplete
  def __init__(self, model: BaseChatModel | None = None, model_class_path: str | None = None, model_name: str | None = None, model_kwargs: dict[str, Any] | None = None, default_hyperparameters: dict[str, Any] | None = None, tools: list[Tool | LangChainTool] | None = None, response_schema: ResponseSchema | None = None, output_analytics: bool = False, retry_config: RetryConfig | None = None) -> None:
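
The removed "Output types" block above documented the pre-0.5.55 output shape. Read against the new docstring, the mapping appears to be the following; a hedged summary inferred from this diff alone, not an official migration guide.

```python
# Assumed mapping from the removed 0.5.53 attributes to the 0.5.55 accessors,
# inferred only from the docstring changes in this diff:
#
#   result.response           ->  result.text        (first "text" item in `outputs`)
#   result.structured_output  ->  result.structured  (first "structured" item in `outputs`)
#   result.tool_calls         ->  result.tool_calls  (now a property over `outputs`)
#
# token_usage, duration, and finish_details keep their names, though the
# finish_details example changed from {"finish_reason": "stop"} to {"stop_reason": "end_turn"}.
```
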
@@ -57,80 +57,116 @@ class LiteLLMLMInvoker(OpenAIChatCompletionsLMInvoker):
  result = await lm_invoker.invoke([text, image])
  ```
 
- Tool calling:
- Tool calling is a feature that allows the language model to call tools to perform tasks.
- Tools can be passed to the via the `tools` parameter as a list of `Tool` objects.
- When tools are provided and the model decides to call a tool, the tool calls are stored in the
- `tool_calls` attribute in the output.
-
- Usage example:
- ```python
- lm_invoker = LiteLLMLMInvoker(..., tools=[tool_1, tool_2])
- ```
+ Text output:
+ The `LiteLLMLMInvoker` generates text outputs by default.
+ Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `texts` (all text outputs) or `text` (first text output) properties.
 
  Output example:
  ```python
- LMOutput(
- response="Let me call the tools...",
- tool_calls=[
- ToolCall(id="123", name="tool_1", args={"key": "value"}),
- ToolCall(id="456", name="tool_2", args={"key": "value"}),
- ]
- )
+ LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
  ```
 
  Structured output:
- Structured output is a feature that allows the language model to output a structured response.
+ The `LiteLLMLMInvoker` can be configured to generate structured outputs.
  This feature can be enabled by providing a schema to the `response_schema` parameter.
 
- The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
- If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
- For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+ via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
 
- The language model also doesn\'t need to stream anything when structured output is enabled. Thus, standard
- invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+ The schema must be one of the following:
+ 1. A Pydantic BaseModel class
+ The structured output will be a Pydantic model.
+ 2. A JSON schema dictionary
+ The JSON schema dictionary must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+ Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+ The structured output will be a dictionary.
 
- When enabled, the structured output is stored in the `structured_output` attribute in the output.
- 1. If the schema is a JSON schema dictionary, the structured output is a dictionary.
- 2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
-
- # Example 1: Using a JSON schema dictionary
  Usage example:
  ```python
- schema = {
- "title": "Animal",
- "description": "A description of an animal.",
- "properties": {
- "color": {"title": "Color", "type": "string"},
- "name": {"title": "Name", "type": "string"},
- },
- "required": ["name", "color"],
- "type": "object",
- }
- lm_invoker = LiteLLMLMInvoker(..., response_schema=schema)
+ class Animal(BaseModel):
+ name: str
+ color: str
+
+ json_schema = Animal.model_json_schema()
+
+ lm_invoker = LiteLLMLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+ lm_invoker = LiteLLMLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
  ```
+
  Output example:
  ```python
- LMOutput(structured_output={"name": "Golden retriever", "color": "Golden"})
+ # Using Pydantic BaseModel class outputs a Pydantic model
+ LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+ # Using JSON schema dictionary outputs a dictionary
+ LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
  ```
 
- # Example 2: Using a Pydantic BaseModel class
+ When structured output is enabled, streaming is disabled.
+
+ Tool calling:
+ The `LiteLLMLMInvoker` can be configured to call tools to perform certain tasks.
+ This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+ Tool call outputs are stored in the `outputs` attribute of the `LMOutput` object and
+ can be accessed via the `tool_calls` property.
+
  Usage example:
  ```python
- class Animal(BaseModel):
- name: str
- color: str
+ lm_invoker = LiteLLMLMInvoker(..., tools=[tool_1, tool_2])
+ ```
 
- lm_invoker = LiteLLMLMInvoker(..., response_schema=Animal)
+ Output example:
+ ```python
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="text", output="I\'m using tools..."),
+ LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+ LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+ ]
+ )
  ```
+
+ Reasoning:
+ The `LiteLLMLMInvoker` performs step-by-step reasoning before generating a response when reasoning
+ models are used, such as GPT-5 models and o-series models.
+
+ The reasoning effort can be set via the `reasoning_effort` parameter, which guides the models on the amount
+ of reasoning tokens to generate. Available options include `minimal`, `low`, `medium`, and `high`.
+
+ Some models may also output the reasoning tokens. In this case, the reasoning tokens are stored in
+ the `outputs` attribute of the `LMOutput` object and can be accessed via the `thinkings` property.
+
  Output example:
  ```python
- LMOutput(structured_output=Animal(name="Golden retriever", color="Golden"))
+ LMOutput(
+ outputs=[
+ LMOutputItem(type="thinking", output=Reasoning(reasoning="I\'m thinking...", ...)),
+ LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+ ]
+ )
  ```
 
+ Streaming output example:
+ ```python
+ {"type": "thinking_start", "value": "", ...}
+ {"type": "thinking", "value": "I\'m ", ...}
+ {"type": "thinking", "value": "thinking...", ...}
+ {"type": "thinking_end", "value": "", ...}
+ {"type": "response", "value": "Golden retriever ", ...}
+ {"type": "response", "value": "is a good dog breed.", ...}
+ ```
+ Note: By default, thinking tokens will be streamed with the legacy `EventType.DATA` event type.
+ To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+ LM invoker initialization. The legacy event format support will be removed in v0.6.
+
+ Setting reasoning-related parameters for non-reasoning models will raise an error.
+
  Analytics tracking:
- Analytics tracking is a feature that allows the module to output additional information about the invocation.
+ The `LiteLLMLMInvoker` can be configured to output additional information about the invocation.
  This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
  When enabled, the following attributes will be stored in the output:
  1. `token_usage`: The token usage.
  2. `duration`: The duration in seconds.
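
A short sketch of the reasoning flow described in this hunk: the `model_id` value is an assumption in LiteLLM's `provider/model` style, while `reasoning_effort`, the `thinkings` accessor, and the `Reasoning.reasoning` field come from the docstring.

```python
# Sketch only: the model id is an assumed LiteLLM-style identifier.
lm_invoker = LiteLLMLMInvoker(model_id="openai/o3-mini", reasoning_effort="low")

result = await lm_invoker.invoke("Is a golden retriever a good dog breed?")
for thinking in result.thinkings:  # Reasoning items, when the provider returns them
    print(thinking.reasoning)
print(result.text)
```
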
@@ -139,15 +175,14 @@ class LiteLLMLMInvoker(OpenAIChatCompletionsLMInvoker):
  Output example:
  ```python
  LMOutput(
- response="Golden retriever is a good dog breed.",
+ outputs=[...],
  token_usage=TokenUsage(input_tokens=100, output_tokens=50),
  duration=0.729,
- finish_details={"finish_reason": "stop"},
+ finish_details={"stop_reason": "end_turn"},
  )
  ```
 
- When streaming is enabled, token usage is not supported. Therefore, the `token_usage` attribute will be `None`
- regardless of the value of the `output_analytics` parameter.
+ When streaming is enabled, token usage is not supported.
 
  Retry and timeout:
  The `LiteLLMLMInvoker` supports retry and timeout configuration.
@@ -157,8 +192,6 @@ class LiteLLMLMInvoker(OpenAIChatCompletionsLMInvoker):
  Retry config examples:
  ```python
  retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
- retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
- retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
  retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
  ```
 
@@ -166,59 +199,6 @@ class LiteLLMLMInvoker(OpenAIChatCompletionsLMInvoker):
  ```python
  lm_invoker = LiteLLMLMInvoker(..., retry_config=retry_config)
  ```
-
- Reasoning:
- Some language models support advanced reasoning capabilities. When using such reasoning-capable models,
- you can configure how much reasoning the model should perform before generating a final response by setting
- reasoning-related parameters.
-
- The reasoning effort of reasoning models can be set via the `reasoning_effort` parameter. This parameter
- will guide the models on how many reasoning tokens it should generate before creating a response to the prompt.
- The reasoning effort is only supported by some language models.
- Available options include:
- 1. "low": Favors speed and economical token usage.
- 2. "medium": Favors a balance between speed and reasoning accuracy.
- 3. "high": Favors more complete reasoning at the cost of more tokens generated and slower responses.
- This may differ between models. When not set, the reasoning effort will be equivalent to None by default.
-
- When using reasoning models, some providers might output the reasoning summary. These will be stored in the
- `reasoning` attribute in the output.
-
- Output example:
- ```python
- LMOutput(
- response="Golden retriever is a good dog breed.",
- reasoning=[Reasoning(id="", reasoning="Let me think about it...")],
- )
- ```
-
- Streaming output example:
- ```python
- {"type": "thinking_start", "value": ""}\', ...}
- {"type": "thinking", "value": "Let me think "}\', ...}
- {"type": "thinking", "value": "about it..."}\', ...}
- {"type": "thinking_end", "value": ""}\', ...}
- {"type": "response", "value": "Golden retriever ", ...}
- {"type": "response", "value": "is a good dog breed.", ...}
- ```
- Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
- To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
- LM invoker initialization. The legacy event format support will be removed in v0.6.
-
- Setting reasoning-related parameters for non-reasoning models will raise an error.
-
-
- Output types:
- The output of the `LiteLLMLMInvoker` can either be:
- 1. `str`: A text response.
- 2. `LMOutput`: A Pydantic model that may contain the following attributes:
- 2.1. response (str)
- 2.2. tool_calls (list[ToolCall])
- 2.3. structured_output (dict[str, Any] | BaseModel | None)
- 2.4. token_usage (TokenUsage | None)
- 2.5. duration (float | None)
- 2.6. finish_details (dict[str, Any])
- 2.7. reasoning (list[Reasoning])
  '''
  completion: Incomplete
  def __init__(self, model_id: str, default_hyperparameters: dict[str, Any] | None = None, tools: list[Tool | LangChainTool] | None = None, response_schema: ResponseSchema | None = None, output_analytics: bool = False, retry_config: RetryConfig | None = None, reasoning_effort: ReasoningEffort | None = None, simplify_events: bool = False) -> None:
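
A hypothetical consumer for the streamed event dicts shown in the new docstring: how events are delivered (the `event_emitter` wiring) is not specified in this diff, so only the `{"type": ..., "value": ...}` shape is taken from it.

```python
# Hypothetical callback; only the event shape is documented above, the
# delivery mechanism is an assumption.
def handle_event(event: dict) -> None:
    kind = event["type"]
    if kind in ("thinking", "response"):
        print(event["value"], end="", flush=True)  # stream tokens as they arrive
    elif kind in ("thinking_start", "thinking_end"):
        print()  # boundary events carry empty values
    # Per the docstring, token usage is unavailable while streaming, so
    # `token_usage` will be None even with output_analytics=True.
```
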
@@ -7,7 +7,7 @@ from gllm_core.utils import RetryConfig
  from gllm_inference.constants import DOCUMENT_MIME_TYPES as DOCUMENT_MIME_TYPES, INVOKER_DEFAULT_TIMEOUT as INVOKER_DEFAULT_TIMEOUT
  from gllm_inference.exceptions import BaseInvokerError as BaseInvokerError, convert_to_base_invoker_error as convert_to_base_invoker_error
  from gllm_inference.lm_invoker.batch import BatchOperations as BatchOperations
- from gllm_inference.schema import Attachment as Attachment, AttachmentType as AttachmentType, BatchStatus as BatchStatus, LMEventType as LMEventType, LMInput as LMInput, LMOutput as LMOutput, Message as Message, MessageContent as MessageContent, MessageRole as MessageRole, ModelId as ModelId, Reasoning as Reasoning, ResponseSchema as ResponseSchema, ToolCall as ToolCall, ToolResult as ToolResult
+ from gllm_inference.schema import Attachment as Attachment, AttachmentType as AttachmentType, BatchStatus as BatchStatus, LMInput as LMInput, LMOutput as LMOutput, Message as Message, MessageContent as MessageContent, MessageRole as MessageRole, ModelId as ModelId, Reasoning as Reasoning, ResponseSchema as ResponseSchema, ToolCall as ToolCall, ToolResult as ToolResult
  from langchain_core.tools import Tool as LangChainTool
  from typing import Any