gllm-inference-binary 0.5.52__cp312-cp312-win_amd64.whl → 0.5.54__cp312-cp312-win_amd64.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of gllm-inference-binary might be problematic.
- gllm_inference/constants.pyi +0 -1
- gllm_inference/lm_invoker/anthropic_lm_invoker.pyi +92 -108
- gllm_inference/lm_invoker/azure_openai_lm_invoker.pyi +92 -109
- gllm_inference/lm_invoker/bedrock_lm_invoker.pyi +51 -65
- gllm_inference/lm_invoker/datasaur_lm_invoker.pyi +36 -36
- gllm_inference/lm_invoker/google_lm_invoker.pyi +107 -117
- gllm_inference/lm_invoker/langchain_lm_invoker.pyi +52 -64
- gllm_inference/lm_invoker/litellm_lm_invoker.pyi +86 -106
- gllm_inference/lm_invoker/openai_chat_completions_lm_invoker.pyi +86 -105
- gllm_inference/lm_invoker/openai_lm_invoker.pyi +157 -186
- gllm_inference/lm_invoker/portkey_lm_invoker.pyi +104 -68
- gllm_inference/lm_invoker/xai_lm_invoker.pyi +92 -128
- gllm_inference/schema/__init__.pyi +3 -3
- gllm_inference/schema/attachment.pyi +1 -1
- gllm_inference/schema/enums.pyi +11 -0
- gllm_inference/schema/lm_output.pyi +167 -23
- gllm_inference.cp312-win_amd64.pyd +0 -0
- gllm_inference.pyi +1 -3
- {gllm_inference_binary-0.5.52.dist-info → gllm_inference_binary-0.5.54.dist-info}/METADATA +1 -1
- {gllm_inference_binary-0.5.52.dist-info → gllm_inference_binary-0.5.54.dist-info}/RECORD +22 -22
- {gllm_inference_binary-0.5.52.dist-info → gllm_inference_binary-0.5.54.dist-info}/WHEEL +0 -0
- {gllm_inference_binary-0.5.52.dist-info → gllm_inference_binary-0.5.54.dist-info}/top_level.txt +0 -0
@@ -51,83 +51,82 @@ class BedrockLMInvoker(BaseLMInvoker):
     result = await lm_invoker.invoke([text, image])
     ```
 
-
-
-
-
-    `tool_calls` attribute in the output.
-
-    Usage example:
-    ```python
-    lm_invoker = BedrockLMInvoker(..., tools=[tool_1, tool_2])
-    ```
+    Text output:
+    The `BedrockLMInvoker` generates text outputs by default.
+    Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `texts` (all text outputs) or `text` (first text output) properties.
 
     Output example:
     ```python
-    LMOutput(
-        response="Let me call the tools...",
-        tool_calls=[
-            ToolCall(id="123", name="tool_1", args={"key": "value"}),
-            ToolCall(id="456", name="tool_2", args={"key": "value"}),
-        ]
-    )
+    LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
     ```
 
     Structured output:
-
+    The `BedrockLMInvoker` can be configured to generate structured outputs.
     This feature can be enabled by providing a schema to the `response_schema` parameter.
 
-
-
-    For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
-
-    Structured output is achieved by providing the schema name in the `tool_choice` parameter. This forces
-    the model to call the provided schema as a tool. Thus, structured output is not compatible with tool calling,
-    since the tool calling is reserved to force the model to call the provided schema as a tool.
-    The language model also doesn\'t need to stream anything when structured output is enabled. Thus, standard
-    invocation will be performed regardless of whether the `event_emitter` parameter is provided or not.
+    Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
 
-
-    1.
-
+    The schema must either be one of the following:
+    1. A Pydantic BaseModel class
+       The structured output will be a Pydantic model.
+    2. A JSON schema dictionary
+       JSON dictionary schema must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+       Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+       The structured output will be a dictionary.
 
-    # Example 1: Using a JSON schema dictionary
     Usage example:
     ```python
-
-
-
-
-
-
-
-
-        "type": "object",
-    }
-    lm_invoker = BedrockLMInvoker(..., response_schema=schema)
+    class Animal(BaseModel):
+        name: str
+        color: str
+
+    json_schema = Animal.model_json_schema()
+
+    lm_invoker = BedrockLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+    lm_invoker = BedrockLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
     ```
+
     Output example:
     ```python
-
+    # Using Pydantic BaseModel class outputs a Pydantic model
+    LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+    # Using JSON schema dictionary outputs a dictionary
+    LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
     ```
 
-
+    Structured output is not compatible with tool calling.
+    When structured output is enabled, streaming is disabled.
+
+    Tool calling:
+    The `BedrockLMInvoker` can be configured to call tools to perform certain tasks.
+    This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+    Tool calls outputs are stored in the `outputs` attribute of the `LMOutput` object and
+    can be accessed via the `tool_calls` property.
+
     Usage example:
     ```python
-
-        name: str
-        color: str
-
-    lm_invoker = BedrockLMInvoker(..., response_schema=Animal)
+    lm_invoker = BedrockLMInvoker(..., tools=[tool_1, tool_2])
     ```
+
     Output example:
     ```python
-    LMOutput(
+    LMOutput(
+        outputs=[
+            LMOutputItem(type="text", output="I\'m using tools..."),
+            LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+            LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
+        ]
+    )
     ```
 
     Analytics tracking:
-
+    The `BedrockLMInvoker` can be configured to output additional information about the invocation.
     This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
     When enabled, the following attributes will be stored in the output:
     1. `token_usage`: The token usage.
     2. `duration`: The duration in seconds.
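The hunk above replaces `LMOutput`'s flat `response`/`tool_calls` attributes with a single `outputs` list of typed items plus convenience accessors (`text`/`texts`, `structured`/`structureds`, `tool_calls`). As a rough illustration of that accessor pattern, here is a minimal sketch using hypothetical stand-in classes, not the actual gllm_inference implementation:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class LMOutputItem:
    type: str  # e.g. "text", "structured", "tool_call"
    output: Any


@dataclass
class LMOutput:
    outputs: list[LMOutputItem] = field(default_factory=list)

    def _of(self, kind: str) -> list[Any]:
        # Filter the mixed outputs list down to one item type.
        return [item.output for item in self.outputs if item.type == kind]

    @property
    def texts(self) -> list[str]:
        return self._of("text")

    @property
    def text(self) -> str:
        return self.texts[0] if self.texts else ""

    @property
    def structureds(self) -> list[Any]:
        return self._of("structured")

    @property
    def structured(self) -> Any:
        return self.structureds[0] if self.structureds else None

    @property
    def tool_calls(self) -> list[Any]:
        return self._of("tool_call")


out = LMOutput(outputs=[
    LMOutputItem(type="text", output="Hello, there!"),
    LMOutputItem(type="structured", output={"name": "dog", "color": "white"}),
])
print(out.text)        # Hello, there!
print(out.structured)  # {'name': 'dog', 'color': 'white'}
```

The single `outputs` list preserves the order in which the model emitted items, which the old per-type attributes could not express.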
@@ -136,7 +135,7 @@ class BedrockLMInvoker(BaseLMInvoker):
     Output example:
     ```python
     LMOutput(
-
+        outputs=[...],
         token_usage=TokenUsage(input_tokens=100, output_tokens=50),
         duration=0.729,
         finish_details={"stop_reason": "end_turn"},
@@ -151,8 +150,6 @@ class BedrockLMInvoker(BaseLMInvoker):
     Retry config examples:
     ```python
     retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
-    retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
-    retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
     retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
     ```
 
@@ -160,17 +157,6 @@ class BedrockLMInvoker(BaseLMInvoker):
     ```python
     lm_invoker = BedrockLMInvoker(..., retry_config=retry_config)
     ```
-
-    Output types:
-    The output of the `BedrockLMInvoker` can either be:
-    1. `str`: A text response.
-    2. `LMOutput`: A Pydantic model that may contain the following attributes:
-       2.1. response (str)
-       2.2. tool_calls (list[ToolCall])
-       2.3. structured_output (dict[str, Any] | BaseModel | None)
-       2.4. token_usage (TokenUsage | None)
-       2.5. duration (float | None)
-       2.6. finish_details (dict[str, Any] | None)
     '''
     session: Incomplete
     client_kwargs: Incomplete
@@ -44,9 +44,42 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
     result = await lm_invoker.invoke([text, image])
     ```
 
+    Text output:
+    The `DatasaurLMInvoker` generates text outputs by default.
+    Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `texts` (all text outputs) or `text` (first text output) properties.
+
+    Output example:
+    ```python
+    LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+    ```
+
+    Citations:
+    The `DatasaurLMInvoker` can be configured to output the citations used to generate the response.
+    This feature can be enabled by setting the `citations` parameter to `True`.
+
+    Citations outputs are stored in the `outputs` attribute of the `LMOutput` object and
+    can be accessed via the `citations` property.
+
+    Usage example:
+    ```python
+    lm_invoker = DatasaurLMInvoker(..., citations=True)
+    ```
+
+    Output example:
+    ```python
+    LMOutput(
+        outputs=[
+            LMOutputItem(type="citation", output=Chunk(id="123", content="...", metadata={...}, score=0.95)),
+            LMOutputItem(type="text", output="According to recent reports... ([Source](https://www.example.com))."),
+        ],
+    )
+    ```
+
     Analytics tracking:
-
+    The `DatasaurLMInvoker` can be configured to output additional information about the invocation.
     This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
     When enabled, the following attributes will be stored in the output:
     1. `token_usage`: The token usage.
     2. `duration`: The duration in seconds.
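In the new API shown above, citation `Chunk` objects arrive interleaved with text items in the same `outputs` list. A small sketch of splitting them back apart (hypothetical stand-ins for `Chunk` and `LMOutputItem`, with made-up example scores):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Chunk:
    id: str
    content: str
    score: float


@dataclass
class LMOutputItem:
    type: str
    output: Any


outputs = [
    LMOutputItem(type="citation", output=Chunk(id="123", content="...", score=0.95)),
    LMOutputItem(type="citation", output=Chunk(id="456", content="...", score=0.61)),
    LMOutputItem(type="text", output="According to recent reports... ([Source](https://www.example.com))."),
]

# Split the mixed list: cited chunks (highest score first) and the generated answer text.
citations = sorted(
    (item.output for item in outputs if item.type == "citation"),
    key=lambda chunk: chunk.score,
    reverse=True,
)
answer = "".join(item.output for item in outputs if item.type == "text")
```

This mirrors what the documented `citations` and `text` properties do for the caller.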
@@ -55,16 +88,13 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
     Output example:
     ```python
     LMOutput(
-
+        outputs=[...],
         token_usage=TokenUsage(input_tokens=100, output_tokens=50),
         duration=0.729,
-        finish_details={"
+        finish_details={"stop_reason": "end_turn"},
     )
     ```
 
-    When streaming is enabled, token usage is not supported. Therefore, the `token_usage` attribute will be `None`
-    regardless of the value of the `output_analytics` parameter.
-
     Retry and timeout:
     The `DatasaurLMInvoker` supports retry and timeout configuration.
     By default, the max retries is set to 0 and the timeout is set to 30.0 seconds.
@@ -73,8 +103,6 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
     Retry config examples:
     ```python
     retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
-    retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
-    retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
     retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
     ```
 
@@ -82,34 +110,6 @@ class DatasaurLMInvoker(OpenAIChatCompletionsLMInvoker):
     ```python
     lm_invoker = DatasaurLMInvoker(..., retry_config=retry_config)
     ```
-
-    Citations:
-    The `DatasaurLMInvoker` can be configured to output the citations used to generate the response.
-    They can be enabled by setting the `citations` parameter to `True`.
-    When enabled, the citations will be stored as `Chunk` objects in the `citations` attribute in the output.
-
-    Usage example:
-    ```python
-    lm_invoker = DatasaurLMInvoker(..., citations=True)
-    ```
-
-    Output example:
-    ```python
-    LMOutput(
-        response="The winner of the match is team A ([Example title](https://www.example.com)).",
-        citations=[Chunk(id="123", content="...", metadata={...}, score=0.95)],
-    )
-    ```
-
-    Output types:
-    The output of the `DatasaurLMInvoker` can either be:
-    1. `str`: A text response.
-    2. `LMOutput`: A Pydantic model that may contain the following attributes:
-       2.1. response (str)
-       2.2. token_usage (TokenUsage | None)
-       2.3. duration (float | None)
-       2.4. finish_details (dict[str, Any] | None)
-       2.5. citations (list[Chunk])
     '''
     client_kwargs: Incomplete
     citations: Incomplete
@@ -82,10 +82,61 @@ class GoogleLMInvoker(BaseLMInvoker):
     result = await lm_invoker.invoke([text, image])
     ```
 
+    Text output:
+    The `GoogleLMInvoker` generates text outputs by default.
+    Text outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `texts` (all text outputs) or `text` (first text output) properties.
+
+    Output example:
+    ```python
+    LMOutput(outputs=[LMOutputItem(type="text", output="Hello, there!")])
+    ```
+
+    Structured output:
+    The `GoogleLMInvoker` can be configured to generate structured outputs.
+    This feature can be enabled by providing a schema to the `response_schema` parameter.
+
+    Structured outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `structureds` (all structured outputs) or `structured` (first structured output) properties.
+
+    The schema must either be one of the following:
+    1. A Pydantic BaseModel class
+       The structured output will be a Pydantic model.
+    2. A JSON schema dictionary
+       JSON dictionary schema must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
+       Thus, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
+       The structured output will be a dictionary.
+
+    Usage example:
+    ```python
+    class Animal(BaseModel):
+        name: str
+        color: str
+
+    json_schema = Animal.model_json_schema()
+
+    lm_invoker = GoogleLMInvoker(..., response_schema=Animal) # Using Pydantic BaseModel class
+    lm_invoker = GoogleLMInvoker(..., response_schema=json_schema) # Using JSON schema dictionary
+    ```
+
+    Output example:
+    ```python
+    # Using Pydantic BaseModel class outputs a Pydantic model
+    LMOutput(outputs=[LMOutputItem(type="structured", output=Animal(name="dog", color="white"))])
+
+    # Using JSON schema dictionary outputs a dictionary
+    LMOutput(outputs=[LMOutputItem(type="structured", output={"name": "dog", "color": "white"})])
+    ```
+
+    Structured output is not compatible with tool calling.
+    When structured output is enabled, streaming is disabled.
+
     Image generation:
-    The `GoogleLMInvoker`
-    such as `gemini-2.5-flash-image`.
-
+    The `GoogleLMInvoker` can be configured to generate images.
+    This feature can be enabled by using an image generation model, such as `gemini-2.5-flash-image`.
+
+    Image outputs are stored in the `outputs` attribute of the `LMOutput` object and can be accessed
+    via the `attachments` property.
 
     Usage example:
     ```python
@@ -97,16 +148,24 @@ class GoogleLMInvoker(BaseLMInvoker):
     Output example:
     ```python
     LMOutput(
-
-
+        outputs=[
+            LMOutputItem(type="text", output="Creating a picture..."),
+            LMOutputItem(
+                type="attachment",
+                output=Attachment(filename="image.png", mime_type="image/png", data=b"..."),
+            ),
+        ],
     )
     ```
 
+    When image generation is enabled, streaming is disabled.
+
     Tool calling:
-
-
-
-    `
+    The `GoogleLMInvoker` can be configured to call tools to perform certain tasks.
+    This feature can be enabled by providing a list of `Tool` objects to the `tools` parameter.
+
+    Tool calls outputs are stored in the `outputs` attribute of the `LMOutput` object and
+    can be accessed via the `tool_calls` property.
 
     Usage example:
     ```python
@@ -116,67 +175,60 @@ class GoogleLMInvoker(BaseLMInvoker):
     Output example:
     ```python
     LMOutput(
-
-
-            ToolCall(id="123", name="tool_1", args={"key": "value"}),
-            ToolCall(id="456", name="tool_2", args={"key": "value"}),
+        outputs=[
+            LMOutputItem(type="text", output="I\'m using tools..."),
+            LMOutputItem(type="tool_call", output=ToolCall(id="123", name="tool_1", args={"key": "value"})),
+            LMOutputItem(type="tool_call", output=ToolCall(id="456", name="tool_2", args={"key": "value"})),
         ]
     )
     ```
 
-
-
-    This feature can be enabled by
-
-    The schema must be either a JSON schema dictionary or a Pydantic BaseModel class.
-    If JSON schema is used, it must be compatible with Pydantic\'s JSON schema, especially for complex schemas.
-    For this reason, it is recommended to create the JSON schema using Pydantic\'s `model_json_schema` method.
-
-    Structured output is not compatible with tool calling. The language model also doesn\'t need to stream
-    anything when structured output is enabled. Thus, standard invocation will be performed regardless of
-    whether the `event_emitter` parameter is provided or not.
+    Thinking:
+    The `GoogleLMInvoker` can be configured to perform step-by-step thinking process before answering.
+    This feature can be enabled by setting the `thinking` parameter to `True`.
 
-
-
-    2. If the schema is a Pydantic BaseModel class, the structured output is a Pydantic model.
+    Thinking outputs are stored in the `outputs` attribute of the `LMOutput` object
+    and can be accessed via the `thinkings` property.
 
-    # Example 1: Using a JSON schema dictionary
     Usage example:
     ```python
-
-        "title": "Animal",
-        "description": "A description of an animal.",
-        "properties": {
-            "color": {"title": "Color", "type": "string"},
-            "name": {"title": "Name", "type": "string"},
-        },
-        "required": ["name", "color"],
-        "type": "object",
-    }
-    lm_invoker = GoogleLMInvoker(..., response_schema=schema)
+    lm_invoker = GoogleLMInvoker(..., thinking=True, thinking_budget=1024)
     ```
+
     Output example:
     ```python
-    LMOutput(
+    LMOutput(
+        outputs=[
+            LMOutputItem(type="thinking", output=Reasoning(type="thinking", reasoning="I\'m thinking...", ...)),
+            LMOutputItem(type="text", output="Golden retriever is a good dog breed."),
+        ]
+    )
     ```
 
-
-    Usage example:
-    ```python
-    class Animal(BaseModel):
-        name: str
-        color: str
-
-    lm_invoker = GoogleLMInvoker(..., response_schema=Animal)
-    ```
-    Output example:
+    Streaming output example:
     ```python
-
+    {"type": "thinking_start", "value": "", ...}
+    {"type": "thinking", "value": "I\'m ", ...}
+    {"type": "thinking", "value": "thinking...", ...}
+    {"type": "thinking_end", "value": "", ...}
+    {"type": "response", "value": "Golden retriever ", ...}
+    {"type": "response", "value": "is a good dog breed.", ...}
     ```
+    Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
+    To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
+    LM invoker initialization. The legacy event format support will be removed in v0.6.
+
+    The amount of tokens allocated for the thinking process can be set via the `thinking_budget` parameter.
+    For more information, please refer to the following documentation:
+    https://ai.google.dev/gemini-api/docs/thinking
+
+    Thinking is only available for certain models, starting from Gemini 2.5 series.
+    Thinking is required for Gemini 2.5 Pro models.
 
     Analytics tracking:
-
+    The `GoogleLMInvoker` can be configured to output additional information about the invocation.
     This feature can be enabled by setting the `output_analytics` parameter to `True`.
+
     When enabled, the following attributes will be stored in the output:
     1. `token_usage`: The token usage.
     2. `duration`: The duration in seconds.
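The streamed thinking/response events documented in this hunk can be reassembled by event type. The event dicts below mirror the docstring's streaming example; the accumulation loop is a sketch, not the library's actual stream handler:

```python
# Events as documented: thinking_start/thinking/thinking_end frame the reasoning
# tokens, while "response" events carry the final answer tokens.
events = [
    {"type": "thinking_start", "value": ""},
    {"type": "thinking", "value": "I'm "},
    {"type": "thinking", "value": "thinking..."},
    {"type": "thinking_end", "value": ""},
    {"type": "response", "value": "Golden retriever "},
    {"type": "response", "value": "is a good dog breed."},
]

reasoning, response = [], []
for event in events:
    if event["type"] == "thinking":
        reasoning.append(event["value"])
    elif event["type"] == "response":
        response.append(event["value"])

full_reasoning = "".join(reasoning)  # "I'm thinking..."
full_response = "".join(response)    # "Golden retriever is a good dog breed."
```

The `thinking_start`/`thinking_end` markers carry empty values and serve only to delimit the reasoning phase for the consumer.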
@@ -185,15 +237,10 @@ class GoogleLMInvoker(BaseLMInvoker):
     Output example:
     ```python
     LMOutput(
-
-        token_usage=TokenUsage(
-            input_tokens=1500,
-            output_tokens=200,
-            input_token_details=InputTokenDetails(cached_tokens=1200, uncached_tokens=300),
-            output_token_details=OutputTokenDetails(reasoning_tokens=180, response_tokens=20),
-        ),
+        outputs=[...],
+        token_usage=TokenUsage(input_tokens=100, output_tokens=50),
         duration=0.729,
-        finish_details={"
+        finish_details={"stop_reason": "end_turn"},
     )
     ```
 
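The image-generation hunks earlier return generated images as `Attachment(filename, mime_type, data)` items. Persisting one to disk can be sketched as follows, with a hypothetical stand-in `Attachment` class and placeholder PNG bytes:

```python
import tempfile
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Attachment:
    filename: str
    mime_type: str
    data: bytes


# Placeholder payload: just the PNG magic bytes, standing in for real image data.
attachment = Attachment(filename="image.png", mime_type="image/png", data=b"\x89PNG\r\n\x1a\n")

# Write the raw bytes into a temporary directory so nothing pollutes the CWD.
target = Path(tempfile.mkdtemp()) / attachment.filename
target.write_bytes(attachment.data)
```

The `mime_type` field lets the caller pick an extension or content-type header without sniffing the bytes.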
@@ -205,8 +252,6 @@ class GoogleLMInvoker(BaseLMInvoker):
     Retry config examples:
     ```python
     retry_config = RetryConfig(max_retries=0, timeout=None) # No retry, no timeout
-    retry_config = RetryConfig(max_retries=0, timeout=10.0) # No retry, 10.0 seconds timeout
-    retry_config = RetryConfig(max_retries=5, timeout=None) # 5 max retries, no timeout
     retry_config = RetryConfig(max_retries=5, timeout=10.0) # 5 max retries, 10.0 seconds timeout
     ```
 
@@ -214,61 +259,6 @@ class GoogleLMInvoker(BaseLMInvoker):
     ```python
     lm_invoker = GoogleLMInvoker(..., retry_config=retry_config)
     ```
-
-    Thinking:
-    Thinking is a feature that allows the language model to have enhanced reasoning capabilities for complex tasks,
-    while also providing transparency into its step-by-step thought process before it delivers its final answer.
-    It can be enabled by setting the `thinking` parameter to `True`.
-
-    Thinking is only available for certain models, starting from Gemini 2.5 series, and is required for
-    Gemini 2.5 Pro models. Therefore, `thinking` defaults to `True` for Gemini 2.5 Pro models and `False`
-    for other models. Setting `thinking` to `False` for Gemini 2.5 Pro models will raise a `ValueError`.
-    When enabled, the reasoning is stored in the `reasoning` attribute in the output.
-
-    Usage example:
-    ```python
-    lm_invoker = GoogleLMInvoker(..., thinking=True, thinking_budget=1024)
-    ```
-
-    Output example:
-    ```python
-    LMOutput(
-        response="Golden retriever is a good dog breed.",
-        reasoning=[Reasoning(reasoning="Let me think about it...")],
-    )
-    ```
-
-    Streaming output example:
-    ```python
-    {"type": "thinking_start", "value": "", ...}
-    {"type": "thinking", "value": "Let me think ", ...}
-    {"type": "thinking", "value": "about it...", ...}
-    {"type": "thinking_end", "value": "", ...}
-    {"type": "response", "value": "Golden retriever ", ...}
-    {"type": "response", "value": "is a good dog breed.", ...}
-    ```
-    Note: By default, the thinking token will be streamed with the legacy `EventType.DATA` event type.
-    To use the new simplified streamed event format, set the `simplify_events` parameter to `True` during
-    LM invoker initialization. The legacy event format support will be removed in v0.6.
-
-    When thinking is enabled, the amount of tokens allocated for the thinking process can be set via the
-    `thinking_budget` parameter. The `thinking_budget`:
-    1. Defaults to -1, in which case the model will control the budget automatically.
-    2. Must be greater than the model\'s minimum thinking budget.
-    For more details, please refer to https://ai.google.dev/gemini-api/docs/thinking
-
-    Output types:
-    The output of the `GoogleLMInvoker` can either be:
-    1. `str`: A text response.
-    2. `LMOutput`: A Pydantic model that may contain the following attributes:
-       2.1. response (str)
-       2.2. attachments (list[Attachment])
-       2.3. tool_calls (list[ToolCall])
-       2.4. structured_output (dict[str, Any] | BaseModel | None)
-       2.5. token_usage (TokenUsage | None)
-       2.6. duration (float | None)
-       2.7. finish_details (dict[str, Any])
-       2.8. reasoning (list[Reasoning])
     '''
     client_params: Incomplete
     generate_image: Incomplete