skyvern-llamaindex 0.0.2__tar.gz → 0.0.4__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,308 @@
Metadata-Version: 2.3
Name: skyvern-llamaindex
Version: 0.0.4
Summary: Skyvern integration for LlamaIndex
Author: lawyzheng
Author-email: lawy@skyvern.com
Requires-Python: >=3.11,<3.12
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Requires-Dist: llama-index (>=0.12.19,<0.13.0)
Requires-Dist: skyvern (>=0.1.56,<0.2.0)
Description-Content-Type: text/markdown

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Skyvern LlamaIndex](#skyvern-llamaindex)
  - [Installation](#installation)
  - [Basic Usage](#basic-usage)
    - [Run a task(sync) locally in your local environment](#run-a-tasksync-locally-in-your-local-environment)
    - [Run a task(async) locally in your local environment](#run-a-taskasync-locally-in-your-local-environment)
    - [Get a task locally in your local environment](#get-a-task-locally-in-your-local-environment)
    - [Run a task(sync) by calling skyvern APIs](#run-a-tasksync-by-calling-skyvern-apis)
    - [Run a task(async) by calling skyvern APIs](#run-a-taskasync-by-calling-skyvern-apis)
    - [Get a task by calling skyvern APIs](#get-a-task-by-calling-skyvern-apis)
  - [Advanced Usage](#advanced-usage)
    - [Dispatch a task(async) locally in your local environment and wait until the task is finished](#dispatch-a-taskasync-locally-in-your-local-environment-and-wait-until-the-task-is-finished)
    - [Dispatch a task(async) by calling skyvern APIs and wait until the task is finished](#dispatch-a-taskasync-by-calling-skyvern-apis-and-wait-until-the-task-is-finished)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

# Skyvern LlamaIndex

This is a LlamaIndex integration for Skyvern.

## Installation

```bash
pip install skyvern-llamaindex
```

## Basic Usage

### Run a task(sync) locally in your local environment
> A sync task won't return until the task is finished.

:warning: :warning: If you want to run this code block, you need to run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern first.

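For reference, the one-time local setup mentioned in the warning above is just the CLI call below, run in your terminal (the key value is a placeholder):

```bash
# set up Skyvern locally and point it at your OpenAI API key
skyvern init --openai-api-key <your_openai_api_key>
```
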
```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.run_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```

### Run a task(async) locally in your local environment
> An async task returns immediately and keeps running in the background.

:warning: :warning: If you want to run the task in the background, you need to keep the agent running until the task is finished; otherwise, the task will be killed when the agent finishes the chat.

:warning: :warning: If you want to run this code block, you need to run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern first.

```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool
from llama_index.core.tools import FunctionTool

# load OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

# define a sleep tool to keep the agent running until the task is finished
sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, sleep for 10 minutes.")
print(response)
```

### Get a task locally in your local environment

:warning: :warning: If you want to run this code block, you need to run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern first.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.agent import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.get_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
print(response)
```

### Run a task(sync) by calling skyvern APIs
> A sync task won't return until the task is finished.

There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.run_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```
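
As an alternative to passing `api_key` in code, the commented-out `SkyvernTool()` call above picks the key up from the environment. A minimal `.env` for these client examples might look like the sketch below; `SKYVERN_API_KEY` is the variable named in the comment above, `OPENAI_API_KEY` is the standard variable the OpenAI client reads, and both values are placeholders.

```bash
# .env: placeholder values, loaded by load_dotenv() in the examples
OPENAI_API_KEY=<your_openai_api_key>
SKYVERN_API_KEY=<your_organization_api_key>
```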

### Run a task(async) by calling skyvern APIs
> An async task returns immediately and keeps running in the background.

There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.

The task actually runs in the Skyvern cloud service, so you don't need to keep your agent running until the task is finished.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
print(response)
```

### Get a task by calling skyvern APIs

There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.

```python
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from skyvern_llamaindex.client import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.get_task()],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
print(response)
```

## Advanced Usage

This section provides some examples of how to integrate Skyvern with other llama-index tools in the agent.

### Dispatch a task(async) locally in your local environment and wait until the task is finished
> A dispatched task returns immediately and keeps running in the background. You can use the `get_task` tool to poll the task information until the task is finished.

:warning: :warning: If you want to run this code block, you need to run the `skyvern init --openai-api-key <your_openai_api_key>` command in your terminal to set up Skyvern first.

```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.agent import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information every 60 seconds until it's completed.")
print(response)
```

### Dispatch a task(async) by calling skyvern APIs and wait until the task is finished
> A dispatched task returns immediately and keeps running in the background. You can use the `get_task` tool to poll the task information until the task is finished.

There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.

```python
import asyncio
from dotenv import load_dotenv
from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool
from skyvern_llamaindex.client import SkyvernTool

# load OpenAI API key from .env
load_dotenv()

async def sleep(seconds: int) -> str:
    await asyncio.sleep(seconds)
    return f"Slept for {seconds} seconds"

sleep_tool = FunctionTool.from_defaults(
    async_fn=sleep,
    description="Sleep for a given number of seconds",
    name="sleep",
)

skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
# or you can load the api_key from SKYVERN_API_KEY in .env
# skyvern_tool = SkyvernTool()

agent = OpenAIAgent.from_tools(
    tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
    max_function_calls=10,
)

response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information every 60 seconds until it's completed.")
print(response)
```

@@ -1,6 +1,6 @@
  [tool.poetry]
  name = "skyvern-llamaindex"
- version = "0.0.2"
+ version = "0.0.4"
  description = "Skyvern integration for LlamaIndex"
  authors = ["lawyzheng <lawy@skyvern.com>"]
  packages = [{ include = "skyvern_llamaindex" }]