skyvern-llamaindex 0.0.3__tar.gz → 0.0.5__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,307 @@
+ Metadata-Version: 2.3
+ Name: skyvern-llamaindex
+ Version: 0.0.5
+ Summary: Skyvern integration for LlamaIndex
+ Author: lawyzheng
+ Author-email: lawy@skyvern.com
+ Requires-Python: >=3.11,<3.12
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Programming Language :: Python :: 3.11
+ Requires-Dist: llama-index (>=0.12.19,<0.13.0)
+ Requires-Dist: skyvern (>=0.1.84)
+ Description-Content-Type: text/markdown
+
+ <!-- START doctoc generated TOC please keep comment here to allow auto update -->
+ <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
+
+ - [Skyvern LlamaIndex](#skyvern-llamaindex)
+   - [Installation](#installation)
+   - [Basic Usage](#basic-usage)
+     - [Run a task(sync) locally in your local environment](#run-a-tasksync-locally-in-your-local-environment)
+     - [Run a task(async) locally in your local environment](#run-a-taskasync-locally-in-your-local-environment)
+     - [Get a task locally in your local environment](#get-a-task-locally-in-your-local-environment)
+     - [Run a task(sync) by calling skyvern APIs](#run-a-tasksync-by-calling-skyvern-apis)
+     - [Run a task(async) by calling skyvern APIs](#run-a-taskasync-by-calling-skyvern-apis)
+     - [Get a task by calling skyvern APIs](#get-a-task-by-calling-skyvern-apis)
+   - [Advanced Usage](#advanced-usage)
+     - [Dispatch a task(async) locally in your local environment and wait until the task is finished](#dispatch-a-taskasync-locally-in-your-local-environment-and-wait-until-the-task-is-finished)
+     - [Dispatch a task(async) by calling skyvern APIs and wait until the task is finished](#dispatch-a-taskasync-by-calling-skyvern-apis-and-wait-until-the-task-is-finished)
+
+ <!-- END doctoc generated TOC please keep comment here to allow auto update -->
+
+ # Skyvern LlamaIndex
+
+ This is a LlamaIndex integration for Skyvern.
+
+ ## Installation
+
+ ```bash
+ pip install skyvern-llamaindex
+ ```
+
+ ## Basic Usage
+
+ ### Run a task(sync) locally in your local environment
+ > A sync task won't return until the task is finished.
+
+ :warning: :warning: To run this code block, you need to run the `skyvern init` command in your terminal to set up Skyvern first.
+
+ ```python
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.agent import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.run_task()],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
+ print(response)
+ ```
+
+ ### Run a task(async) locally in your local environment
+ > An async task returns immediately and runs in the background.
+
+ :warning: :warning: If you want to run the task in the background, you need to keep the agent running until the task is finished; otherwise the task will be killed when the agent finishes the chat.
+
+ :warning: :warning: To run this code block, you need to run the `skyvern init` command in your terminal to set up Skyvern first.
+
+ ```python
+ import asyncio
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.agent import SkyvernTool
+ from llama_index.core.tools import FunctionTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ async def sleep(seconds: int) -> str:
+     await asyncio.sleep(seconds)
+     return f"Slept for {seconds} seconds"
+
+ # define a sleep tool to keep the agent running until the task is finished
+ sleep_tool = FunctionTool.from_defaults(
+     async_fn=sleep,
+     description="Sleep for a given number of seconds",
+     name="sleep",
+ )
+
+ skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.dispatch_task(), sleep_tool],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, sleep for 10 minutes.")
+ print(response)
+ ```
+
+ ### Get a task locally in your local environment
+
+ :warning: :warning: To run this code block, you need to run the `skyvern init` command in your terminal to set up Skyvern first.
+
+ ```python
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.agent import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.get_task()],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
+ print(response)
+ ```
+
+ ### Run a task(sync) by calling skyvern APIs
+ > A sync task won't return until the task is finished.
+
+ There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
+
+ ```python
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.client import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
+ # or you can load the api_key from SKYVERN_API_KEY in .env
+ # skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.run_task()],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
+ print(response)
+ ```
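The commented-out constructor above reads the key from the environment instead of taking it as an argument. A minimal `.env` sketch for that path (both values are placeholders, not real keys):

```bash
# .env — read by load_dotenv(); replace the placeholders with your own keys
OPENAI_API_KEY=<your_openai_api_key>
SKYVERN_API_KEY=<your_organization_api_key>
```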
+
+ ### Run a task(async) by calling skyvern APIs
+ > An async task returns immediately and runs in the background.
+
+ There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
+
+ The task actually runs in the Skyvern cloud service, so you don't need to keep your agent running until the task is finished.
+
+ ```python
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.client import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
+ # or you can load the api_key from SKYVERN_API_KEY in .env
+ # skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.dispatch_task()],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.'")
+ print(response)
+ ```
+
+ ### Get a task by calling skyvern APIs
+
+ There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
+
+ ```python
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from skyvern_llamaindex.client import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
+ # or you can load the api_key from SKYVERN_API_KEY in .env
+ # skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.get_task()],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+ )
+
+ response = agent.chat("Get the task information with Skyvern. The task id is '<task_id>'.")
+ print(response)
+ ```
+
+ ## Advanced Usage
+
+ This section provides some examples of integrating Skyvern with other llama-index tools in the agent.
+
+ ### Dispatch a task(async) locally in your local environment and wait until the task is finished
+ > Dispatching a task returns immediately, and the task runs in the background. You can use the `get_task` tool to poll the task information until the task is finished.
+
+ :warning: :warning: To run this code block, you need to run the `skyvern init` command in your terminal to set up Skyvern first.
+
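The polling behavior requested in the chat prompt below amounts to a simple status loop. A stdlib-only sketch of the pattern, where `fetch_status` is a hypothetical stand-in for the `get_task` tool and the interval is shortened from 60s for illustration:

```python
import asyncio

async def poll_until_complete(fetch_status, interval: float, max_polls: int) -> str:
    """Call fetch_status() until it reports a terminal state or max_polls is reached."""
    for _ in range(max_polls):
        status = await fetch_status()
        if status in ("completed", "failed", "terminated"):
            return status
        await asyncio.sleep(interval)
    return "timed_out"

# demo with a fake task that completes on the third poll
_calls = {"n": 0}

async def fetch_status() -> str:
    _calls["n"] += 1
    return "completed" if _calls["n"] >= 3 else "running"

print(asyncio.run(poll_until_complete(fetch_status, interval=0.01, max_polls=10)))  # → completed
```

In the example below the agent runs this loop itself, alternating `get_task` and `sleep` tool calls until the task reaches a terminal state.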
+ ```python
+ import asyncio
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from llama_index.core.tools import FunctionTool
+ from skyvern_llamaindex.agent import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ async def sleep(seconds: int) -> str:
+     await asyncio.sleep(seconds)
+     return f"Slept for {seconds} seconds"
+
+ sleep_tool = FunctionTool.from_defaults(
+     async_fn=sleep,
+     description="Sleep for a given number of seconds",
+     name="sleep",
+ )
+
+ skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+     max_function_calls=10,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
+ print(response)
+ ```
+
+ ### Dispatch a task(async) by calling skyvern APIs and wait until the task is finished
+ > Dispatching a task returns immediately, and the task runs in the background. You can use the `get_task` tool to poll the task information until the task is finished.
+
+ There is no need to run the `skyvern init` command in your terminal to set up Skyvern before using this integration.
+
+ ```python
+ import asyncio
+ from dotenv import load_dotenv
+ from llama_index.agent.openai import OpenAIAgent
+ from llama_index.llms.openai import OpenAI
+ from llama_index.core.tools import FunctionTool
+ from skyvern_llamaindex.client import SkyvernTool
+
+ # load OpenAI API key from .env
+ load_dotenv()
+
+ async def sleep(seconds: int) -> str:
+     await asyncio.sleep(seconds)
+     return f"Slept for {seconds} seconds"
+
+ sleep_tool = FunctionTool.from_defaults(
+     async_fn=sleep,
+     description="Sleep for a given number of seconds",
+     name="sleep",
+ )
+
+ skyvern_tool = SkyvernTool(api_key="<your_organization_api_key>")
+ # or you can load the api_key from SKYVERN_API_KEY in .env
+ # skyvern_tool = SkyvernTool()
+
+ agent = OpenAIAgent.from_tools(
+     tools=[skyvern_tool.dispatch_task(), skyvern_tool.get_task(), sleep_tool],
+     llm=OpenAI(model="gpt-4o"),
+     verbose=True,
+     max_function_calls=10,
+ )
+
+ response = agent.chat("Run a task with Skyvern. The task is about 'Navigate to the Hacker News homepage and get the top 3 posts.' Then, get this task information until it's completed. The task information re-get interval should be 60s.")
+ print(response)
+ ```
@@ -1,6 +1,6 @@
  [tool.poetry]
  name = "skyvern-llamaindex"
- version = "0.0.3"
+ version = "0.0.5"
  description = "Skyvern integration for LlamaIndex"
  authors = ["lawyzheng <lawy@skyvern.com>"]
  packages = [{ include = "skyvern_llamaindex" }]
@@ -8,7 +8,7 @@ readme = "README.md"

  [tool.poetry.dependencies]
  python = "^3.11,<3.12"
- skyvern = "^0.1.56"
+ skyvern = ">=0.1.84"
  llama-index = "^0.12.19"