vision-agent 0.2.228__py3-none-any.whl → 0.2.230__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,562 +0,0 @@
1
- Metadata-Version: 2.1
2
- Name: vision-agent
3
- Version: 0.2.228
4
- Summary: Toolset for Vision Agent
5
- Author: Landing AI
6
- Author-email: dev@landing.ai
7
- Requires-Python: >=3.9,<4.0
8
- Classifier: Programming Language :: Python :: 3
9
- Classifier: Programming Language :: Python :: 3.9
10
- Classifier: Programming Language :: Python :: 3.10
11
- Classifier: Programming Language :: Python :: 3.11
12
- Requires-Dist: anthropic (>=0.31.0,<0.32.0)
13
- Requires-Dist: av (>=11.0.0,<12.0.0)
14
- Requires-Dist: e2b (>=0.17.2a50,<0.18.0)
15
- Requires-Dist: e2b-code-interpreter (==0.0.11a37)
16
- Requires-Dist: flake8 (>=7.0.0,<8.0.0)
17
- Requires-Dist: ipykernel (>=6.29.4,<7.0.0)
18
- Requires-Dist: langsmith (>=0.1.58,<0.2.0)
19
- Requires-Dist: libcst (>=1.5.0,<2.0.0)
20
- Requires-Dist: matplotlib (>=3.9.2,<4.0.0)
21
- Requires-Dist: nbclient (>=0.10.0,<0.11.0)
22
- Requires-Dist: nbformat (>=5.10.4,<6.0.0)
23
- Requires-Dist: numpy (>=1.21.0,<2.0.0)
24
- Requires-Dist: openai (>=1.0.0,<2.0.0)
25
- Requires-Dist: opencv-python (>=4.0.0,<5.0.0)
26
- Requires-Dist: opentelemetry-api (>=1.29.0,<2.0.0)
27
- Requires-Dist: pandas (>=2.0.0,<3.0.0)
28
- Requires-Dist: pillow (>=10.0.0,<11.0.0)
29
- Requires-Dist: pillow-heif (>=0.16.0,<0.17.0)
30
- Requires-Dist: pydantic (==2.7.4)
31
- Requires-Dist: pydantic-settings (>=2.2.1,<3.0.0)
32
- Requires-Dist: pytube (==15.0.0)
33
- Requires-Dist: requests (>=2.0.0,<3.0.0)
34
- Requires-Dist: rich (>=13.7.1,<14.0.0)
35
- Requires-Dist: scikit-learn (>=1.5.2,<2.0.0)
36
- Requires-Dist: scipy (>=1.13.0,<1.14.0)
37
- Requires-Dist: tabulate (>=0.9.0,<0.10.0)
38
- Requires-Dist: tenacity (>=8.3.0,<9.0.0)
39
- Requires-Dist: tqdm (>=4.64.0,<5.0.0)
40
- Requires-Dist: typing_extensions (>=4.0.0,<5.0.0)
41
- Project-URL: Homepage, https://landing.ai
42
- Project-URL: documentation, https://github.com/landing-ai/vision-agent
43
- Project-URL: repository, https://github.com/landing-ai/vision-agent
44
- Description-Content-Type: text/markdown
45
-
46
- <div align="center">
47
- <picture>
48
- <source media="(prefers-color-scheme: dark)" srcset="https://github.com/landing-ai/vision-agent/blob/main/assets/logo_light.svg?raw=true">
49
- <source media="(prefers-color-scheme: light)" srcset="https://github.com/landing-ai/vision-agent/blob/main/assets/logo_dark.svg?raw=true">
50
- <img alt="VisionAgent" height="200px" src="https://github.com/landing-ai/vision-agent/blob/main/assets/logo_light.svg?raw=true">
51
- </picture>
52
-
53
- [![](https://dcbadge.vercel.app/api/server/wPdN8RCYew?compact=true&style=flat)](https://discord.gg/wPdN8RCYew)
54
- ![ci_status](https://github.com/landing-ai/vision-agent/actions/workflows/ci_cd.yml/badge.svg)
55
- [![PyPI version](https://badge.fury.io/py/vision-agent.svg)](https://badge.fury.io/py/vision-agent)
56
- ![version](https://img.shields.io/pypi/pyversions/vision-agent)
57
- </div>
58
-
59
- VisionAgent is a library that helps you utilize agent frameworks to generate code to
60
- solve your vision tasks. Check out our Discord for updates and roadmaps!
61
-
62
- ## Table of Contents
63
- - [🚀Quick Start](#quick-start)
64
- - [📚Documentation](#documentation)
65
- - [🔍🤖VisionAgent](#visionagent-basic-usage)
66
- - [🛠️Tools](#tools)
67
- - [🤖LMMs](#lmms)
68
- - [💻🤖VisionAgent Coder](#visionagent-coder)
69
- - [🏗️Additional Backends](#additional-backends)
70
-
71
- ## Quick Start
72
- ### Web Application
73
- The fastest way to test out VisionAgent is to use our web application. You can find it
74
- [here](https://va.landing.ai/).
75
-
76
- ### Local Jupyter Notebook
77
- You can also run VisionAgent in a local Jupyter Notebook. Here are some examples of using VisionAgent:
78
-
79
- 1. [Counting cans in an image](https://github.com/landing-ai/vision-agent/blob/main/examples/notebooks/counting_cans.ipynb)
80
-
81
- Check out the [notebooks](https://github.com/landing-ai/vision-agent/blob/main/examples/notebooks) folder for more examples.
82
-
83
-
84
- ### Get Started
85
- To get started with the Python library, you can install it using pip:
86
-
87
- #### Installation and Setup
88
- ```bash
89
- pip install vision-agent
90
- ```
91
-
92
- ```bash
93
- export ANTHROPIC_API_KEY="your-api-key"
94
- ```
95
-
96
- ---
97
- **NOTE**
98
- You must have the Anthropic API key set in your environment variables to use
99
- VisionAgent. If you don't have an Anthropic key, you can use another provider such as
100
- OpenAI or Ollama.
101
- ---
102
-
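- For example, if you would rather use OpenAI, you can set its key instead and use the
- OpenAI-backed agents described in the [OpenAI](#openai) section below:
-
- ```bash
- # the OpenAI key variable name used elsewhere in this README
- export OPEN_AI_API_KEY="your-api-key"
- ```
-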
103
- #### Chatting with VisionAgent
104
- To get started, you can just import `VisionAgent` and start chatting with it:
105
- ```python
106
- >>> from vision_agent.agent import VisionAgent
107
- >>> agent = VisionAgent(verbosity=2)
108
- >>> resp = agent("Hello")
109
- >>> print(resp)
110
- [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
111
- >>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
112
- >>> resp = agent(resp)
113
- ```
114
-
115
- The chat messages are similar to `OpenAI`'s format with `role` and `content` keys, but
116
- in addition to those you can add `media`, a list of media files that can be either
117
- images or video files.
118
-
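- For example, a single user message that includes both text and an image might look like
- this (the file path is illustrative):
-
- ```python
- # a chat message using the format described above: role, content, and optional media
- message = {
-     "role": "user",
-     "content": "How many people are in this image?",
-     "media": ["people.jpg"],
- }
- resp = agent([message])
- ```
-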
119
- #### Getting Code from VisionAgent
120
- You can also use `VisionAgentCoder` to generate code for you:
121
-
122
- ```python
123
- >>> from vision_agent.agent import VisionAgentCoder
124
- >>> agent = VisionAgentCoder(verbosity=2)
125
- >>> code = agent("Count the number of people in this image", media="people.jpg")
126
- ```
127
-
128
- #### Don't have Anthropic/OpenAI API keys?
129
- You can use `OllamaVisionAgentCoder`, which uses Ollama as the backend. To get started,
130
- pull the models:
131
-
132
- ```bash
133
- ollama pull llama3.2-vision
134
- ollama pull mxbai-embed-large
135
- ```
136
-
137
- Then you can use it just like you would use `VisionAgentCoder`:
138
-
139
- ```python
140
- >>> from vision_agent.agent import OllamaVisionAgentCoder
141
- >>> agent = OllamaVisionAgentCoder(verbosity=2)
142
- >>> code = agent("Count the number of people in this image", media="people.jpg")
143
- ```
144
-
145
- ---
146
- **NOTE**
147
- Smaller open source models like Llama 3.1 8B will not work well with VisionAgent. You
148
- will encounter many coding errors because they generate incorrect code, and JSON decoding
149
- errors because they generate invalid JSON. We recommend using larger models or
150
- Anthropic/OpenAI models.
151
- ---
152
-
153
- ## Documentation
154
-
155
- [VisionAgent Library Docs](https://landing-ai.github.io/vision-agent/)
156
-
157
- ## VisionAgent Basic Usage
158
- ### Chatting and Message Formats
159
- `VisionAgent` is an agent that can chat with you and call other tools or agents to
160
- write vision code for you. You can interact with it like you would ChatGPT or any other
161
- chatbot. The agent uses Claude 3.5 for its LMM.
162
-
163
- The message format is:
164
- ```json
165
- {
166
- "role": "user",
167
- "content": "Hello",
168
- "media": ["image.jpg"]
169
- }
170
- ```
171
- Here `role` can be `user`, `assistant`, or `observation` if the agent has executed a
172
- function and needs to observe the output. `content` is always the text message, and
173
- `media` is a list of media files that can be images or videos that you want the agent
174
- to examine.
175
-
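- For instance, after the agent has executed a function, the conversation may contain an
- observation message; the content below is purely illustrative:
-
- ```json
- {
-     "role": "observation",
-     "content": "[Executed code output] Counted 5 people in the image"
- }
- ```
-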
176
- When the agent responds, inside its `content` you will find the following data structure:
177
- ```json
178
- {
179
- "thoughts": "The user has greeted me. I will respond with a greeting and ask how I can assist them.",
180
- "response": "Hello! How can I assist you today?",
181
- "let_user_respond": true
182
- }
183
- ```
184
-
185
- `thoughts` are the thoughts the agent had when processing the message, `response` is the
186
- response it generated, which could contain a Python execution, and `let_user_respond` is
187
- a boolean that tells the agent whether it should wait for the user to respond before
188
- continuing; for example, it may want to execute code and look at the output before
189
- letting the user respond.
190
-
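- As a rough sketch, you can pull those fields out of the returned messages; this assumes
- the assistant `content` is the dict-style string shown in the earlier chat example:
-
- ```python
- import ast
-
- # `resp` is the list of messages returned by the agent, as in the chat example above;
- # the last entry is the assistant's reply and its content holds the structure above
- reply = ast.literal_eval(resp[-1]["content"])
- print(reply["thoughts"])
- print(reply["response"])
- print(reply["let_user_respond"])
- ```
-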
191
- ### Chatting and Artifacts
192
- If you run `chat_with_artifacts` you will also notice an `Artifact` object. `Artifact`s
193
- are a way to sync files between local and remote environments. The agent will read and
194
- write to the artifact object, which is just a pickle object, when it wants to save or
195
- load files.
196
-
197
- ```python
198
- import vision_agent as va
199
- from vision_agent.tools.meta_tools import Artifact
200
-
201
- artifacts = Artifact("artifact.pkl")
202
- # you can store text files such as code or images in the artifact
203
- with open("code.py", "r") as f:
204
- artifacts["code.py"] = f.read()
205
- with open("image.png", "rb") as f:
206
- artifacts["image.png"] = f.read()
207
-
208
- agent = va.agent.VisionAgent()
209
- response, artifacts = agent.chat_with_artifacts(
210
- [
211
- {
212
- "role": "user",
213
- "content": "Can you write code to count the number of people in image.png",
214
- }
215
- ],
216
- artifacts=artifacts,
217
- )
218
- ```
219
-
220
- ### Running the Streamlit App
221
- To test out things quickly, sometimes it's easier to run the Streamlit app locally to
222
- chat with `VisionAgent`. To do so, run the following commands:
223
-
224
- ```bash
225
- pip install -r examples/chat/requirements.txt
226
- export WORKSPACE=/path/to/your/workspace
227
- export ZMQ_PORT=5555
228
- streamlit run examples/chat/app.py
229
- ```
230
- You can find more details about the Streamlit app [here](examples/chat/). There are
231
- still some concurrency issues with the app, so if you find it behaving strangely,
232
- clear your workspace and restart it.
233
-
234
- ## Tools
235
- There are a variety of tools for the model or the user to use. Some are executed locally
236
- while others are hosted for you. You can easily access them yourself; for example, if
237
- you want to run `owl_v2_image` and visualize the output, you can run:
238
-
239
- ```python
240
- import vision_agent.tools as T
241
- import matplotlib.pyplot as plt
242
-
243
- image = T.load_image("dogs.jpg")
244
- dets = T.owl_v2_image("dogs", image)
245
- # visualize the owl_v2_image bounding boxes on the image
246
- viz = T.overlay_bounding_boxes(image, dets)
247
-
248
- # plot the image in matplotlib or save it
249
- plt.imshow(viz)
250
- plt.show()
251
- T.save_image(viz, "viz.png")
252
- ```
253
-
254
- Or if you want to run on video data, for example to track sharks and people at 10 FPS:
255
-
256
- ```python
257
- frames_and_ts = T.extract_frames_and_timestamps("sharks.mp4", fps=10)
258
- # extract only the frames from frames and timestamps
259
- frames = [f["frame"] for f in frames_and_ts]
260
- # track the sharks and people in the frames, returns segmentation masks
261
- track = T.florence2_sam2_video_tracking("shark, person", frames)
262
- # plot the segmentation masks on the frames
263
- viz = T.overlay_segmentation_masks(frames, track)
264
- T.save_video(viz, "viz.mp4")
265
- ```
266
-
267
- You can find all available tools in `vision_agent/tools/tools.py`; however,
268
- `VisionAgent` only utilizes a subset of tools that have been tested and provide
269
- the best performance. Those can be found in the same file under the `FUNCTION_TOOLS`
270
- variable inside `tools.py`.
271
-
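- For instance, you can print the names of that tested subset; this is a minimal sketch
- that assumes `FUNCTION_TOOLS` is a list of the tool functions themselves:
-
- ```python
- # FUNCTION_TOOLS lives in vision_agent/tools/tools.py as noted above
- from vision_agent.tools.tools import FUNCTION_TOOLS
-
- # list the tools the agent will choose from (assumes FUNCTION_TOOLS holds callables)
- for tool in FUNCTION_TOOLS:
-     print(tool.__name__)
- ```
-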
272
- #### Custom Tools
273
- If you can't find the tool you are looking for you can also add custom tools to the
274
- agent:
275
-
276
- ```python
277
- import vision_agent as va
278
- import numpy as np
279
-
280
- @va.tools.register_tool(imports=["import numpy as np"])
281
- def custom_tool(image_path: str) -> np.ndarray:
282
- """My custom tool documentation.
283
-
284
- Parameters:
285
- image_path (str): The path to the image.
286
-
287
- Returns:
288
- np.ndarray: The result of the tool.
289
-
290
- Example
291
- -------
292
- >>> custom_tool("image.jpg")
293
- """
294
-
295
- return np.zeros((10, 10))
296
- ```
297
-
298
- You need to ensure you call `@va.tools.register_tool` with any imports it uses. Global
299
- variables will not be captured by `register_tool`, so you need to include them in the
300
- function. Make sure the documentation is in the same format as above, with a description,
301
- `Parameters:`, `Returns:`, and `Example\n-------`. The `VisionAgent` will use your
302
- documentation when trying to determine when to use your tool. You can find an example
303
- use case [here](examples/custom_tools/) for adding a custom tool. Note you may need to
304
- play around with the prompt to ensure the model picks the tool when you want it to.
305
-
306
- Can't find the tool you need and want us to host it? Check out our
307
- [vision-agent-tools](https://github.com/landing-ai/vision-agent-tools) repository where
308
- we add the source code for all the tools used in `VisionAgent`.
309
-
310
- ## LMMs
311
- All of our agents are built on top of LMMs, or Large Multimodal Models. We provide a thin
312
- abstraction layer on top of the underlying provider APIs to be able to more easily
313
- handle media.
314
-
315
-
316
- ```python
317
- from vision_agent.lmm import AnthropicLMM
318
-
319
- lmm = AnthropicLMM()
320
- response = lmm("Describe this image", media=["apple.jpg"])
321
- >>> "This is an image of an apple."
322
- ```
323
-
324
- Or you can use the `OpenAI`-style chat interface and pass it other media like videos:
325
-
326
- ```python
327
- response = lmm(
328
- [
329
- {
330
- "role": "user",
331
- "content": "What's going on in this video?",
332
- "media": ["video.mp4"]
333
- }
334
- ]
335
- )
336
- ```
337
-
338
- ## VisionAgent Coder
339
- Under the hood, `VisionAgent` uses `VisionAgentCoder` to generate code to solve
340
- vision tasks. You can use `VisionAgentCoder` directly to generate code if you want:
341
-
342
- ```python
343
- >>> from vision_agent.agent import VisionAgentCoder
344
- >>> agent = VisionAgentCoder()
345
- >>> code = agent("What percentage of the area of the jar is filled with coffee beans?", media="jar.jpg")
346
- ```
347
-
348
- This produces the following code:
349
- ```python
350
- from vision_agent.tools import load_image, florence2_sam2_image
351
-
352
- def calculate_filled_percentage(image_path: str) -> float:
353
- # Step 1: Load the image
354
- image = load_image(image_path)
355
-
356
- # Step 2: Segment the jar
357
- jar_segments = florence2_sam2_image("jar", image)
358
-
359
- # Step 3: Segment the coffee beans
360
- coffee_beans_segments = florence2_sam2_image("coffee beans", image)
361
-
362
- # Step 4: Calculate the area of the segmented jar
363
- jar_area = 0
364
- for segment in jar_segments:
365
- jar_area += segment['mask'].sum()
366
-
367
- # Step 5: Calculate the area of the segmented coffee beans
368
- coffee_beans_area = 0
369
- for segment in coffee_beans_segments:
370
- coffee_beans_area += segment['mask'].sum()
371
-
372
- # Step 6: Compute the percentage of the jar area that is filled with coffee beans
373
- if jar_area == 0:
374
- return 0.0 # To avoid division by zero
375
- filled_percentage = (coffee_beans_area / jar_area) * 100
376
-
377
- # Step 7: Return the computed percentage
378
- return filled_percentage
379
- ```
380
-
381
- To better understand how the model came up with its answer, you can run it in debug
382
- mode by passing in the `verbosity` argument:
383
-
384
- ```python
385
- >>> agent = VisionAgentCoder(verbosity=2)
386
- ```
387
-
388
- ### Detailed Usage
389
- You can also have it return more information by calling `generate_code`. The format
390
- of the input is a list of dictionaries with the keys `role`, `content`, and `media`:
391
-
392
- ```python
393
- >>> results = agent.generate_code([{"role": "user", "content": "What percentage of the area of the jar is filled with coffee beans?", "media": ["jar.jpg"]}])
394
- >>> print(results)
395
- {
396
- "code": "from vision_agent.tools import ..."
397
- "test": "calculate_filled_percentage('jar.jpg')",
398
- "test_result": "...",
399
- "plans": {"plan1": {"thoughts": "..."}, ...},
400
- "plan_thoughts": "...",
401
- "working_memory": ...,
402
- }
403
- ```
404
-
405
- With this you can examine more detailed information such as the testing code, testing
406
- results, plans, or working memory it used to complete the task.
407
-
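- For example, you can save the generated code and its test to disk using the keys shown
- above (the file names here are arbitrary):
-
- ```python
- # persist the generated code and its test from the result dict returned above
- with open("generated_code.py", "w") as f:
-     f.write(results["code"])
- with open("test_generated_code.py", "w") as f:
-     f.write(results["test"])
- print(results["test_result"])
- ```
-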
408
- ### Multi-turn conversations
409
- You can have multi-turn conversations with VisionAgent as well, giving it feedback on
410
- the code and having it make updates. You just need to add the code as a response from the
411
- assistant:
412
-
413
- ```python
414
- agent = va.agent.VisionAgentCoder(verbosity=2)
415
- conv = [
416
- {
417
- "role": "user",
418
- "content": "Are these workers wearing safety gear? Output only a True or False value.",
419
- "media": ["workers.png"],
420
- }
421
- ]
422
- result = agent.generate_code(conv)
423
- code = result["code"]
424
- conv.append({"role": "assistant", "content": code})
425
- conv.append(
426
- {
427
- "role": "user",
428
- "content": "Can you also return the number of workers wearing safety gear?",
429
- }
430
- )
431
- result = agent.generate_code(conv)
432
- ```
433
-
434
-
435
- ## Additional Backends
436
- ### E2B Code Execution
437
- If you wish to run your code on the E2B backend, make sure you have your `E2B_API_KEY`
438
- set and then set `CODE_SANDBOX_RUNTIME=e2b` in your environment variables. This will
439
- run all the agent-generated code on the E2B backend.
440
-
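- For example (the key value is a placeholder):
-
- ```bash
- # send all agent-generated code to an E2B sandbox instead of running it locally
- export E2B_API_KEY="your-e2b-api-key"
- export CODE_SANDBOX_RUNTIME=e2b
- ```
-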
441
- ### Anthropic
442
- `AnthropicVisionAgentCoder` uses Anthropic. To get started you just need to get an
443
- Anthropic API key and set it in your environment variables:
444
-
445
- ```bash
446
- export ANTHROPIC_API_KEY="your-api-key"
447
- ```
448
-
449
- Because Anthropic does not support embedding models, the default embedding model used
450
- is the OpenAI model, so you will also need to set your OpenAI API key:
451
-
452
- ```bash
453
- export OPEN_AI_API_KEY="your-api-key"
454
- ```
455
-
456
- Usage is the same as `VisionAgentCoder`:
457
-
458
- ```python
459
- >>> import vision_agent as va
460
- >>> agent = va.agent.AnthropicVisionAgentCoder()
461
- >>> agent("Count the apples in the image", media="apples.jpg")
462
- ```
463
-
464
- ### OpenAI
465
- `OpenAIVisionAgentCoder` uses OpenAI. To get started you just need to get an OpenAI API
466
- key and set it in your environment variables:
467
-
468
- ```bash
469
- export OPEN_AI_API_KEY="your-api-key"
470
- ```
471
-
472
- Usage is the same as `VisionAgentCoder`:
473
-
474
- ```python
475
- >>> import vision_agent as va
476
- >>> agent = va.agent.OpenAIVisionAgentCoder()
477
- >>> agent("Count the apples in the image", media="apples.jpg")
478
- ```
479
-
480
-
481
- ### Ollama
482
- `OllamaVisionAgentCoder` uses Ollama. To get started you must download a few models:
483
-
484
- ```bash
485
- ollama pull llama3.2-vision
486
- ollama pull mxbai-embed-large
487
- ```
488
-
489
- `llama3.2-vision` is used for the `OllamaLMM` in `OllamaVisionAgentCoder`. Because
490
- `llama3.2-vision` is a smaller model, you **WILL see performance degradation** compared to
491
- using Anthropic or OpenAI models. `mxbai-embed-large` is the embedding model used to
492
- look up tools. You can use it just like you would use `VisionAgentCoder`:
493
-
494
- ```python
495
- >>> import vision_agent as va
496
- >>> agent = va.agent.OllamaVisionAgentCoder()
497
- >>> agent("Count the apples in the image", media="apples.jpg")
498
- ```
499
- > WARNING: VisionAgent doesn't work well unless the underlying LMM is sufficiently powerful. Do not expect good results or even working code with smaller models like Llama 3.1 8B.
500
-
501
- ### Azure OpenAI
502
- `AzureVisionAgentCoder` uses Azure OpenAI models. To get started follow the Azure Setup
503
- section below. You can use it just like you would use `VisionAgentCoder`:
504
-
505
- ```python
506
- >>> import vision_agent as va
507
- >>> agent = va.agent.AzureVisionAgentCoder()
508
- >>> agent("Count the apples in the image", media="apples.jpg")
509
- ```
510
-
511
-
512
- ### Azure Setup
513
- If you want to use Azure OpenAI models, you need to have two OpenAI model deployments:
514
-
515
- 1. OpenAI GPT-4o model
516
- 2. OpenAI text embedding model
517
-
518
- <img width="1201" alt="Screenshot 2024-06-12 at 5 54 48 PM" src="https://github.com/landing-ai/vision-agent/assets/2736300/da125592-b01d-45bc-bc99-d48c9dcdfa32">
519
-
520
- Then you can set the following environment variables:
521
-
522
- ```bash
523
- export AZURE_OPENAI_API_KEY="your-api-key"
524
- export AZURE_OPENAI_ENDPOINT="your-endpoint"
525
- # The deployment name of your Azure OpenAI chat model
526
- export AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME="your_gpt4o_model_deployment_name"
527
- # The deployment name of your Azure OpenAI text embedding model
528
- export AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME="your_embedding_model_deployment_name"
529
- ```
530
-
531
- > NOTE: make sure your Azure model deployments have enough quota (tokens per minute) to support them. The default value of 8,000 TPM is not enough.
532
-
533
- You can then run VisionAgent using the Azure OpenAI models:
534
-
535
- ```python
536
- import vision_agent as va
537
- agent = va.agent.AzureVisionAgentCoder()
538
- ```
539
-
540
- ******************************************************************************************************************************
541
-
542
- ## Q&A
543
-
544
- ### How to get started with OpenAI API credits
545
-
546
- 1. Visit the [OpenAI API platform](https://beta.openai.com/signup/) to sign up for an API key.
547
- 2. Follow the instructions to purchase and manage your API credits.
548
- 3. Ensure your API key is correctly configured in your project settings.
549
-
550
- Failure to have sufficient API credits may result in limited or no functionality for
551
- the features that rely on the OpenAI API. For more details on managing your API usage
552
- and credits, please refer to the OpenAI API documentation.
553
-
554
-
555
- ******************************************************************************************************************************
556
-
557
- ## Troubleshooting
558
-
559
- ### 1. Encountering `ModuleNotFoundError` when VisionAgent generates code
560
-
561
- If you keep seeing a `ModuleNotFoundError` while VisionAgent is generating code, and VisionAgent gets stuck because it cannot install the missing dependencies, you can manually add the missing dependencies to your Python environment with `pip install <missing_package_name>`, and then try generating the code again.
562
-