vision-agent 0.0.14__tar.gz → 0.0.16__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
PKG-INFO
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: vision-agent
- Version: 0.0.14
+ Version: 0.0.16
  Summary: Toolset for Vision Agent
  Author: Landing AI
  Author-email: dev@landing.ai
@@ -26,12 +26,18 @@ Description-Content-Type: text/markdown
  # 🔍 Vision Agent

  [![](https://dcbadge.vercel.app/api/server/wPdN8RCYew)](https://discord.gg/wPdN8RCYew)
+ ![ci_status](https://github.com/landing-ai/vision-agent/actions/workflows/ci_cd.yml/badge.svg)
+ [![PyPI version](https://badge.fury.io/py/vision-agent.svg)](https://badge.fury.io/py/vision-agent)
+ ![version](https://img.shields.io/pypi/pyversions/vision-agent)

- Vision Agent is a minimal library for educational purposes that helps you utilize multimodal models to organize and structure your image data. Checkout our discord for roadmaps and updates! One of the problems of dealing with image data is it can be difficult to organize and quickly search. For example, you might have a bunch of pictures of houses and want to count how many yellow houses you have, or how many houses with adobe roofs. This library utilizes LMMs to help create these tags or descriptions and allow you to search over them, or use them in a database to do other operations.
+
+ Vision Agent is a library for that helps you to use multimodal models to organize and structure your image data. Check out our discord for roadmaps and updates!
+
+ One of the problems of dealing with image data is it can be difficult to organize and search. For example, you might have a bunch of pictures of houses and want to count how many yellow houses you have, or how many houses with adobe roofs. The vision agent library uses LMMs to help create tags or descriptions of images to allow you to search over them, or use them in a database to carry out other operations.

  ## Getting Started
  ### LMMs
- To get started you can create an LMM and start generating text from images. The following code will grab the LLaVA-1.6 34B model and generate a description of the image you pass it.
+ To get started, you can use an LMM to start generating text from images. The following code will use the LLaVA-1.6 34B model to generate a description of the image you pass it.

  ```python
  import vision_agent as va
README.md
@@ -1,12 +1,18 @@
  # 🔍 Vision Agent

  [![](https://dcbadge.vercel.app/api/server/wPdN8RCYew)](https://discord.gg/wPdN8RCYew)
+ ![ci_status](https://github.com/landing-ai/vision-agent/actions/workflows/ci_cd.yml/badge.svg)
+ [![PyPI version](https://badge.fury.io/py/vision-agent.svg)](https://badge.fury.io/py/vision-agent)
+ ![version](https://img.shields.io/pypi/pyversions/vision-agent)

- Vision Agent is a minimal library for educational purposes that helps you utilize multimodal models to organize and structure your image data. Checkout our discord for roadmaps and updates! One of the problems of dealing with image data is it can be difficult to organize and quickly search. For example, you might have a bunch of pictures of houses and want to count how many yellow houses you have, or how many houses with adobe roofs. This library utilizes LMMs to help create these tags or descriptions and allow you to search over them, or use them in a database to do other operations.
+
+ Vision Agent is a library for that helps you to use multimodal models to organize and structure your image data. Check out our discord for roadmaps and updates!
+
+ One of the problems of dealing with image data is it can be difficult to organize and search. For example, you might have a bunch of pictures of houses and want to count how many yellow houses you have, or how many houses with adobe roofs. The vision agent library uses LMMs to help create tags or descriptions of images to allow you to search over them, or use them in a database to carry out other operations.

  ## Getting Started
  ### LMMs
- To get started you can create an LMM and start generating text from images. The following code will grab the LLaVA-1.6 34B model and generate a description of the image you pass it.
+ To get started, you can use an LMM to start generating text from images. The following code will use the LLaVA-1.6 34B model to generate a description of the image you pass it.

  ```python
  import vision_agent as va
pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"

  [tool.poetry]
  name = "vision-agent"
- version = "0.0.14"
+ version = "0.0.16"
  description = "Toolset for Vision Agent"
  authors = ["Landing AI <dev@landing.ai>"]
  readme = "README.md"
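The README hunks above cut off at the hunk boundary right after `import vision_agent as va`, so the full code example referenced in the "### LMMs" section is not visible in this diff. A minimal sketch of the kind of usage that passage describes (generating a description of an image with the LLaVA-1.6 34B model) is shown below; the `get_lmm` helper, the `generate` call, and the image path are illustrative assumptions, not taken from this diff.

```python
import vision_agent as va

# Illustrative sketch only: the loader and method names below are assumptions,
# since the diff truncates the README's example after the import line.
model = va.lmm.get_lmm("llava")  # assumed helper returning an LMM backed by LLaVA-1.6 34B
description = model.generate("Describe this image", "house.png")  # assumed (prompt, image path) signature
print(description)
```

If the helper names differ in 0.0.16, the pattern the README describes stays the same: load an LMM, then call it with a prompt and an image to get generated text back.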