clip-rb 1.0.0 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +8 -0
- data/lib/clip/version.rb +1 -1
- metadata +6 -6
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c6c5fa03ca9061b273aa50792bfa67159b213049f9630466d18ec621459c227f
+  data.tar.gz: 23e59e48ab413dde5f61fdcce0f83a7ffd7bbe713c4f289840dedec347c4ed4d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b7ce62a3bbb124e6a5481199a4a56d5ca08b21c0ec71420e9970c543ea320f281cdc820a208087a9122de7d1f61734331346517a97871f24769ca1b77a7206be
+  data.tar.gz: 02a4f1145d2769e85aa8d415c35bc83a83469196e4d792a022f55f0de22e221bf1b19ddc592c9b05b613eafef2cb19aea91f36550d99acaede5dc05f625ed675
data/README.md
CHANGED
@@ -7,6 +7,14 @@
 
 CLIP (Contrastive Language–Image Pre-training) is a powerful neural network developed by OpenAI. It connects text and images by learning shared representations, enabling tasks such as image-to-text matching, zero-shot classification, and visual search. With clip-rb, you can easily encode text and images into high-dimensional embeddings for similarity comparison or use in downstream applications like caption generation and vector search.
 
+## Why do I need this?
+
+It is a key building block for unlabeled image search: you can upload images and then search them by text or by using another image as a reference.
+
+The other piece you need is k-NN search in a vector database. Generate embeddings for your images with this gem and store them in the database. When a user searches, generate an embedding for their query text or image and run a vector search to find the most relevant images.
+
+See the [neighbor gem](https://github.com/ankane/neighbor) to learn more about vector search.
+
 ---
 
 ## Requirements
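The README addition above describes the whole flow in prose: embed each image once, store the vectors, embed the query at search time, and rank by vector similarity. The sketch below illustrates that flow in plain Ruby. It assumes the `Clip::Model#encode_image` / `Clip::Model#encode_text` API shown in clip-rb's README (the ONNX model files are fetched on first use) and a hypothetical `photos/` folder, and it stands in for the vector database with a naive in-memory cosine-similarity scan; in a real application you would persist the embeddings (for example with pgvector) and query them via the neighbor gem linked above.

```ruby
require "clip"

# Build the CLIP model (clip-rb downloads its ONNX weights on first use).
clip = Clip::Model.new

# 1. Index time: embed every image once and keep the vector next to its path.
#    "photos/*.jpg" is a hypothetical folder used for illustration.
index = Dir.glob("photos/*.jpg").map do |path|
  { path: path, embedding: clip.encode_image(path) }
end

# Cosine similarity between two equal-length embedding vectors.
def cosine_similarity(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x })
  dot / norm
end

# 2. Query time: embed the text query and rank images by similarity.
query_embedding = clip.encode_text("a dog playing on the beach")

top_matches = index
  .map { |entry| [entry[:path], cosine_similarity(entry[:embedding], query_embedding)] }
  .max_by(5) { |(_, score)| score }

top_matches.each { |path, score| puts format("%.3f  %s", score, path) }
```

Swapping the in-memory scan for a vector database mostly means storing each `embedding` in a vector column and replacing the `max_by` ranking with a nearest-neighbors query (as the neighbor gem provides); the embedding calls themselves stay the same.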
data/lib/clip/version.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: clip-rb
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.0.2
 platform: ruby
 authors:
 - Krzysztof Hasiński
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-
+date: 2025-02-04 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: onnxruntime
@@ -84,16 +84,16 @@ dependencies:
   name: mini_magick
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '
+        version: '0'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '
+        version: '0'
 description: OpenAI CLIP embeddings, uses ONNX models. Allows to create embeddings
   for images and text
 email: