clip-rb 0.3.2 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. checksums.yaml +4 -4
  2. data/README.md +8 -0
  3. data/lib/clip/version.rb +1 -1
  4. metadata +6 -6
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: a485f1ddea43feebe9d17ca0ba10aed4be536b4133ccb42e961a6b70c24ab2d6
-  data.tar.gz: 0f2f033f79b76a05bf752b076acde9d5c25b9b4ce2a153a037d633ecbaf940ec
+  metadata.gz: 439cf05cf7254fd365e7e92bf9850459f7d89955e806ac8f05e15a7f150a06ca
+  data.tar.gz: 686958e82bec0e3492d5c41d2466537cadf2b325f7b4cc3f46c8d57adcc0bd5d
 SHA512:
-  metadata.gz: 197bc9e402a1d9def40d2c6662f4b5ab096eff2968e6bd0dac2b7f6dfcb1ce3accf5a402382c95d83e6ae9b4069c3519596d2eb39759b75fb35e04465bdbc8c6
-  data.tar.gz: 93eb3ad72dec30725ccb62de7a7620fbfecb7da9fc39ffc611ddd293eeb1bd548f91c5c4cd41fa40dec175253408460e07b0bc4f2ada83b3be9748d2d64943e0
+  metadata.gz: 0562c28111e9ec9b57f177972d3caf2e07f571b04143b3dbb24a38ac33d2ae0fcf40429985ac7bfa27ea165808643c72b9358957666353728e6ad8c06d2f5ef4
+  data.tar.gz: eda7f529de010b88b59da53ae700dbc2bb0ca7bd4cf2c1a3dc551186a67d43e6ddb54bb77dac9259af019ed8843e9345461f4b0c12fce9e118c29c4c8f4c7410
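
These digests cover the metadata.gz and data.tar.gz members inside the .gem archive, which is a plain tar file. A minimal Ruby sketch for recomputing the SHA256 values locally, assuming the gem has already been fetched (e.g. via gem fetch clip-rb -v 1.0.1):

require "digest"
require "rubygems/package"

# A .gem file is a tar archive; checksums.yaml records digests of the
# metadata.gz and data.tar.gz members, so hash those members directly.
File.open("clip-rb-1.0.1.gem", "rb") do |io|
  Gem::Package::TarReader.new(io).each do |entry|
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
    puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
  end
end

The SHA512 values can be checked the same way with Digest::SHA512.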
data/README.md CHANGED
@@ -7,6 +7,14 @@
 
 CLIP (Contrastive Language–Image Pre-training) is a powerful neural network developed by OpenAI. It connects text and images by learning shared representations, enabling tasks such as image-to-text matching, zero-shot classification, and visual search. With clip-rb, you can easily encode text and images into high-dimensional embeddings for similarity comparison or use in downstream applications like caption generation and vector search.
 
+## Why do I need this?
+
+It's a key building block for unlabeled image search: you can upload images and then search them by text, or by using another image as a reference.
+
+The other piece you need is k-NN search in a vector database. Generate embeddings for your images with this gem and store them in the database. When a user searches, generate an embedding for their query text or image and run a vector search to find the most relevant images.
+
+See the [neighbor gem](https://github.com/ankane/neighbor) to learn more about vector search.
+
 ---
 
 ## Requirements
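
The workflow the new "Why do I need this?" section describes can be sketched in a few lines of Ruby. This assumes the Clip::Model#encode_text / #encode_image API shown in the gem's README and the neighbor gem's nearest_neighbors scope; the Image model and its vector-typed embedding column are hypothetical:

require "clip"

clip = Clip::Model.new

# Index time: embed each uploaded image and store the vector.
# Image is a hypothetical ActiveRecord model with a vector "embedding"
# column declared via "has_neighbors :embedding" (neighbor gem).
vector = clip.encode_image("uploads/cat.jpg")
Image.create!(path: "uploads/cat.jpg", embedding: vector)

# Query time: embed the text query (or a reference image) the same way,
# then run a k-NN search in the database.
query = clip.encode_text("a photo of a cat")
results = Image.nearest_neighbors(:embedding, query, distance: "cosine").first(5)
results.each { |image| puts image.path }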
data/lib/clip/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Clip
-  VERSION = "0.3.2"
+  VERSION = "1.0.1"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: clip-rb
 version: !ruby/object:Gem::Version
-  version: 0.3.2
+  version: 1.0.1
 platform: ruby
 authors:
 - Krzysztof Hasiński
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-01-27 00:00:00.000000000 Z
+date: 2025-02-01 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: onnxruntime
@@ -84,16 +84,16 @@ dependencies:
   name: mini_magick
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "~>"
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '5.0'
+        version: '0'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "~>"
+    - - ">="
       - !ruby/object:Gem::Version
-        version: '5.0'
+        version: '0'
 description: OpenAI CLIP embeddings, uses ONNX models. Allows to create embeddings
   for images and text
 email:
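
The mini_magick change in the hunk above amounts to dropping the version constraint on that dependency. A sketch of the corresponding gemspec lines (illustrative only; the gemspec itself is not part of this diff):

# 0.3.2 pinned mini_magick to the 5.x series:
#   spec.add_dependency "mini_magick", "~> 5.0"

# 1.0.1 declares the dependency without a constraint; RubyGems records
# an unconstrained dependency as ">= 0" in the gem metadata:
spec.add_dependency "mini_magick"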