langchainrb 0.19.4 → 0.19.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 65a5c3e4eb0dcf204f75f20fde73d4e049fe657177e19d3fbcc8ffefad059776
- data.tar.gz: 167a2e8c9305859afc8d384a3b2ff33ea7fc77f7abe7feb93a8f6c104effd2f3
+ metadata.gz: 5851ca8d3c014468081f957ecb1f76ab7d50e6d1519c265ab4d52ec39c7bc8b9
+ data.tar.gz: 4850f1636545fedca6a1c7f4d17bc4a07309708219d2a06024b14adb17a275a5
  SHA512:
- metadata.gz: 7b65c3e5ef0f7774f634157a98e2fdb2db57fcb9981c96c0ae476ef78ee4a4a3d4822c48339543b0cd0e4d708bf043f2d9653fdfdda5f0df093ea01839d033e3
- data.tar.gz: e93b0e741cf729697067a4d89b0c938e8261cc8df3d0b30dc0a1dbbb03a0f6100ede5120e2ba8f2d7cc65cf6b5707061d64a4b6e101020413713ef445c0265be
+ metadata.gz: b46baf32e44f9134aceea57957045b37466a23a28c709c03df867071de9615bcf8803a97ec64fa3194ee4d3e13a686389a7c0b9d870c5f9716b5ff361fb8e873
+ data.tar.gz: 4e7112d5cb46857e89bbd25f10dae57ec38085027ee6c5ea523e6637218fa97899a0e9deeb2fbf7a116cd6a70a6f17a4c4b294d31c4edf703cce401477ae3a0a
data/CHANGELOG.md CHANGED
@@ -9,7 +9,15 @@
  - [DOCS]: Documentation changes. No changes to the library's behavior.
  - [SECURITY]: A change which fixes a security vulnerability.

- ## [Unreleased]
+ ## [0.19.5]
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/859] Add metadata support to PgVector storage
+ - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/939] Fix Langchain::Vectorsearch::Milvus initializer by passing :api_key
+ - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/953] Handle nil response in OpenAI LLM streaming
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/956] Deprecate `Langchain::Vectorsearch::Epsilla` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/961] Deprecate `Langchain::LLM::LlamaCpp` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/962] Deprecate `Langchain::LLM::AI21` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/971] Exclude `temperature` from being automatically added to OpenAI LLM parameters
+ - [OPTIM] [https://github.com/patterns-ai-core/langchainrb/pull/977] Enable `Langchain::Tool::RubyCodeInterpreter` on Ruby 3.3+

  ## [0.19.4] - 2025-02-17
  - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/894] Tools can now output image_urls, and all tool output must be wrapped by a tool_response() method
data/README.md CHANGED
@@ -2,14 +2,14 @@
  ---
  ⚡ Building LLM-powered applications in Ruby ⚡

- For deep Rails integration see: [langchainrb_rails](https://github.com/andreibondarev/langchainrb_rails) gem.
+ For deep Rails integration see: [langchainrb_rails](https://github.com/patterns-ai-core/langchainrb_rails) gem.

  Available for paid consulting engagements! [Email me](mailto:andrei@sourcelabs.io).

- ![Tests status](https://github.com/andreibondarev/langchainrb/actions/workflows/ci.yml/badge.svg?branch=main)
+ ![Tests status](https://github.com/patterns-ai-core/langchainrb/actions/workflows/ci.yml/badge.svg?branch=main)
  [![Gem Version](https://badge.fury.io/rb/langchainrb.svg)](https://badge.fury.io/rb/langchainrb)
  [![Docs](http://img.shields.io/badge/yard-docs-blue.svg)](http://rubydoc.info/gems/langchainrb)
- [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/andreibondarev/langchainrb/blob/main/LICENSE.txt)
+ [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/patterns-ai-core/langchainrb/blob/main/LICENSE.txt)
  [![](https://dcbadge.vercel.app/api/server/WDARp7J2n8?compact=true&style=flat)](https://discord.gg/WDARp7J2n8)
  [![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40rushing_andrei)](https://twitter.com/rushing_andrei)

@@ -57,7 +57,6 @@ The `Langchain::LLM` module provides a unified interface for interacting with va

  ## Supported LLM Providers

- - AI21
  - Anthropic
  - AWS Bedrock
  - Azure OpenAI
@@ -65,7 +64,6 @@ The `Langchain::LLM` module provides a unified interface for interacting with va
  - Google Gemini
  - Google Vertex AI
  - HuggingFace
- - LlamaCpp
  - Mistral AI
  - Ollama
  - OpenAI
@@ -369,7 +367,7 @@ fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
  fix_parser.parse(llm_response)
  ```

- See [here](https://github.com/andreibondarev/langchainrb/tree/main/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb) for a concrete example
+ See [here](https://github.com/patterns-ai-core/langchainrb/tree/main/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb) for a concrete example

  ## Building Retrieval Augment Generation (RAG) system
  RAG is a methodology that assists LLMs generate accurate and up-to-date information.
@@ -387,7 +385,6 @@ Langchain.rb provides a convenient unified interface on top of supported vectors
  | Database | Open-source | Cloud offering |
  | -------- |:------------------:| :------------: |
  | [Chroma](https://trychroma.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ |
- | [Epsilla](https://epsilla.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ |
  | [Hnswlib](https://github.com/nmslib/hnswlib/?utm_source=langchainrb&utm_medium=github) | ✅ | ❌ |
  | [Milvus](https://milvus.io/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ Zilliz Cloud |
  | [Pinecone](https://www.pinecone.io/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ |
@@ -420,7 +417,6 @@ client = Langchain::Vectorsearch::Weaviate.new(
  You can instantiate any other supported vector search database:
  ```ruby
  client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"`
- client = Langchain::Vectorsearch::Epsilla.new(...) # `gem "epsilla-ruby", "~> 0.0.3"`
  client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"`
  client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.3"`
  client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 0.1.6"`
@@ -639,7 +635,7 @@ ragas.score(answer: "", question: "", context: "")
  ```

  ## Examples
- Additional examples available: [/examples](https://github.com/andreibondarev/langchainrb/tree/main/examples)
+ Additional examples available: [/examples](https://github.com/patterns-ai-core/langchainrb/tree/main/examples)

  ## Logging

@@ -665,7 +661,7 @@ gem install unicode -- --with-cflags="-Wno-incompatible-function-pointer-types"

  ## Development

- 1. `git clone https://github.com/andreibondarev/langchainrb.git`
+ 1. `git clone https://github.com/patterns-ai-core/langchainrb.git`
  2. `cp .env.example .env`, then fill out the environment variables in `.env`
  3. `bundle exec rake` to ensure that the tests pass and to run standardrb
  4. `bin/console` to load the gem in a REPL session. Feel free to add your own instances of LLMs, Tools, Agents, etc. and experiment with them.
@@ -680,7 +676,7 @@ Join us in the [Langchain.rb](https://discord.gg/WDARp7J2n8) Discord server.

  ## Contributing

- Bug reports and pull requests are welcome on GitHub at https://github.com/andreibondarev/langchainrb.
+ Bug reports and pull requests are welcome on GitHub at https://github.com/patterns-ai-core/langchainrb.

  ## License

@@ -17,6 +17,8 @@ module Langchain::LLM
  }.freeze

  def initialize(api_key:, default_options: {})
+ Langchain.logger.warn "DEPRECATED: `Langchain::LLM::AI21` is deprecated, and will be removed in the next major version. Please use another LLM provider."
+
  depends_on "ai21"

  @client = ::AI21::Client.new(api_key)
@@ -22,10 +22,15 @@ module Langchain::LLM
  #
  # @param api_key [String] The API key to use
  # @param llm_options [Hash] Options to pass to the Anthropic client
- # @param default_options [Hash] Default options to use on every call to LLM, e.g.: { temperature:, completion_model:, chat_model:, max_tokens: }
+ # @param default_options [Hash] Default options to use on every call to LLM, e.g.: { temperature:, completion_model:, chat_model:, max_tokens:, thinking: }
  # @return [Langchain::LLM::Anthropic] Langchain::LLM::Anthropic instance
  def initialize(api_key:, llm_options: {}, default_options: {})
- depends_on "anthropic"
+ begin
+ depends_on "ruby-anthropic", req: "anthropic"
+ rescue Langchain::DependencyHelper::LoadError
+ # Falls back to the older `anthropic` gem if `ruby-anthropic` gem cannot be loaded.
+ depends_on "anthropic"
+ end

  @client = ::Anthropic::Client.new(access_token: api_key, **llm_options)
  @defaults = DEFAULTS.merge(default_options)
@@ -34,7 +39,8 @@ module Langchain::LLM
  temperature: {default: @defaults[:temperature]},
  max_tokens: {default: @defaults[:max_tokens]},
  metadata: {},
- system: {}
+ system: {},
+ thinking: {default: @defaults[:thinking]}
  )
  chat_parameters.ignore(:n, :user)
  chat_parameters.remap(stop: :stop_sequences)
@@ -97,6 +103,7 @@ module Langchain::LLM
  # @option params [String] :system System prompt
  # @option params [Float] :temperature Amount of randomness injected into the response
  # @option params [Array<String>] :tools Definitions of tools that the model may use
+ # @option params [Hash] :thinking Enable extended thinking mode, e.g. { type: "enabled", budget_tokens: 4000 }
  # @option params [Integer] :top_k Only sample from the top K options for each subsequent token
  # @option params [Float] :top_p Use nucleus sampling.
  # @return [Langchain::LLM::AnthropicResponse] The chat completion
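The two hunks above wire a new `thinking` chat parameter into `chat_parameters` (its default taken from `default_options`) and document it. A minimal sketch of passing it through, assuming a valid Anthropic API key and a Claude model that supports extended thinking (the message content and token budgets are illustrative):

```ruby
llm = Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"])

# The hash shape follows the @option doc above.
response = llm.chat(
  messages: [{role: "user", content: "Outline a 3-step migration plan."}],
  max_tokens: 8_000,
  thinking: {type: "enabled", budget_tokens: 4_000}
)
response.chat_completion
```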
@@ -170,7 +177,7 @@ module Langchain::LLM
  "id" => first_block.dig("content_block", "id"),
  "type" => "tool_use",
  "name" => first_block.dig("content_block", "name"),
- "input" => JSON.parse(input).transform_keys(&:to_sym)
+ "input" => input.empty? ? nil : JSON.parse(input).transform_keys(&:to_sym)
  }
  end.compact
  end
@@ -167,7 +167,7 @@ module Langchain::LLM

  def parse_model_id(model_id)
  model_id
- .gsub("us.", "") # Meta append "us." to their model ids
+ .gsub(/^(us|eu|apac|us-gov)\./, "") # Meta append "us." to their model ids, and AWS region prefixes are used by Bedrock cross-region model IDs.
  .split(".")
  end

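The hunk above anchors the prefix strip to the start of the id and covers all Bedrock cross-region prefixes instead of only "us.". A purely illustrative check of the new expression (the model id is an example cross-region inference profile):

```ruby
"us.anthropic.claude-3-5-sonnet-20240620-v1:0"
  .gsub(/^(us|eu|apac|us-gov)\./, "")
  .split(".")
# => ["anthropic", "claude-3-5-sonnet-20240620-v1:0"]
```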
@@ -7,14 +7,12 @@ module Langchain::LLM
  #
  # Langchain.rb provides a common interface to interact with all supported LLMs:
  #
- # - {Langchain::LLM::AI21}
  # - {Langchain::LLM::Anthropic}
  # - {Langchain::LLM::Azure}
  # - {Langchain::LLM::Cohere}
  # - {Langchain::LLM::GoogleGemini}
  # - {Langchain::LLM::GoogleVertexAI}
  # - {Langchain::LLM::HuggingFace}
- # - {Langchain::LLM::LlamaCpp}
  # - {Langchain::LLM::OpenAI}
  # - {Langchain::LLM::Replicate}
  #
@@ -23,6 +23,8 @@ module Langchain::LLM
  # @param n_threads [Integer] The CPU number of threads to use
  # @param seed [Integer] The seed to use
  def initialize(model_path:, n_gpu_layers: 1, n_ctx: 2048, n_threads: 1, seed: 0)
+ Langchain.logger.warn "DEPRECATED: `Langchain::LLM::LlamaCpp` is deprecated, and will be removed in the next major version. Please use `Langchain::LLM::Ollama` for self-hosted LLM inference."
+
  depends_on "llama_cpp"

  @model_path = model_path
@@ -15,7 +15,6 @@ module Langchain::LLM
  class OpenAI < Base
  DEFAULTS = {
  n: 1,
- temperature: 0.0,
  chat_model: "gpt-4o-mini",
  embedding_model: "text-embedding-3-small"
  }.freeze
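Dropping `temperature` from `DEFAULTS` means no temperature is sent to OpenAI unless the caller supplies one (the [BREAKING] entry for PR 971 in the changelog above). A sketch of opting back in explicitly, assuming an `OPENAI_API_KEY` is available:

```ruby
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  default_options: {temperature: 0.0} # restore the previous always-deterministic behaviour
)

# ...or pass it per call:
llm.chat(messages: [{role: "user", content: "Say hi"}], temperature: 0.7)
```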
@@ -173,7 +172,7 @@ module Langchain::LLM

  def with_api_error_handling
  response = yield
- return if response.empty?
+ return if response.nil? || response.empty?

  raise Langchain::LLM::ApiError.new "OpenAI API error: #{response.dig("error", "message")}" if response&.dig("error")

@@ -30,6 +30,7 @@ module Langchain::LLM::Parameters
  presence_penalty: {}, # Range: [-2, 2]
  repetition_penalty: {}, # Range: (0, 2]
  seed: {}, # OpenAI only
+ store: {}, # store messages with OpenAI
 
  # Function-calling
  tools: {default: []},
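The new `store` entry lets the unified chat parameters pass OpenAI's `store` flag through unchanged. A sketch, assuming an OpenAI client as in the previous example:

```ruby
# Ask OpenAI to retain the exchange on their side (per their `store` option).
llm.chat(
  messages: [{role: "user", content: "Summarize this ticket"}],
  store: true
)
```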
@@ -5,7 +5,7 @@ module Langchain::Tool
  # A tool that execute Ruby code in a sandboxed environment.
  #
  # Gem requirements:
- # gem "safe_ruby", "~> 1.0.4"
+ # gem "safe_ruby", "~> 1.0.5"
  #
  # Usage:
  # interpreter = Langchain::Tool::RubyCodeInterpreter.new
@@ -7,7 +7,6 @@ module Langchain::Vectorsearch
  # == Available vector databases
  #
  # - {Langchain::Vectorsearch::Chroma}
- # - {Langchain::Vectorsearch::Epsilla}
  # - {Langchain::Vectorsearch::Elasticsearch}
  # - {Langchain::Vectorsearch::Hnswlib}
  # - {Langchain::Vectorsearch::Milvus}
@@ -30,7 +29,6 @@ module Langchain::Vectorsearch
  # )
  #
  # # You can instantiate other supported vector databases the same way:
- # epsilla = Langchain::Vectorsearch::Epsilla.new(...)
  # milvus = Langchain::Vectorsearch::Milvus.new(...)
  # qdrant = Langchain::Vectorsearch::Qdrant.new(...)
  # pinecone = Langchain::Vectorsearch::Pinecone.new(...)
@@ -144,7 +144,7 @@ module Langchain::Vectorsearch
  # @yield [String] Stream responses back one String at a time
  # @return [String] The answer to the question
  def ask(question:, k: 4, &block)
- search_results = similarity_search(query: question, k: k)
+ search_results = similarity_search(text: question, k: k)

  context = search_results.map do |result|
  result[:input]
@@ -21,6 +21,8 @@ module Langchain::Vectorsearch
  # @param index_name [String] The name of the Epsilla table to use
  # @param llm [Object] The LLM client to use
  def initialize(url:, db_name:, db_path:, index_name:, llm:)
+ Langchain.logger.warn "DEPRECATED: `Langchain::Vectorsearch::Epsilla` is deprecated, and will be removed in the next major version. Please use other vector storage engines."
+
  depends_on "epsilla-ruby", req: "epsilla"

  uri = URI.parse(url)
@@ -16,6 +16,7 @@ module Langchain::Vectorsearch

  @client = ::Milvus::Client.new(
  url: url,
+ api_key: api_key,
  logger: Langchain.logger
  )
  @index_name = index_name
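With this fix the `api_key` accepted by the initializer is actually forwarded to `::Milvus::Client` (PR 939 in the changelog). A sketch of a managed Milvus / Zilliz Cloud style setup; the parameter values are illustrative:

```ruby
client = Langchain::Vectorsearch::Milvus.new(
  url: ENV["MILVUS_URL"],
  api_key: ENV["MILVUS_API_KEY"],
  index_name: "documents",
  llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
) # `gem "milvus", "~> 0.9.3"`
```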
@@ -51,17 +51,30 @@ module Langchain::Vectorsearch
  # Upsert a list of texts to the index
  # @param texts [Array<String>] The texts to add to the index
  # @param ids [Array<Integer>] The ids of the objects to add to the index, in the same order as the texts
+ # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
  # @return [PG::Result] The response from the database including the ids of
  # the added or updated texts.
- def upsert_texts(texts:, ids:)
- data = texts.zip(ids).flat_map do |(text, id)|
- {id: id, content: text, vectors: llm.embed(text: text).embedding.to_s, namespace: namespace}
+ def upsert_texts(texts:, ids:, metadata: nil)
+ metadata = Array.new(texts.size, {}) if metadata.nil?
+
+ data = texts.zip(ids, metadata).flat_map do |text, id, meta|
+ {
+ id: id,
+ content: text,
+ vectors: llm.embed(text: text).embedding.to_s,
+ namespace: namespace,
+ metadata: meta.to_json
+ }
  end
  # @db[table_name.to_sym].multi_insert(data, return: :primary_key)
  @db[table_name.to_sym]
  .insert_conflict(
  target: :id,
- update: {content: Sequel[:excluded][:content], vectors: Sequel[:excluded][:vectors]}
+ update: {
+ content: Sequel[:excluded][:content],
+ vectors: Sequel[:excluded][:vectors],
+ metadata: Sequel[:excluded][:metadata]
+ }
  )
  .multi_insert(data, return: :primary_key)
  end
@@ -69,25 +82,34 @@ module Langchain::Vectorsearch
  # Add a list of texts to the index
  # @param texts [Array<String>] The texts to add to the index
  # @param ids [Array<String>] The ids to add to the index, in the same order as the texts
+ # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
  # @return [Array<Integer>] The the ids of the added texts.
- def add_texts(texts:, ids: nil)
+ def add_texts(texts:, ids: nil, metadata: nil)
+ metadata = Array.new(texts.size, {}) if metadata.nil?
+
  if ids.nil? || ids.empty?
- data = texts.map do |text|
- {content: text, vectors: llm.embed(text: text).embedding.to_s, namespace: namespace}
+ data = texts.zip(metadata).map do |text, meta|
+ {
+ content: text,
+ vectors: llm.embed(text: text).embedding.to_s,
+ namespace: namespace,
+ metadata: meta.to_json
+ }
  end

  @db[table_name.to_sym].multi_insert(data, return: :primary_key)
  else
- upsert_texts(texts: texts, ids: ids)
+ upsert_texts(texts: texts, ids: ids, metadata: metadata)
  end
  end

  # Update a list of ids and corresponding texts to the index
  # @param texts [Array<String>] The texts to add to the index
  # @param ids [Array<String>] The ids to add to the index, in the same order as the texts
+ # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
  # @return [Array<Integer>] The ids of the updated texts.
- def update_texts(texts:, ids:)
- upsert_texts(texts: texts, ids: ids)
+ def update_texts(texts:, ids:, metadata: nil)
+ upsert_texts(texts: texts, ids: ids, metadata: metadata)
  end

  # Remove a list of texts from the index
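These hunks are the user-facing side of the metadata support added in PR 859: `add_texts`, `upsert_texts` and `update_texts` now accept a parallel `metadata:` array that is serialized to JSON alongside each text. A sketch, assuming a pgvector client configured as in the README (the metadata keys are illustrative):

```ruby
client.add_texts(
  texts: ["Q3 revenue grew 12%", "Headcount stayed flat"],
  metadata: [
    {source: "q3_report.pdf", page: 2},
    {source: "hr_update.md", page: 1}
  ]
)

client.update_texts(
  texts: ["Q3 revenue grew 14%"],
  ids: [1],
  metadata: [{source: "q3_report_v2.pdf", page: 2}]
)
```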
@@ -107,6 +129,7 @@ module Langchain::Vectorsearch
  text :content
  column :vectors, "vector(#{vector_dimensions})"
  text namespace_column.to_sym, default: nil
+ jsonb :metadata, default: "{}"
  end
  end
 
@@ -136,6 +159,7 @@ module Langchain::Vectorsearch
  def similarity_search_by_vector(embedding:, k: 4)
  db.transaction do # BEGIN
  documents_model
+ .select(:content, :metadata)
  .nearest_neighbors(:vectors, embedding, distance: operator).limit(k)
  .where(namespace_column.to_sym => namespace)
  end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Langchain
- VERSION = "0.19.4"
+ VERSION = "0.19.5"
  end
data/lib/langchain.rb CHANGED
@@ -29,10 +29,6 @@ loader.inflector.inflect(

  loader.collapse("#{__dir__}/langchain/llm/response")

- # RubyCodeInterpreter does not work with Ruby 3.3;
- # https://github.com/ukutaht/safe_ruby/issues/4
- loader.ignore("#{__dir__}/langchain/tool/ruby_code_interpreter") if RUBY_VERSION >= "3.3.0"
-
  loader.setup

  # Langchain.rb a is library for building LLM-backed Ruby applications. It is an abstraction layer that sits on top of the emerging AI-related tools that makes it easy for developers to consume and string those services together.
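Removing the `loader.ignore` guard (together with the `safe_ruby` bump to `~> 1.0.5` above) makes `Langchain::Tool::RubyCodeInterpreter` loadable on Ruby 3.3+ again (PR 977). A sketch, assuming the tool's `execute(input:)` interface and the `safe_ruby` gem installed:

```ruby
interpreter = Langchain::Tool::RubyCodeInterpreter.new
interpreter.execute(input: "(1..10).sum") # evaluated inside the safe_ruby sandbox
```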
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: langchainrb
  version: !ruby/object:Gem::Version
- version: 0.19.4
+ version: 0.19.5
  platform: ruby
  authors:
  - Andrei Bondarev
  bindir: exe
  cert_chain: []
- date: 2025-02-17 00:00:00.000000000 Z
+ date: 1980-01-02 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: baran
@@ -41,16 +41,22 @@ dependencies:
  name: json-schema
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - "~>"
+ - - ">="
  - !ruby/object:Gem::Version
  version: '4'
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: '6'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
- - - "~>"
+ - - ">="
  - !ruby/object:Gem::Version
  version: '4'
+ - - "<"
+ - !ruby/object:Gem::Version
+ version: '6'
  - !ruby/object:Gem::Dependency
  name: zeitwerk
  requirement: !ruby/object:Gem::Requirement
@@ -113,14 +119,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 3.10.0
+ version: 3.11.0
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 3.10.0
+ version: 3.11.0
  - !ruby/object:Gem::Dependency
  name: yard
  requirement: !ruby/object:Gem::Requirement
@@ -192,19 +198,19 @@ dependencies:
  - !ruby/object:Gem::Version
  version: 0.2.1
  - !ruby/object:Gem::Dependency
- name: anthropic
+ name: ruby-anthropic
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.3'
+ version: '0.4'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: '0.3'
+ version: '0.4'
  - !ruby/object:Gem::Dependency
  name: aws-sdk-bedrockruntime
  requirement: !ruby/object:Gem::Requirement
@@ -275,20 +281,6 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: 8.2.0
- - !ruby/object:Gem::Dependency
- name: epsilla-ruby
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.0.4
- type: :development
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.0.4
  - !ruby/object:Gem::Dependency
  name: eqn
  requirement: !ruby/object:Gem::Requirement
@@ -387,20 +379,6 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: 0.10.3
- - !ruby/object:Gem::Dependency
- name: llama_cpp
- requirement: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.9.4
- type: :development
- prerelease: false
- version_requirements: !ruby/object:Gem::Requirement
- requirements:
- - - "~>"
- - !ruby/object:Gem::Version
- version: 0.9.4
  - !ruby/object:Gem::Dependency
  name: nokogiri
  requirement: !ruby/object:Gem::Requirement
@@ -575,14 +553,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 1.0.4
+ version: 1.0.5
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 1.0.4
+ version: 1.0.5
  - !ruby/object:Gem::Dependency
  name: sequel
  requirement: !ruby/object:Gem::Requirement
@@ -790,7 +768,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
  version: '0'
  requirements: []
- rubygems_version: 3.6.2
+ rubygems_version: 3.6.7
  specification_version: 4
  summary: Build LLM-backed Ruby applications with Ruby's Langchain.rb
  test_files: []