langchainrb 0.19.3 → 0.19.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 5567aa14daf63f120a6c2b2b2991fbd85b650b4cb4e2ac49703736dacd46197e
- data.tar.gz: ee71c45590998793c343c9620565cfcfdff5cd2d715091f3ad6804d54b814444
+ metadata.gz: 5851ca8d3c014468081f957ecb1f76ab7d50e6d1519c265ab4d52ec39c7bc8b9
+ data.tar.gz: 4850f1636545fedca6a1c7f4d17bc4a07309708219d2a06024b14adb17a275a5
  SHA512:
- metadata.gz: a5d212aadfef3ca07a6c9dc517141f0e6dedb1e414b0e8bbcbc0b7abddd5e2405a8a07ab6cc10efd66ee8997c41b5c8bd82ea8755d0f8e5fea7728db94f82379
- data.tar.gz: a3a5a5c62cc075e8d09c0e6b629e1ec8b118d35d98e4de107c48d779d8b35261de3ad0061e22a4be4b2aefc15bbb25c20f658fe1f0a694ecc87279cd21309ff8
+ metadata.gz: b46baf32e44f9134aceea57957045b37466a23a28c709c03df867071de9615bcf8803a97ec64fa3194ee4d3e13a686389a7c0b9d870c5f9716b5ff361fb8e873
+ data.tar.gz: 4e7112d5cb46857e89bbd25f10dae57ec38085027ee6c5ea523e6637218fa97899a0e9deeb2fbf7a116cd6a70a6f17a4c4b294d31c4edf703cce401477ae3a0a
data/CHANGELOG.md CHANGED
@@ -9,7 +9,19 @@
  - [DOCS]: Documentation changes. No changes to the library's behavior.
  - [SECURITY]: A change which fixes a security vulnerability.

- ## [Unreleased]
+ ## [0.19.5]
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/859] Add metadata support to PgVector storage
+ - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/939] Fix Langchain::Vectorsearch::Milvus initializer by passing :api_key
+ - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/953] Handle nil response in OpenAI LLM streaming
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/956] Deprecate `Langchain::Vectorsearch::Epsilla` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/961] Deprecate `Langchain::LLM::LlamaCpp` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/962] Deprecate `Langchain::LLM::AI21` class
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/971] Exclude `temperature` from being automatically added to OpenAI LLM parameters
+ - [OPTIM] [https://github.com/patterns-ai-core/langchainrb/pull/977] Enable `Langchain::Tool::RubyCodeInterpreter` on Ruby 3.3+
+
+ ## [0.19.4] - 2025-02-17
+ - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/894] Tools can now output image_urls, and all tool output must be wrapped by a tool_response() method
+ - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/921] Fix for Assistant when OpenAI o1/o3 models are used

  ## [0.19.3] - 2025-01-13
  - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/900] Empty text content should not be set when content is nil when using AnthropicMessage
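The [0.19.4] entry above is the most disruptive change in this release range: a custom tool method may no longer return a raw value, and must wrap its output with `tool_response(...)`. A minimal migration sketch, using a stand-in `tool_response` helper whose hash return value is illustrative (the gem's real helper, mixed in by `Langchain::ToolDefinition`, returns a `Langchain::ToolResponse` object):

```ruby
# Stand-in for the helper that Langchain::ToolDefinition mixes into tools.
# The hash shape is illustrative, not the gem's actual ToolResponse class.
def tool_response(content: nil, image_url: nil)
  {content: content, image_url: image_url}
end

# Before 0.19.4 this method could simply return the string;
# from 0.19.4 on, the result must be wrapped.
def search_movie(query:)
  tool_response(content: "Results for #{query}")
end
```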
data/README.md CHANGED
@@ -2,14 +2,14 @@
  ---
  ⚡ Building LLM-powered applications in Ruby ⚡

- For deep Rails integration see: [langchainrb_rails](https://github.com/andreibondarev/langchainrb_rails) gem.
+ For deep Rails integration see: [langchainrb_rails](https://github.com/patterns-ai-core/langchainrb_rails) gem.

  Available for paid consulting engagements! [Email me](mailto:andrei@sourcelabs.io).

- ![Tests status](https://github.com/andreibondarev/langchainrb/actions/workflows/ci.yml/badge.svg?branch=main)
+ ![Tests status](https://github.com/patterns-ai-core/langchainrb/actions/workflows/ci.yml/badge.svg?branch=main)
  [![Gem Version](https://badge.fury.io/rb/langchainrb.svg)](https://badge.fury.io/rb/langchainrb)
  [![Docs](http://img.shields.io/badge/yard-docs-blue.svg)](http://rubydoc.info/gems/langchainrb)
- [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/andreibondarev/langchainrb/blob/main/LICENSE.txt)
+ [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/patterns-ai-core/langchainrb/blob/main/LICENSE.txt)
  [![](https://dcbadge.vercel.app/api/server/WDARp7J2n8?compact=true&style=flat)](https://discord.gg/WDARp7J2n8)
  [![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40rushing_andrei)](https://twitter.com/rushing_andrei)

@@ -57,7 +57,6 @@ The `Langchain::LLM` module provides a unified interface for interacting with va

  ## Supported LLM Providers

- - AI21
  - Anthropic
  - AWS Bedrock
  - Azure OpenAI
@@ -65,7 +64,6 @@ The `Langchain::LLM` module provides a unified interface for interacting with va
  - Google Gemini
  - Google Vertex AI
  - HuggingFace
- - LlamaCpp
  - Mistral AI
  - Ollama
  - OpenAI
@@ -369,7 +367,7 @@ fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm(
  fix_parser.parse(llm_response)
  ```

- See [here](https://github.com/andreibondarev/langchainrb/tree/main/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb) for a concrete example
+ See [here](https://github.com/patterns-ai-core/langchainrb/tree/main/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb) for a concrete example

  ## Building Retrieval Augment Generation (RAG) system
  RAG is a methodology that assists LLMs generate accurate and up-to-date information.
@@ -387,7 +385,6 @@ Langchain.rb provides a convenient unified interface on top of supported vectors
  | Database | Open-source | Cloud offering |
  | -------- |:------------------:| :------------: |
  | [Chroma](https://trychroma.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ |
- | [Epsilla](https://epsilla.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ |
  | [Hnswlib](https://github.com/nmslib/hnswlib/?utm_source=langchainrb&utm_medium=github) | ✅ | ❌ |
  | [Milvus](https://milvus.io/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ Zilliz Cloud |
  | [Pinecone](https://www.pinecone.io/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ |
@@ -420,7 +417,6 @@ client = Langchain::Vectorsearch::Weaviate.new(
  You can instantiate any other supported vector search database:
  ```ruby
  client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"`
- client = Langchain::Vectorsearch::Epsilla.new(...) # `gem "epsilla-ruby", "~> 0.0.3"`
  client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"`
  client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.3"`
  client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 0.1.6"`
@@ -580,11 +576,11 @@ class MovieInfoTool
  end

  def search_movie(query:)
-   ...
+   tool_response(...)
  end

  def get_movie_details(movie_id:)
-   ...
+   tool_response(...)
  end
  end
  ```
@@ -639,7 +635,7 @@ ragas.score(answer: "", question: "", context: "")
  ```

  ## Examples
- Additional examples available: [/examples](https://github.com/andreibondarev/langchainrb/tree/main/examples)
+ Additional examples available: [/examples](https://github.com/patterns-ai-core/langchainrb/tree/main/examples)

  ## Logging

@@ -665,7 +661,7 @@ gem install unicode -- --with-cflags="-Wno-incompatible-function-pointer-types"

  ## Development

- 1. `git clone https://github.com/andreibondarev/langchainrb.git`
+ 1. `git clone https://github.com/patterns-ai-core/langchainrb.git`
  2. `cp .env.example .env`, then fill out the environment variables in `.env`
  3. `bundle exec rake` to ensure that the tests pass and to run standardrb
  4. `bin/console` to load the gem in a REPL session. Feel free to add your own instances of LLMs, Tools, Agents, etc. and experiment with them.
@@ -680,7 +676,7 @@ Join us in the [Langchain.rb](https://discord.gg/WDARp7J2n8) Discord server.

  ## Contributing

- Bug reports and pull requests are welcome on GitHub at https://github.com/andreibondarev/langchainrb.
+ Bug reports and pull requests are welcome on GitHub at https://github.com/patterns-ai-core/langchainrb.

  ## License

@@ -24,7 +24,9 @@ module Langchain
  if tools.any?
  params[:tools] = build_tools(tools)
  params[:tool_choice] = build_tool_choice(tool_choice)
- params[:parallel_tool_calls] = parallel_tool_calls
+ # Temporary fix because OpenAI o1/o3/reasoning models don't support `parallel_tool_calls` parameter.
+ # Set `Assistant.new(parallel_tool_calls: nil, ...)` to avoid the error.
+ params[:parallel_tool_calls] = parallel_tool_calls unless parallel_tool_calls.nil?
  end
  params
  end
@@ -371,9 +371,15 @@ module Langchain

  # Call the callback if set
  tool_execution_callback.call(tool_call_id, tool_name, method_name, tool_arguments) if tool_execution_callback # rubocop:disable Style/SafeNavigation
+
  output = tool_instance.send(method_name, **tool_arguments)

- submit_tool_output(tool_call_id: tool_call_id, output: output)
+ # Handle both ToolResponse and legacy return values
+ if output.is_a?(ToolResponse)
+ add_message(role: @llm_adapter.tool_role, content: output.content, image_url: output.image_url, tool_call_id: tool_call_id)
+ else
+ submit_tool_output(tool_call_id: tool_call_id, output: output)
+ end
  end

  # Build a message
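The new branch in the hunk above can be exercised in isolation. A sketch with a stand-in `ToolResponse` struct (the gem's real class is `Langchain::ToolResponse`) and an illustrative router in place of the Assistant's internals:

```ruby
# Stand-in for Langchain::ToolResponse.
ToolResponse = Struct.new(:content, :image_url, keyword_init: true)

# Mirrors the new dispatch: ToolResponse objects become chat messages
# carrying content and image_url, while legacy raw return values still
# go through the old submit_tool_output path.
def route_tool_output(output)
  if output.is_a?(ToolResponse)
    [:add_message, output.content, output.image_url]
  else
    [:submit_tool_output, output]
  end
end
```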
@@ -17,6 +17,8 @@ module Langchain::LLM
  }.freeze

  def initialize(api_key:, default_options: {})
+ Langchain.logger.warn "DEPRECATED: `Langchain::LLM::AI21` is deprecated, and will be removed in the next major version. Please use another LLM provider."
+
  depends_on "ai21"

  @client = ::AI21::Client.new(api_key)
@@ -22,10 +22,15 @@ module Langchain::LLM
  #
  # @param api_key [String] The API key to use
  # @param llm_options [Hash] Options to pass to the Anthropic client
- # @param default_options [Hash] Default options to use on every call to LLM, e.g.: { temperature:, completion_model:, chat_model:, max_tokens: }
+ # @param default_options [Hash] Default options to use on every call to LLM, e.g.: { temperature:, completion_model:, chat_model:, max_tokens:, thinking: }
  # @return [Langchain::LLM::Anthropic] Langchain::LLM::Anthropic instance
  def initialize(api_key:, llm_options: {}, default_options: {})
- depends_on "anthropic"
+ begin
+ depends_on "ruby-anthropic", req: "anthropic"
+ rescue Langchain::DependencyHelper::LoadError
+ # Falls back to the older `anthropic` gem if `ruby-anthropic` gem cannot be loaded.
+ depends_on "anthropic"
+ end

  @client = ::Anthropic::Client.new(access_token: api_key, **llm_options)
  @defaults = DEFAULTS.merge(default_options)
@@ -34,7 +39,8 @@ module Langchain::LLM
  temperature: {default: @defaults[:temperature]},
  max_tokens: {default: @defaults[:max_tokens]},
  metadata: {},
- system: {}
+ system: {},
+ thinking: {default: @defaults[:thinking]}
  )
  chat_parameters.ignore(:n, :user)
  chat_parameters.remap(stop: :stop_sequences)
@@ -97,6 +103,7 @@ module Langchain::LLM
  # @option params [String] :system System prompt
  # @option params [Float] :temperature Amount of randomness injected into the response
  # @option params [Array<String>] :tools Definitions of tools that the model may use
+ # @option params [Hash] :thinking Enable extended thinking mode, e.g. { type: "enabled", budget_tokens: 4000 }
  # @option params [Integer] :top_k Only sample from the top K options for each subsequent token
  # @option params [Float] :top_p Use nucleus sampling.
  # @return [Langchain::LLM::AnthropicResponse] The chat completion
@@ -170,7 +177,7 @@ module Langchain::LLM
  "id" => first_block.dig("content_block", "id"),
  "type" => "tool_use",
  "name" => first_block.dig("content_block", "name"),
- "input" => JSON.parse(input).transform_keys(&:to_sym)
+ "input" => input.empty? ? nil : JSON.parse(input).transform_keys(&:to_sym)
  }
  end.compact
  end
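The `input.empty? ? nil : ...` guard in the last hunk above matters because a streamed tool-use block can accumulate an empty `input` string (e.g. a tool that takes no arguments), and `JSON.parse("")` raises. The same expression checked standalone, with a hypothetical wrapper method name:

```ruby
require "json"

# Parse accumulated streaming tool input, tolerating the empty string
# instead of letting JSON.parse raise on it.
def parse_tool_input(input)
  input.empty? ? nil : JSON.parse(input).transform_keys(&:to_sym)
end
```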
@@ -167,7 +167,7 @@ module Langchain::LLM

  def parse_model_id(model_id)
  model_id
- .gsub("us.", "") # Meta append "us." to their model ids
+ .gsub(/^(us|eu|apac|us-gov)\./, "") # Meta append "us." to their model ids, and AWS region prefixes are used by Bedrock cross-region model IDs.
  .split(".")
  end

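The widened regex above can be verified on its own: it strips a Bedrock cross-region inference prefix (`us.`, `eu.`, `apac.`, `us-gov.`) only when it leads the model id, then splits into provider and model segments. A standalone copy of the same logic:

```ruby
# Mirrors the updated parse_model_id: drop a leading region prefix,
# then split the remaining id into [provider, model] segments.
def parse_model_id(model_id)
  model_id
    .gsub(/^(us|eu|apac|us-gov)\./, "")
    .split(".")
end
```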
@@ -7,14 +7,12 @@ module Langchain::LLM
  #
  # Langchain.rb provides a common interface to interact with all supported LLMs:
  #
- # - {Langchain::LLM::AI21}
  # - {Langchain::LLM::Anthropic}
  # - {Langchain::LLM::Azure}
  # - {Langchain::LLM::Cohere}
  # - {Langchain::LLM::GoogleGemini}
  # - {Langchain::LLM::GoogleVertexAI}
  # - {Langchain::LLM::HuggingFace}
- # - {Langchain::LLM::LlamaCpp}
  # - {Langchain::LLM::OpenAI}
  # - {Langchain::LLM::Replicate}
  #
@@ -23,6 +23,8 @@ module Langchain::LLM
  # @param n_threads [Integer] The CPU number of threads to use
  # @param seed [Integer] The seed to use
  def initialize(model_path:, n_gpu_layers: 1, n_ctx: 2048, n_threads: 1, seed: 0)
+ Langchain.logger.warn "DEPRECATED: `Langchain::LLM::LlamaCpp` is deprecated, and will be removed in the next major version. Please use `Langchain::LLM::Ollama` for self-hosted LLM inference."
+
  depends_on "llama_cpp"

  @model_path = model_path
@@ -15,7 +15,6 @@ module Langchain::LLM
  class OpenAI < Base
  DEFAULTS = {
  n: 1,
- temperature: 0.0,
  chat_model: "gpt-4o-mini",
  embedding_model: "text-embedding-3-small"
  }.freeze
@@ -173,7 +172,7 @@ module Langchain::LLM

  def with_api_error_handling
  response = yield
- return if response.empty?
+ return if response.nil? || response.empty?

  raise Langchain::LLM::ApiError.new "OpenAI API error: #{response.dig("error", "message")}" if response&.dig("error")

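The added `response.nil?` check covers the streaming case (PR #953) where the block can yield `nil`, on which `.empty?` would raise `NoMethodError`. A simplified, self-contained version of the guard, with a plain `RuntimeError` standing in for `Langchain::LLM::ApiError`:

```ruby
# Simplified with_api_error_handling: tolerate nil and empty responses,
# surface OpenAI error payloads, and pass everything else through.
def with_api_error_handling
  response = yield
  return if response.nil? || response.empty?

  raise "OpenAI API error: #{response.dig("error", "message")}" if response["error"]

  response
end
```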
@@ -30,6 +30,7 @@ module Langchain::LLM::Parameters
  presence_penalty: {}, # Range: [-2, 2]
  repetition_penalty: {}, # Range: (0, 2]
  seed: {}, # OpenAI only
+ store: {}, # store messages with OpenAI

  # Function-calling
  tools: {default: []},
@@ -28,15 +28,4 @@ module Langchain::OutputParsers
  raise NotImplementedError
  end
  end
-
- class OutputParserException < StandardError
- def initialize(message, text)
- @message = message
- @text = text
- end
-
- def to_s
- "#{@message}\nText: #{@text}"
- end
- end
  end
@@ -0,0 +1,10 @@
+ class Langchain::OutputParsers::OutputParserException < StandardError
+ def initialize(message, text)
+ @message = message
+ @text = text
+ end
+
+ def to_s
+ "#{@message}\nText: #{@text}"
+ end
+ end
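The exception class above moved out of the base parser file into its own file, with its behavior unchanged. A standalone copy (without the `Langchain::OutputParsers` namespace) showing what callers see when they stringify a rescued instance:

```ruby
# Mirror of the relocated exception class: carries both the parser's
# error message and the offending text, and joins them in to_s.
class OutputParserException < StandardError
  def initialize(message, text)
    @message = message
    @text = text
  end

  def to_s
    "#{@message}\nText: #{@text}"
  end
end
```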
@@ -26,13 +26,14 @@ module Langchain::Tool
  # Evaluates a pure math expression or if equation contains non-math characters (e.g.: "12F in Celsius") then it uses the google search calculator to evaluate the expression
  #
  # @param input [String] math expression
- # @return [String] Answer
+ # @return [Langchain::Tool::Response] Answer
  def execute(input:)
  Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

- Eqn::Calculator.calc(input)
+ result = Eqn::Calculator.calc(input)
+ tool_response(content: result)
  rescue Eqn::ParseError, Eqn::NoVariableValueError
- "\"#{input}\" is an invalid mathematical expression"
+ tool_response(content: "\"#{input}\" is an invalid mathematical expression")
  end
  end
  end
@@ -49,50 +49,53 @@ module Langchain::Tool

  # Database Tool: Returns a list of tables in the database
  #
- # @return [Array<Symbol>] List of tables in the database
+ # @return [Langchain::Tool::Response] List of tables in the database
  def list_tables
- db.tables
+ tool_response(content: db.tables)
  end

  # Database Tool: Returns the schema for a list of tables
  #
  # @param tables [Array<String>] The tables to describe.
- # @return [String] The schema for the tables
+ # @return [Langchain::Tool::Response] The schema for the tables
  def describe_tables(tables: [])
  return "No tables specified" if tables.empty?

  Langchain.logger.debug("#{self.class} - Describing tables: #{tables}")

- tables
+ result = tables
  .map do |table|
  describe_table(table)
  end
  .join("\n")
+
+ tool_response(content: result)
  end

  # Database Tool: Returns the database schema
  #
- # @return [String] Database schema
+ # @return [Langchain::Tool::Response] Database schema
  def dump_schema
  Langchain.logger.debug("#{self.class} - Dumping schema tables and keys")

  schemas = db.tables.map do |table|
  describe_table(table)
  end
- schemas.join("\n")
+
+ tool_response(content: schemas.join("\n"))
  end

  # Database Tool: Executes a SQL query and returns the results
  #
  # @param input [String] SQL query to be executed
- # @return [Array] Results from the SQL query
+ # @return [Langchain::Tool::Response] Results from the SQL query
  def execute(input:)
  Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

- db[input].to_a
+ tool_response(content: db[input].to_a)
  rescue Sequel::DatabaseError => e
  Langchain.logger.error("#{self.class} - #{e.message}")
- e.message # Return error to LLM
+ tool_response(content: e.message)
  end

  private
@@ -100,7 +103,7 @@ module Langchain::Tool
  # Describes a table and its schema
  #
  # @param table [String] The table to describe
- # @return [String] The schema for the table
+ # @return [Langchain::Tool::Response] The schema for the table
  def describe_table(table)
  # TODO: There's probably a clear way to do all of this below

@@ -127,6 +130,8 @@ module Langchain::Tool
  schema << ",\n" unless fk == db.foreign_key_list(table).last
  end
  schema << ");\n"
+
+ tool_response(content: schema)
  end
  end
  end
@@ -24,21 +24,22 @@ module Langchain::Tool
  end

  def list_directory(directory_path:)
- Dir.entries(directory_path)
+ tool_response(content: Dir.entries(directory_path))
  rescue Errno::ENOENT
- "No such directory: #{directory_path}"
+ tool_response(content: "No such directory: #{directory_path}")
  end

  def read_file(file_path:)
- File.read(file_path)
+ tool_response(content: File.read(file_path))
  rescue Errno::ENOENT
- "No such file: #{file_path}"
+ tool_response(content: "No such file: #{file_path}")
  end

  def write_to_file(file_path:, content:)
  File.write(file_path, content)
+ tool_response(content: "File written successfully")
  rescue Errno::EACCES
- "Permission denied: #{file_path}"
+ tool_response(content: "Permission denied: #{file_path}")
  end
  end
  end
@@ -36,7 +36,7 @@ module Langchain::Tool
  # Executes Google Search and returns the result
  #
  # @param input [String] search query
- # @return [String] Answer
+ # @return [Langchain::Tool::Response] Answer
  def execute(input:)
  Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

@@ -44,31 +44,31 @@ module Langchain::Tool

  answer_box = results[:answer_box_list] ? results[:answer_box_list].first : results[:answer_box]
  if answer_box
- return answer_box[:result] ||
+ return tool_response(content: answer_box[:result] ||
  answer_box[:answer] ||
  answer_box[:snippet] ||
  answer_box[:snippet_highlighted_words] ||
- answer_box.reject { |_k, v| v.is_a?(Hash) || v.is_a?(Array) || v.start_with?("http") }
+ answer_box.reject { |_k, v| v.is_a?(Hash) || v.is_a?(Array) || v.start_with?("http") })
  elsif (events_results = results[:events_results])
- return events_results.take(10)
+ return tool_response(content: events_results.take(10))
  elsif (sports_results = results[:sports_results])
- return sports_results
+ return tool_response(content: sports_results)
  elsif (top_stories = results[:top_stories])
- return top_stories
+ return tool_response(content: top_stories)
  elsif (news_results = results[:news_results])
- return news_results
+ return tool_response(content: news_results)
  elsif (jobs_results = results.dig(:jobs_results, :jobs))
- return jobs_results
+ return tool_response(content: jobs_results)
  elsif (shopping_results = results[:shopping_results]) && shopping_results.first.key?(:title)
- return shopping_results.take(3)
+ return tool_response(content: shopping_results.take(3))
  elsif (questions_and_answers = results[:questions_and_answers])
- return questions_and_answers
+ return tool_response(content: questions_and_answers)
  elsif (popular_destinations = results.dig(:popular_destinations, :destinations))
- return popular_destinations
+ return tool_response(content: popular_destinations)
  elsif (top_sights = results.dig(:top_sights, :sights))
- return top_sights
+ return tool_response(content: top_sights)
  elsif (images_results = results[:images_results]) && images_results.first.key?(:thumbnail)
- return images_results.map { |h| h[:thumbnail] }.take(10)
+ return tool_response(content: images_results.map { |h| h[:thumbnail] }.take(10))
  end

  snippets = []
@@ -110,8 +110,8 @@ module Langchain::Tool
  snippets << local_results
  end

- return "No good search result found" if snippets.empty?
- snippets
+ return tool_response(content: "No good search result found") if snippets.empty?
+ tool_response(content: snippets)
  end

  #
@@ -57,7 +57,7 @@ module Langchain::Tool
  # @param page_size [Integer] The number of results to return per page. 20 is the API's default, 100 is the maximum. Our default is 5.
  # @param page [Integer] Use this to page through the results.
  #
- # @return [String] JSON response
+ # @return [Langchain::Tool::Response] JSON response
  def get_everything(
  q: nil,
  search_in: nil,
@@ -86,7 +86,8 @@ module Langchain::Tool
  params[:pageSize] = page_size if page_size
  params[:page] = page if page

- send_request(path: "everything", params: params)
+ response = send_request(path: "everything", params: params)
+ tool_response(content: response)
  end

  # Retrieve top headlines
@@ -98,7 +99,7 @@ module Langchain::Tool
  # @param page_size [Integer] The number of results to return per page. 20 is the API's default, 100 is the maximum. Our default is 5.
  # @param page [Integer] Use this to page through the results.
  #
- # @return [String] JSON response
+ # @return [Langchain::Tool::Response] JSON response
  def get_top_headlines(
  country: nil,
  category: nil,
@@ -117,7 +118,8 @@ module Langchain::Tool
  params[:pageSize] = page_size if page_size
  params[:page] = page if page

- send_request(path: "top-headlines", params: params)
+ response = send_request(path: "top-headlines", params: params)
+ tool_response(content: response)
  end

  # Retrieve news sources
@@ -126,7 +128,7 @@ module Langchain::Tool
  # @param language [String] The 2-letter ISO-639-1 code of the language you want to get headlines for. Possible options: ar, de, en, es, fr, he, it, nl, no, pt, ru, se, ud, zh.
  # @param country [String] The 2-letter ISO 3166-1 code of the country you want to get headlines for. Possible options: ae, ar, at, au, be, bg, br, ca, ch, cn, co, cu, cz, de, eg, fr, gb, gr, hk, hu, id, ie, il, in, it, jp, kr, lt, lv, ma, mx, my, ng, nl, no, nz, ph, pl, pt, ro, rs, ru, sa, se, sg, si, sk, th, tr, tw, ua, us, ve, za.
  #
- # @return [String] JSON response
+ # @return [Langchain::Tool::Response] JSON response
  def get_sources(
  category: nil,
  language: nil,
@@ -139,7 +141,8 @@ module Langchain::Tool
  params[:category] = category if category
  params[:language] = language if language

- send_request(path: "top-headlines/sources", params: params)
+ response = send_request(path: "top-headlines/sources", params: params)
+ tool_response(content: response)
  end

  private
@@ -5,7 +5,7 @@ module Langchain::Tool
  # A tool that execute Ruby code in a sandboxed environment.
  #
  # Gem requirements:
- # gem "safe_ruby", "~> 1.0.4"
+ # gem "safe_ruby", "~> 1.0.5"
  #
  # Usage:
  # interpreter = Langchain::Tool::RubyCodeInterpreter.new
@@ -27,11 +27,11 @@ module Langchain::Tool
  # Executes Ruby code in a sandboxes environment.
  #
  # @param input [String] ruby code expression
- # @return [String] Answer
+ # @return [Langchain::Tool::Response] Answer
  def execute(input:)
  Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

- safe_eval(input)
+ tool_response(content: safe_eval(input))
  end

  def safe_eval(code)
@@ -41,7 +41,7 @@ module Langchain::Tool
  # @param include_domains [Array<String>] A list of domains to specifically include in the search results. Default is None, which includes all domains.
  # @param exclude_domains [Array<String>] A list of domains to specifically exclude from the search results. Default is None, which doesn't exclude any domains.
  #
- # @return [String] The search results in JSON format.
+ # @return [Langchain::Tool::Response] The search results in JSON format.
  def search(
  query:,
  search_depth: "basic",
@@ -70,7 +70,7 @@ module Langchain::Tool
  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
  http.request(request)
  end
- response.body
+ tool_response(content: response.body)
  end
  end
  end
@@ -33,8 +33,10 @@ module Langchain::Tool
  #
  # @param query [String] The query to search for
  # @param k [Integer] The number of results to return
+ # @return [Langchain::Tool::Response] The response from the server
  def similarity_search(query:, k: 4)
- vectorsearch.similarity_search(query:, k: 4)
+ result = vectorsearch.similarity_search(query:, k: 4)
+ tool_response(content: result)
  end
  end
  end
@@ -55,15 +55,15 @@ module Langchain::Tool
  params = {appid: @api_key, q: [city, state_code, country_code].compact.join(","), units: units}

  location_response = send_request(path: "geo/1.0/direct", params: params.except(:units))
- return location_response if location_response.is_a?(String) # Error occurred
+ return tool_response(content: location_response) if location_response.is_a?(String) # Error occurred

  location = location_response.first
- return "Location not found" unless location
+ return tool_response(content: "Location not found") unless location

  params = params.merge(lat: location["lat"], lon: location["lon"]).except(:q)
  weather_data = send_request(path: "data/2.5/weather", params: params)

- parse_weather_response(weather_data, units)
+ tool_response(content: parse_weather_response(weather_data, units))
  end

  def send_request(path:, params:)
@@ -27,13 +27,13 @@ module Langchain::Tool
  # Executes Wikipedia API search and returns the answer
  #
  # @param input [String] search query
- # @return [String] Answer
+ # @return [Langchain::Tool::Response] Answer
  def execute(input:)
  Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

  page = ::Wikipedia.find(input)
  # It would be nice to figure out a way to provide page.content but the LLM token limit is an issue
- page.summary
+ tool_response(content: page.summary)
  end
  end
  end
@@ -61,6 +61,20 @@ module Langchain::ToolDefinition
  .downcase
  end

+ def self.extended(base)
+ base.include(InstanceMethods)
+ end
+
+ module InstanceMethods
+ # Create a tool response
+ # @param content [String, nil] The content of the tool response
+ # @param image_url [String, nil] The URL of an image
+ # @return [Langchain::ToolResponse] The tool response
+ def tool_response(content: nil, image_url: nil)
+ Langchain::ToolResponse.new(content: content, image_url: image_url)
+ end
+ end
+
  # Manages schemas for functions
  class FunctionSchemas
  def initialize(tool_name)
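The `self.extended` hook above is the standard Ruby idiom for a module that classes `extend` (for class-level macros) but that also needs to supply instance-level helpers: when the module is extended, it includes its `InstanceMethods` submodule into the extending class. A dependency-free sketch of the same wiring, with illustrative names and a hash standing in for the gem's `Langchain::ToolResponse`:

```ruby
module ToolHelpers
  # When a class extends this module, also mix the instance methods in,
  # so instances of that class gain tool_response.
  def self.extended(base)
    base.include(InstanceMethods)
  end

  module InstanceMethods
    def tool_response(content: nil, image_url: nil)
      {content: content, image_url: image_url}
    end
  end
end

class EchoTool
  extend ToolHelpers

  def execute(input:)
    tool_response(content: input)
  end
end
```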
@@ -0,0 +1,24 @@
+ # frozen_string_literal: true
+
+ module Langchain
+ # ToolResponse represents the standardized output of a tool.
+ # It can contain either text content or an image URL.
+ class ToolResponse
+ attr_reader :content, :image_url
+
+ # Initializes a new ToolResponse.
+ #
+ # @param content [String] The text content of the response.
+ # @param image_url [String, nil] Optional URL to an image.
+ def initialize(content: nil, image_url: nil)
+ raise ArgumentError, "Either content or image_url must be provided" if content.nil? && image_url.nil?
+
+ @content = content
+ @image_url = image_url
+ end
+
+ def to_s
+ content.to_s
+ end
+ end
+ end
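The new class has no gem dependencies, so its contract can be checked directly: it accepts `content`, `image_url`, or both, and raises `ArgumentError` when given neither. A standalone copy of the class as added above:

```ruby
module Langchain
  # Standardized tool output: text content and/or an image URL.
  class ToolResponse
    attr_reader :content, :image_url

    def initialize(content: nil, image_url: nil)
      raise ArgumentError, "Either content or image_url must be provided" if content.nil? && image_url.nil?

      @content = content
      @image_url = image_url
    end

    def to_s
      content.to_s
    end
  end
end
```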
@@ -7,7 +7,6 @@ module Langchain::Vectorsearch
  # == Available vector databases
  #
  # - {Langchain::Vectorsearch::Chroma}
- # - {Langchain::Vectorsearch::Epsilla}
  # - {Langchain::Vectorsearch::Elasticsearch}
  # - {Langchain::Vectorsearch::Hnswlib}
  # - {Langchain::Vectorsearch::Milvus}
@@ -30,7 +29,6 @@ module Langchain::Vectorsearch
  # )
  #
  # # You can instantiate other supported vector databases the same way:
- # epsilla = Langchain::Vectorsearch::Epsilla.new(...)
  # milvus = Langchain::Vectorsearch::Milvus.new(...)
  # qdrant = Langchain::Vectorsearch::Qdrant.new(...)
  # pinecone = Langchain::Vectorsearch::Pinecone.new(...)
@@ -144,7 +144,7 @@ module Langchain::Vectorsearch
  # @yield [String] Stream responses back one String at a time
  # @return [String] The answer to the question
  def ask(question:, k: 4, &block)
- search_results = similarity_search(query: question, k: k)
+ search_results = similarity_search(text: question, k: k)

  context = search_results.map do |result|
  result[:input]
@@ -21,6 +21,8 @@ module Langchain::Vectorsearch
    # @param index_name [String] The name of the Epsilla table to use
    # @param llm [Object] The LLM client to use
    def initialize(url:, db_name:, db_path:, index_name:, llm:)
+     Langchain.logger.warn "DEPRECATED: `Langchain::Vectorsearch::Epsilla` is deprecated, and will be removed in the next major version. Please use other vector storage engines."
+
      depends_on "epsilla-ruby", req: "epsilla"

      uri = URI.parse(url)
@@ -16,6 +16,7 @@ module Langchain::Vectorsearch

      @client = ::Milvus::Client.new(
        url: url,
+       api_key: api_key,
        logger: Langchain.logger
      )
      @index_name = index_name
@@ -51,17 +51,30 @@ module Langchain::Vectorsearch
    # Upsert a list of texts to the index
    # @param texts [Array<String>] The texts to add to the index
    # @param ids [Array<Integer>] The ids of the objects to add to the index, in the same order as the texts
+   # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
    # @return [PG::Result] The response from the database including the ids of
    #   the added or updated texts.
-   def upsert_texts(texts:, ids:)
-     data = texts.zip(ids).flat_map do |(text, id)|
-       {id: id, content: text, vectors: llm.embed(text: text).embedding.to_s, namespace: namespace}
+   def upsert_texts(texts:, ids:, metadata: nil)
+     metadata = Array.new(texts.size, {}) if metadata.nil?
+
+     data = texts.zip(ids, metadata).flat_map do |text, id, meta|
+       {
+         id: id,
+         content: text,
+         vectors: llm.embed(text: text).embedding.to_s,
+         namespace: namespace,
+         metadata: meta.to_json
+       }
      end
      # @db[table_name.to_sym].multi_insert(data, return: :primary_key)
      @db[table_name.to_sym]
        .insert_conflict(
          target: :id,
-         update: {content: Sequel[:excluded][:content], vectors: Sequel[:excluded][:vectors]}
+         update: {
+           content: Sequel[:excluded][:content],
+           vectors: Sequel[:excluded][:vectors],
+           metadata: Sequel[:excluded][:metadata]
+         }
        )
        .multi_insert(data, return: :primary_key)
    end
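
The heart of the new `upsert_texts` is its row-building step: default `metadata` to one empty hash per text, zip it alongside `texts` and `ids`, and serialize each hash with `to_json` for the new `jsonb` column. A database-free sketch of just that step follows; the `fake_embed` helper and `build_rows` name are illustrative stand-ins for `llm.embed` and the method body, not langchainrb API.

```ruby
require "json"

# Illustrative stand-in for llm.embed(text: ...).embedding in the real class.
def fake_embed(text)
  [text.length.to_f]
end

# Mirrors the default/zip/to_json logic of the new upsert_texts above.
def build_rows(texts:, ids:, metadata: nil, namespace: nil)
  metadata = Array.new(texts.size, {}) if metadata.nil?

  texts.zip(ids, metadata).map do |text, id, meta|
    {
      id: id,
      content: text,
      vectors: fake_embed(text).to_s,
      namespace: namespace,
      metadata: meta.to_json # serialized into the new jsonb :metadata column
    }
  end
end

rows = build_rows(texts: ["alpha", "beta"], ids: [1, 2],
  metadata: [{source: "a.txt"}, {source: "b.txt"}])
rows.first[:metadata] # => "{\"source\":\"a.txt\"}"

# Omitting metadata falls back to empty hashes, matching the nil default:
build_rows(texts: ["alpha"], ids: [1]).first[:metadata] # => "{}"
```

The real method then hands these rows to Sequel's `insert_conflict`/`multi_insert`, so an existing `id` has its `content`, `vectors`, and now `metadata` overwritten by the excluded (incoming) row.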
@@ -69,25 +82,34 @@ module Langchain::Vectorsearch
    # Add a list of texts to the index
    # @param texts [Array<String>] The texts to add to the index
    # @param ids [Array<String>] The ids to add to the index, in the same order as the texts
+   # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
    # @return [Array<Integer>] The ids of the added texts.
-   def add_texts(texts:, ids: nil)
+   def add_texts(texts:, ids: nil, metadata: nil)
+     metadata = Array.new(texts.size, {}) if metadata.nil?
+
      if ids.nil? || ids.empty?
-       data = texts.map do |text|
-         {content: text, vectors: llm.embed(text: text).embedding.to_s, namespace: namespace}
+       data = texts.zip(metadata).map do |text, meta|
+         {
+           content: text,
+           vectors: llm.embed(text: text).embedding.to_s,
+           namespace: namespace,
+           metadata: meta.to_json
+         }
        end

        @db[table_name.to_sym].multi_insert(data, return: :primary_key)
      else
-       upsert_texts(texts: texts, ids: ids)
+       upsert_texts(texts: texts, ids: ids, metadata: metadata)
      end
    end

    # Update a list of ids and corresponding texts to the index
    # @param texts [Array<String>] The texts to add to the index
    # @param ids [Array<String>] The ids to add to the index, in the same order as the texts
+   # @param metadata [Array<Hash>] The metadata to associate with each text, in the same order as the texts
    # @return [Array<Integer>] The ids of the updated texts.
-   def update_texts(texts:, ids:)
-     upsert_texts(texts: texts, ids: ids)
+   def update_texts(texts:, ids:, metadata: nil)
+     upsert_texts(texts: texts, ids: ids, metadata: metadata)
    end

    # Remove a list of texts from the index
@@ -107,6 +129,7 @@ module Langchain::Vectorsearch
        text :content
        column :vectors, "vector(#{vector_dimensions})"
        text namespace_column.to_sym, default: nil
+       jsonb :metadata, default: "{}"
      end
    end

@@ -136,6 +159,7 @@ module Langchain::Vectorsearch
    def similarity_search_by_vector(embedding:, k: 4)
      db.transaction do # BEGIN
        documents_model
+         .select(:content, :metadata)
          .nearest_neighbors(:vectors, embedding, distance: operator).limit(k)
          .where(namespace_column.to_sym => namespace)
      end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Langchain
-   VERSION = "0.19.3"
+   VERSION = "0.19.5"
  end
data/lib/langchain.rb CHANGED
@@ -29,10 +29,6 @@ loader.inflector.inflect(

  loader.collapse("#{__dir__}/langchain/llm/response")

- # RubyCodeInterpreter does not work with Ruby 3.3;
- # https://github.com/ukutaht/safe_ruby/issues/4
- loader.ignore("#{__dir__}/langchain/tool/ruby_code_interpreter") if RUBY_VERSION >= "3.3.0"
-
  loader.setup

  # Langchain.rb is a library for building LLM-backed Ruby applications. It is an abstraction layer that sits on top of the emerging AI-related tools, making it easy for developers to consume and string those services together.
metadata CHANGED
@@ -1,14 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: langchainrb
  version: !ruby/object:Gem::Version
-   version: 0.19.3
+   version: 0.19.5
  platform: ruby
  authors:
  - Andrei Bondarev
- autorequire:
  bindir: exe
  cert_chain: []
- date: 2025-01-13 00:00:00.000000000 Z
+ date: 1980-01-02 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: baran
@@ -42,16 +41,22 @@ dependencies:
    name: json-schema
    requirement: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
          version: '4'
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: '6'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
-     - - "~>"
+     - - ">="
        - !ruby/object:Gem::Version
          version: '4'
+     - - "<"
+       - !ruby/object:Gem::Version
+         version: '6'
  - !ruby/object:Gem::Dependency
    name: zeitwerk
    requirement: !ruby/object:Gem::Requirement
@@ -114,14 +119,14 @@ dependencies:
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 3.10.0
+         version: 3.11.0
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 3.10.0
+         version: 3.11.0
  - !ruby/object:Gem::Dependency
    name: yard
    requirement: !ruby/object:Gem::Requirement
@@ -193,19 +198,19 @@ dependencies:
      - !ruby/object:Gem::Version
        version: 0.2.1
  - !ruby/object:Gem::Dependency
-   name: anthropic
+   name: ruby-anthropic
    requirement: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '0.3'
+         version: '0.4'
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '0.3'
+         version: '0.4'
  - !ruby/object:Gem::Dependency
    name: aws-sdk-bedrockruntime
    requirement: !ruby/object:Gem::Requirement
@@ -276,20 +281,6 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: 8.2.0
- - !ruby/object:Gem::Dependency
-   name: epsilla-ruby
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 0.0.4
-   type: :development
-   prerelease: false
-   version_requirements: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 0.0.4
  - !ruby/object:Gem::Dependency
    name: eqn
    requirement: !ruby/object:Gem::Requirement
@@ -388,20 +379,6 @@ dependencies:
      - - "~>"
        - !ruby/object:Gem::Version
          version: 0.10.3
- - !ruby/object:Gem::Dependency
-   name: llama_cpp
-   requirement: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 0.9.4
-   type: :development
-   prerelease: false
-   version_requirements: !ruby/object:Gem::Requirement
-     requirements:
-     - - "~>"
-       - !ruby/object:Gem::Version
-         version: 0.9.4
  - !ruby/object:Gem::Dependency
    name: nokogiri
    requirement: !ruby/object:Gem::Requirement
@@ -576,14 +553,14 @@ dependencies:
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 1.0.4
+         version: 1.0.5
    type: :development
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: 1.0.4
+         version: 1.0.5
  - !ruby/object:Gem::Dependency
    name: sequel
    requirement: !ruby/object:Gem::Requirement
@@ -718,6 +695,7 @@ files:
  - lib/langchain/loader.rb
  - lib/langchain/output_parsers/base.rb
  - lib/langchain/output_parsers/output_fixing_parser.rb
+ - lib/langchain/output_parsers/output_parser_exception.rb
  - lib/langchain/output_parsers/prompts/naive_fix_prompt.yaml
  - lib/langchain/output_parsers/structured_output_parser.rb
  - lib/langchain/processors/base.rb
@@ -749,6 +727,7 @@ files:
  - lib/langchain/tool/weather.rb
  - lib/langchain/tool/wikipedia.rb
  - lib/langchain/tool_definition.rb
+ - lib/langchain/tool_response.rb
  - lib/langchain/utils/cosine_similarity.rb
  - lib/langchain/utils/hash_transformer.rb
  - lib/langchain/utils/image_wrapper.rb
@@ -775,7 +754,6 @@ metadata:
  source_code_uri: https://github.com/patterns-ai-core/langchainrb
  changelog_uri: https://github.com/patterns-ai-core/langchainrb/blob/main/CHANGELOG.md
  documentation_uri: https://rubydoc.info/gems/langchainrb
- post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -790,8 +768,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.5.3
- signing_key:
+ rubygems_version: 3.6.7
  specification_version: 4
  summary: Build LLM-backed Ruby applications with Ruby's Langchain.rb
  test_files: []