langchainrb 0.9.3 → 0.9.5

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: b31725a5fdb7c09d25e97b3b8ecbd78eba1eeece6ef2db82f009aa121e1f4956
- data.tar.gz: 1f9116daf6780682d8c32021212bba702a053af9b4293e488ca605cc298c8e12
+ metadata.gz: 833d4dafdf55e45852261e1c86b8121fd3ed1b61766a7fb121589e6549b255e0
+ data.tar.gz: d3834e7a5d15cf1ddd45bfc2db69b73afb5cf60b6b43004a398a688b5d5932e1
  SHA512:
- metadata.gz: 6f50889ce152ac93567951c2a854c5589f5306c8a54c852febc9d5884b3d924904beabfd076e87fefb95354dda99fbb77d179045274ff25bf9515ecee3b2d6bb
- data.tar.gz: aee9ed10fe48eeef9dc5ba1433145823d20c55042bea7bc359ce5cad5dd4783eb68f1e84bdad79349efea17d77455b9dd6ba918a6a6d0d991e7c350887feac6d
+ metadata.gz: 98dbc07b39f956d7425c562451d9eced8162cd7d7e181ac6188d029090275f5485376db3afeb826e730c168f6d64a7cd6af90503974f8b2696b1555f3d18b589
+ data.tar.gz: 3e641d27e3ccdedfa363c7bfecb7f6a1293c1f866421db3ac5e74dcd4615934a132345f9c7912fec0200d2fe626747faaefa592592066e336d33c4db5d3fd050
data/CHANGELOG.md CHANGED
@@ -1,5 +1,15 @@
  ## [Unreleased]
 
+ ## [0.9.5]
+ - Now using OpenAI's "text-embedding-3-small" model to generate embeddings
+ - Added `remove_texts(ids:)` method to Qdrant and Chroma
+ - Added Ruby 3.3 support
+
+ ## [0.9.4]
+ - New `Ollama#summarize()` method
+ - Improved README
+ - Fixes + specs
+
  ## [0.9.3]
  - Add EML processor
  - Tools can support multiple methods
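Taken together, the 0.9.4 and 0.9.5 entries can be exercised like this (a minimal sketch, not from the gem's docs: it assumes valid API keys, a running Qdrant instance, and a local Ollama server, and the Qdrant constructor arguments shown are assumptions based on the README of this era):

```ruby
require "langchain"

# 0.9.5: OpenAI embeddings now default to "text-embedding-3-small"
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm.embed(text: "Hello world").embedding # => 1536-dimensional vector

# 0.9.5: remove_texts(ids:) on Qdrant (and Chroma)
client = Langchain::Vectorsearch::Qdrant.new(
  url: ENV["QDRANT_URL"], api_key: ENV["QDRANT_API_KEY"],
  index_name: "docs", llm: llm
)
client.add_texts(texts: ["foo", "bar"], ids: [1, 2])
client.remove_texts(ids: [1, 2])

# 0.9.4: Ollama#summarize
ollama = Langchain::LLM::Ollama.new(url: "http://localhost:11434")
ollama.summarize(text: "Ruby is a dynamic, open source programming language.")
```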
data/README.md CHANGED
@@ -42,7 +42,7 @@ If bundler is not being used to manage dependencies, install the gem by executing:
 
  gem install langchainrb
 
- Additional gems may be required when loading LLM Providers. These are not included by default so you can include only what you need.
+ Additional gems may be required. They're not included by default so you can include only what you need.
 
  ## Usage
 
@@ -51,10 +51,10 @@ require "langchain"
  ```
 
  ## Large Language Models (LLMs)
- Langchain.rb wraps all supported LLMs in a unified interface allowing you to easily swap out and test out different models.
+ Langchain.rb wraps supported LLMs in a unified interface, allowing you to easily swap out and test different models.
 
  #### Supported LLMs and features:
- | LLM providers | embed() | complete() | chat() | summarize() | Notes |
+ | LLM providers | `embed()` | `complete()` | `chat()` | `summarize()` | Notes |
  | -------- |:------------------:| :-------: | :-----------------: | :-------: | :----------------- |
  | [OpenAI](https://openai.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ❌ | Including Azure OpenAI |
  | [AI21](https://ai21.com/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ | ❌ | ✅ | |
@@ -64,7 +64,7 @@ Langchain.rb wraps all supported LLMs in a unified interface allowing you to easily swap out and test out different models.
  | [GooglePalm](https://ai.google/discover/palm2?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
  | [Google Vertex AI](https://cloud.google.com/vertex-ai?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ❌ | ✅ | |
  | [HuggingFace](https://huggingface.co/?utm_source=langchainrb&utm_medium=github) | ✅ | ❌ | ❌ | ❌ | |
- | [Ollama](https://ollama.ai/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | | |
+ | [Ollama](https://ollama.ai/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
  | [Replicate](https://replicate.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
 
  #### Using standalone LLMs:
@@ -83,12 +83,7 @@ llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"], llm_options: {
 
  Generate vector embeddings:
  ```ruby
- llm.embed(text: "foo bar")
- ```
-
- Generate a text completion:
- ```ruby
- llm.complete(prompt: "What is the meaning of life?").completion
+ llm.embed(text: "foo bar").embedding
  ```
 
  Generate a chat completion:
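Note the pattern in this hunk: `embed` now returns a response object rather than a bare value, and the vector is read off it with `embedding` (the same convention as `.completion` on `complete`/`chat`). A small sketch, assuming an OpenAI key is configured:

```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

response = llm.embed(text: "foo bar") # a Langchain::LLM::OpenAIResponse
vector = response.embedding           # => [0.0123, -0.0456, ...]
vector.size                           # => 1536 for text-embedding-3-small
```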
@@ -249,7 +244,7 @@ Then parse the llm response:
 
  ```ruby
  llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
- llm_response = llm.chat(prompt: prompt_text).completion
+ llm_response = llm.chat(messages: [{role: "user", content: prompt_text}]).completion
  parser.parse(llm_response)
  # {
  #   "name" => "Kim Ji-hyun",
@@ -398,14 +393,9 @@ client.similarity_search_by_vector(
 
  RAG-based querying
  ```ruby
- client.ask(
-   question:
- )
+ client.ask(question: "...")
  ```
 
- ## Evaluations (Evals)
- The Evaluations module is a collection of tools that can be used to evaluate and track the performance of the output products by LLM and your RAG (Retrieval Augmented Generation) pipelines.
-
  ## Assistants
  Assistants are Agent-like objects that leverage helpful instructions, LLMs, tools and knowledge to respond to user queries. Assistants can be configured with an LLM of your choice (currently only OpenAI) and any vector search database, and are easily extended with additional tools.
 
@@ -473,6 +463,9 @@ assistant.thread.messages
 
  The Assistant checks the context window limits before every request to the LLM and removes the oldest thread messages one by one if the context window is exceeded.
 
+ ## Evaluations (Evals)
+ The Evaluations module is a collection of tools that can be used to evaluate and track the performance of LLM outputs and of your RAG (Retrieval Augmented Generation) pipelines.
+
  ### RAGAS
  Ragas helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. The implementation is based on this [paper](https://arxiv.org/abs/2309.15217) and the original Python [repo](https://github.com/explodinggradients/ragas). Ragas tracks the following 3 metrics and assigns scores from 0.0 to 1.0:
  * Faithfulness - the answer is grounded in the given context.
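For reference, scoring a RAG answer with the Evals module looks roughly like this (a sketch: the `Langchain::Evals::Ragas::Main` class and its `score` method are assumptions based on the gem's README of this era, not shown in this diff):

```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# Assumed API: Ragas entry point takes the LLM used as the judge
ragas = Langchain::Evals::Ragas::Main.new(llm: llm)
ragas.score(
  answer: "Paris is the capital of France.",
  question: "What is the capital of France?",
  context: "France's capital and largest city is Paris."
)
# => e.g. {ragas_score: 0.9, answer_relevance_score: 0.9,
#          context_relevance_score: 1.0, faithfulness_score: 1.0}
```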
@@ -501,7 +494,7 @@ Additional examples available: [/examples](https://github.com/andreibondarev/langchainrb/tree/main/examples)
 
  ## Logging
 
- LangChain.rb uses standard logging mechanisms and defaults to `:warn` level. Most messages are at info level, but we will add debug or warn statements as needed.
+ Langchain.rb uses standard logging mechanisms and defaults to `:warn` level. Most messages are at info level, but we will add debug or warn statements as needed.
  To show all log messages:
 
  ```ruby
@@ -26,6 +26,8 @@ module Langchain::Agent
  # @param max_iterations [Integer] The maximum number of iterations to run
  # @return [ReActAgent] The Agent::ReActAgent instance
  def initialize(llm:, tools: [], max_iterations: 10)
+   warn "[DEPRECATION] `Langchain::Agent::ReActAgent` is deprecated. Please use `Langchain::Assistant` instead."
+
    Langchain::Tool::Base.validate_tools!(tools: tools)
 
    @tools = tools
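Migrating off the deprecated agent onto `Langchain::Assistant` looks roughly like this (a sketch based on the Assistant API described in the README above; the Calculator tool and the exact method names are assumptions from that README, not this hunk):

```ruby
# Before (deprecated):
# agent = Langchain::Agent::ReActAgent.new(llm: llm, tools: [Langchain::Tool::Calculator.new])

# After (sketch):
assistant = Langchain::Assistant.new(
  llm: llm,
  thread: Langchain::Thread.new,
  instructions: "You are a helpful assistant",
  tools: [Langchain::Tool::Calculator.new] # illustrative tool
)
assistant.add_message(content: "What is 2 + 2?")
assistant.run(auto_tool_execution: true)
```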
@@ -11,6 +11,8 @@ module Langchain::Agent
  # @param db [Object] Database connection info
  #
  def initialize(llm:, db:)
+   warn "[DEPRECATION] `Langchain::Agent::ReActAgent` is deprecated. Please use `Langchain::Assistant` instead."
+
    @llm = llm
    @db = db
    @schema = @db.dump_schema
@@ -1,6 +1,8 @@
  # frozen_string_literal: true
 
  module Langchain
+   # Assistants are Agent-like objects that leverage helpful instructions, LLMs, tools and knowledge to respond to user queries.
+   # Assistants can be configured with an LLM of your choice (currently only OpenAI) and any vector search database, and are easily extended with additional tools.
    class Assistant
      attr_reader :llm, :thread, :instructions
      attr_accessor :tools
@@ -176,26 +178,6 @@ module Langchain
    Message.new(role: role, content: content, tool_calls: tool_calls, tool_call_id: tool_call_id)
  end
 
- # # TODO: Fix the message truncation when context window is exceeded
- # def build_assistant_prompt(instructions:, tools:)
- #   while begin
- #     # Check if the prompt exceeds the context window
- #     # Return false to exit the while loop
- #     !llm.class.const_get(:LENGTH_VALIDATOR).validate_max_tokens!(
- #       thread.messages,
- #       llm.defaults[:chat_completion_model_name],
- #       {llm: llm}
- #     )
- #   # Rescue error if context window is exceeded and return true to continue the while loop
- #   rescue Langchain::Utils::TokenLength::TokenLimitExceeded
- #     # Should be using `retry` instead of while()
- #     true
- #   end
- #   # Truncate the oldest messages when the context window is exceeded
- #   thread.messages.shift
- # end
-
- # prompt
- # end
+ # TODO: Fix the message truncation when context window is exceeded
  end
end
@@ -1,8 +1,8 @@
  # frozen_string_literal: true
 
  module Langchain
-   # Langchain::Thread keeps track of messages in a conversation
-   # Eventually we may want to add functionality to persist to the thread to disk, DB, storage, etc.
+   # Langchain::Thread keeps track of messages in a conversation.
+   # TODO: Add functionality to persist the thread to disk, DB, storage, etc.
    class Thread
      attr_accessor :messages
 
@@ -4,12 +4,10 @@ require "baran"
 
  module Langchain
    module Chunker
-     #
      # Simple text chunker
      #
      # Usage:
      #     Langchain::Chunker::Markdown.new(text).chunks
-     #
      class Markdown < Base
        attr_reader :text, :chunk_size, :chunk_overlap
 
@@ -4,12 +4,10 @@ require "baran"
 
  module Langchain
    module Chunker
-     #
      # Recursive text chunker. Preferentially splits on separators.
      #
      # Usage:
      #     Langchain::Chunker::RecursiveText.new(text).chunks
-     #
      class RecursiveText < Base
        attr_reader :text, :chunk_size, :chunk_overlap, :separators
 
@@ -2,7 +2,6 @@
 
  module Langchain
    module Chunker
-     #
      # LLM-powered semantic chunker.
      # Semantic chunking is a technique of splitting texts by their semantic meaning, e.g.: themes, topics, and ideas.
      # We use an LLM to accomplish this. The Anthropic LLM is highly recommended for this task as it has the longest context window (100k tokens).
@@ -12,7 +11,6 @@ module Langchain
      #       text,
      #       llm: Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"])
      #     ).chunks
-     #
      class Semantic < Base
        attr_reader :text, :llm, :prompt_template
        # @param [Langchain::LLM::Base] Langchain::LLM::* instance
@@ -28,7 +26,7 @@ module Langchain
        prompt = prompt_template.format(text: text)
 
        # Replace static 50k limit with dynamic limit based on text length (max_tokens_to_sample)
-       completion = llm.complete(prompt: prompt, max_tokens_to_sample: 50000)
+       completion = llm.complete(prompt: prompt, max_tokens_to_sample: 50000).completion
        completion
          .gsub("Here are the paragraphs split by topic:\n\n", "")
          .split("---")
@@ -4,12 +4,10 @@ require "pragmatic_segmenter"
 
  module Langchain
    module Chunker
-     #
      # This chunker splits text by sentences.
      #
      # Usage:
      #     Langchain::Chunker::Sentence.new(text).chunks
-     #
      class Sentence < Base
        attr_reader :text
 
@@ -4,12 +4,10 @@ require "baran"
 
  module Langchain
    module Chunker
-     #
      # Simple text chunker
      #
      # Usage:
      #     Langchain::Chunker::Text.new(text).chunks
-     #
      class Text < Base
        attr_reader :text, :chunk_size, :chunk_overlap, :separator
 
@@ -42,7 +42,7 @@ module Langchain
    for_class_name = for_class&.name
 
    log_line_parts = []
-   log_line_parts << "[LangChain.rb]".colorize(color: :yellow)
+   log_line_parts << "[Langchain.rb]".colorize(color: :yellow)
    log_line_parts << if for_class.respond_to?(:logger_options)
      "[#{for_class_name}]".colorize(for_class.logger_options) + ":"
    elsif for_class_name
@@ -9,6 +9,8 @@ module Langchain
  TOKEN_LEEWAY = 20
 
  def initialize(llm:, messages: [], **options)
+   warn "[DEPRECATION] `Langchain::Conversation::Memory` is deprecated. Please use `Langchain::Assistant` instead."
+
    @llm = llm
    @context = nil
    @summary = nil
@@ -12,6 +12,8 @@ module Langchain
  }
 
  def initialize(content)
+   warn "[DEPRECATION] `Langchain::Conversation::*` is deprecated. Please use `Langchain::Assistant` and `Langchain::Messages` classes instead."
+
    @content = content
  end
 
@@ -1,5 +1,7 @@
  # frozen_string_literal: true
 
+ require "active_support/core_ext/hash"
+
  module Langchain::LLM
    # Interface to Ollama API.
    # Available models: https://ollama.ai/library
@@ -17,6 +19,16 @@ module Langchain::LLM
    chat_completion_model_name: "llama2"
  }.freeze
 
+ EMBEDDING_SIZES = {
+   codellama: 4_096,
+   "dolphin-mixtral": 4_096,
+   llama2: 4_096,
+   llava: 4_096,
+   mistral: 4_096,
+   "mistral-openorca": 4_096,
+   mixtral: 4_096
+ }.freeze
+
  # Initialize the Ollama client
  # @param url [String] The URL of the Ollama instance
  # @param default_options [Hash] The default options to use
@@ -24,7 +36,17 @@ module Langchain::LLM
  def initialize(url:, default_options: {})
    depends_on "faraday"
    @url = url
-   @defaults = DEFAULTS.merge(default_options)
+   @defaults = DEFAULTS.deep_merge(default_options)
+ end
+
+ # Returns the # of vector dimensions for the embeddings
+ # @return [Integer] The # of vector dimensions
+ def default_dimension
+   # since Ollama can run multiple models, look it up or generate an embedding and return the size
+   @default_dimension ||=
+     EMBEDDING_SIZES.fetch(defaults[:embeddings_model_name].to_sym) do
+       embed(text: "test").embedding.size
+     end
  end
 
  #
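Two things worth noting in this hunk: `deep_merge` (hence the new `active_support` require above) lets a nested `default_options` hash override individual keys without clobbering whole sub-hashes, and `default_dimension` memoizes the embedding size, falling back to embedding a probe string for models missing from `EMBEDDING_SIZES`. A usage sketch against a local Ollama:

```ruby
ollama = Langchain::LLM::Ollama.new(
  url: "http://localhost:11434",
  default_options: {embeddings_model_name: "mistral"}
)

ollama.default_dimension # => 4096, straight from the EMBEDDING_SIZES table

# A model absent from the table triggers a single real embed(text: "test")
# call; the resulting vector size is then cached in @default_dimension.
```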
@@ -108,9 +130,11 @@ module Langchain::LLM
  req.body = parameters
 
  req.options.on_data = proc do |chunk, size|
-   json_chunk = JSON.parse(chunk)
+   chunk.split("\n").each do |line_chunk|
+     json_chunk = JSON.parse(line_chunk)
 
-   response += json_chunk.dig("response")
+     response += json_chunk.dig("response")
+   end
 
    yield json_chunk, size if block
  end
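Context for this fix: Ollama streams newline-delimited JSON, so one network chunk handed to `on_data` may carry several complete JSON lines, and `JSON.parse` on the whole chunk would raise `JSON::ParserError`. Splitting on newlines first handles that, as this standalone sketch shows (a truncated trailing line would still fail; the fix targets the whole-lines case):

```ruby
require "json"

# Two streamed events arriving in a single chunk:
chunk = %({"response":"Hel"}\n{"response":"lo"})

response = +""
chunk.split("\n").each do |line|
  # Each line is one complete JSON document
  response += JSON.parse(line).dig("response")
end
response # => "Hello"
```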
@@ -217,6 +241,19 @@ module Langchain::LLM
    Langchain::LLM::OllamaResponse.new(response.body, model: parameters[:model])
  end
 
+ # Generate a summary for a given text
+ #
+ # @param text [String] The text to generate a summary for
+ # @return [String] The summary
+ def summarize(text:)
+   prompt_template = Langchain::Prompt.load_from_path(
+     file_path: Langchain.root.join("langchain/llm/prompts/ollama/summarize_template.yaml")
+   )
+   prompt = prompt_template.format(text: text)
+
+   complete(prompt: prompt)
+ end
+
  private
 
  # @return [Faraday::Connection] Faraday client
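Calling the new method (a sketch, assuming a local Ollama serving the default llama2 model). Note that although the docstring says `@return [String]`, the body returns whatever `complete` returns, i.e. an `OllamaResponse`, so reading `.completion` off it is the safer bet:

```ruby
ollama = Langchain::LLM::Ollama.new(url: "http://localhost:11434")

summary = ollama.summarize(text: File.read("article.txt")).completion
puts summary
```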
@@ -9,7 +9,7 @@ module Langchain::LLM
  # Usage:
  #     openai = Langchain::LLM::OpenAI.new(
  #       api_key: ENV["OPENAI_API_KEY"],
- #       llm_options: {},
+ #       llm_options: {}, # Available options: https://github.com/alexrudall/ruby-openai/blob/main/lib/openai/client.rb#L5-L13
  #       default_options: {}
  #     )
  class OpenAI < Base
@@ -17,8 +17,13 @@ module Langchain::LLM
    n: 1,
    temperature: 0.0,
    chat_completion_model_name: "gpt-3.5-turbo",
-   embeddings_model_name: "text-embedding-ada-002",
-   dimension: 1536
+   embeddings_model_name: "text-embedding-3-small"
+ }.freeze
+
+ EMBEDDING_SIZES = {
+   "text-embedding-ada-002": 1536,
+   "text-embedding-3-large": 3072,
+   "text-embedding-3-small": 1536
  }.freeze
 
  LENGTH_VALIDATOR = Langchain::Utils::TokenLength::OpenAIValidator
@@ -48,7 +53,8 @@ module Langchain::LLM
    text:,
    model: defaults[:embeddings_model_name],
    encoding_format: nil,
-   user: nil
+   user: nil,
+   dimensions: EMBEDDING_SIZES.fetch(model.to_sym, nil)
  )
    raise ArgumentError.new("text argument is required") if text.empty?
    raise ArgumentError.new("model argument is required") if model.empty?
@@ -61,6 +67,10 @@ module Langchain::LLM
  parameters[:encoding_format] = encoding_format if encoding_format
  parameters[:user] = user if user
 
+ if ["text-embedding-3-small", "text-embedding-3-large"].include?(model)
+   parameters[:dimensions] = EMBEDDING_SIZES[model.to_sym] if EMBEDDING_SIZES.key?(model.to_sym)
+ end
+
  validate_max_tokens(text, parameters[:model])
 
  response = with_api_error_handling do
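Net effect of the OpenAI hunks: new installs embed with `text-embedding-3-small` by default, and for the v3 models the request pins `dimensions` to the model's full size from `EMBEDDING_SIZES`. A sketch (assumes a valid API key):

```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm.embed(text: "hello").embedding.size # => 1536 (text-embedding-3-small)

large = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  default_options: {embeddings_model_name: "text-embedding-3-large"}
)
large.embed(text: "hello").embedding.size # => 3072
```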
@@ -77,6 +87,8 @@ module Langchain::LLM
  # @param params [Hash] The parameters to pass to the `chat()` method
  # @return [Langchain::LLM::OpenAIResponse] Response object
  def complete(prompt:, **params)
+   warn "DEPRECATED: `Langchain::LLM::OpenAI#complete` is deprecated, and will be removed in the next major version. Use `Langchain::LLM::OpenAI#chat` instead."
+
    if params[:stop_sequences]
      params[:stop] = params.delete(:stop_sequences)
    end
@@ -170,6 +182,10 @@ module Langchain::LLM
    complete(prompt: prompt)
  end
 
+ def default_dimension
+   @defaults[:dimension] || EMBEDDING_SIZES.fetch(defaults[:embeddings_model_name].to_sym)
+ end
+
  private
 
  attr_reader :response_chunks
@@ -0,0 +1,9 @@
+ _type: prompt
+ input_variables:
+   - text
+ template: |
+   Write a concise summary of the following TEXT. Do not include the word summary, just provide the summary.
+
+   TEXT: {text}
+
+   CONCISE SUMMARY:
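This new template is consumed through `Langchain::Prompt.load_from_path`, exactly as `Ollama#summarize` does above; loading and formatting it by hand looks like this (a sketch using the file path and calls taken from this diff):

```ruby
prompt_template = Langchain::Prompt.load_from_path(
  file_path: Langchain.root.join("langchain/llm/prompts/ollama/summarize_template.yaml")
)

prompt = prompt_template.format(text: "Ruby 3.3 ships a new parser...")
# => "Write a concise summary of the following TEXT. ...\n\nTEXT: Ruby 3.3 ships ...\n\nCONCISE SUMMARY:"
```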
@@ -5,18 +5,15 @@ module Langchain::OutputParsers
  #
  # @abstract
  class Base
-   #
    # Parse the output of an LLM call.
    #
    # @param text - LLM output to parse.
    #
    # @return [Object] Parsed output.
-   #
    def parse(text:)
      raise NotImplementedError
    end
 
-   #
    # Return a string describing the format of the output.
    #
    # @return [String] Format instructions.
@@ -27,7 +24,6 @@ module Langchain::OutputParsers
    #   "foo": "bar"
    # }
    # ```
-   #
    def get_format_instructions
      raise NotImplementedError
    end
@@ -6,13 +6,11 @@ module Langchain::OutputParsers
  class OutputFixingParser < Base
    attr_reader :llm, :parser, :prompt
 
-   #
    # Initializes a new instance of the class.
    #
    # @param llm [Langchain::LLM] The LLM used in the fixing process
    # @param parser [Langchain::OutputParsers] The parser originally used which resulted in parsing error
    # @param prompt [Langchain::Prompt::PromptTemplate]
-   #
    def initialize(llm:, parser:, prompt:)
      raise ArgumentError.new("llm must be an instance of Langchain::LLM got: #{llm.class}") unless llm.is_a?(Langchain::LLM::Base)
      raise ArgumentError.new("parser must be an instance of Langchain::OutputParsers got #{parser.class}") unless parser.is_a?(Langchain::OutputParsers::Base)
@@ -30,17 +28,14 @@ module Langchain::OutputParsers
      }
    end
 
-   #
    # calls get_format_instructions on the @parser
    #
    # @return [String] Instructions for how the output of a language model should be formatted
    #   according to the @schema.
-   #
    def get_format_instructions
      parser.get_format_instructions
    end
 
-   #
    # Parse the output of an LLM call; if it fails with OutputParserException,
    # then call the LLM with a fix prompt in an attempt to get the correctly
    # formatted response
@@ -48,7 +43,6 @@ module Langchain::OutputParsers
    # @param completion [String] Text output from the LLM call
    #
    # @return [Object] object that is successfully parsed by @parser.parse
-   #
    def parse(completion)
      parser.parse(completion)
    rescue OutputParserException => e
@@ -63,7 +57,6 @@ module Langchain::OutputParsers
      parser.parse(new_completion)
    end
 
-   #
    # Creates a new instance of the class using the given JSON::Schema.
    #
    # @param llm [Langchain::LLM] The LLM used in the fixing process
@@ -71,7 +64,6 @@ module Langchain::OutputParsers
    # @param prompt [Langchain::Prompt::PromptTemplate]
    #
    # @return [Object] A new instance of the class
-   #
    def self.from_llm(llm:, parser:, prompt: nil)
      new(llm: llm, parser: parser, prompt: prompt || naive_fix_prompt)
    end
@@ -5,15 +5,12 @@ require "json-schema"
 
  module Langchain::OutputParsers
    # = Structured Output Parser
-   #
    class StructuredOutputParser < Base
      attr_reader :schema
 
-     #
      # Initializes a new instance of the class.
      #
      # @param schema [JSON::Schema] The json schema
-     #
      def initialize(schema:)
        @schema = validate_schema!(schema)
      end
@@ -25,24 +22,20 @@ module Langchain::OutputParsers
      }
    end
 
-   #
    # Creates a new instance of the class using the given JSON::Schema.
    #
    # @param schema [JSON::Schema] The JSON::Schema to use
    #
    # @return [Object] A new instance of the class
-   #
    def self.from_json_schema(schema)
      new(schema: schema)
    end
 
-   #
    # Returns a string containing instructions for how the output of a language model should be formatted
    # according to the @schema.
    #
    # @return [String] Instructions for how the output of a language model should be formatted
    #   according to the @schema.
-   #
    def get_format_instructions
      <<~INSTRUCTIONS
        You must format your output as a JSON value that adheres to a given "JSON Schema" instance.
@@ -62,13 +55,10 @@ module Langchain::OutputParsers
      INSTRUCTIONS
    end
 
-   #
    # Parse the output of an LLM call extracting an object that abides by the @schema
    #
    # @param text [String] Text output from the LLM call
-   #
    # @return [Object] object that abides by the @schema
-   #
    def parse(text)
      json = text.include?("```") ? text.strip.split(/```(?:json)?/)[1] : text.strip
      parsed = JSON.parse(json)
@@ -1,4 +1,3 @@
- require "mail"
  require "uri"
 
  module Langchain
@@ -1,41 +1,45 @@
  # frozen_string_literal: true
 
- module Langchain::Tool
-   class RubyCodeInterpreter < Base
-     #
-     # A tool that execute Ruby code in a sandboxed environment.
-     #
-     # Gem requirements:
-     #     gem "safe_ruby", "~> 1.0.4"
-     #
-     # Usage:
-     #     interpreter = Langchain::Tool::RubyCodeInterpreter.new
-     #
-     NAME = "ruby_code_interpreter"
-     ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path
+ # RubyCodeInterpreter does not work with Ruby 3.3;
+ # https://github.com/ukutaht/safe_ruby/issues/4
+ if RUBY_VERSION <= "3.2"
+   module Langchain::Tool
+     class RubyCodeInterpreter < Base
+       #
+       # A tool that executes Ruby code in a sandboxed environment.
+       #
+       # Gem requirements:
+       #     gem "safe_ruby", "~> 1.0.4"
+       #
+       # Usage:
+       #     interpreter = Langchain::Tool::RubyCodeInterpreter.new
+       #
+       NAME = "ruby_code_interpreter"
+       ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path
 
-     description <<~DESC
-       A Ruby code interpreter. Use this to execute ruby expressions. Input should be a valid ruby expression. If you want to see the output of the tool, make sure to return a value.
-     DESC
+       description <<~DESC
+         A Ruby code interpreter. Use this to execute ruby expressions. Input should be a valid ruby expression. If you want to see the output of the tool, make sure to return a value.
+       DESC
 
-     def initialize(timeout: 30)
-       depends_on "safe_ruby"
+       def initialize(timeout: 30)
+         depends_on "safe_ruby"
 
-       @timeout = timeout
-     end
+         @timeout = timeout
+       end
 
-     # Executes Ruby code in a sandboxed environment.
-     #
-     # @param input [String] ruby code expression
-     # @return [String] Answer
-     def execute(input:)
-       Langchain.logger.info("Executing \"#{input}\"", for: self.class)
+       # Executes Ruby code in a sandboxed environment.
+       #
+       # @param input [String] ruby code expression
+       # @return [String] Answer
+       def execute(input:)
+         Langchain.logger.info("Executing \"#{input}\"", for: self.class)
 
-       safe_eval(input)
-     end
+         safe_eval(input)
+       end
 
-     def safe_eval(code)
-       SafeRuby.eval(code, timeout: @timeout)
+       def safe_eval(code)
+         SafeRuby.eval(code, timeout: @timeout)
+       end
      end
    end
  end
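One caveat with the guard: `RUBY_VERSION <= "3.2"` is a lexicographic string comparison, so patch releases such as "3.2.1" sort after "3.2" and also skip the definition; callers should feature-test rather than assume the constant exists. A hedged sketch:

```ruby
if defined?(Langchain::Tool::RubyCodeInterpreter)
  interpreter = Langchain::Tool::RubyCodeInterpreter.new(timeout: 10)
  interpreter.execute(input: "(1..10).sum") # => 55
else
  # Ruby 3.3+ (and any version string sorting after "3.2"):
  # the tool is not defined because safe_ruby does not work there.
end
```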
@@ -136,7 +136,7 @@ module Langchain::Vectorsearch
  # @param k [Integer] The number of results to return
  # @return [String] Response
  def similarity_search_with_hyde(query:, k: 4)
-   hyde_completion = llm.complete(prompt: generate_hyde_prompt(question: query))
+   hyde_completion = llm.complete(prompt: generate_hyde_prompt(question: query)).completion
    similarity_search(query: hyde_completion, k: k)
  end
 
@@ -60,6 +60,13 @@ module Langchain::Vectorsearch
    collection.update(embeddings)
  end
 
+ # Remove a list of texts from the index
+ # @param ids [Array<String>] The list of ids to remove
+ # @return [Hash] The response from the server
+ def remove_texts(ids:)
+   collection.delete(ids)
+ end
+
  # Create the collection with the default schema
  # @return [::Chroma::Resources::Collection] Created collection
  def create_default_schema
@@ -64,6 +64,16 @@ module Langchain::Vectorsearch
    add_texts(texts: texts, ids: ids)
  end
 
+ # Remove a list of texts from the index
+ # @param ids [Array<Integer>] The ids to remove
+ # @return [Hash] The response from the server
+ def remove_texts(ids:)
+   client.points.delete(
+     collection_name: index_name,
+     points: ids
+   )
+ end
+
  # Get the default schema
  # @return [Hash] The response from the server
  def get_default_schema
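Chroma and Qdrant now expose the same `remove_texts(ids:)` signature, so deletions are portable across the two stores (a sketch, assuming a client configured as in the README; `add_texts(texts:, ids:)` is taken from the context lines above):

```ruby
client.add_texts(texts: ["alpha", "beta"], ids: [1, 2])
client.remove_texts(ids: [1, 2]) # on Qdrant this delegates to client.points.delete
```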
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Langchain
-   VERSION = "0.9.3"
+   VERSION = "0.9.5"
  end
data/lib/langchain.rb CHANGED
@@ -72,7 +72,7 @@ loader.setup
  #
  # = Logging
  #
- # LangChain.rb uses standard logging mechanisms and defaults to :debug level. Most messages are at info level, but we will add debug or warn statements as needed. To show all log messages:
+ # Langchain.rb uses standard logging mechanisms and defaults to :debug level. Most messages are at info level, but we will add debug or warn statements as needed. To show all log messages:
  #
  #     Langchain.logger.level = :info
  module Langchain
metadata CHANGED
@@ -1,15 +1,29 @@
  --- !ruby/object:Gem::Specification
  name: langchainrb
  version: !ruby/object:Gem::Version
-   version: 0.9.3
+   version: 0.9.5
  platform: ruby
  authors:
  - Andrei Bondarev
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-02-23 00:00:00.000000000 Z
+ date: 2024-03-15 00:00:00.000000000 Z
  dependencies:
+ - !ruby/object:Gem::Dependency
+   name: activesupport
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 7.0.8
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: 7.0.8
  - !ruby/object:Gem::Dependency
    name: baran
    requirement: !ruby/object:Gem::Requirement
@@ -178,6 +192,34 @@ dependencies:
    - - "~>"
      - !ruby/object:Gem::Version
        version: 2.2.7
+ - !ruby/object:Gem::Dependency
+   name: vcr
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+ - !ruby/object:Gem::Dependency
+   name: webmock
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - ">="
+       - !ruby/object:Gem::Version
+         version: '0'
  - !ruby/object:Gem::Dependency
    name: ai21
    requirement: !ruby/object:Gem::Requirement
@@ -626,7 +668,7 @@ dependencies:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
- description: Build LLM-backed Ruby applications with Ruby's LangChain
+ description: Build LLM-backed Ruby applications with Ruby's Langchain.rb
  email:
  - andrei.bondarev13@gmail.com
  executables: []
@@ -684,6 +726,7 @@ files:
  - lib/langchain/llm/llama_cpp.rb
  - lib/langchain/llm/ollama.rb
  - lib/langchain/llm/openai.rb
+ - lib/langchain/llm/prompts/ollama/summarize_template.yaml
  - lib/langchain/llm/prompts/summarize_template.yaml
  - lib/langchain/llm/replicate.rb
  - lib/langchain/llm/response/ai21_response.rb
@@ -776,8 +819,8 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.4.1
+ rubygems_version: 3.5.3
  signing_key:
  specification_version: 4
- summary: Build LLM-backed Ruby applications with Ruby's LangChain
+ summary: Build LLM-backed Ruby applications with Ruby's Langchain.rb
  test_files: []