langchainrb 0.10.3 → 0.11.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 14c9c33c905c2df744cc7006735dbc55b86b767cfff4b655b03103072c804bb1
- data.tar.gz: e5076e7322ff375b16463ff14e5c504d7b106e05874f6860f9669a2269346d61
+ metadata.gz: '08b38ee39600716a9854a387fc0b54091290f27d7afa88b276a480651049bdd4'
+ data.tar.gz: 69d4292c129e751d9d4001912f5ab54ba23f9d9944a5e4294f4ba1dd426d401e
  SHA512:
- metadata.gz: f590a6f2f2adec60ee777a94886e178abfaf0789c0141bbd278d0a008d9ace75cb9f69d6afdf3ae3d58f7307246acfaf7f0fef43d43065bf8c1d83aa0e51562d
- data.tar.gz: 3fef869c24edf5a14ea4ddbdae8af919b2082e8f95b676ca36dd9e606ba67812e88dcd6d67f15156b9cce8daaaab26acd9c8602e3c894199dd7eae43d0926443
+ metadata.gz: 3f7d4403d11076fcff2975c88a236fa8b6ace079d83b774000af1d59c08f36794057eaef56769a51bb8e21ab462bba9dbf9df60d704f285b788af8a67b6977ea
+ data.tar.gz: 5d14c72130cada67dadfe880b2a0d723b312d8c27e7ef46452aa4ad65dce1055b66d8459b24c4b6d3f1adb5eeacf30c9fdc686b0190809ef051d393c33d3d33d
data/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  ## [Unreleased]

+ ## [0.11.1]
+ - New `Langchain::Tool::Vectorsearch` tool that wraps `Langchain::Vectorsearch::*` classes. This allows the Assistant to call the tool and inject data from vector DBs.
+
+ ## [0.11.0]
+ - Delete previously deprecated `Langchain::Agent::ReActAgent` and `Langchain::Agent::SQLQueryAgent` classes
+ - New `Langchain::Tool::FileSystem` tool that can read files, write to files, and list the contents of a directory
+
  ## [0.10.3]
  - Bump dependencies
  - Ollama#complete fix
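For context, a minimal sketch (not part of the changelog itself) of how the 0.11.x additions are meant to be used together, based on the `Langchain::Assistant` interface shown further down in this diff; the OpenAI key and Chroma URL are illustrative:

```ruby
require "langchain"

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
# Vector DB whose contents the Assistant can pull in via the new tool
chroma = Langchain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "my_index", llm: llm)

assistant = Langchain::Assistant.new(
  llm: llm,
  thread: Langchain::Thread.new,
  tools: [
    Langchain::Tool::Vectorsearch.new(vectorsearch: chroma), # added in 0.11.1
    Langchain::Tool::FileSystem.new                          # added in 0.11.0
  ]
)
```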
data/README.md CHANGED
@@ -15,7 +15,7 @@ Available for paid consulting engagements! [Email me](mailto:andrei@sourcelabs.i

  ## Use Cases
  * Retrieval Augmented Generation (RAG) and vector search
- * [Assistants](#assistants) (chat bots) & [AI Agents](https://github.com/andreibondarev/langchainrb/tree/main/lib/langchain/agent/agents.md)
+ * [Assistants](#assistants) (chat bots)

  ## Table of Contents

@@ -55,19 +55,22 @@ require "langchain"
  Langchain.rb wraps supported LLMs in a unified interface allowing you to easily swap out and test out different models.

  #### Supported LLMs and features:
- | LLM providers | `embed()` | `complete()` | `chat()` | `summarize()` | Notes |
+ | LLM providers | `embed()` | `complete()` | `chat()` | `summarize()` | Notes |
  | -------- |:------------------:| :-------: | :-----------------: | :-------: | :----------------- |
  | [OpenAI](https://openai.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ❌ | Including Azure OpenAI |
  | [AI21](https://ai21.com/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ | ❌ | ✅ | |
- | [Anthropic](https://anthropic.com/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ | ❌ | ❌ | |
+ | [Anthropic](https://anthropic.com/?utm_source=langchainrb&utm_medium=github) | ❌ | ✅ | ❌ | ❌ | |
  | [AWS Bedrock](https://aws.amazon.com/bedrock?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ❌ | ❌ | Provides AWS, Cohere, AI21, Anthropic and Stability AI models |
- | [Cohere](https://cohere.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
+ | [Cohere](https://cohere.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
  | [GooglePalm](https://ai.google/discover/palm2?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
  | [Google Vertex AI](https://cloud.google.com/vertex-ai?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ❌ | ✅ | |
  | [HuggingFace](https://huggingface.co/?utm_source=langchainrb&utm_medium=github) | ✅ | ❌ | ❌ | ❌ | |
+ | [Mistral AI](https://mistral.ai/?utm_source=langchainrb&utm_medium=github) | ✅ | ❌ | ✅ | ❌ | |
  | [Ollama](https://ollama.ai/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |
  | [Replicate](https://replicate.com/?utm_source=langchainrb&utm_medium=github) | ✅ | ✅ | ✅ | ✅ | |

+
+
  #### Using standalone LLMs:

  #### OpenAI
@@ -400,6 +403,22 @@ client.ask(question: "...")
  ## Assistants
  Assistants are Agent-like objects that leverage helpful instructions, LLMs, tools and knowledge to respond to user queries. Assistants can be configured with an LLM of your choice (currently only OpenAI), any vector search database and easily extended with additional tools.

+ ### Available Tools 🛠️
+
+ | Name | Description | ENV Requirements | Gem Requirements |
+ | ------------ | :------------------------------------------------: | :-----------------------------------------------------------: | :---------------------------------------: |
+ | "calculator" | Useful for getting the result of a math expression | | `gem "eqn", "~> 1.6.5"` |
+ | "database" | Useful for querying a SQL database | | `gem "sequel", "~> 5.68.0"` |
+ | "file_system" | Interacts with the file system | | |
+ | "ruby_code_interpreter" | Interprets Ruby expressions | | `gem "safe_ruby", "~> 1.0.4"` |
+ | "google_search" | A wrapper around Google Search | `ENV["SERPAPI_API_KEY"]` (https://serpapi.com/manage-api-key) | `gem "google_search_results", "~> 2.0.0"` |
+ | "weather" | Calls Open Weather API to retrieve the current weather | `ENV["OPEN_WEATHER_API_KEY"]` (https://home.openweathermap.org/api_keys) | `gem "open-weather-ruby-client", "~> 0.3.0"` |
+ | "wikipedia" | Calls Wikipedia API to retrieve the summary | | `gem "wikipedia-client", "~> 1.17.0"` |
+
+ ### Demos
+ 1. [Building an AI Assistant that operates a simulated E-commerce Store](https://www.loom.com/share/83aa4fd8dccb492aad4ca95da40ed0b2)
+ 2. [New Langchain.rb Assistants interface](https://www.loom.com/share/e883a4a49b8746c1b0acf9d58cf6da36)
+
  ### Creating an Assistant
  1. Instantiate an LLM of your choice
  ```ruby
@@ -50,6 +50,11 @@ module Langchain
  # @param auto_tool_execution [Boolean] Whether or not to automatically run tools
  # @return [Array<Langchain::Message>] The messages in the thread
  def run(auto_tool_execution: false)
+ if thread.messages.empty?
+ Langchain.logger.warn("No messages in the thread")
+ return
+ end
+
  running = true

  while running
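To make the new guard concrete, a small sketch; `add_message` is assumed here and is not part of this hunk:

```ruby
assistant = Langchain::Assistant.new(llm: llm, thread: Langchain::Thread.new, tools: [])

assistant.run # empty thread: logs "No messages in the thread" and returns without calling the LLM

assistant.add_message(content: "Hello!")  # hypothetical helper for appending a user message
assistant.run(auto_tool_execution: true)  # now enters the `while running` loop shown above
```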
@@ -21,9 +21,9 @@ module Langchain::LLM
  }.freeze

  EMBEDDING_SIZES = {
- "text-embedding-ada-002": 1536,
- "text-embedding-3-large": 3072,
- "text-embedding-3-small": 1536
+ "text-embedding-ada-002" => 1536,
+ "text-embedding-3-large" => 3072,
+ "text-embedding-3-small" => 1536
  }.freeze

  LENGTH_VALIDATOR = Langchain::Utils::TokenLength::OpenAIValidator
@@ -54,7 +54,7 @@ module Langchain::LLM
  model: defaults[:embeddings_model_name],
  encoding_format: nil,
  user: nil,
- dimensions: EMBEDDING_SIZES.fetch(model.to_sym, nil)
+ dimensions: nil
  )
  raise ArgumentError.new("text argument is required") if text.empty?
  raise ArgumentError.new("model argument is required") if model.empty?
@@ -67,8 +67,10 @@ module Langchain::LLM
  parameters[:encoding_format] = encoding_format if encoding_format
  parameters[:user] = user if user

- if ["text-embedding-3-small", "text-embedding-3-large"].include?(model)
- parameters[:dimensions] = EMBEDDING_SIZES[model.to_sym] if EMBEDDING_SIZES.key?(model.to_sym)
+ if dimensions
+ parameters[:dimensions] = dimensions
+ elsif EMBEDDING_SIZES.key?(model)
+ parameters[:dimensions] = EMBEDDING_SIZES[model]
  end

  validate_max_tokens(text, parameters[:model])
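A hedged illustration of the reworked `dimensions` handling (model names come from the `EMBEDDING_SIZES` table above; passing a reduced dimension count relies on the `text-embedding-3-*` models supporting it):

```ruby
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# No explicit dimensions: the value is looked up in EMBEDDING_SIZES (1536 for this model)
llm.embed(text: "Ruby is a programmer's best friend", model: "text-embedding-3-small")

# An explicit value now always wins over the lookup table
llm.embed(text: "Ruby is a programmer's best friend", model: "text-embedding-3-large", dimensions: 256)
```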
@@ -183,7 +185,7 @@ module Langchain::LLM
  end

  def default_dimension
- @defaults[:dimension] || EMBEDDING_SIZES.fetch(defaults[:embeddings_model_name].to_sym)
+ @defaults[:dimension] || EMBEDDING_SIZES.fetch(defaults[:embeddings_model_name])
  end

  private
@@ -9,6 +9,7 @@ module Langchain::Tool
  #
  # - {Langchain::Tool::Calculator}: calculate the result of a math expression
  # - {Langchain::Tool::Database}: executes SQL queries
+ # - {Langchain::Tool::FileSystem}: interacts with files
  # - {Langchain::Tool::GoogleSearch}: search on Google (via SerpAPI)
  # - {Langchain::Tool::RubyCodeInterpreter}: runs ruby code
  # - {Langchain::Tool::Weather}: gets current weather data
@@ -29,8 +30,9 @@ module Langchain::Tool
  #
  # 3. Pass the tools when Agent is instantiated.
  #
- # agent = Langchain::Agent::ReActAgent.new(
- # llm: Langchain::LLM::OpenAI.new(api_key: "YOUR_API_KEY"), # or other like Cohere, Hugging Face, Google Palm or Replicate
+ # agent = Langchain::Assistant.new(
+ # llm: Langchain::LLM::OpenAI.new(api_key: "YOUR_API_KEY"), # or other LLM that supports function calling (coming soon)
+ # thread: Langchain::Thread.new,
  # tools: [
  # Langchain::Tool::GoogleSearch.new(api_key: "YOUR_API_KEY"),
  # Langchain::Tool::Calculator.new,
@@ -42,9 +44,10 @@ module Langchain::Tool
  #
  # 1. Create a new file in lib/langchain/tool/your_tool_name.rb
  # 2. Create a class in the file that inherits from {Langchain::Tool::Base}
- # 3. Add `NAME=` and `DESCRIPTION=` constants in your Tool class
- # 4. Implement `execute(input:)` method in your tool class
- # 5. Add your tool to the {file:README.md}
+ # 3. Add `NAME=` and `ANNOTATIONS_PATH=` constants in your Tool class
+ # 4. Implement various methods in your tool class
+ # 5. Create a sidecar .json file in the same directory as your tool file annotating the methods in the Open API format
+ # 6. Add your tool to the {file:README.md}
  class Base
  include Langchain::DependencyHelper

@@ -61,30 +64,6 @@ module Langchain::Tool
  }
  end

- # Returns the DESCRIPTION constant of the tool
- #
- # @return [String] tool description
- def description
- self.class.const_get(:DESCRIPTION)
- end
-
- # Sets the DESCRIPTION constant of the tool
- #
- # @param value [String] tool description
- def self.description(value)
- const_set(:DESCRIPTION, value.tr("\n", " ").strip)
- end
-
- # Instantiates and executes the tool and returns the answer
- #
- # @param input [String] input to the tool
- # @return [String] answer
- def self.execute(input:)
- warn "DEPRECATED: `#{self}.execute` is deprecated, and will be removed in the next major version."
-
- new.execute(input: input)
- end
-
  # Returns the tool as a list of OpenAI formatted functions
  #
  # @return [Hash] tool as an OpenAI tool
@@ -92,15 +71,6 @@ module Langchain::Tool
  method_annotations
  end

- # Executes the tool and returns the answer
- #
- # @param input [String] input to the tool
- # @return [String] answer
- # @raise NotImplementedError when not implemented
- def execute(input:)
- raise NotImplementedError, "Your tool must implement the `#execute(input:)` method that returns a string"
- end
-
  # Return tool's method annotations as JSON
  #
  # @return [Hash] Tool's method annotations
@@ -111,16 +81,5 @@ module Langchain::Tool
  )
  )
  end
-
- # Validates the list of tools or raises an error
- #
- # @param tools [Array<Langchain::Tool>] list of tools to be used
- # @raise [ArgumentError] If any of the tools are not supported
- def self.validate_tools!(tools:)
- # Check if the tool count is equal to unique tool count
- if tools.count != tools.map(&:name).uniq.count
- raise ArgumentError, "Either tools are not unique or are conflicting with each other"
- end
- end
  end
  end
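Putting the revised steps from the class comment above into practice, a hypothetical tool sketch; the name, method, and file paths are purely illustrative:

```ruby
# lib/langchain/tool/rot13/rot13.rb
module Langchain::Tool
  class Rot13 < Base
    NAME = "rot13"
    ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

    # Each public method becomes a callable function; it is described for the LLM
    # in the sidecar lib/langchain/tool/rot13/rot13.json file, using the same
    # "type": "function" entries as file_system.json further down in this diff.
    def encode(text:)
      text.tr("A-Za-z", "N-ZA-Mn-za-m")
    end
  end
end
```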
@@ -15,17 +15,6 @@ module Langchain::Tool
  NAME = "calculator"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- Useful for getting the result of a math expression.
-
- The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.
- Usage:
- Action Input: 1 + 1
- Action Input: 3 * 2 / 4
- Action Input: 9 - 7
- Action Input: (4.1 + 2.3) / (2.0 - 5.6) * 3
- DESC
-
  def initialize
  depends_on "eqn"
  end
@@ -12,12 +12,6 @@ module Langchain::Tool
  NAME = "database"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- Useful for getting the result of a database query.
-
- The input to this tool should be valid SQL.
- DESC
-
  attr_reader :db, :requested_tables, :excluded_tables

  # Establish a database connection
@@ -0,0 +1,57 @@
+ [
+ {
+ "type": "function",
+ "function": {
+ "name": "file_system-list_directory",
+ "description": "File System Tool: Lists out the content of a specified directory",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "directory_path": {
+ "type": "string",
+ "description": "Directory path to list"
+ }
+ },
+ "required": ["directory_path"]
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "file_system-read_file",
+ "description": "File System Tool: Reads the contents of a file",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "file_path": {
+ "type": "string",
+ "description": "Path to the file to read from"
+ }
+ },
+ "required": ["file_path"]
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "file_system-write_to_file",
+ "description": "File System Tool: Write content to a file",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "file_path": {
+ "type": "string",
+ "description": "Path to the file to write"
+ },
+ "content": {
+ "type": "string",
+ "description": "Content to write to the file"
+ }
+ },
+ "required": ["file_path", "content"]
+ }
+ }
+ }
+ ]
@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ module Langchain::Tool
+ class FileSystem < Base
+ #
+ # A tool that wraps the Ruby file system classes.
+ #
+ # Usage:
+ # file_system = Langchain::Tool::FileSystem.new
+ #
+ NAME = "file_system"
+ ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path
+
+ def list_directory(directory_path:)
+ Dir.entries(directory_path)
+ rescue Errno::ENOENT
+ "No such directory: #{directory_path}"
+ end
+
+ def read_file(file_path:)
+ File.read(file_path)
+ rescue Errno::ENOENT
+ "No such file: #{file_path}"
+ end
+
+ def write_to_file(file_path:, content:)
+ File.write(file_path, content)
+ rescue Errno::EACCES
+ "Permission denied: #{file_path}"
+ end
+ end
+ end
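A brief usage sketch based only on the three methods above (the paths are illustrative):

```ruby
file_system = Langchain::Tool::FileSystem.new

file_system.write_to_file(file_path: "/tmp/notes.txt", content: "hello")
file_system.read_file(file_path: "/tmp/notes.txt")    #=> "hello"
file_system.list_directory(directory_path: "/tmp")    #=> [".", "..", "notes.txt", ...]
file_system.read_file(file_path: "/tmp/missing.txt")  #=> "No such file: /tmp/missing.txt"
```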
@@ -15,12 +15,6 @@ module Langchain::Tool
  NAME = "google_search"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- A wrapper around SerpApi's Google Search API.
-
- Useful for when you need to answer questions about current events. Always one of the first options when you need to find information on internet. Input should be a search query.
- DESC
-
  attr_reader :api_key

  #
@@ -14,10 +14,6 @@ module Langchain::Tool
  NAME = "ruby_code_interpreter"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- A Ruby code interpreter. Use this to execute ruby expressions. Input should be a valid ruby expression. If you want to see the output of the tool, make sure to return a value.
- DESC
-
  def initialize(timeout: 30)
  depends_on "safe_ruby"

@@ -0,0 +1,24 @@
+ [
+ {
+ "type": "function",
+ "function": {
+ "name": "vectorsearch-similarity_search",
+ "description": "Vectorsearch: Retrieves relevant document for the query",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "string",
+ "description": "Query to find similar documents for"
+ },
+ "k": {
+ "type": "integer",
+ "description": "Number of similar documents to retrieve",
+ "default": 4
+ }
+ },
+ "required": ["query"]
+ }
+ }
+ }
+ ]
@@ -0,0 +1,36 @@
+ # frozen_string_literal: true
+
+ module Langchain::Tool
+ class Vectorsearch < Base
+ #
+ # A tool that wraps vectorsearch classes
+ #
+ # Usage:
+ # # Initialize the LLM that will be used to generate embeddings
+ # ollama = Langchain::LLM::Ollama.new(url: ENV["OLLAMA_URL"])
+ # chroma = Langchain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "my_index", llm: ollama)
+ #
+ # # This tool can now be used by the Assistant
+ # vectorsearch_tool = Langchain::Tool::Vectorsearch.new(vectorsearch: chroma)
+ #
+ NAME = "vectorsearch"
+ ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path
+
+ attr_reader :vectorsearch
+
+ # Initializes the Vectorsearch tool
+ #
+ # @param vectorsearch [Langchain::Vectorsearch::Base] Vectorsearch instance to use
+ def initialize(vectorsearch:)
+ @vectorsearch = vectorsearch
+ end
+
+ # Executes the vector search and returns the results
+ #
+ # @param query [String] The query to search for
+ # @param k [Integer] The number of results to return
+ def similarity_search(query:, k: 4)
+ vectorsearch.similarity_search(query:, k: k)
+ end
+ end
+ end
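Continuing the usage comment above, a hedged sketch of calling the tool directly; it simply delegates to the wrapped vectorsearch client:

```ruby
ollama = Langchain::LLM::Ollama.new(url: ENV["OLLAMA_URL"])
chroma = Langchain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "my_index", llm: ollama)

vectorsearch_tool = Langchain::Tool::Vectorsearch.new(vectorsearch: chroma)
vectorsearch_tool.similarity_search(query: "What is Ruby?", k: 2)
```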
@@ -19,17 +19,6 @@ module Langchain::Tool
  NAME = "weather"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- Useful for getting current weather data
-
- The input to this tool should be a city name followed by the units (imperial, metric, or standard)
- Usage:
- Action Input: St Louis, Missouri; metric
- Action Input: Boston, Massachusetts; imperial
- Action Input: Dubai, AE; imperial
- Action Input: Kiev, Ukraine; metric
- DESC
-
  attr_reader :client, :units

  # Initializes the Weather tool
@@ -15,15 +15,6 @@ module Langchain::Tool
  NAME = "wikipedia"
  ANNOTATIONS_PATH = Langchain.root.join("./langchain/tool/#{NAME}/#{NAME}.json").to_path

- description <<~DESC
- A wrapper around Wikipedia.
-
- Useful for when you need to answer general questions about
- people, places, companies, facts, historical events, or other subjects.
-
- Input should be a search query.
- DESC
-
  # Initializes the Wikipedia tool
  def initialize
  depends_on "wikipedia-client", req: "wikipedia"
@@ -9,7 +9,7 @@ module Langchain::Vectorsearch
  # gem "weaviate-ruby", "~> 0.8.9"
  #
  # Usage:
- # weaviate = Langchain::Vectorsearch::Weaviate.new(url:, api_key:, index_name:, llm:)
+ # weaviate = Langchain::Vectorsearch::Weaviate.new(url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Docs", llm: llm)
  #

  # Initialize the Weaviate adapter
@@ -71,6 +71,22 @@ module Langchain::Vectorsearch
  end
  end

+ # Deletes a list of texts in the index
+ # @param ids [Array] The ids of texts to delete
+ # @return [Hash] The response from the server
+ def remove_texts(ids:)
+ raise ArgumentError, "ids must be an array" unless ids.is_a?(Array)
+
+ client.objects.batch_delete(
+ class_name: index_name,
+ where: {
+ path: ["__id"],
+ operator: "ContainsAny",
+ valueTextArray: ids
+ }
+ )
+ end
+
  # Create default schema
  # @return [Hash] The response from the server
  def create_default_schema
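A hedged example of the new deletion API, reusing the constructor from the usage comment above; the ids are placeholders for whatever ids your objects were stored under:

```ruby
# llm: any Langchain::LLM instance used for embeddings
weaviate = Langchain::Vectorsearch::Weaviate.new(
  url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Docs", llm: llm
)

weaviate.remove_texts(ids: ["id-1", "id-2"])  # batch-deletes the matching objects
weaviate.remove_texts(ids: "id-1")            # raises ArgumentError, "ids must be an array"
```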
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Langchain
- VERSION = "0.10.3"
+ VERSION = "0.11.1"
  end
data/lib/langchain.rb CHANGED
@@ -21,17 +21,17 @@ loader.inflector.inflect(
  "openai" => "OpenAI",
  "openai_validator" => "OpenAIValidator",
  "openai_response" => "OpenAIResponse",
- "pdf" => "PDF",
- "react_agent" => "ReActAgent",
- "sql_query_agent" => "SQLQueryAgent"
+ "pdf" => "PDF"
  )
  loader.collapse("#{__dir__}/langchain/llm/response")
  loader.collapse("#{__dir__}/langchain/assistants")

  loader.collapse("#{__dir__}/langchain/tool/calculator")
  loader.collapse("#{__dir__}/langchain/tool/database")
+ loader.collapse("#{__dir__}/langchain/tool/file_system")
  loader.collapse("#{__dir__}/langchain/tool/google_search")
  loader.collapse("#{__dir__}/langchain/tool/ruby_code_interpreter")
+ loader.collapse("#{__dir__}/langchain/tool/vectorsearch")
  loader.collapse("#{__dir__}/langchain/tool/weather")
  loader.collapse("#{__dir__}/langchain/tool/wikipedia")

metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: langchainrb
  version: !ruby/object:Gem::Version
- version: 0.10.3
+ version: 0.11.1
  platform: ruby
  authors:
  - Andrei Bondarev
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-03-28 00:00:00.000000000 Z
+ date: 2024-04-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: activesupport
@@ -58,14 +58,14 @@ dependencies:
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 0.0.7
+ version: 0.0.8
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
  requirements:
  - - "~>"
  - !ruby/object:Gem::Version
- version: 0.0.7
+ version: 0.0.8
  - !ruby/object:Gem::Dependency
  name: json-schema
  requirement: !ruby/object:Gem::Requirement
@@ -693,13 +693,6 @@ files:
  - LICENSE.txt
  - README.md
  - lib/langchain.rb
- - lib/langchain/agent/agents.md
- - lib/langchain/agent/base.rb
- - lib/langchain/agent/react_agent.rb
- - lib/langchain/agent/react_agent/react_agent_prompt.yaml
- - lib/langchain/agent/sql_query_agent.rb
- - lib/langchain/agent/sql_query_agent/sql_query_agent_answer_prompt.yaml
- - lib/langchain/agent/sql_query_agent/sql_query_agent_sql_prompt.yaml
  - lib/langchain/assistants/assistant.rb
  - lib/langchain/assistants/message.rb
  - lib/langchain/assistants/thread.rb
@@ -777,10 +770,14 @@ files:
  - lib/langchain/tool/calculator/calculator.rb
  - lib/langchain/tool/database/database.json
  - lib/langchain/tool/database/database.rb
+ - lib/langchain/tool/file_system/file_system.json
+ - lib/langchain/tool/file_system/file_system.rb
  - lib/langchain/tool/google_search/google_search.json
  - lib/langchain/tool/google_search/google_search.rb
  - lib/langchain/tool/ruby_code_interpreter/ruby_code_interpreter.json
  - lib/langchain/tool/ruby_code_interpreter/ruby_code_interpreter.rb
+ - lib/langchain/tool/vectorsearch/vectorsearch.json
+ - lib/langchain/tool/vectorsearch/vectorsearch.rb
  - lib/langchain/tool/weather/weather.json
  - lib/langchain/tool/weather/weather.rb
  - lib/langchain/tool/wikipedia/wikipedia.json
@@ -1,54 +0,0 @@
-
- ### Agents 🤖
- Agents are semi-autonomous bots that can respond to user questions and use available to them Tools to provide informed replies. They break down problems into series of steps and define Actions (and Action Inputs) along the way that are executed and fed back to them as additional information. Once an Agent decides that it has the Final Answer it responds with it.
-
- #### ReAct Agent
-
- Add `gem "ruby-openai"`, `gem "eqn"`, and `gem "google_search_results"` to your Gemfile
-
- ```ruby
- search_tool = Langchain::Tool::GoogleSearch.new(api_key: ENV["SERPAPI_API_KEY"])
- calculator = Langchain::Tool::Calculator.new
-
- openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
-
- agent = Langchain::Agent::ReActAgent.new(
- llm: openai,
- tools: [search_tool, calculator]
- )
- ```
- ```ruby
- agent.run(question: "How many full soccer fields would be needed to cover the distance between NYC and DC in a straight line?")
- #=> "Approximately 2,945 soccer fields would be needed to cover the distance between NYC and DC in a straight line."
- ```
-
- #### SQL-Query Agent
-
- Add `gem "sequel"` to your Gemfile
-
- ```ruby
- database = Langchain::Tool::Database.new(connection_string: "postgres://user:password@localhost:5432/db_name")
-
- agent = Langchain::Agent::SQLQueryAgent.new(llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]), db: database)
- ```
- ```ruby
- agent.run(question: "How many users have a name with length greater than 5 in the users table?")
- #=> "14 users have a name with length greater than 5 in the users table."
- ```
-
- #### Demo
- ![May-12-2023 13-09-13](https://github.com/andreibondarev/langchainrb/assets/541665/6bad4cd9-976c-420f-9cf9-b85bf84f7eaf)
-
- ![May-12-2023 13-07-45](https://github.com/andreibondarev/langchainrb/assets/541665/9aacdcc7-4225-4ea0-ab96-7ee48826eb9b)
-
- #### Available Tools 🛠️
-
- | Name | Description | ENV Requirements | Gem Requirements |
- | ------------ | :------------------------------------------------: | :-----------------------------------------------------------: | :---------------------------------------: |
- | "calculator" | Useful for getting the result of a math expression | | `gem "eqn", "~> 1.6.5"` |
- | "database" | Useful for querying a SQL database | | `gem "sequel", "~> 5.68.0"` |
- | "ruby_code_interpreter" | Interprets Ruby expressions | | `gem "safe_ruby", "~> 1.0.4"` |
- | "google_search" | A wrapper around Google Search | `ENV["SERPAPI_API_KEY"]` (https://serpapi.com/manage-api-key) | `gem "google_search_results", "~> 2.0.0"` |
- | "weather" | Calls Open Weather API to retrieve the current weather | `ENV["OPEN_WEATHER_API_KEY"]` (https://home.openweathermap.org/api_keys) | `gem "open-weather-ruby-client", "~> 0.3.0"` |
- | "wikipedia" | Calls Wikipedia API to retrieve the summary | | `gem "wikipedia-client", "~> 1.17.0"` |
-
@@ -1,20 +0,0 @@
- # frozen_string_literal: true
-
- module Langchain::Agent
- # = Agents
- #
- # Agents are semi-autonomous bots that can respond to user questions and use available to them Tools to provide informed replies. They break down problems into series of steps and define Actions (and Action Inputs) along the way that are executed and fed back to them as additional information. Once an Agent decides that it has the Final Answer it responds with it.
- #
- # Available:
- # - {Langchain::Agent::ReActAgent}
- # - {Langchain::Agent::SQLQueryAgent}
- #
- # @abstract
- class Base
- def self.logger_options
- {
- color: :red
- }
- end
- end
- end
@@ -1,26 +0,0 @@
- _type: prompt
- template: |
- Today is {date} and you can use tools to get new information. Answer the following questions as best you can using the following tools:
-
- {tools}
-
- Use the following format:
-
- Question: the input question you must answer
- Thought: you should always think about what to do
- Action: the action to take, should be one of {tool_names}
- Action Input: the input to the action
- Observation: the result of the action
- ... (this Thought/Action/Action Input/Observation can repeat N times)
- Thought: I now know the final answer
- Final Answer: the final answer to the original input question
-
- Begin!
-
- Question: {question}
- Thought:
- input_variables:
- - date
- - question
- - tools
- - tool_names
@@ -1,133 +0,0 @@
- # frozen_string_literal: true
-
- module Langchain::Agent
- # = ReAct Agent
- #
- # llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) # or your choice of Langchain::LLM::Base implementation
- #
- # agent = Langchain::Agent::ReActAgent.new(
- # llm: llm,
- # tools: [
- # Langchain::Tool::GoogleSearch.new(api_key: "YOUR_API_KEY"),
- # Langchain::Tool::Calculator.new,
- # Langchain::Tool::Wikipedia.new
- # ]
- # )
- #
- # agent.run(question: "How many full soccer fields would be needed to cover the distance between NYC and DC in a straight line?")
- # #=> "Approximately 2,945 soccer fields would be needed to cover the distance between NYC and DC in a straight line."
- class ReActAgent < Base
- attr_reader :llm, :tools, :max_iterations
-
- # Initializes the Agent
- #
- # @param llm [Object] The LLM client to use
- # @param tools [Array<Tool>] The tools to use
- # @param max_iterations [Integer] The maximum number of iterations to run
- # @return [ReActAgent] The Agent::ReActAgent instance
- def initialize(llm:, tools: [], max_iterations: 10)
- warn "[DEPRECATION] `Langchain::Agent::ReActAgent` is deprecated. Please use `Langchain::Assistant` instead."
-
- Langchain::Tool::Base.validate_tools!(tools: tools)
-
- @tools = tools
-
- @llm = llm
- @max_iterations = max_iterations
- end
-
- # Validate tools when they're re-assigned
- #
- # @param value [Array<Tool>] The tools to use
- # @return [Array<Tool>] The tools that will be used
- def tools=(value)
- Langchain::Tool::Base.validate_tools!(tools: value)
- @tools = value
- end
-
- # Run the Agent!
- #
- # @param question [String] The question to ask
- # @return [String] The answer to the question
- def run(question:)
- question = question.strip
- prompt = create_prompt(
- question: question,
- tools: tools
- )
-
- final_response = nil
- max_iterations.times do
- Langchain.logger.info("Sending the prompt to the #{llm.class} LLM", for: self.class)
-
- response = llm.complete(prompt: prompt, stop_sequences: ["Observation:"]).completion
-
- # Append the response to the prompt
- prompt += response
-
- # Find the requested action in the "Action: search" format
- action = response.match(/Action: (.*)/)&.send(:[], -1)
-
- if action
- # Find the input to the action in the "Action Input: [action_input]" format
- action_input = response.match(/Action Input: "?(.*)"?/)&.send(:[], -1)
-
- # Find the Tool and call `execute`` with action_input as the input
- tool = tools.find { |tool| tool.name == action.strip }
- Langchain.logger.info("Invoking \"#{tool.class}\" Tool with \"#{action_input}\"", for: self.class)
-
- # Call `execute` with action_input as the input
- result = tool.execute(input: action_input)
-
- # Append the Observation to the prompt
- prompt += if prompt.end_with?("Observation:")
- " #{result}\nThought:"
- else
- "\nObservation: #{result}\nThought:"
- end
- elsif response.include?("Final Answer:")
- # Return the final answer
- final_response = response.split("Final Answer:")[-1]
- break
- end
- end
-
- final_response || raise(MaxIterationsReachedError.new(max_iterations))
- end
-
- private
-
- # Create the initial prompt to pass to the LLM
- # @param question [String] Question to ask
- # @param tools [Array] Tools to use
- # @return [String] Prompt
- def create_prompt(question:, tools:)
- tool_list = tools.map(&:name)
-
- prompt_template.format(
- date: Date.today.strftime("%B %d, %Y"),
- question: question,
- tool_names: "[#{tool_list.join(", ")}]",
- tools: tools.map do |tool|
- tool_name = tool.name
- tool_description = tool.description
- "#{tool_name}: #{tool_description}"
- end.join("\n")
- )
- end
-
- # Load the PromptTemplate from the YAML file
- # @return [PromptTemplate] PromptTemplate instance
- def prompt_template
- @template ||= Langchain::Prompt.load_from_path(
- file_path: Langchain.root.join("langchain/agent/react_agent/react_agent_prompt.yaml")
- )
- end
-
- class MaxIterationsReachedError < Langchain::Errors::BaseError
- def initialize(max_iterations)
- super("Agent stopped after #{max_iterations} iterations")
- end
- end
- end
- end
@@ -1,11 +0,0 @@
- _type: prompt
- template: |
- Given an input question and results of a SQL query, look at the results and return the answer. Use the following format:
- Question: {question}
- The SQL query: {sql_query}
- Result of the SQLQuery: {results}
- Final answer: Final answer here
- input_variables:
- - question
- - sql_query
- - results
@@ -1,21 +0,0 @@
- _type: prompt
- template: |
- Given an input question, create a syntactically correct {dialect} query to run, then return the query in valid SQL.
- Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
- Pay attention to use only the column names that you can see in the schema description.
- Be careful to not query for columns that do not exist.
- Pay attention to which column is in which table.
- Also, qualify column names with the table name when needed.
-
- Only use the tables listed below.
- {schema}
-
- Use the following format:
-
- Question: {question}
-
- SQLQuery:
- input_variables:
- - dialect
- - schema
- - question
@@ -1,84 +0,0 @@
- # frozen_string_literal: true
-
- module Langchain::Agent
- class SQLQueryAgent < Base
- attr_reader :llm, :db, :schema
-
- #
- # Initializes the Agent
- #
- # @param llm [Object] The LLM client to use
- # @param db [Object] Database connection info
- #
- def initialize(llm:, db:)
- warn "[DEPRECATION] `Langchain::Agent::ReActAgent` is deprecated. Please use `Langchain::Assistant` instead."
-
- @llm = llm
- @db = db
- @schema = @db.dump_schema
- end
-
- #
- # Ask a question and get an answer
- #
- # @param question [String] Question to ask the LLM/Database
- # @return [String] Answer to the question
- #
- def run(question:)
- prompt = create_prompt_for_sql(question: question)
-
- # Get the SQL string to execute
- Langchain.logger.info("Passing the inital prompt to the #{llm.class} LLM", for: self.class)
- sql_string = llm.complete(prompt: prompt).completion
-
- # Execute the SQL string and collect the results
- Langchain.logger.info("Passing the SQL to the Database: #{sql_string}", for: self.class)
- results = db.execute(input: sql_string)
-
- # Pass the results and get the LLM to synthesize the answer to the question
- Langchain.logger.info("Passing the synthesize prompt to the #{llm.class} LLM with results: #{results}", for: self.class)
- prompt2 = create_prompt_for_answer(question: question, sql_query: sql_string, results: results)
- llm.complete(prompt: prompt2).completion
- end
-
- private
-
- # Create the initial prompt to pass to the LLM
- # @param question[String] Question to ask
- # @return [String] Prompt
- def create_prompt_for_sql(question:)
- prompt_template_sql.format(
- dialect: "standard SQL",
- schema: schema,
- question: question
- )
- end
-
- # Load the PromptTemplate from the YAML file
- # @return [PromptTemplate] PromptTemplate instance
- def prompt_template_sql
- Langchain::Prompt.load_from_path(
- file_path: Langchain.root.join("langchain/agent/sql_query_agent/sql_query_agent_sql_prompt.yaml")
- )
- end
-
- # Create the second prompt to pass to the LLM
- # @param question [String] Question to ask
- # @return [String] Prompt
- def create_prompt_for_answer(question:, sql_query:, results:)
- prompt_template_answer.format(
- question: question,
- sql_query: sql_query,
- results: results
- )
- end
-
- # Load the PromptTemplate from the YAML file
- # @return [PromptTemplate] PromptTemplate instance
- def prompt_template_answer
- Langchain::Prompt.load_from_path(
- file_path: Langchain.root.join("langchain/agent/sql_query_agent/sql_query_agent_answer_prompt.yaml")
- )
- end
- end
- end