langchainrb 0.7.2 → 0.7.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 49f95a7d3bf92523a3bb74ffd9c1cff35c258c4ecb9523e75b3be4ffdf333359
- data.tar.gz: a114fc925963757330e83e9287314b1c363206a31293e788ab8f7cc5f8e82249
+ metadata.gz: 3d2d42bf6883822d160e0eeeb4adbfe1598ee271bd3dfd8d4d4b914db814ed0d
+ data.tar.gz: f041fc5f276258072275ab5979bf670cc5c6a122b8d4d55ca571224af790d43d
  SHA512:
- metadata.gz: e0fb4076645a2ba09e0e9012fa2ec84260c5294f59628284baace34ad98b4dc2621c29217890aba7995d21288b68b0eab96a4ad4ba74beb1c41d8e79c296539d
- data.tar.gz: 2d681b82119d4c4356011bcba6f5590429abdb3bea3049ab4c50ba720320493a64838bc08c6b9b8f16d2b2bd71d445795ae56923074a47b26e9948873460a250
+ metadata.gz: 61b3c342e8630e6d3ca325bfb105a29d609d99d668dc5c4cfa1cb2c447c230bb8f1f6aa7d252a08129918a0fa11e37bcab813c9700a4c690dd9e5d337eebeb7d
+ data.tar.gz: 7ef534ed87ae2d6c077854a03eb314390238d95e9c0b49e85c9042d60d122806709ee07e007e5de884535d4cb8b6a3ffa6504a31e6ac36fadbde10e9c1924444
data/CHANGELOG.md CHANGED
@@ -1,5 +1,9 @@
  ## [Unreleased]
 
+ ## [0.7.3] - 2023-11-08
+ - LLM response passes through the context in RAG cases
+ - Fix gpt-4 token length validation
+
  ## [0.7.2] - 2023-11-02
  - Azure OpenAI LLM support
 
data/README.md CHANGED
@@ -1,6 +1,6 @@
  💎🔗 Langchain.rb
  ---
- ⚡ Building applications with LLMs through composability
+ ⚡ Building LLM-powered applications in Ruby
 
  For deep Rails integration see: [langchainrb_rails](https://github.com/andreibondarev/langchainrb_rails) gem.
 
@@ -11,21 +11,24 @@ Available for paid consulting engagements! [Email me](mailto:andrei@sourcelabs.i
  [![Docs](http://img.shields.io/badge/yard-docs-blue.svg)](http://rubydoc.info/gems/langchainrb)
  [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/andreibondarev/langchainrb/blob/main/LICENSE.txt)
  [![](https://dcbadge.vercel.app/api/server/WDARp7J2n8?compact=true&style=flat)](https://discord.gg/WDARp7J2n8)
+ [![X](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40rushing_andrei)](https://twitter.com/rushing_andrei)
 
- Langchain.rb is a library that's an abstraction layer on top many emergent AI, ML and other DS tools. The goal is to abstract complexity and difficult concepts to make building AI/ML-supercharged applications approachable for traditional software engineers.
+ ## Use Cases
+ * Retrieval Augmented Generation (RAG) and vector search
+ * Chat bots
+ * [AI agents](https://github.com/andreibondarev/langchainrb/tree/main/lib/langchain/agent/agents.md)
 
- ## Explore Langchain.rb
+ ## Table of Contents
 
  - [Installation](#installation)
  - [Usage](#usage)
- - [Vector Search Databases](#using-vector-search-databases-)
- - [Standalone LLMs](#using-standalone-llms-️)
- - [Prompts](#using-prompts-)
- - [Output Parsers](#using-output-parsers)
- - [Agents](#using-agents-)
- - [Loaders](#loaders-)
- - [Examples](#examples)
+ - [Large Language Models (LLMs)](#large-language-models-llms)
+ - [Prompt Management](#prompt-management)
+ - [Output Parsers](#output-parsers)
+ - [Building RAG](#building-retrieval-augment-generation-rag-system)
+ - [Building chat bots](#building-chat-bots)
  - [Evaluations](#evaluations-evals)
+ - [Examples](#examples)
  - [Logging](#logging)
  - [Development](#development)
  - [Discord](#discord)
@@ -46,264 +49,65 @@ If bundler is not being used to manage dependencies, install the gem by executin
  require "langchain"
  ```
 
- #### Supported vector search databases and features:
-
- | Database | Querying | Storage | Schema Management | Backups | Rails Integration |
- | -------- |:------------------:| -------:| -----------------:| -------:| -----------------:|
- | [Chroma](https://trychroma.com/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
- | [Hnswlib](https://github.com/nmslib/hnswlib/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | WIP |
- | [Milvus](https://milvus.io/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
- | [Pinecone](https://www.pinecone.io/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
- | [Pgvector](https://github.com/pgvector/pgvector) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
- | [Qdrant](https://qdrant.tech/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
- | [Weaviate](https://weaviate.io/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | WIP | :white_check_mark: |
-
- ### Using Vector Search Databases 🔍
-
- Choose the LLM provider you'll be using (OpenAI or Cohere) and retrieve the API key.
-
- Add `gem "weaviate-ruby", "~> 0.8.3"` to your Gemfile.
-
- Pick the vector search database you'll be using and instantiate the client:
- ```ruby
- client = Langchain::Vectorsearch::Weaviate.new(
- url: ENV["WEAVIATE_URL"],
- api_key: ENV["WEAVIATE_API_KEY"],
- index_name: "",
- llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
- )
-
- # You can instantiate any other supported vector search database:
- client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"`
- client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"`
- client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.2"`
- client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 0.1.6"`
- client = Langchain::Vectorsearch::Pgvector.new(...) # `gem "pgvector", "~> 0.2"`
- client = Langchain::Vectorsearch::Qdrant.new(...) # `gem"qdrant-ruby", "~> 0.9.3"`
- ```
-
- ```ruby
- # Creating the default schema
- client.create_default_schema
- ```
-
- ```ruby
- # Store plain texts in your vector search database
- client.add_texts(
- texts: [
- "Begin by preheating your oven to 375°F (190°C). Prepare four boneless, skinless chicken breasts by cutting a pocket into the side of each breast, being careful not to cut all the way through. Season the chicken with salt and pepper to taste. In a large skillet, melt 2 tablespoons of unsalted butter over medium heat. Add 1 small diced onion and 2 minced garlic cloves, and cook until softened, about 3-4 minutes. Add 8 ounces of fresh spinach and cook until wilted, about 3 minutes. Remove the skillet from heat and let the mixture cool slightly.",
- "In a bowl, combine the spinach mixture with 4 ounces of softened cream cheese, 1/4 cup of grated Parmesan cheese, 1/4 cup of shredded mozzarella cheese, and 1/4 teaspoon of red pepper flakes. Mix until well combined. Stuff each chicken breast pocket with an equal amount of the spinach mixture. Seal the pocket with a toothpick if necessary. In the same skillet, heat 1 tablespoon of olive oil over medium-high heat. Add the stuffed chicken breasts and sear on each side for 3-4 minutes, or until golden brown."
- ]
- )
- ```
- ```ruby
- # Store the contents of your files in your vector search database
- my_pdf = Langchain.root.join("path/to/my.pdf")
- my_text = Langchain.root.join("path/to/my.txt")
- my_docx = Langchain.root.join("path/to/my.docx")
-
- client.add_data(paths: [my_pdf, my_text, my_docx])
- ```
- ```ruby
- # Retrieve similar documents based on the query string passed in
- client.similarity_search(
- query:,
- k: # number of results to be retrieved
- )
- ```
- ```ruby
- # Retrieve similar documents based on the query string passed in via the [HyDE technique](https://arxiv.org/abs/2212.10496)
- client.similarity_search_with_hyde()
- ```
- ```ruby
- # Retrieve similar documents based on the embedding passed in
- client.similarity_search_by_vector(
- embedding:,
- k: # number of results to be retrieved
- )
- ```
- ```ruby
- # Q&A-style querying based on the question passed in
- client.ask(
- question:
- )
- ```
-
- ## Integrating Vector Search into ActiveRecord models
- ```ruby
- class Product < ActiveRecord::Base
- vectorsearch provider: Langchain::Vectorsearch::Qdrant.new(
- api_key: ENV["QDRANT_API_KEY"],
- url: ENV["QDRANT_URL"],
- index_name: "Products",
- llm: Langchain::LLM::GooglePalm.new(api_key: ENV["GOOGLE_PALM_API_KEY"])
- )
+ ## Large Language Models (LLMs)
+ Langchain.rb wraps all supported LLMs in a unified interface, allowing you to easily swap out and test different models.
 
- after_save :upsert_to_vectorsearch
- end
- ```
+ #### Supported LLMs and features:
+ | LLM providers | embed() | complete() | chat() | summarize() | Notes |
+ | -------- |:------------------:| :-------: | :-----------------: | :-------: | :----------------- |
+ | [OpenAI](https://openai.com/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | ❌ | Including Azure OpenAI |
+ | [AI21](https://ai21.com/) | ❌ | :white_check_mark: | ❌ | :white_check_mark: | |
+ | [Anthropic](https://www.anthropic.com/) | ❌ | :white_check_mark: | ❌ | ❌ | |
+ | [Cohere](https://cohere.com/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
+ | [GooglePalm](https://ai.google/discover/palm2/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
+ | [HuggingFace](https://huggingface.co/) | :white_check_mark: | ❌ | ❌ | ❌ | |
+ | [Ollama](https://ollama.ai/) | :white_check_mark: | :white_check_mark: | ❌ | ❌ | |
+ | [Replicate](https://replicate.com/) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
 
- ### Exposed ActiveRecord methods
- ```ruby
- # Retrieve similar products based on the query string passed in
- Product.similarity_search(
- query:,
- k: # number of results to be retrieved
- )
- ```
- ```ruby
- # Q&A-style querying based on the question passed in
- Product.ask(
- question:
- )
- ```
-
- Additional info [here](https://github.com/andreibondarev/langchainrb/blob/main/lib/langchain/active_record/hooks.rb#L10-L38).
-
- ### Using Standalone LLMs 🗣️
-
- Add `gem "ruby-openai", "~> 4.0.0"` to your Gemfile.
+ #### Using standalone LLMs:
 
  #### OpenAI
- ```ruby
- openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
- ```
- You can pass additional parameters to the constructor, it will be passed to the OpenAI client:
- ```ruby
- openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"], llm_options: {uri_base: "http://localhost:1234"}) )
- ```
- ```ruby
- openai.embed(text: "foo bar")
- ```
- ```ruby
- openai.complete(prompt: "What is the meaning of life?")
- ```
-
- ##### Open AI Function calls support
-
- Conversation support
-
- ```ruby
- chat = Langchain::Conversation.new(llm: openai)
- ```
- ```ruby
- chat.set_context("You are the climate bot")
- chat.set_functions(functions)
- ```
 
- qdrant:
-
- ```ruby
- client.llm.functions = functions
- ```
-
- #### Azure
 
  Add `gem "ruby-openai", "~> 5.2.0"` to your Gemfile.
 
  ```ruby
- azure = Langchain::LLM::Azure.new(
- api_key: ENV["AZURE_API_KEY"],
- llm_options: {
- api_type: :azure,
- api_version: "2023-03-15-preview"
- },
- embedding_deployment_url: ENV.fetch("AZURE_EMBEDDING_URI"),
- chat_deployment_url: ENV.fetch("AZURE_CHAT_URI")
- )
- ```
- where `AZURE_EMBEDDING_URI` is e.g. `https://custom-domain.openai.azure.com/openai/deployments/gpt-35-turbo` and `AZURE_CHAT_URI` is e.g. `https://custom-domain.openai.azure.com/openai/deployments/ada-2`
-
- You can pass additional parameters to the constructor, it will be passed to the Azure client:
- ```ruby
- azure = Langchain::LLM::Azure.new(
- api_key: ENV["AZURE_API_KEY"],
- llm_options: {
- api_type: :azure,
- api_version: "2023-03-15-preview",
- request_timeout: 240 # Optional
- },
- embedding_deployment_url: ENV.fetch("AZURE_EMBEDDING_URI"),
- chat_deployment_url: ENV.fetch("AZURE_CHAT_URI")
- )
- ```
- ```ruby
- azure.embed(text: "foo bar")
- ```
- ```ruby
- azure.complete(prompt: "What is the meaning of life?")
- ```
-
- #### Cohere
- Add `gem "cohere-ruby", "~> 0.9.6"` to your Gemfile.
-
- ```ruby
- cohere = Langchain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"])
- ```
- ```ruby
- cohere.embed(text: "foo bar")
- ```
- ```ruby
- cohere.complete(prompt: "What is the meaning of life?")
- ```
-
- #### HuggingFace
- Add `gem "hugging-face", "~> 0.3.2"` to your Gemfile.
- ```ruby
- hugging_face = Langchain::LLM::HuggingFace.new(api_key: ENV["HUGGING_FACE_API_KEY"])
- ```
-
- #### Replicate
- Add `gem "replicate-ruby", "~> 0.2.2"` to your Gemfile.
- ```ruby
- replicate = Langchain::LLM::Replicate.new(api_key: ENV["REPLICATE_API_KEY"])
+ llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  ```
-
- #### Google PaLM (Pathways Language Model)
- Add `"google_palm_api", "~> 0.1.3"` to your Gemfile.
+ You can pass additional parameters to the constructor; they will be passed through to the OpenAI client:
  ```ruby
- google_palm = Langchain::LLM::GooglePalm.new(api_key: ENV["GOOGLE_PALM_API_KEY"])
+ llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"], llm_options: { ... })
  ```
 
- #### AI21
- Add `gem "ai21", "~> 0.2.1"` to your Gemfile.
+ Generate vector embeddings:
  ```ruby
- ai21 = Langchain::LLM::AI21.new(api_key: ENV["AI21_API_KEY"])
+ llm.embed(text: "foo bar")
  ```
 
- #### Anthropic
- Add `gem "anthropic", "~> 0.1.0"` to your Gemfile.
+ Generate a text completion:
  ```ruby
- anthropic = Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"])
+ llm.complete(prompt: "What is the meaning of life?")
  ```
 
+ Generate a chat completion:
  ```ruby
- anthropic.complete(prompt: "What is the meaning of life?")
+ llm.chat(prompt: "Hey! How are you?")
  ```
 
- #### Ollama
+ Summarize the text:
  ```ruby
- ollama = Langchain::LLM::Ollama.new(url: ENV["OLLAMA_URL"])
+ llm.summarize(text: "...")
  ```
 
+ You can use any other LLM by invoking the same interface:
  ```ruby
- ollama.complete(prompt: "What is the meaning of life?")
- ```
- ```ruby
- ollama.embed(text: "Hello world!")
+ llm = Langchain::LLM::GooglePalm.new(...)
  ```
 
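The unified interface works because every wrapper exposes the same method names, so swapping providers changes only the constructor call. A plain-Ruby sketch of the idea (the `FakeOpenAI`/`FakePalm` classes are hypothetical stand-ins for illustration, not classes from the gem):

```ruby
# Illustrative sketch only: any object responding to the shared method
# names can be used interchangeably behind one variable.
class FakeOpenAI
  def complete(prompt:)
    "openai: #{prompt}"
  end

  def embed(text:)
    text.chars.map(&:ord)
  end
end

class FakePalm
  def complete(prompt:)
    "palm: #{prompt}"
  end

  def embed(text:)
    text.bytes
  end
end

# Swapping models requires no other code changes:
[FakeOpenAI.new, FakePalm.new].map { |llm| llm.complete(prompt: "hi") }
# => ["openai: hi", "palm: hi"]
```

This duck-typing is what lets application code written against one provider be tested against another without modification.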
- ### Using Prompts 📋
+ ### Prompt Management
 
  #### Prompt Templates
 
- Create a prompt with one input variable:
-
- ```ruby
- prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"])
- prompt.format(adjective: "funny") # "Tell me a funny joke."
- ```
-
- Create a prompt with multiple input variables:
+ Create a prompt with input variables:
 
  ```ruby
  prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"])
@@ -384,7 +188,8 @@ prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/promp
  prompt.input_variables #=> ["adjective", "content"]
  ```
 
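Conceptually, a prompt template is just string substitution over the declared input variables. A simplified plain-Ruby sketch of the behavior (not the gem's actual implementation):

```ruby
# Simplified sketch of what a prompt template's format method does:
# replace each {variable} placeholder with the supplied keyword argument.
class TinyPromptTemplate
  attr_reader :input_variables

  def initialize(template:, input_variables:)
    @template = template
    @input_variables = input_variables
  end

  def format(**kwargs)
    input_variables.reduce(@template) do |text, var|
      text.gsub("{#{var}}", kwargs.fetch(var.to_sym).to_s)
    end
  end
end

prompt = TinyPromptTemplate.new(
  template: "Tell me a {adjective} joke about {content}.",
  input_variables: ["adjective", "content"]
)
prompt.format(adjective: "funny", content: "chickens")
# => "Tell me a funny joke about chickens."
```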
- ### Using Output Parsers
+
+ ### Output Parsers
 
  Parse LLM text responses into structured output, such as JSON.
 
@@ -484,93 +289,147 @@ fix_parser.parse(llm_response)
 
  See [here](https://github.com/andreibondarev/langchainrb/tree/main/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb) for a concrete example
 
- ### Using Agents 🤖
- Agents are semi-autonomous bots that can respond to user questions and use available to them Tools to provide informed replies. They break down problems into series of steps and define Actions (and Action Inputs) along the way that are executed and fed back to them as additional information. Once an Agent decides that it has the Final Answer it responds with it.
+ ## Building Retrieval Augment Generation (RAG) system
+ RAG is a methodology that helps LLMs generate accurate and up-to-date information.
+ A typical RAG workflow follows the 3 steps below:
+ 1. Relevant knowledge (or data) is retrieved from the knowledge base (typically a vector search DB).
+ 2. A prompt, containing the retrieved knowledge above, is constructed.
+ 3. The LLM receives the prompt above and generates a text completion.
+ The most common use case for a RAG system is powering Q&A systems where users pose natural language questions and receive answers in natural language.
 
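The three steps above can be sketched end-to-end in plain Ruby (toy word-overlap retrieval standing in for a vector database, and a stubbed LLM; all names here are illustrative, not gem APIs):

```ruby
# Toy RAG pipeline: retrieve, build prompt, generate.
KNOWLEDGE_BASE = [
  "Langchain.rb supports many vector search databases.",
  "The capital of France is Paris."
]

# Step 1: retrieve relevant documents (naive word-overlap scoring stands
# in here for a real vector similarity search).
def retrieve(question, k: 1)
  q_words = question.downcase.scan(/\w+/)
  KNOWLEDGE_BASE.max_by(k) { |doc| (doc.downcase.scan(/\w+/) & q_words).size }
end

# Step 2: construct a prompt containing the retrieved knowledge.
def build_prompt(question, docs)
  "Context:\n#{docs.join("\n")}\n\nQuestion: #{question}\nAnswer:"
end

# Step 3: the LLM generates a completion from the prompt (stubbed here).
def generate(prompt)
  "LLM answer grounded in: #{prompt.lines.first}"
end

docs = retrieve("What is the capital of France?")
answer = generate(build_prompt("What is the capital of France?", docs))
```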
- #### ReAct Agent
+ ### Vector search databases
+ Langchain.rb provides a convenient unified interface on top of supported vector search databases, making it easy to configure your index, add data, and query and retrieve from it.
 
- Add `gem "ruby-openai"`, `gem "eqn"`, and `gem "google_search_results"` to your Gemfile
+ #### Supported vector search databases and features:
 
- ```ruby
- search_tool = Langchain::Tool::GoogleSearch.new(api_key: ENV["SERPAPI_API_KEY"])
- calculator = Langchain::Tool::Calculator.new
+ | Database | Open-source | Cloud offering |
+ | -------- |:------------------:| :------------: |
+ | [Chroma](https://trychroma.com/) | :white_check_mark: | :white_check_mark: |
+ | [Hnswlib](https://github.com/nmslib/hnswlib/) | :white_check_mark: | ❌ |
+ | [Milvus](https://milvus.io/) | :white_check_mark: | :white_check_mark: Zilliz Cloud |
+ | [Pinecone](https://www.pinecone.io/) | ❌ | :white_check_mark: |
+ | [Pgvector](https://github.com/pgvector/pgvector) | :white_check_mark: | :white_check_mark: |
+ | [Qdrant](https://qdrant.tech/) | :white_check_mark: | :white_check_mark: |
+ | [Weaviate](https://weaviate.io/) | :white_check_mark: | :white_check_mark: |
 
- openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
+ ### Using Vector Search Databases 🔍
 
- agent = Langchain::Agent::ReActAgent.new(
- llm: openai,
- tools: [search_tool, calculator]
- )
- ```
+ Pick the vector search database you'll be using, add the gem dependency, and instantiate the client:
  ```ruby
- agent.run(question: "How many full soccer fields would be needed to cover the distance between NYC and DC in a straight line?")
- #=> "Approximately 2,945 soccer fields would be needed to cover the distance between NYC and DC in a straight line."
+ gem "weaviate-ruby", "~> 0.8.9"
  ```
 
- #### SQL-Query Agent
+ Choose and instantiate the LLM provider you'll be using to generate embeddings:
+ ```ruby
+ llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
+ ```
 
- Add `gem "sequel"` to your Gemfile
+ ```ruby
+ client = Langchain::Vectorsearch::Weaviate.new(
+ url: ENV["WEAVIATE_URL"],
+ api_key: ENV["WEAVIATE_API_KEY"],
+ index_name: "Documents",
+ llm: llm
+ )
+ ```
 
+ You can instantiate any other supported vector search database:
  ```ruby
- database = Langchain::Tool::Database.new(connection_string: "postgres://user:password@localhost:5432/db_name")
+ client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"`
+ client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"`
+ client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.2"`
+ client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 0.1.6"`
+ client = Langchain::Vectorsearch::Pgvector.new(...) # `gem "pgvector", "~> 0.2"`
+ client = Langchain::Vectorsearch::Qdrant.new(...) # `gem "qdrant-ruby", "~> 0.9.3"`
+ ```
 
- agent = Langchain::Agent::SQLQueryAgent.new(llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]), db: database)
+ Create the default schema:
+ ```ruby
+ client.create_default_schema
  ```
350
+
351
+ Add plain text data to your vector search database:
519
352
  ```ruby
520
- agent.run(question: "How many users have a name with length greater than 5 in the users table?")
521
- #=> "14 users have a name with length greater than 5 in the users table."
353
+ client.add_texts(
354
+ texts: [
355
+ "Begin by preheating your oven to 375°F (190°C). Prepare four boneless, skinless chicken breasts by cutting a pocket into the side of each breast, being careful not to cut all the way through. Season the chicken with salt and pepper to taste. In a large skillet, melt 2 tablespoons of unsalted butter over medium heat. Add 1 small diced onion and 2 minced garlic cloves, and cook until softened, about 3-4 minutes. Add 8 ounces of fresh spinach and cook until wilted, about 3 minutes. Remove the skillet from heat and let the mixture cool slightly.",
356
+ "In a bowl, combine the spinach mixture with 4 ounces of softened cream cheese, 1/4 cup of grated Parmesan cheese, 1/4 cup of shredded mozzarella cheese, and 1/4 teaspoon of red pepper flakes. Mix until well combined. Stuff each chicken breast pocket with an equal amount of the spinach mixture. Seal the pocket with a toothpick if necessary. In the same skillet, heat 1 tablespoon of olive oil over medium-high heat. Add the stuffed chicken breasts and sear on each side for 3-4 minutes, or until golden brown."
357
+ ]
358
+ )
522
359
  ```
523
360
 
524
- #### Demo
525
- ![May-12-2023 13-09-13](https://github.com/andreibondarev/langchainrb/assets/541665/6bad4cd9-976c-420f-9cf9-b85bf84f7eaf)
361
+ Or use the file parsers to load, parse and index data into your database:
362
+ ```ruby
363
+ my_pdf = Langchain.root.join("path/to/my.pdf")
364
+ my_text = Langchain.root.join("path/to/my.txt")
365
+ my_docx = Langchain.root.join("path/to/my.docx")
526
366
 
527
- ![May-12-2023 13-07-45](https://github.com/andreibondarev/langchainrb/assets/541665/9aacdcc7-4225-4ea0-ab96-7ee48826eb9b)
367
+ client.add_data(paths: [my_pdf, my_text, my_docx])
368
+ ```
369
+ Supported file formats: docx, html, pdf, text, json, jsonl, csv, xlsx.
528
370
 
529
- #### Available Tools 🛠️
371
+ Retrieve similar documents based on the query string passed in:
372
+ ```ruby
373
+ client.similarity_search(
374
+ query:,
375
+ k: # number of results to be retrieved
376
+ )
377
+ ```
530
378
 
531
- | Name | Description | ENV Requirements | Gem Requirements |
532
- | ------------ | :------------------------------------------------: | :-----------------------------------------------------------: | :---------------------------------------: |
533
- | "calculator" | Useful for getting the result of a math expression | | `gem "eqn", "~> 1.6.5"` |
534
- | "database" | Useful for querying a SQL database | | `gem "sequel", "~> 5.68.0"` |
535
- | "ruby_code_interpreter" | Interprets Ruby expressions | | `gem "safe_ruby", "~> 1.0.4"` |
536
- | "google_search" | A wrapper around Google Search | `ENV["SERPAPI_API_KEY"]` (https://serpapi.com/manage-api-key) | `gem "google_search_results", "~> 2.0.0"` |
537
- | "weather" | Calls Open Weather API to retrieve the current weather | `ENV["OPEN_WEATHER_API_KEY"]` (https://home.openweathermap.org/api_keys) | `gem "open-weather-ruby-client", "~> 0.3.0"` |
538
- | "wikipedia" | Calls Wikipedia API to retrieve the summary | | `gem "wikipedia-client", "~> 1.17.0"` |
379
+ Retrieve similar documents based on the query string passed in via the [HyDE technique](https://arxiv.org/abs/2212.10496):
380
+ ```ruby
381
+ client.similarity_search_with_hyde()
382
+ ```
539
383
 
540
- #### Loaders 🚚
384
+ Retrieve similar documents based on the embedding passed in:
385
+ ```ruby
386
+ client.similarity_search_by_vector(
387
+ embedding:,
388
+ k: # number of results to be retrieved
389
+ )
390
+ ```
541
391
 
542
- Need to read data from various sources? Load it up.
392
+ RAG-based querying
393
+ ```ruby
394
+ client.ask(
395
+ question:
396
+ )
397
+ ```
543
398
 
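Under `similarity_search_by_vector`, the database ranks stored embeddings against the query embedding, typically with a metric such as cosine similarity. A plain-Ruby sketch of that ranking step (illustrative only, not the database internals):

```ruby
# Cosine similarity: dot(a, b) / (|a| * |b|). Vector databases rank
# stored embeddings against the query embedding with a metric like this.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  mag = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (mag.(a) * mag.(b))
end

# Return the k stored vectors most similar to the query embedding.
def toy_similarity_search_by_vector(store, embedding, k:)
  store.max_by(k) { |vec| cosine_similarity(vec, embedding) }
end

store = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
toy_similarity_search_by_vector(store, [1.0, 0.0], k: 2)
# => [[1.0, 0.0], [0.9, 0.1]]
```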
- ##### Usage
+ ## Building chat bots
 
- Just call `Langchan::Loader.load` with the path to the file or a URL you want to load.
+ ### Conversation class
 
+ Choose and instantiate the LLM provider you'll be using:
  ```ruby
- Langchain::Loader.load('/path/to/file.pdf')
+ llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  ```
-
- or
-
+ Instantiate the Conversation class:
  ```ruby
- Langchain::Loader.load('https://www.example.com/file.pdf')
+ chat = Langchain::Conversation.new(llm: llm)
  ```
 
- ##### Supported Formats
+ (Optional) Set the conversation context:
+ ```ruby
+ chat.set_context("You are a chatbot from the future")
+ ```
 
+ Exchange messages with the LLM:
+ ```ruby
+ chat.message("Tell me about future technologies")
+ ```
 
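Conceptually, a Conversation accumulates the context plus the message history and hands the whole transcript to the LLM on each turn. A minimal illustrative sketch with a stubbed LLM (not the gem's implementation):

```ruby
# Minimal Conversation sketch: keep context plus message history and pass
# the full transcript to the LLM each turn. The LLM is a stubbed lambda.
class TinyConversation
  attr_reader :messages

  def initialize(llm:)
    @llm = llm
    @messages = []
    @context = nil
  end

  def set_context(text)
    @context = text
  end

  def message(text)
    @messages << { role: "user", content: text }
    transcript = ([@context] + @messages.map { |m| m[:content] }).compact.join("\n")
    reply = @llm.call(transcript)
    @messages << { role: "assistant", content: reply }
    reply
  end
end

llm = ->(transcript) { "echo: #{transcript.lines.last}" }
chat = TinyConversation.new(llm: llm)
chat.set_context("You are a helpful bot")
chat.message("Hello")
# => "echo: Hello"
```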
- | Format | Pocessor | Gem Requirements |
- | ------ | ---------------------------- | :--------------------------: |
- | docx | Langchain::Processors::Docx | `gem "docx", "~> 0.8.0"` |
- | html | Langchain::Processors::HTML | `gem "nokogiri", "~> 1.13"` |
- | pdf | Langchain::Processors::PDF | `gem "pdf-reader", "~> 1.4"` |
- | text | Langchain::Processors::Text | |
- | JSON | Langchain::Processors::JSON | |
- | JSONL | Langchain::Processors::JSONL | |
- | csv | Langchain::Processors::CSV | |
- | xlsx | Langchain::Processors::Xlsx | `gem "roo", "~> 2.10.0"` |
+ To stream the chat response:
+ ```ruby
+ chat = Langchain::Conversation.new(llm: llm) do |chunk|
+ print(chunk)
+ end
+ ```
 
- ## Examples
- Additional examples available: [/examples](https://github.com/andreibondarev/langchainrb/tree/main/examples)
+ OpenAI Functions support:
+ ```ruby
+ chat.set_functions(functions)
+ ```
 
  ## Evaluations (Evals)
  The Evaluations module is a collection of tools that can be used to evaluate and track the performance of the output produced by LLMs and your RAG (Retrieval Augmented Generation) pipelines.
@@ -598,13 +457,16 @@ ragas.score(answer: "", question: "", context: "")
  # }
  ```
 
+ ## Examples
+ Additional examples available: [/examples](https://github.com/andreibondarev/langchainrb/tree/main/examples)
+
 
  ## Logging
  LangChain.rb uses standard logging mechanisms and defaults to `:warn` level. Most messages are at info level, but we will add debug or warn statements as needed.
  To show all log messages:
 
  ```ruby
- Langchain.logger.level = :info
+ Langchain.logger.level = :debug
  ```
 
  ## Development
@@ -618,31 +480,6 @@ Langchain.logger.level = :info
  ## Discord
  Join us in the [Langchain.rb](https://discord.gg/WDARp7J2n8) Discord server.
 
- ## Core Contributors
- [<img style="border-radius:50%" alt="Andrei Bondarev" src="https://avatars.githubusercontent.com/u/541665?v=4" width="80" height="80" class="avatar">](https://twitter.com/rushing_andrei)
-
- ## Contributors
- [<img style="border-radius:50%" alt="Alex Chaplinsky" src="https://avatars.githubusercontent.com/u/695947?v=4" width="80" height="80" class="avatar">](https://github.com/alchaplinsky)
- [<img style="border-radius:50%" alt="Josh Nichols" src="https://avatars.githubusercontent.com/u/159?v=4" width="80" height="80" class="avatar">](https://github.com/technicalpickles)
- [<img style="border-radius:50%" alt="Matt Lindsey" src="https://avatars.githubusercontent.com/u/5638339?v=4" width="80" height="80" class="avatar">](https://github.com/mattlindsey)
- [<img style="border-radius:50%" alt="Ricky Chilcott" src="https://avatars.githubusercontent.com/u/445759?v=4" width="80" height="80" class="avatar">](https://github.com/rickychilcott)
- [<img style="border-radius:50%" alt="Moeki Kawakami" src="https://avatars.githubusercontent.com/u/72325947?v=4" width="80" height="80" class="avatar">](https://github.com/moekidev)
- [<img style="border-radius:50%" alt="Jens Stmrs" src="https://avatars.githubusercontent.com/u/3492669?v=4" width="80" height="80" class="avatar">](https://github.com/faustus7)
- [<img style="border-radius:50%" alt="Rafael Figueiredo" src="https://avatars.githubusercontent.com/u/35845775?v=4" width="80" height="80" class="avatar">](https://github.com/rafaelqfigueiredo)
- [<img style="border-radius:50%" alt="Piero Dotti" src="https://avatars.githubusercontent.com/u/5167659?v=4" width="80" height="80" class="avatar">](https://github.com/ProGM)
- [<img style="border-radius:50%" alt="Michał Ciemięga" src="https://avatars.githubusercontent.com/u/389828?v=4" width="80" height="80" class="avatar">](https://github.com/zewelor)
- [<img style="border-radius:50%" alt="Bruno Bornsztein" src="https://avatars.githubusercontent.com/u/3760?v=4" width="80" height="80" class="avatar">](https://github.com/bborn)
- [<img style="border-radius:50%" alt="Tim Williams" src="https://avatars.githubusercontent.com/u/1192351?v=4" width="80" height="80" class="avatar">](https://github.com/timrwilliams)
- [<img style="border-radius:50%" alt="Zhenhang Tung" src="https://avatars.githubusercontent.com/u/8170159?v=4" width="80" height="80" class="avatar">](https://github.com/ZhenhangTung)
- [<img style="border-radius:50%" alt="Hama" src="https://avatars.githubusercontent.com/u/38002468?v=4" width="80" height="80" class="avatar">](https://github.com/akmhmgc)
- [<img style="border-radius:50%" alt="Josh Weir" src="https://avatars.githubusercontent.com/u/10720337?v=4" width="80" height="80" class="avatar">](https://github.com/joshweir)
- [<img style="border-radius:50%" alt="Arthur Hess" src="https://avatars.githubusercontent.com/u/446035?v=4" width="80" height="80" class="avatar">](https://github.com/arthurhess)
- [<img style="border-radius:50%" alt="Jin Shen" src="https://avatars.githubusercontent.com/u/54917718?v=4" width="80" height="80" class="avatar">](https://github.com/jacshen-ebay)
- [<img style="border-radius:50%" alt="Earle Bunao" src="https://avatars.githubusercontent.com/u/4653624?v=4" width="80" height="80" class="avatar">](https://github.com/erbunao)
- [<img style="border-radius:50%" alt="Maël H." src="https://avatars.githubusercontent.com/u/61985678?v=4" width="80" height="80" class="avatar">](https://github.com/mael-ha)
- [<img style="border-radius:50%" alt="Chris O. Adebiyi" src="https://avatars.githubusercontent.com/u/62605573?v=4" width="80" height="80" class="avatar">](https://github.com/oluvvafemi)
- [<img style="border-radius:50%" alt="Aaron Breckenridge" src="https://avatars.githubusercontent.com/u/201360?v=4" width="80" height="80" class="avatar">](https://github.com/breckenedge)
-
  ## Star History
 
  [![Star History Chart](https://api.star-history.com/svg?repos=andreibondarev/langchainrb&type=Date)](https://star-history.com/#andreibondarev/langchainrb&Date)
@@ -0,0 +1,54 @@
+
+ ### Agents 🤖
+ Agents are semi-autonomous bots that can respond to user questions and use the Tools available to them to provide informed replies. They break problems down into a series of steps and define Actions (and Action Inputs) along the way that are executed and fed back to them as additional information. Once an Agent decides that it has the Final Answer, it responds with it.
+
+ #### ReAct Agent
+
+ Add `gem "ruby-openai"`, `gem "eqn"`, and `gem "google_search_results"` to your Gemfile.
+
+ ```ruby
+ search_tool = Langchain::Tool::GoogleSearch.new(api_key: ENV["SERPAPI_API_KEY"])
+ calculator = Langchain::Tool::Calculator.new
+
+ openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
+
+ agent = Langchain::Agent::ReActAgent.new(
+ llm: openai,
+ tools: [search_tool, calculator]
+ )
+ ```
+ ```ruby
+ agent.run(question: "How many full soccer fields would be needed to cover the distance between NYC and DC in a straight line?")
+ #=> "Approximately 2,945 soccer fields would be needed to cover the distance between NYC and DC in a straight line."
+ ```
+
+ #### SQL-Query Agent
+
+ Add `gem "sequel"` to your Gemfile.
+
+ ```ruby
+ database = Langchain::Tool::Database.new(connection_string: "postgres://user:password@localhost:5432/db_name")
+
+ agent = Langchain::Agent::SQLQueryAgent.new(llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]), db: database)
+ ```
+ ```ruby
+ agent.run(question: "How many users have a name with length greater than 5 in the users table?")
+ #=> "14 users have a name with length greater than 5 in the users table."
+ ```
+
+ #### Demo
+ ![May-12-2023 13-09-13](https://github.com/andreibondarev/langchainrb/assets/541665/6bad4cd9-976c-420f-9cf9-b85bf84f7eaf)
+
+ ![May-12-2023 13-07-45](https://github.com/andreibondarev/langchainrb/assets/541665/9aacdcc7-4225-4ea0-ab96-7ee48826eb9b)
+
+ #### Available Tools 🛠️
+
+ | Name | Description | ENV Requirements | Gem Requirements |
+ | ------------ | :------------------------------------------------: | :-----------------------------------------------------------: | :---------------------------------------: |
+ | "calculator" | Useful for getting the result of a math expression | | `gem "eqn", "~> 1.6.5"` |
+ | "database" | Useful for querying a SQL database | | `gem "sequel", "~> 5.68.0"` |
+ | "ruby_code_interpreter" | Interprets Ruby expressions | | `gem "safe_ruby", "~> 1.0.4"` |
+ | "google_search" | A wrapper around Google Search | `ENV["SERPAPI_API_KEY"]` (https://serpapi.com/manage-api-key) | `gem "google_search_results", "~> 2.0.0"` |
+ | "weather" | Calls Open Weather API to retrieve the current weather | `ENV["OPEN_WEATHER_API_KEY"]` (https://home.openweathermap.org/api_keys) | `gem "open-weather-ruby-client", "~> 0.3.0"` |
+ | "wikipedia" | Calls Wikipedia API to retrieve the summary | | `gem "wikipedia-client", "~> 1.17.0"` |
+
@@ -4,7 +4,7 @@ module Langchain
  module Evals
  module Ragas
  # 123
- class Critique
+ class AspectCritique
  attr_reader :llm, :criterias
 
  CRITERIAS = {
@@ -53,7 +53,7 @@ module Langchain
  # @return [PromptTemplate] PromptTemplate instance
  def critique_prompt_template
  @template_one ||= Langchain::Prompt.load_from_path(
- file_path: Langchain.root.join("langchain/evals/ragas/prompts/critique.yml")
+ file_path: Langchain.root.join("langchain/evals/ragas/prompts/aspect_critique.yml")
  )
  end
  end
@@ -4,7 +4,7 @@ module Langchain::LLM
  # LLM interface for OpenAI APIs: https://platform.openai.com/overview
  #
  # Gem requirements:
- # gem "ruby-openai", "~> 4.0.0"
+ # gem "ruby-openai", "~> 5.2.0"
  #
  # Usage:
  # openai = Langchain::LLM::OpenAI.new(api_key:, llm_options: {})
@@ -5,6 +5,9 @@ module Langchain
  class BaseResponse
  attr_reader :raw_response, :model
 
+ # Save context in the response when doing RAG workflow vectorsearch#ask()
+ attr_accessor :context
+
  def initialize(raw_response, model: nil)
  @raw_response = raw_response
  @model = model
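The new `context` accessor lets RAG callers inspect which retrieved documents grounded the answer. A minimal self-contained sketch of the pattern (the `ask` method below is a simplified hypothetical stand-in for the gem's `vectorsearch#ask`, not its real implementation):

```ruby
# BaseResponse mirrors the diff above; ask() is a stand-in for
# the vectorsearch RAG flow that now saves the context.
class BaseResponse
  attr_reader :raw_response, :model

  # Save context in the response when doing RAG workflow vectorsearch#ask()
  attr_accessor :context

  def initialize(raw_response, model: nil)
    @raw_response = raw_response
    @model = model
  end
end

def ask(question, context)
  # In the gem this would be llm.chat(prompt: generate_rag_prompt(...))
  response = BaseResponse.new("stub answer to: #{question}")
  response.context = context
  response
end

response = ask("What is langchainrb?", ["chunk A", "chunk B"])
response.context #=> ["chunk A", "chunk B"]
```

The caller can now show the supporting chunks alongside the answer instead of re-running the similarity search.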
@@ -33,7 +33,7 @@ module Langchain::Prompt
  when ".json"
  config = JSON.parse(File.read(file_path))
  when ".yaml", ".yml"
- config = YAML.safe_load(File.read(file_path))
+ config = YAML.safe_load_file(file_path)
  else
  raise ArgumentError, "Got unsupported file type #{file_path.extname}"
  end
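`YAML.safe_load_file` (available since Psych 3.3) reads and parses in one call, replacing the two-step `YAML.safe_load(File.read(file_path))`. A quick standalone check with a throwaway prompt-style file:

```ruby
require "yaml"
require "tempfile"

# Parse a small YAML prompt file with safe_load_file, the one-step
# replacement for YAML.safe_load(File.read(path)).
config = nil
Tempfile.create(["prompt", ".yml"]) do |f|
  f.write("_type: prompt\ninput_variables:\n  - question\n")
  f.flush
  config = YAML.safe_load_file(f.path)
end

config["_type"]           #=> "prompt"
config["input_variables"] #=> ["question"]
```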
@@ -30,6 +30,7 @@ module Langchain
  def self.token_limit(model_name)
  TOKEN_LIMITS[model_name]
  end
+ singleton_class.alias_method :completion_token_limit, :token_limit
  end
  end
  end
@@ -20,6 +20,9 @@ module Langchain
  end
 
  leftover_tokens = token_limit(model_name) - text_token_length
+ # Some models have a separate token limit for completion (e.g. GPT-4 Turbo)
+ # We want the lower of the two limits
+ leftover_tokens = [leftover_tokens, completion_token_limit(model_name)].min
 
  # Raise an error even if whole prompt is equal to the model's token limit (leftover_tokens == 0)
  if leftover_tokens < 0
@@ -38,6 +38,7 @@ module Langchain
  def self.token_limit(model_name)
  TOKEN_LIMITS[model_name]
  end
+ singleton_class.alias_method :completion_token_limit, :token_limit
  end
  end
  end
@@ -46,6 +46,7 @@ module Langchain
  def self.token_limit(model_name)
  TOKEN_LIMITS.dig(model_name, "input_token_limit")
  end
+ singleton_class.alias_method :completion_token_limit, :token_limit
  end
  end
  end
@@ -10,6 +10,14 @@ module Langchain
  # It is used to validate the token length before the API call is made
  #
  class OpenAIValidator < BaseValidator
+ COMPLETION_TOKEN_LIMITS = {
+ # GPT-4 Turbo has a separate token limit for completion
+ # Source:
+ # https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
+ "gpt-4-1106-preview" => 4096,
+ "gpt-4-vision-preview" => 4096
+ }
+
  TOKEN_LIMITS = {
  # Source:
  # https://platform.openai.com/docs/api-reference/embeddings
@@ -29,6 +37,8 @@ module Langchain
  "gpt-4-32k" => 32768,
  "gpt-4-32k-0314" => 32768,
  "gpt-4-32k-0613" => 32768,
+ "gpt-4-1106-preview" => 128000,
+ "gpt-4-vision-preview" => 128000,
  "text-curie-001" => 2049,
  "text-babbage-001" => 2049,
  "text-ada-001" => 2049,
@@ -53,6 +63,10 @@ module Langchain
  def self.token_limit(model_name)
  TOKEN_LIMITS[model_name]
  end
+
+ def self.completion_token_limit(model_name)
+ COMPLETION_TOKEN_LIMITS[model_name] || token_limit(model_name)
+ end
  end
  end
  end
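These validator changes fix gpt-4 token length validation: GPT-4 Turbo accepts a 128k-token context but can emit at most 4,096 completion tokens, so the usable budget is the lower of the remaining context window and the completion cap. A plain-Ruby sketch of that calculation (constants taken from the diff; `leftover_tokens` is a simplified stand-in for the validator, not the gem's actual method):

```ruby
# Input context windows (subset copied from the diff).
TOKEN_LIMITS = {
  "gpt-4-1106-preview" => 128_000,
  "gpt-4" => 8_192
}

# GPT-4 Turbo has a separate, much smaller completion limit.
COMPLETION_TOKEN_LIMITS = {
  "gpt-4-1106-preview" => 4_096
}

def leftover_tokens(model_name, prompt_token_length)
  leftover = TOKEN_LIMITS.fetch(model_name) - prompt_token_length
  # Models without a separate completion cap fall back to the context window.
  completion_cap = COMPLETION_TOKEN_LIMITS.fetch(model_name, TOKEN_LIMITS.fetch(model_name))
  # Take the lower of the remaining context window and the completion cap
  [leftover, completion_cap].min
end

leftover_tokens("gpt-4-1106-preview", 100_000) #=> 4096  (capped by completion limit)
leftover_tokens("gpt-4-1106-preview", 127_000) #=> 1000  (capped by context window)
leftover_tokens("gpt-4", 5_000)                #=> 3192
```

Without the `min`, a short prompt to GPT-4 Turbo would request a `max_tokens` far above what the API allows for completions.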
@@ -126,7 +126,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
 
  private
@@ -151,7 +151,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
  end
  end
@@ -148,7 +148,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
  end
  end
@@ -180,7 +180,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
 
  # Pinecone index
@@ -137,7 +137,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
  end
  end
@@ -6,7 +6,7 @@ module Langchain::Vectorsearch
  # Wrapper around Weaviate
  #
  # Gem requirements:
- # gem "weaviate-ruby", "~> 0.8.3"
+ # gem "weaviate-ruby", "~> 0.8.9"
  #
  # Usage:
  # weaviate = Langchain::Vectorsearch::Weaviate.new(url:, api_key:, index_name:, llm:)
@@ -137,7 +137,9 @@ module Langchain::Vectorsearch
 
  prompt = generate_rag_prompt(question: question, context: context)
 
- llm.chat(prompt: prompt, &block)
+ response = llm.chat(prompt: prompt, &block)
+ response.context = context
+ response
  end
 
  private
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Langchain
- VERSION = "0.7.2"
+ VERSION = "0.7.3"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: langchainrb
  version: !ruby/object:Gem::Version
- version: 0.7.2
+ version: 0.7.3
  platform: ruby
  authors:
  - Andrei Bondarev
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2023-11-02 00:00:00.000000000 Z
+ date: 2023-11-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: baran
@@ -567,6 +567,7 @@ files:
  - LICENSE.txt
  - README.md
  - lib/langchain.rb
+ - lib/langchain/agent/agents.md
  - lib/langchain/agent/base.rb
  - lib/langchain/agent/react_agent.rb
  - lib/langchain/agent/react_agent/react_agent_prompt.yaml
@@ -590,13 +591,13 @@ files:
  - lib/langchain/data.rb
  - lib/langchain/dependency_helper.rb
  - lib/langchain/evals/ragas/answer_relevance.rb
+ - lib/langchain/evals/ragas/aspect_critique.rb
  - lib/langchain/evals/ragas/context_relevance.rb
- - lib/langchain/evals/ragas/critique.rb
  - lib/langchain/evals/ragas/faithfulness.rb
  - lib/langchain/evals/ragas/main.rb
  - lib/langchain/evals/ragas/prompts/answer_relevance.yml
+ - lib/langchain/evals/ragas/prompts/aspect_critique.yml
  - lib/langchain/evals/ragas/prompts/context_relevance.yml
- - lib/langchain/evals/ragas/prompts/critique.yml
  - lib/langchain/evals/ragas/prompts/faithfulness_statements_extraction.yml
  - lib/langchain/evals/ragas/prompts/faithfulness_statements_verification.yml
  - lib/langchain/llm/ai21.rb