langchain 0.2.7 → 0.2.9

This diff shows the changes between two publicly released versions of the package, as published to their respective public registries. It is provided for informational purposes only.
package/README.md CHANGED
@@ -2,14 +2,13 @@

  ⚡ Building applications with LLMs through composability ⚡

- [![CI](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml/badge.svg)](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml) ![npm](https://img.shields.io/npm/dm/langchain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS) [![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchainjs)
+ [![CI](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml/badge.svg)](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml) ![npm](https://img.shields.io/npm/dm/langchain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchainjs)
  [<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchainjs)

  Looking for the Python version? Check out [LangChain](https://github.com/langchain-ai/langchain).

  To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
  [LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
- Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team.

  ## ⚡️ Quick Install

@@ -17,10 +16,6 @@ You can use npm, yarn, or pnpm to install LangChain.js

  `npm install -S langchain` or `yarn add langchain` or `pnpm add langchain`

- ```typescript
- import { ChatOpenAI } from "langchain/chat_models/openai";
- ```
-
  ## 🌐 Supported Environments

  LangChain is written in TypeScript and can be used in:
@@ -39,19 +34,20 @@ LangChain is written in TypeScript and can be used in:
  - **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

  This framework consists of several parts.
- - **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- - **[LangChain Templates](https://github.com/langchain-ai/langchain/tree/master/templates)**: (currently Python-only) A collection of easily deployable reference architectures for a wide variety of tasks.
- - **[LangServe](https://github.com/langchain-ai/langserve)**: (currently Python-only) A library for deploying LangChain chains as a REST API.
- - **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
+ - **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://js.langchain.com/v0.2/docs/concepts#langchain-expression-language), [components](https://js.langchain.com/v0.2/docs/concepts), and [third-party integrations](https://js.langchain.com/v0.2/docs/integrations/platforms/).
+ Use [LangGraph.js](https://js.langchain.com/v0.2/docs/concepts/#langgraphjs) to build stateful agents with first-class streaming and human-in-the-loop support.
+ - **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
+ - **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/) (currently Python-only).

  The LangChain libraries themselves are made up of several different packages.
  - **[`@langchain/core`](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core)**: Base abstractions and LangChain Expression Language.
  - **[`@langchain/community`](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community)**: Third party integrations.
  - **[`langchain`](https://github.com/langchain-ai/langchainjs/blob/main/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
+ - **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it.

  Integrations may also be split into their own compatible packages.

- ![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/img/langchain_stack_feb_2024.webp)
+ ![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/svg/langchain_stack_062024.svg)

  This library aims to assist in the development of those types of applications. Common examples of these applications include:

@@ -86,15 +82,15 @@ Data Augmented Generation involves specific types of chains that first interact

  **🤖 Agents:**

- Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
+ Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://js.langchain.com/v0.2/docs/concepts/#agents), along with [LangGraph.js](https://github.com/langchain-ai/langgraphjs/) for building custom agents.

  ## 📖 Documentation

- Please see [here](https://js.langchain.com/v0.2/) for full documentation, which includes:
+ Please see [here](https://js.langchain.com) for full documentation, which includes:

  - [Getting started](https://js.langchain.com/v0.2/docs/introduction): installation, setting up the environment, simple examples
- - [Tutorials](https://js.langchain.com/v0.2/docs/tutorials/): interactive guides and walkthroughs of common use cases/tasks.
- - [Use case](https://js.langchain.com/v0.2/docs/how_to/) walkthroughs and best practices for every component of the LangChain library.
+ - Overview of the [interfaces](https://js.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/), [modules](https://js.langchain.com/v0.2/docs/concepts) and [integrations](https://js.langchain.com/v0.2/docs/integrations/platforms/)
+ - [Tutorial](https://js.langchain.com/v0.2/docs/tutorials/) walkthroughs
  - [Reference](https://api.js.langchain.com): full API docs

  ## 💁 Contributing
@@ -108,4 +104,3 @@ Please report any security issues or concerns following our [security guidelines
  ## 🖇️ Relationship with Python LangChain

  This is built to integrate as seamlessly as possible with the [LangChain Python package](https://github.com/langchain-ai/langchain). Specifically, this means all objects (prompts, LLMs, chains, etc) are designed in a way where they can be serialized and shared between languages.
-
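The Quick Install hunk above drops a stale snippet that imported `ChatOpenAI` from the pre-0.2 `langchain/chat_models/openai` entrypoint, which no longer exists in the 0.2.x line. Model integrations now live in standalone provider packages; a minimal sketch of the current equivalent (assuming `@langchain/openai` is installed separately and `OPENAI_API_KEY` is set):

```typescript
// Sketch of the 0.2.x replacement for the removed README example.
// Assumes `npm install @langchain/openai` and OPENAI_API_KEY in the environment.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const response = await model.invoke("Hello, world!");
console.log(response.content);
```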
@@ -2,6 +2,7 @@ import type { StructuredToolInterface } from "@langchain/core/tools";
  import type { BaseChatModel, BaseChatModelCallOptions } from "@langchain/core/language_models/chat_models";
  import { ChatPromptTemplate } from "@langchain/core/prompts";
  import { OpenAIClient } from "@langchain/openai";
+ import { ToolDefinition } from "@langchain/core/language_models/base";
  import { OpenAIToolsAgentOutputParser, type ToolsAgentStep } from "./output_parser.js";
  import { AgentRunnableSequence } from "../agent.js";
  export { OpenAIToolsAgentOutputParser, type ToolsAgentStep };
@@ -18,7 +19,7 @@ export type CreateOpenAIToolsAgentParams = {
  tools?: StructuredToolInterface[] | OpenAIClient.ChatCompletionTool[] | any[];
  }>;
  /** Tools this agent has access to. */
- tools: StructuredToolInterface[];
+ tools: StructuredToolInterface[] | ToolDefinition[];
  /** The prompt to use, must have an input key of `agent_scratchpad`. */
  prompt: ChatPromptTemplate;
  /**
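The two hunks above widen `CreateOpenAIToolsAgentParams` so that `createOpenAIToolsAgent` accepts raw OpenAI-format `ToolDefinition` objects as well as structured tools; the structured chat and tool-calling agents receive the same widening further down. A sketch of what such a definition looks like (the weather tool is illustrative, not taken from this release):

```typescript
import type { ToolDefinition } from "@langchain/core/language_models/base";

// A raw OpenAI-format tool: JSON Schema only, with no bound implementation.
// The agent can plan calls to it; executing those calls stays with the caller.
const getWeather: ToolDefinition = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

// Before this release `tools` had to be StructuredToolInterface[];
// now `tools: [getWeather]` type-checks in createOpenAIToolsAgent(...) too.
```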
@@ -2,8 +2,10 @@
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.createStructuredChatAgent = exports.StructuredChatAgent = void 0;
  const zod_to_json_schema_1 = require("zod-to-json-schema");
+ const base_1 = require("@langchain/core/language_models/base");
  const runnables_1 = require("@langchain/core/runnables");
  const prompts_1 = require("@langchain/core/prompts");
+ const function_calling_1 = require("@langchain/core/utils/function_calling");
  const llm_chain_js_1 = require("../../chains/llm_chain.cjs");
  const agent_js_1 = require("../agent.cjs");
  const outputParser_js_1 = require("./outputParser.cjs");
@@ -215,7 +217,16 @@ async function createStructuredChatAgent({ llm, tools, prompt, streamRunnable, }
  if (missingVariables.length > 0) {
  throw new Error(`Provided prompt is missing required input variables: ${JSON.stringify(missingVariables)}`);
  }
- const toolNames = tools.map((tool) => tool.name);
+ let toolNames = [];
+ if (tools.every(base_1.isOpenAITool)) {
+ toolNames = tools.map((tool) => tool.function.name);
+ }
+ else if (tools.every(function_calling_1.isStructuredTool)) {
+ toolNames = tools.map((tool) => tool.name);
+ }
+ else {
+ throw new Error("All tools must be either OpenAI or Structured tools, not a mix.");
+ }
  const partialedPrompt = await prompt.partial({
  tools: (0, render_js_1.renderTextDescriptionAndArgs)(tools),
  tool_names: toolNames.join(", "),
@@ -1,5 +1,5 @@
  import type { StructuredToolInterface } from "@langchain/core/tools";
- import type { BaseLanguageModelInterface } from "@langchain/core/language_models/base";
+ import { type BaseLanguageModelInterface, type ToolDefinition } from "@langchain/core/language_models/base";
  import type { BasePromptTemplate } from "@langchain/core/prompts";
  import { BaseMessagePromptTemplate, ChatPromptTemplate } from "@langchain/core/prompts";
  import { AgentStep } from "@langchain/core/agents";
@@ -99,7 +99,7 @@ export type CreateStructuredChatAgentParams = {
  /** LLM to use as the agent. */
  llm: BaseLanguageModelInterface;
  /** Tools this agent has access to. */
- tools: StructuredToolInterface[];
+ tools: (StructuredToolInterface | ToolDefinition)[];
  /**
  * The prompt to use. Must have input keys for
  * `tools`, `tool_names`, and `agent_scratchpad`.
@@ -1,6 +1,8 @@
  import { zodToJsonSchema } from "zod-to-json-schema";
+ import { isOpenAITool, } from "@langchain/core/language_models/base";
  import { RunnablePassthrough } from "@langchain/core/runnables";
  import { ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, PromptTemplate, } from "@langchain/core/prompts";
+ import { isStructuredTool } from "@langchain/core/utils/function_calling";
  import { LLMChain } from "../../chains/llm_chain.js";
  import { Agent, AgentRunnableSequence, } from "../agent.js";
  import { StructuredChatOutputParserWithRetries } from "./outputParser.js";
@@ -211,7 +213,16 @@ export async function createStructuredChatAgent({ llm, tools, prompt, streamRunn
  if (missingVariables.length > 0) {
  throw new Error(`Provided prompt is missing required input variables: ${JSON.stringify(missingVariables)}`);
  }
- const toolNames = tools.map((tool) => tool.name);
+ let toolNames = [];
+ if (tools.every(isOpenAITool)) {
+ toolNames = tools.map((tool) => tool.function.name);
+ }
+ else if (tools.every(isStructuredTool)) {
+ toolNames = tools.map((tool) => tool.name);
+ }
+ else {
+ throw new Error("All tools must be either OpenAI or Structured tools, not a mix.");
+ }
  const partialedPrompt = await prompt.partial({
  tools: renderTextDescriptionAndArgs(tools),
  tool_names: toolNames.join(", "),
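The structured chat agent hunks above (CJS build, type declarations, ESM build) all add the same guard: tool names come from `tool.function.name` for OpenAI-format definitions and from `tool.name` for structured tools, and mixed lists are rejected. An illustrative re-implementation of that logic, not the library source (`any[]` mirrors the compiled output's loose typing):

```typescript
import { isOpenAITool } from "@langchain/core/language_models/base";
import { isStructuredTool } from "@langchain/core/utils/function_calling";

// Tool lists must be homogeneous: all OpenAI-format or all structured.
function extractToolNames(tools: any[]): string[] {
  if (tools.every(isOpenAITool)) {
    // OpenAI-format definitions keep the name under `function.name`.
    return tools.map((tool) => tool.function.name);
  }
  if (tools.every(isStructuredTool)) {
    return tools.map((tool) => tool.name);
  }
  throw new Error("All tools must be either OpenAI or Structured tools, not a mix.");
}
```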
@@ -1,6 +1,7 @@
  import { BaseChatModel } from "@langchain/core/language_models/chat_models";
  import { ChatPromptTemplate } from "@langchain/core/prompts";
  import { StructuredToolInterface } from "@langchain/core/tools";
+ import { ToolDefinition } from "@langchain/core/language_models/base";
  import { AgentRunnableSequence } from "../agent.js";
  import { ToolsAgentStep } from "./output_parser.js";
  /**
@@ -14,7 +15,7 @@ export type CreateToolCallingAgentParams = {
  */
  llm: BaseChatModel;
  /** Tools this agent has access to. */
- tools: StructuredToolInterface[];
+ tools: StructuredToolInterface[] | ToolDefinition[];
  /** The prompt to use, must have an input key of `agent_scratchpad`. */
  prompt: ChatPromptTemplate;
  /**
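The loader test hunks that follow each add `"id": undefined` to `Document` inline snapshots. That field comes from the optional `id` that `Document` gained in recent `@langchain/core` 0.2.x releases (which is also why the core dependency floor rises in package.json at the end of this diff); the loaders leave it unset, so the snapshots serialize it as `undefined`. A minimal sketch of the field:

```typescript
import { Document } from "@langchain/core/documents";

// `id` is an optional identifier on Document; the loaders in this release
// do not populate it, hence `"id": undefined` in the updated snapshots.
const doc = new Document({
  id: "doc-1", // optional; may be omitted entirely
  pageContent: "mitochondria is the powerhouse of the cell",
  metadata: { source: "example.csv", line: 1 },
});
```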
@@ -28,6 +28,7 @@ test("Test CSV loader from blob", async () => {
  expect(docs.length).toBe(2);
  expect(docs[0]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "text/csv",
  "line": 1,
@@ -39,6 +40,7 @@ test("Test CSV loader from blob", async () => {
  `);
  expect(docs[1]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "text/csv",
  "line": 2,
@@ -24,6 +24,7 @@ test("Test JSON loader from blob", async () => {
  expect(docs.length).toBe(2);
  expect(docs[0]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/json",
  "line": 1,
@@ -34,6 +35,7 @@ test("Test JSON loader from blob", async () => {
  `);
  expect(docs[1]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/json",
  "line": 2,
@@ -66,6 +68,7 @@ test("Test JSON loader from blob", async () => {
  expect(docs.length).toBe(10);
  expect(docs[0]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/json",
  "line": 1,
@@ -76,6 +79,7 @@ test("Test JSON loader from blob", async () => {
  `);
  expect(docs[1]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/json",
  "line": 2,
@@ -23,6 +23,7 @@ test("Test JSONL loader from blob", async () => {
  expect(docs.length).toBe(2);
  expect(docs[0]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/jsonl+json",
  "line": 1,
@@ -33,6 +34,7 @@ test("Test JSONL loader from blob", async () => {
  `);
  expect(docs[1]).toMatchInlineSnapshot(`
  Document {
+ "id": undefined,
  "metadata": {
  "blobType": "application/jsonl+json",
  "line": 2,
@@ -160,6 +160,17 @@ class ParentDocumentRetriever extends multi_vector_js_1.MultiVectorRetriever {
  parentDocs.push(...retrievedDocs);
  return parentDocs.slice(0, this.parentK);
  }
+ async _storeDocuments(parentDoc, childDocs, addToDocstore) {
+ if (this.childDocumentRetriever) {
+ await this.childDocumentRetriever.addDocuments(childDocs);
+ }
+ else {
+ await this.vectorstore.addDocuments(childDocs);
+ }
+ if (addToDocstore) {
+ await this.docstore.mset(Object.entries(parentDoc));
+ }
+ }
  /**
  * Adds documents to the docstore and vectorstores.
  * If a retriever is provided, it will be used to add documents instead of the vectorstore.
@@ -192,8 +203,6 @@ class ParentDocumentRetriever extends multi_vector_js_1.MultiVectorRetriever {
  if (parentDocs.length !== parentDocIds.length) {
  throw new Error(`Got uneven list of documents and ids.\nIf "ids" is provided, should be same length as "documents".`);
  }
- const embeddedDocs = [];
- const fullDocs = {};
  for (let i = 0; i < parentDocs.length; i += 1) {
  const parentDoc = parentDocs[i];
  const parentDocId = parentDocIds[i];
@@ -202,17 +211,7 @@ class ParentDocumentRetriever extends multi_vector_js_1.MultiVectorRetriever {
  pageContent: subDoc.pageContent,
  metadata: { ...subDoc.metadata, [this.idKey]: parentDocId },
  }));
- embeddedDocs.push(...taggedSubDocs);
- fullDocs[parentDocId] = parentDoc;
- }
- if (this.childDocumentRetriever) {
- await this.childDocumentRetriever.addDocuments(embeddedDocs);
- }
- else {
- await this.vectorstore.addDocuments(embeddedDocs);
- }
- if (addToDocstore) {
- await this.docstore.mset(Object.entries(fullDocs));
+ await this._storeDocuments(parentDoc, taggedSubDocs, addToDocstore);
  }
  }
  }
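The hunks above (and their declaration and ESM mirrors below) are functionally equivalent to the old code, but storage now happens once per parent document through a new `_storeDocuments` method instead of in one batch after the loop. That also makes storage a single overridable seam; a hypothetical subclass (not from this release) could intercept it:

```typescript
import { ParentDocumentRetriever } from "langchain/retrievers/parent_document";
import type { Document } from "@langchain/core/documents";

// Hypothetical subclass illustrating the new per-parent storage hook.
// The signature follows the declaration hunk below.
class LoggingParentDocumentRetriever extends ParentDocumentRetriever {
  async _storeDocuments(
    parentDoc: Document,
    childDocs: Document[],
    addToDocstore: boolean
  ): Promise<void> {
    console.log(`Storing ${childDocs.length} child chunk(s) for one parent`);
    await super._storeDocuments(parentDoc, childDocs, addToDocstore);
  }
}
```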
@@ -63,6 +63,7 @@ export declare class ParentDocumentRetriever extends MultiVectorRetriever {
  documentCompressorFilteringFn?: ParentDocumentRetrieverFields["documentCompressorFilteringFn"];
  constructor(fields: ParentDocumentRetrieverFields);
  _getRelevantDocuments(query: string): Promise<Document[]>;
+ _storeDocuments(parentDoc: Document, childDocs: Document[], addToDocstore: boolean): Promise<void>;
  /**
  * Adds documents to the docstore and vectorstores.
  * If a retriever is provided, it will be used to add documents instead of the vectorstore.
@@ -134,6 +134,17 @@ export class ParentDocumentRetriever extends MultiVectorRetriever {
  parentDocs.push(...retrievedDocs);
  return parentDocs.slice(0, this.parentK);
  }
+ async _storeDocuments(parentDoc, childDocs, addToDocstore) {
+ if (this.childDocumentRetriever) {
+ await this.childDocumentRetriever.addDocuments(childDocs);
+ }
+ else {
+ await this.vectorstore.addDocuments(childDocs);
+ }
+ if (addToDocstore) {
+ await this.docstore.mset(Object.entries(parentDoc));
+ }
+ }
  /**
  * Adds documents to the docstore and vectorstores.
  * If a retriever is provided, it will be used to add documents instead of the vectorstore.
@@ -166,8 +177,6 @@ export class ParentDocumentRetriever extends MultiVectorRetriever {
  if (parentDocs.length !== parentDocIds.length) {
  throw new Error(`Got uneven list of documents and ids.\nIf "ids" is provided, should be same length as "documents".`);
  }
- const embeddedDocs = [];
- const fullDocs = {};
  for (let i = 0; i < parentDocs.length; i += 1) {
  const parentDoc = parentDocs[i];
  const parentDocId = parentDocIds[i];
@@ -176,17 +185,7 @@ export class ParentDocumentRetriever extends MultiVectorRetriever {
  pageContent: subDoc.pageContent,
  metadata: { ...subDoc.metadata, [this.idKey]: parentDocId },
  }));
- embeddedDocs.push(...taggedSubDocs);
- fullDocs[parentDocId] = parentDoc;
- }
- if (this.childDocumentRetriever) {
- await this.childDocumentRetriever.addDocuments(embeddedDocs);
- }
- else {
- await this.vectorstore.addDocuments(embeddedDocs);
- }
- if (addToDocstore) {
- await this.docstore.mset(Object.entries(fullDocs));
+ await this._storeDocuments(parentDoc, taggedSubDocs, addToDocstore);
  }
  }
  }
@@ -11,7 +11,7 @@ test("Should work with a question input", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings());
+ ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const retriever = new EnsembleRetriever({
  retrievers: [vectorstore.asRetriever()],
  });
@@ -28,7 +28,7 @@ test("Should work with multiple retriever", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings());
+ ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const vectorstore2 = await MemoryVectorStore.fromTexts([
  "Buildings are made out of brick",
  "Buildings are made out of wood",
@@ -37,7 +37,7 @@ test("Should work with multiple retriever", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }, { id: 10 }], new CohereEmbeddings());
+ ], [{ id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }, { id: 10 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const retriever = new EnsembleRetriever({
  retrievers: [vectorstore.asRetriever(), vectorstore2.asRetriever()],
  });
@@ -54,7 +54,7 @@ test("Should work with weights", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings());
+ ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const vectorstore2 = await MemoryVectorStore.fromTexts([
  "Buildings are made out of brick",
  "Buildings are made out of wood",
@@ -63,7 +63,7 @@ test("Should work with weights", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }, { id: 10 }], new CohereEmbeddings());
+ ], [{ id: 6 }, { id: 7 }, { id: 8 }, { id: 9 }, { id: 10 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const retriever = new EnsembleRetriever({
  retrievers: [vectorstore.asRetriever(), vectorstore2.asRetriever()],
  weights: [0.5, 0.9],
@@ -12,7 +12,7 @@ test("Should work with a question input", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings());
+ ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const model = new ChatOpenAI({});
  const retriever = MultiQueryRetriever.fromLLM({
  llm: model,
@@ -32,7 +32,7 @@ test("Should work with a keyword", async () => {
  "Cars are made out of plastic",
  "mitochondria is the powerhouse of the cell",
  "mitochondria is made of lipids",
- ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings());
+ ], [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }], new CohereEmbeddings({ model: "embed-english-v3.0" }));
  const model = new ChatOpenAI({});
  const retriever = MultiQueryRetriever.fromLLM({
  llm: model,
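The EnsembleRetriever and MultiQueryRetriever tests above now pass an explicit embedding model to `CohereEmbeddings`, in line with newer releases of the Cohere integration that no longer assume a default model. A minimal sketch (assuming the `@langchain/cohere` package and `COHERE_API_KEY` in the environment):

```typescript
import { CohereEmbeddings } from "@langchain/cohere";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

// The embedding model must now be named explicitly.
const embeddings = new CohereEmbeddings({ model: "embed-english-v3.0" });

const vectorstore = await MemoryVectorStore.fromTexts(
  ["mitochondria is the powerhouse of the cell"],
  [{ id: 1 }],
  embeddings
);
const results = await vectorstore.similaritySearch("powerhouse", 1);
```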
@@ -2,6 +2,7 @@
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.renderTextDescriptionAndArgs = exports.renderTextDescription = void 0;
  const zod_to_json_schema_1 = require("zod-to-json-schema");
+ const base_1 = require("@langchain/core/language_models/base");
  /**
  * Render the tool name and description in plain text.
  *
@@ -14,7 +15,14 @@ const zod_to_json_schema_1 = require("zod-to-json-schema");
  * @returns a string of all tools and their descriptions
  */
  function renderTextDescription(tools) {
- return tools.map((tool) => `${tool.name}: ${tool.description}`).join("\n");
+ if (tools.every(base_1.isOpenAITool)) {
+ return tools
+ .map((tool) => `${tool.function.name}${tool.function.description ? `: ${tool.function.description}` : ""}`)
+ .join("\n");
+ }
+ return tools
+ .map((tool) => `${tool.name}: ${tool.description}`)
+ .join("\n");
  }
  exports.renderTextDescription = renderTextDescription;
  /**
@@ -29,6 +37,11 @@ exports.renderTextDescription = renderTextDescription;
  * @returns a string of all tools, their descriptions and a stringified version of their schemas
  */
  function renderTextDescriptionAndArgs(tools) {
+ if (tools.every(base_1.isOpenAITool)) {
+ return tools
+ .map((tool) => `${tool.function.name}${tool.function.description ? `: ${tool.function.description}` : ""}, args: ${JSON.stringify(tool.function.parameters)}`)
+ .join("\n");
+ }
  return tools
  .map((tool) => `${tool.name}: ${tool.description}, args: ${JSON.stringify((0, zod_to_json_schema_1.zodToJsonSchema)(tool.schema).properties)}`)
  .join("\n");
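With the hunk above (mirrored by the declaration and ESM hunks below), `renderTextDescription` and `renderTextDescriptionAndArgs` understand OpenAI-format definitions: the description, and for the args variant `function.parameters`, are read straight from the definition instead of a zod schema. A short usage sketch, assuming the `langchain/tools/render` entrypoint where these helpers are exported:

```typescript
import { renderTextDescription } from "langchain/tools/render";
import type { ToolDefinition } from "@langchain/core/language_models/base";

const tools: ToolDefinition[] = [
  {
    type: "function",
    function: {
      name: "calculator",
      description: "Evaluate an arithmetic expression.",
      parameters: {
        type: "object",
        properties: { expression: { type: "string" } },
        required: ["expression"],
      },
    },
  },
];

// As of this release, prints: "calculator: Evaluate an arithmetic expression."
console.log(renderTextDescription(tools));
```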
@@ -1,4 +1,5 @@
  import { StructuredToolInterface } from "@langchain/core/tools";
+ import { ToolDefinition } from "@langchain/core/language_models/base";
  /**
  * Render the tool name and description in plain text.
  *
@@ -10,7 +11,7 @@ import { StructuredToolInterface } from "@langchain/core/tools";
  * @param tools
  * @returns a string of all tools and their descriptions
  */
- export declare function renderTextDescription(tools: StructuredToolInterface[]): string;
+ export declare function renderTextDescription(tools: StructuredToolInterface[] | ToolDefinition[]): string;
  /**
  * Render the tool name, description, and args in plain text.
  * Output will be in the format of:'
@@ -22,4 +23,4 @@ export declare function renderTextDescription(tools: StructuredToolInterface[]):
  * @param tools
  * @returns a string of all tools, their descriptions and a stringified version of their schemas
  */
- export declare function renderTextDescriptionAndArgs(tools: StructuredToolInterface[]): string;
+ export declare function renderTextDescriptionAndArgs(tools: StructuredToolInterface[] | ToolDefinition[]): string;
@@ -1,4 +1,5 @@
  import { zodToJsonSchema } from "zod-to-json-schema";
+ import { isOpenAITool, } from "@langchain/core/language_models/base";
  /**
  * Render the tool name and description in plain text.
  *
@@ -11,7 +12,14 @@ import { zodToJsonSchema } from "zod-to-json-schema";
  * @returns a string of all tools and their descriptions
  */
  export function renderTextDescription(tools) {
- return tools.map((tool) => `${tool.name}: ${tool.description}`).join("\n");
+ if (tools.every(isOpenAITool)) {
+ return tools
+ .map((tool) => `${tool.function.name}${tool.function.description ? `: ${tool.function.description}` : ""}`)
+ .join("\n");
+ }
+ return tools
+ .map((tool) => `${tool.name}: ${tool.description}`)
+ .join("\n");
  }
  /**
  * Render the tool name, description, and args in plain text.
@@ -25,6 +33,11 @@ export function renderTextDescription(tools) {
  * @returns a string of all tools, their descriptions and a stringified version of their schemas
  */
  export function renderTextDescriptionAndArgs(tools) {
+ if (tools.every(isOpenAITool)) {
+ return tools
+ .map((tool) => `${tool.function.name}${tool.function.description ? `: ${tool.function.description}` : ""}, args: ${JSON.stringify(tool.function.parameters)}`)
+ .join("\n");
+ }
  return tools
  .map((tool) => `${tool.name}: ${tool.description}, args: ${JSON.stringify(zodToJsonSchema(tool.schema).properties)}`)
  .join("\n");
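The remaining change is the package manifest. The `@langchain/core` range jumps from `~0.2.0` to `>=0.2.11 <0.3.0`, which aligns with the core releases that carry the optional `Document.id` field exercised by the loader snapshots above, and `uuid` moves a major version (`^9.0.0` to `^10.0.0`). If another dependency in a project still resolves an older `@langchain/core`, one common remedy (an assumption on our part, not part of this diff) is to force a single copy via an npm `overrides` entry such as `"overrides": { "@langchain/core": "0.2.11" }`, or the Yarn `resolutions` equivalent, so that abstractions shared across packages come from one module instance.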
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "langchain",
- "version": "0.2.7",
+ "version": "0.2.9",
  "description": "Typescript bindings for langchain",
  "type": "module",
  "engines": {
@@ -888,7 +888,7 @@
  }
  },
  "dependencies": {
- "@langchain/core": "~0.2.0",
+ "@langchain/core": ">=0.2.11 <0.3.0",
  "@langchain/openai": ">=0.1.0 <0.3.0",
  "@langchain/textsplitters": "~0.0.0",
  "binary-extensions": "^2.2.0",
@@ -900,7 +900,7 @@
  "ml-distance": "^4.0.0",
  "openapi-types": "^12.1.3",
  "p-retry": "4",
- "uuid": "^9.0.0",
+ "uuid": "^10.0.0",
  "yaml": "^2.2.1",
  "zod": "^3.22.4",
  "zod-to-json-schema": "^3.22.3"