langchain 0.0.206 → 0.0.208

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,23 +3,25 @@
  ⚡ Building applications with LLMs through composability ⚡

  [![CI](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml/badge.svg)](https://github.com/langchain-ai/langchainjs/actions/workflows/ci.yml) ![npm](https://img.shields.io/npm/dw/langchain) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai) [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS) [![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchainjs)
- [<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/hwchase17/langchainjs)
+ [<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchainjs)

  Looking for the Python version? Check out [LangChain](https://github.com/langchain-ai/langchain).

  To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
  [LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
- Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team
+ Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team.

- ## Quick Install
+ ## ⚡️ Quick Install

- `yarn add langchain`
+ You can use npm, yarn, or pnpm to install LangChain.js
+
+ `npm install -S langchain` or `yarn add langchain` or `pnpm add langchain`

  ```typescript
- import { OpenAI } from "langchain/llms/openai";
+ import { ChatOpenAI } from "langchain/chat_models/openai";
  ```

- ## Supported Environments
+ ## 🌐 Supported Environments

  LangChain is written in TypeScript and can be used in:

@@ -30,27 +32,80 @@ LangChain is written in TypeScript and can be used in:
  - Browser
  - Deno

- ## 🤔 What is this?
+ ## 🤔 What is LangChain?

- Large language models (LLMs) are emerging as a transformative technology, enabling
- developers to build applications that they previously could not.
- But using these LLMs in isolation is often not enough to
- create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.
+ **LangChain** is a framework for developing applications powered by language models. It enables applications that:
+ - **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
+ - **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

- This library is aimed at assisting in the development of those types of applications.
+ This framework consists of several parts.
+ - **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
+ - **[LangChain Templates](https://github.com/langchain-ai/langchain/tree/master/templates)**: (currently Python-only) A collection of easily deployable reference architectures for a wide variety of tasks.
+ - **[LangServe](https://github.com/langchain-ai/langserve)**: (currently Python-only) A library for deploying LangChain chains as a REST API.
+ - **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.

- ## 📖 Full Documentation
+ The LangChain libraries themselves are made up of several different packages.
+ - **[`@langchain/core`](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core)**: Base abstractions and LangChain Expression Language.
+ - **[`@langchain/community`](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community)**: Third party integrations.
+ - **[`langchain`](https://github.com/langchain-ai/langchainjs/blob/main/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.

- For full documentation of prompts, chains, agents and more, please see [here](https://js.langchain.com/docs/).
+ Integrations may also be split into their own compatible packages.

- ## Relationship with Python LangChain
+ ![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/img/langchain_stack.png)

- This is built to integrate as seamlessly as possible with the [LangChain Python package](https://github.com/langchain-ai/langchain). Specifically, this means all objects (prompts, LLMs, chains, etc) are designed in a way where they can be serialized and shared between languages.
+ This library aims to assist in the development of those types of applications. Common examples of these applications include:
+
+ **❓Question Answering over specific documents**
+
+ - [Documentation](https://js.langchain.com/docs/use_cases/question_answering/)
+ - End-to-end Example: [Doc-Chatbot](https://github.com/dissorial/doc-chatbot)
+
+
+ **💬 Chatbots**
+
+ - [Documentation](https://js.langchain.com/docs/modules/model_io/models/chat/)
+ - End-to-end Example: [Chat-LangChain](https://github.com/langchain-ai/chat-langchain)
+
+ ## 🚀 How does LangChain help?
+
+ The main value props of the LangChain libraries are:
+ 1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
+ 2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
+
+ Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
+
+ Components fall into the following **modules**:
+
+ **📃 Model I/O:**

- The [LangChainHub](https://github.com/hwchase17/langchain-hub) is a central place for the serialized versions of these prompts, chains, and agents.
+ This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
+
+ **📚 Retrieval:**
+
+ Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
+
+ **🤖 Agents:**
+
+ Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
+
+ ## 📖 Documentation
+
+ Please see [here](https://js.langchain.com) for full documentation, which includes:
+
+ - [Getting started](https://js.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
+ - Overview of the [interfaces](https://js.langchain.com/docs/expression_language/), [modules](https://js.langchain.com/docs/modules/) and [integrations](https://js.langchain.com/docs/integrations/platforms)
+ - [Use case](https://js.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://js.langchain.com/docs/guides/)
+ - [Reference](https://api.js.langchain.com): full API docs

  ## 💁 Contributing

- As an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.
+ As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
+
+ For detailed information on how to contribute, see [here](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md).
+
+ Please report any security issues or concerns following our [security guidelines](https://github.com/langchain-ai/langchainjs/blob/main/SECURITY.md).
+
+ ## 🖇️ Relationship with Python LangChain
+
+ This is built to integrate as seamlessly as possible with the [LangChain Python package](https://github.com/langchain-ai/langchain). Specifically, this means all objects (prompts, LLMs, chains, etc) are designed in a way where they can be serialized and shared between languages.

- Check out [our contributing guidelines](https://github.com/langchain-ai/langchainjs/blob/main/CONTRIBUTING.md) for instructions on how to contribute.
@@ -10,7 +10,7 @@ const env_js_1 = require("../../util/env.cjs");
  */
  class AssemblyAILoader extends base_js_1.BaseDocumentLoader {
  /**
- * Creates a new AssemblyAI loader.
+ * Create a new AssemblyAI loader.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
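As the doc comment above describes, the loader accepts its API key either explicitly via `assemblyAIOptions.apiKey` or implicitly via the `ASSEMBLYAI_API_KEY` environment variable. A minimal sketch of that resolution order, detached from the loader (the helper name is illustrative, not part of the package):

```javascript
// Resolve an AssemblyAI API key: an explicit option wins over the
// ASSEMBLYAI_API_KEY environment variable. Illustrative helper only.
function resolveApiKey(assemblyAIOptions) {
  const apiKey = assemblyAIOptions?.apiKey ?? process.env.ASSEMBLYAI_API_KEY;
  if (!apiKey) {
    throw new Error("No AssemblyAI API key provided");
  }
  return apiKey;
}
```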
@@ -37,14 +37,14 @@ class AssemblyAILoader extends base_js_1.BaseDocumentLoader {
  }
  class CreateTranscriptLoader extends AssemblyAILoader {
  /**
- * Retrevies an existing transcript by its ID.
- * @param params The parameters to create the transcript, or the ID of the transcript to retrieve.
+ * Transcribe audio or retrieve an existing transcript by its ID.
+ * @param params The parameters to transcribe audio, or the ID of the transcript to retrieve.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
  constructor(params, assemblyAIOptions) {
  super(assemblyAIOptions);
- Object.defineProperty(this, "CreateTranscriptParameters", {
+ Object.defineProperty(this, "transcribeParams", {
  enumerable: true,
  configurable: true,
  writable: true,
@@ -60,38 +60,42 @@ class CreateTranscriptLoader extends AssemblyAILoader {
  this.transcriptId = params;
  }
  else {
- this.CreateTranscriptParameters = params;
+ this.transcribeParams = params;
  }
  }
- async getOrCreateTranscript() {
+ async transcribeOrGetTranscript() {
  if (this.transcriptId) {
  return await this.client.transcripts.get(this.transcriptId);
  }
- if (this.CreateTranscriptParameters) {
- return await this.client.transcripts.create(this.CreateTranscriptParameters);
+ if (this.transcribeParams) {
+ let transcribeParams;
+ if ("audio_url" in this.transcribeParams) {
+ transcribeParams = {
+ ...this.transcribeParams,
+ audio: this.transcribeParams.audio_url,
+ };
+ }
+ else {
+ transcribeParams = this.transcribeParams;
+ }
+ return await this.client.transcripts.transcribe(transcribeParams);
+ }
+ else {
+ throw new Error("No transcript ID or transcribe parameters provided");
  }
  }
  }
  /**
- * Creates and loads the transcript as a document using AssemblyAI.
- * @example
- * ```typescript
- * const loader = new AudioTranscriptLoader(
- * { audio_url: "https:
- * { apiKey: "ASSEMBLYAI_API_KEY" },
- * );
- * const docs = await loader.load();
- * console.dir(docs, { depth: Infinity });
- * ```
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  */
  class AudioTranscriptLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the transcript as a document using AssemblyAI.
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  * @returns A promise that resolves to a single document containing the transcript text
  * as the page content, and the transcript object as the metadata.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  return [
  new document_js_1.Document({
  pageContent: transcript.text,
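The new `transcribeOrGetTranscript` in the hunk above keeps backwards compatibility with the deprecated `CreateTranscriptParameters` shape: when the stored params carry the legacy `audio_url` field, it is copied onto the new `audio` field before `transcripts.transcribe` is called. That mapping step can be sketched in isolation, detached from the SDK client:

```javascript
// Normalize transcribe parameters: the legacy shape uses `audio_url`,
// the current AssemblyAI SDK expects `audio`. Standalone sketch of the
// branch inside transcribeOrGetTranscript above.
function normalizeTranscribeParams(params) {
  if ("audio_url" in params) {
    // Keep all other fields, but expose the audio source under the new key.
    return { ...params, audio: params.audio_url };
  }
  return params;
}
```

Spreading the legacy object first means a caller who somehow set both fields gets the `audio_url` value, matching the order of the spread in the diff.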
@@ -102,15 +106,15 @@ class AudioTranscriptLoader extends CreateTranscriptLoader {
  }
  exports.AudioTranscriptLoader = AudioTranscriptLoader;
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  */
  class AudioTranscriptParagraphsLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  * @returns A promise that resolves to an array of documents, each containing a paragraph of the transcript.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  const paragraphsResponse = await this.client.transcripts.paragraphs(transcript.id);
  return paragraphsResponse.paragraphs.map((p) => new document_js_1.Document({
  pageContent: p.text,
@@ -120,16 +124,15 @@ class AudioTranscriptParagraphsLoader extends CreateTranscriptLoader {
  }
  exports.AudioTranscriptParagraphsLoader = AudioTranscriptParagraphsLoader;
  /**
- * Creates a transcript for the given `CreateTranscriptParameters.audio_url`,
- * and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  */
  class AudioTranscriptSentencesLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  * @returns A promise that resolves to an array of documents, each containing a sentence of the transcript.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  const sentencesResponse = await this.client.transcripts.sentences(transcript.id);
  return sentencesResponse.sentences.map((p) => new document_js_1.Document({
  pageContent: p.text,
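Both the paragraphs and sentences loaders above follow the same pattern: fetch the item list for a finished transcript, then wrap each item in a `Document` whose `pageContent` is the item's text and whose `metadata` is the full item. Sketched with a stand-in `Document` class (the real one comes from langchain's `document.js`):

```javascript
// Stand-in for langchain's Document class, for illustration only.
class Document {
  constructor({ pageContent, metadata }) {
    this.pageContent = pageContent;
    this.metadata = metadata;
  }
}

// Wrap each transcript paragraph or sentence in its own Document, as
// AudioTranscriptParagraphsLoader and AudioTranscriptSentencesLoader do.
function toDocuments(items) {
  return items.map((p) => new Document({ pageContent: p.text, metadata: p }));
}
```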
@@ -139,27 +142,12 @@ class AudioTranscriptSentencesLoader extends CreateTranscriptLoader {
  }
  exports.AudioTranscriptSentencesLoader = AudioTranscriptSentencesLoader;
  /**
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
- * @example
- * ```typescript
- * const loader = new AudioSubtitleLoader(
- * {
- * audio_url: "https:
- * },
- * "srt",
- * {
- * apiKey: "<ASSEMBLYAI_API_KEY>",
- * },
- * );
- *
- *
- * const docs = await loader.load();
- * ```
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
  */
  class AudioSubtitleLoader extends CreateTranscriptLoader {
  /**
- * Creates a new AudioSubtitleLoader.
- * @param params The parameters to create the transcript, or the ID of the transcript to retrieve.
+ * Create a new AudioSubtitleLoader.
+ * @param params The parameters to transcribe audio, or the ID of the transcript to retrieve.
  * @param subtitleFormat The format of the subtitles, either `srt` or `vtt`.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
@@ -175,11 +163,11 @@ class AudioSubtitleLoader extends CreateTranscriptLoader {
  this.subtitleFormat = subtitleFormat;
  }
  /**
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
  * @returns A promise that resolves a document containing the subtitles as the page content.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  const subtitles = await this.client.transcripts.subtitles(transcript.id, this.subtitleFormat);
  return [
  new document_js_1.Document({
@@ -1,4 +1,4 @@
- import { AssemblyAI, CreateTranscriptParameters, SubtitleFormat, Transcript, TranscriptParagraph, TranscriptSentence } from "assemblyai";
+ import { AssemblyAI, TranscribeParams, SubtitleFormat, Transcript, TranscriptParagraph, TranscriptSentence, CreateTranscriptParameters } from "assemblyai";
  import { Document } from "../../document.js";
  import { BaseDocumentLoader } from "../base.js";
  import { AssemblyAIOptions } from "../../types/assemblyai-types.js";
@@ -9,95 +9,70 @@ export type * from "../../types/assemblyai-types.js";
  declare abstract class AssemblyAILoader extends BaseDocumentLoader {
  protected client: AssemblyAI;
  /**
- * Creates a new AssemblyAI loader.
+ * Create a new AssemblyAI loader.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
  constructor(assemblyAIOptions?: AssemblyAIOptions);
  }
  declare abstract class CreateTranscriptLoader extends AssemblyAILoader {
- protected CreateTranscriptParameters?: CreateTranscriptParameters;
+ protected transcribeParams?: TranscribeParams | CreateTranscriptParameters;
  protected transcriptId?: string;
  /**
- * Retrevies an existing transcript by its ID.
- * @param params The parameters to create the transcript, or the ID of the transcript to retrieve.
+ * Transcribe audio or retrieve an existing transcript by its ID.
+ * @param params The parameters to transcribe audio, or the ID of the transcript to retrieve.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
- constructor(params: CreateTranscriptParameters | string, assemblyAIOptions?: AssemblyAIOptions);
- protected getOrCreateTranscript(): Promise<any>;
+ constructor(params: TranscribeParams | CreateTranscriptParameters | string, assemblyAIOptions?: AssemblyAIOptions);
+ protected transcribeOrGetTranscript(): Promise<Transcript>;
  }
  /**
- * Creates and loads the transcript as a document using AssemblyAI.
- * @example
- * ```typescript
- * const loader = new AudioTranscriptLoader(
- * { audio_url: "https:
- * { apiKey: "ASSEMBLYAI_API_KEY" },
- * );
- * const docs = await loader.load();
- * console.dir(docs, { depth: Infinity });
- * ```
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  */
  export declare class AudioTranscriptLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the transcript as a document using AssemblyAI.
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  * @returns A promise that resolves to a single document containing the transcript text
  * as the page content, and the transcript object as the metadata.
  */
  load(): Promise<Document<Transcript>[]>;
  }
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  */
  export declare class AudioTranscriptParagraphsLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  * @returns A promise that resolves to an array of documents, each containing a paragraph of the transcript.
  */
  load(): Promise<Document<TranscriptParagraph>[]>;
  }
  /**
- * Creates a transcript for the given `CreateTranscriptParameters.audio_url`,
- * and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  */
  export declare class AudioTranscriptSentencesLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  * @returns A promise that resolves to an array of documents, each containing a sentence of the transcript.
  */
  load(): Promise<Document<TranscriptSentence>[]>;
  }
  /**
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
- * @example
- * ```typescript
- * const loader = new AudioSubtitleLoader(
- * {
- * audio_url: "https:
- * },
- * "srt",
- * {
- * apiKey: "<ASSEMBLYAI_API_KEY>",
- * },
- * );
- *
- *
- * const docs = await loader.load();
- * ```
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
  */
  export declare class AudioSubtitleLoader extends CreateTranscriptLoader {
  private subtitleFormat;
  /**
- * Creates a new AudioSubtitleLoader.
- * @param CreateTranscriptParameters The parameters to create the transcript.
+ * Create a new AudioSubtitleLoader.
+ * @param transcribeParams The parameters to transcribe audio.
  * @param subtitleFormat The format of the subtitles, either `srt` or `vtt`.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
- constructor(CreateTranscriptParameters: CreateTranscriptParameters, subtitleFormat: SubtitleFormat, assemblyAIOptions?: AssemblyAIOptions);
+ constructor(transcribeParams: TranscribeParams | CreateTranscriptParameters, subtitleFormat: SubtitleFormat, assemblyAIOptions?: AssemblyAIOptions);
  /**
- * Creates a new AudioSubtitleLoader.
+ * Create a new AudioSubtitleLoader.
  * @param transcriptId The ID of the transcript to retrieve.
  * @param subtitleFormat The format of the subtitles, either `srt` or `vtt`.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
@@ -105,7 +80,7 @@ export declare class AudioSubtitleLoader extends CreateTranscriptLoader {
  */
  constructor(transcriptId: string, subtitleFormat: SubtitleFormat, assemblyAIOptions?: AssemblyAIOptions);
  /**
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
  * @returns A promise that resolves a document containing the subtitles as the page content.
  */
  load(): Promise<Document[]>;
@@ -7,7 +7,7 @@ import { getEnvironmentVariable } from "../../util/env.js";
  */
  class AssemblyAILoader extends BaseDocumentLoader {
  /**
- * Creates a new AssemblyAI loader.
+ * Create a new AssemblyAI loader.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
@@ -34,14 +34,14 @@ class AssemblyAILoader extends BaseDocumentLoader {
  }
  class CreateTranscriptLoader extends AssemblyAILoader {
  /**
- * Retrevies an existing transcript by its ID.
- * @param params The parameters to create the transcript, or the ID of the transcript to retrieve.
+ * Transcribe audio or retrieve an existing transcript by its ID.
+ * @param params The parameters to transcribe audio, or the ID of the transcript to retrieve.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
  */
  constructor(params, assemblyAIOptions) {
  super(assemblyAIOptions);
- Object.defineProperty(this, "CreateTranscriptParameters", {
+ Object.defineProperty(this, "transcribeParams", {
  enumerable: true,
  configurable: true,
  writable: true,
@@ -57,38 +57,42 @@ class CreateTranscriptLoader extends AssemblyAILoader {
  this.transcriptId = params;
  }
  else {
- this.CreateTranscriptParameters = params;
+ this.transcribeParams = params;
  }
  }
- async getOrCreateTranscript() {
+ async transcribeOrGetTranscript() {
  if (this.transcriptId) {
  return await this.client.transcripts.get(this.transcriptId);
  }
- if (this.CreateTranscriptParameters) {
- return await this.client.transcripts.create(this.CreateTranscriptParameters);
+ if (this.transcribeParams) {
+ let transcribeParams;
+ if ("audio_url" in this.transcribeParams) {
+ transcribeParams = {
+ ...this.transcribeParams,
+ audio: this.transcribeParams.audio_url,
+ };
+ }
+ else {
+ transcribeParams = this.transcribeParams;
+ }
+ return await this.client.transcripts.transcribe(transcribeParams);
+ }
+ else {
+ throw new Error("No transcript ID or transcribe parameters provided");
  }
  }
  }
  /**
- * Creates and loads the transcript as a document using AssemblyAI.
- * @example
- * ```typescript
- * const loader = new AudioTranscriptLoader(
- * { audio_url: "https:
- * { apiKey: "ASSEMBLYAI_API_KEY" },
- * );
- * const docs = await loader.load();
- * console.dir(docs, { depth: Infinity });
- * ```
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  */
  export class AudioTranscriptLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the transcript as a document using AssemblyAI.
+ * Transcribe audio and load the transcript as a document using AssemblyAI.
  * @returns A promise that resolves to a single document containing the transcript text
  * as the page content, and the transcript object as the metadata.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  return [
  new Document({
  pageContent: transcript.text,
@@ -98,15 +102,15 @@ export class AudioTranscriptLoader extends CreateTranscriptLoader {
  }
  }
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  */
  export class AudioTranscriptParagraphsLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the paragraphs of the transcript, creating a document for each paragraph.
+ * Transcribe audio and load the paragraphs of the transcript, creating a document for each paragraph.
  * @returns A promise that resolves to an array of documents, each containing a paragraph of the transcript.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  const paragraphsResponse = await this.client.transcripts.paragraphs(transcript.id);
  return paragraphsResponse.paragraphs.map((p) => new Document({
  pageContent: p.text,
@@ -115,16 +119,15 @@ export class AudioTranscriptParagraphsLoader extends CreateTranscriptLoader {
  }
  }
  /**
- * Creates a transcript for the given `CreateTranscriptParameters.audio_url`,
- * and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  */
  export class AudioTranscriptSentencesLoader extends CreateTranscriptLoader {
  /**
- * Creates a transcript and loads the sentences of the transcript, creating a document for each sentence.
+ * Transcribe audio and load the sentences of the transcript, creating a document for each sentence.
  * @returns A promise that resolves to an array of documents, each containing a sentence of the transcript.
  */
  async load() {
- const transcript = await this.getOrCreateTranscript();
+ const transcript = await this.transcribeOrGetTranscript();
  const sentencesResponse = await this.client.transcripts.sentences(transcript.id);
  return sentencesResponse.sentences.map((p) => new Document({
  pageContent: p.text,
@@ -133,27 +136,12 @@ export class AudioTranscriptSentencesLoader extends CreateTranscriptLoader {
  }
  }
  /**
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
- * @example
- * ```typescript
- * const loader = new AudioSubtitleLoader(
- * {
- * audio_url: "https:
- * },
- * "srt",
- * {
- * apiKey: "<ASSEMBLYAI_API_KEY>",
- * },
- * );
- *
- *
- * const docs = await loader.load();
- * ```
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
  */
  export class AudioSubtitleLoader extends CreateTranscriptLoader {
  /**
- * Creates a new AudioSubtitleLoader.
- * @param params The parameters to create the transcript, or the ID of the transcript to retrieve.
+ * Create a new AudioSubtitleLoader.
+ * @param params The parameters to transcribe audio, or the ID of the transcript to retrieve.
  * @param subtitleFormat The format of the subtitles, either `srt` or `vtt`.
  * @param assemblyAIOptions The options to configure the AssemblyAI loader.
  * Configure the `assemblyAIOptions.apiKey` with your AssemblyAI API key, or configure it as the `ASSEMBLYAI_API_KEY` environment variable.
@@ -169,11 +157,11 @@ export class AudioSubtitleLoader extends CreateTranscriptLoader {
169
157
  this.subtitleFormat = subtitleFormat;
170
158
  }
171
159
  /**
172
- * Creates a transcript and loads subtitles for the transcript as `srt` or `vtt` format.
160
+ * Transcribe audio and load subtitles for the transcript as `srt` or `vtt` format.
173
161
  * @returns A promise that resolves a document containing the subtitles as the page content.
174
162
  */
175
163
  async load() {
176
- const transcript = await this.getOrCreateTranscript();
164
+ const transcript = await this.transcribeOrGetTranscript();
177
165
  const subtitles = await this.client.transcripts.subtitles(transcript.id, this.subtitleFormat);
178
166
  return [
179
167
  new Document({
@@ -3,15 +3,20 @@ Object.defineProperty(exports, "__esModule", { value: true });
  exports.AutoGPTOutputParser = exports.preprocessJsonInput = void 0;
  const output_parser_js_1 = require("../../schema/output_parser.cjs");
  /**
- * Utility function used to preprocess a string to be parsed as JSON. It
- * replaces single backslashes with double backslashes, while leaving
+ * Utility function used to preprocess a string to be parsed as JSON.
+ * It replaces single backslashes with double backslashes, while leaving
  * already escaped ones intact.
+ * It also extracts the json code if it is inside a code block
  */
  function preprocessJsonInput(inputStr) {
- // Replace single backslashes with double backslashes,
- // while leaving already escaped ones intact
  const correctedStr = inputStr.replace(/(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})/g, "\\\\");
- return correctedStr;
+ const match = correctedStr.match(/```(.*)(\r\n|\r|\n)(?<code>[\w\W\n]+)(\r\n|\r|\n)```/);
+ if (match?.groups?.code) {
+ return match.groups.code.trim();
+ }
+ else {
+ return correctedStr;
+ }
  }
  exports.preprocessJsonInput = preprocessJsonInput;
  /**
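The patched `preprocessJsonInput` above (mirrored in all three build targets below) can be exercised in isolation. A minimal standalone sketch copying the two regexes from the diff into one function; the sample inputs are invented for illustration:

```javascript
// Standalone copy of the patched preprocessJsonInput from the diff above.
// Step 1: double any lone backslash that is not already a valid JSON escape.
// Step 2: if the result is wrapped in a Markdown code fence, keep only its body.
function preprocessJsonInput(inputStr) {
  const correctedStr = inputStr.replace(
    /(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})/g,
    "\\\\"
  );
  const match = correctedStr.match(
    /```(.*)(\r\n|\r|\n)(?<code>[\w\W\n]+)(\r\n|\r|\n)```/
  );
  return match?.groups?.code ? match.groups.code.trim() : correctedStr;
}

// An LLM reply that wraps its JSON in a fenced block is now unwrapped:
console.log(preprocessJsonInput('```json\n{"command": "finish"}\n```'));
// → {"command": "finish"}

// A lone backslash (\U is not a valid JSON escape) gets doubled, while
// valid escapes such as \t are left intact:
console.log(preprocessJsonInput('{"path": "C:\\Users"}'));
```

The code-fence regex is the new behavior in this release: previously only the backslash pass ran, so fenced model output failed to parse as JSON.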
@@ -1,9 +1,10 @@
  import { BaseOutputParser } from "../../schema/output_parser.js";
  import { AutoGPTAction } from "./schema.js";
  /**
- * Utility function used to preprocess a string to be parsed as JSON. It
- * replaces single backslashes with double backslashes, while leaving
+ * Utility function used to preprocess a string to be parsed as JSON.
+ * It replaces single backslashes with double backslashes, while leaving
  * already escaped ones intact.
+ * It also extracts the json code if it is inside a code block
  */
  export declare function preprocessJsonInput(inputStr: string): string;
  /**
@@ -1,14 +1,19 @@
  import { BaseOutputParser } from "../../schema/output_parser.js";
  /**
- * Utility function used to preprocess a string to be parsed as JSON. It
- * replaces single backslashes with double backslashes, while leaving
+ * Utility function used to preprocess a string to be parsed as JSON.
+ * It replaces single backslashes with double backslashes, while leaving
  * already escaped ones intact.
+ * It also extracts the json code if it is inside a code block
  */
  export function preprocessJsonInput(inputStr) {
- // Replace single backslashes with double backslashes,
- // while leaving already escaped ones intact
  const correctedStr = inputStr.replace(/(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})/g, "\\\\");
- return correctedStr;
+ const match = correctedStr.match(/```(.*)(\r\n|\r|\n)(?<code>[\w\W\n]+)(\r\n|\r|\n)```/);
+ if (match?.groups?.code) {
+ return match.groups.code.trim();
+ }
+ else {
+ return correctedStr;
+ }
  }
  /**
  * Class responsible for parsing the output of AutoGPT. It extends the
@@ -1,12 +1,11 @@
  import { BaseChatModel, BaseChatModelParams } from "../../chat_models/base.js";
  import { CallbackManagerForLLMRun } from "../../callbacks/manager.js";
  import { BaseMessage, ChatResult } from "../../schema/index.js";
- import { ChatOllama } from "../../chat_models/ollama.js";
- import { OllamaInput } from "../../util/ollama.js";
+ import { ChatOllama, type ChatOllamaInput } from "../../chat_models/ollama.js";
  import { BaseFunctionCallOptions } from "../../base_language/index.js";
  export interface ChatOllamaFunctionsCallOptions extends BaseFunctionCallOptions {
  }
- export type OllamaFunctionsInput = Partial<OllamaInput> & BaseChatModelParams & {
+ export type OllamaFunctionsInput = Partial<ChatOllamaInput> & BaseChatModelParams & {
  llm?: ChatOllama;
  toolSystemPromptTemplate?: string;
  };
@@ -103,7 +103,7 @@ class OpenAIAssistantRunnable extends base_js_1.Runnable {
  else {
  // Submitting tool outputs to an existing run, outside the AgentExecutor
  // framework.
- run = await this.client.beta.threads.runs.submitToolOutputs(input.runId, input.threadId, {
+ run = await this.client.beta.threads.runs.submitToolOutputs(input.threadId, input.runId, {
  tool_outputs: input.toolOutputs,
  });
  }
@@ -100,7 +100,7 @@ export class OpenAIAssistantRunnable extends Runnable {
  else {
  // Submitting tool outputs to an existing run, outside the AgentExecutor
  // framework.
- run = await this.client.beta.threads.runs.submitToolOutputs(input.runId, input.threadId, {
+ run = await this.client.beta.threads.runs.submitToolOutputs(input.threadId, input.runId, {
  tool_outputs: input.toolOutputs,
  });
  }
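The two hunks above fix a swapped argument order: in the `openai` v4 Node SDK, `beta.threads.runs.submitToolOutputs` takes the thread ID first and the run ID second, so the old code was sending the run ID where the thread ID belonged. A minimal sketch using a hypothetical stub client (no network calls; the IDs are invented) to show the corrected call shape:

```javascript
// Hypothetical stub standing in for the openai v4 client; it only records
// the arguments it receives so the order can be inspected.
function makeStubClient(log) {
  return {
    beta: {
      threads: {
        runs: {
          submitToolOutputs: async (threadId, runId, body) => {
            log.push({ threadId, runId, body });
            return { id: runId, thread_id: threadId, status: "queued" };
          },
        },
      },
    },
  };
}

// Mirrors the patched call site: thread ID first, run ID second.
// (The pre-fix code passed input.runId in the first position.)
async function submitOutputs(client, input) {
  return client.beta.threads.runs.submitToolOutputs(input.threadId, input.runId, {
    tool_outputs: input.toolOutputs,
  });
}
```

Because both parameters are plain strings, the pre-fix swap type-checked fine and only failed at the API, which is why it survived until this release.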
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "langchain",
- "version": "0.0.206",
+ "version": "0.0.208",
  "description": "Typescript bindings for langchain",
  "type": "module",
  "engines": {
@@ -915,7 +915,7 @@
  "@vercel/kv": "^0.2.3",
  "@xata.io/client": "^0.28.0",
  "apify-client": "^2.7.1",
- "assemblyai": "^2.0.2",
+ "assemblyai": "^4.0.0",
  "axios": "^0.26.0",
  "cheerio": "^1.0.0-rc.12",
  "chromadb": "^1.5.3",
@@ -983,7 +983,7 @@
  "@vercel/kv": "^0.2.3",
  "@xata.io/client": "^0.28.0",
  "apify-client": "^2.7.1",
- "assemblyai": "^2.0.2",
+ "assemblyai": "^4.0.0",
  "axios": "*",
  "cheerio": "^1.0.0-rc.12",
  "chromadb": "*",
@@ -1175,7 +1175,7 @@
  },
  "dependencies": {
  "@anthropic-ai/sdk": "^0.9.1",
- "@langchain/community": "~0.0.4",
+ "@langchain/community": "~0.0.6",
  "@langchain/core": "~0.1.1",
  "@langchain/openai": "~0.0.5",
  "binary-extensions": "^2.2.0",
@@ -1,47 +0,0 @@
- "use strict";
- Object.defineProperty(exports, "__esModule", { value: true });
- exports.createOllamaStream = void 0;
- const stream_js_1 = require("./stream.cjs");
- async function* createOllamaStream(baseUrl, params, options) {
- let formattedBaseUrl = baseUrl;
- if (formattedBaseUrl.startsWith("http://localhost:")) {
- // Node 18 has issues with resolving "localhost"
- // See https://github.com/node-fetch/node-fetch/issues/1624
- formattedBaseUrl = formattedBaseUrl.replace("http://localhost:", "http://127.0.0.1:");
- }
- const response = await fetch(`${formattedBaseUrl}/api/generate`, {
- method: "POST",
- body: JSON.stringify(params),
- headers: {
- "Content-Type": "application/json",
- },
- signal: options.signal,
- });
- if (!response.ok) {
- const json = await response.json();
- const error = new Error(`Ollama call failed with status code ${response.status}: ${json.error}`);
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- error.response = response;
- throw error;
- }
- if (!response.body) {
- throw new Error("Could not begin Ollama stream. Please check the given URL and try again.");
- }
- const stream = stream_js_1.IterableReadableStream.fromReadableStream(response.body);
- const decoder = new TextDecoder();
- let extra = "";
- for await (const chunk of stream) {
- const decoded = extra + decoder.decode(chunk);
- const lines = decoded.split("\n");
- extra = lines.pop() || "";
- for (const line of lines) {
- try {
- yield JSON.parse(line);
- }
- catch (e) {
- console.warn(`Received a non-JSON parseable chunk: ${line}`);
- }
- }
- }
- }
- exports.createOllamaStream = createOllamaStream;
@@ -1,89 +0,0 @@
- import { BaseLanguageModelCallOptions } from "../base_language/index.js";
- import type { StringWithAutocomplete } from "./types.js";
- export interface OllamaInput {
- embeddingOnly?: boolean;
- f16KV?: boolean;
- frequencyPenalty?: number;
- logitsAll?: boolean;
- lowVram?: boolean;
- mainGpu?: number;
- model?: string;
- baseUrl?: string;
- mirostat?: number;
- mirostatEta?: number;
- mirostatTau?: number;
- numBatch?: number;
- numCtx?: number;
- numGpu?: number;
- numGqa?: number;
- numKeep?: number;
- numThread?: number;
- penalizeNewline?: boolean;
- presencePenalty?: number;
- repeatLastN?: number;
- repeatPenalty?: number;
- ropeFrequencyBase?: number;
- ropeFrequencyScale?: number;
- temperature?: number;
- stop?: string[];
- tfsZ?: number;
- topK?: number;
- topP?: number;
- typicalP?: number;
- useMLock?: boolean;
- useMMap?: boolean;
- vocabOnly?: boolean;
- format?: StringWithAutocomplete<"json">;
- }
- export interface OllamaRequestParams {
- model: string;
- prompt: string;
- format?: StringWithAutocomplete<"json">;
- options: {
- embedding_only?: boolean;
- f16_kv?: boolean;
- frequency_penalty?: number;
- logits_all?: boolean;
- low_vram?: boolean;
- main_gpu?: number;
- mirostat?: number;
- mirostat_eta?: number;
- mirostat_tau?: number;
- num_batch?: number;
- num_ctx?: number;
- num_gpu?: number;
- num_gqa?: number;
- num_keep?: number;
- num_thread?: number;
- penalize_newline?: boolean;
- presence_penalty?: number;
- repeat_last_n?: number;
- repeat_penalty?: number;
- rope_frequency_base?: number;
- rope_frequency_scale?: number;
- temperature?: number;
- stop?: string[];
- tfs_z?: number;
- top_k?: number;
- top_p?: number;
- typical_p?: number;
- use_mlock?: boolean;
- use_mmap?: boolean;
- vocab_only?: boolean;
- };
- }
- export interface OllamaCallOptions extends BaseLanguageModelCallOptions {
- }
- export type OllamaGenerationChunk = {
- response: string;
- model: string;
- created_at: string;
- done: boolean;
- total_duration?: number;
- load_duration?: number;
- prompt_eval_count?: number;
- prompt_eval_duration?: number;
- eval_count?: number;
- eval_duration?: number;
- };
- export declare function createOllamaStream(baseUrl: string, params: OllamaRequestParams, options: OllamaCallOptions): AsyncGenerator<OllamaGenerationChunk>;
@@ -1,43 +0,0 @@
- import { IterableReadableStream } from "./stream.js";
- export async function* createOllamaStream(baseUrl, params, options) {
- let formattedBaseUrl = baseUrl;
- if (formattedBaseUrl.startsWith("http://localhost:")) {
- // Node 18 has issues with resolving "localhost"
- // See https://github.com/node-fetch/node-fetch/issues/1624
- formattedBaseUrl = formattedBaseUrl.replace("http://localhost:", "http://127.0.0.1:");
- }
- const response = await fetch(`${formattedBaseUrl}/api/generate`, {
- method: "POST",
- body: JSON.stringify(params),
- headers: {
- "Content-Type": "application/json",
- },
- signal: options.signal,
- });
- if (!response.ok) {
- const json = await response.json();
- const error = new Error(`Ollama call failed with status code ${response.status}: ${json.error}`);
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- error.response = response;
- throw error;
- }
- if (!response.body) {
- throw new Error("Could not begin Ollama stream. Please check the given URL and try again.");
- }
- const stream = IterableReadableStream.fromReadableStream(response.body);
- const decoder = new TextDecoder();
- let extra = "";
- for await (const chunk of stream) {
- const decoded = extra + decoder.decode(chunk);
- const lines = decoded.split("\n");
- extra = lines.pop() || "";
- for (const line of lines) {
- try {
- yield JSON.parse(line);
- }
- catch (e) {
- console.warn(`Received a non-JSON parseable chunk: ${line}`);
- }
- }
- }
- }
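The core technique in the deleted `createOllamaStream` helper is buffering newline-delimited JSON (NDJSON) across chunk boundaries: any trailing partial line is carried over into the next chunk before parsing. A minimal standalone sketch of that loop, with an invented array of byte chunks standing in for the fetch response body:

```javascript
// Parse an async (or sync) iterable of Uint8Array chunks as NDJSON records.
// Same loop shape as the removed helper: accumulate a partial trailing line
// in `extra`, parse every complete line, warn on non-JSON lines.
async function* parseNdjson(chunks) {
  const decoder = new TextDecoder();
  let extra = "";
  for await (const chunk of chunks) {
    const decoded = extra + decoder.decode(chunk);
    const lines = decoded.split("\n");
    extra = lines.pop() || "";
    for (const line of lines) {
      try {
        yield JSON.parse(line);
      } catch (e) {
        console.warn(`Received a non-JSON parseable chunk: ${line}`);
      }
    }
  }
}

async function demo() {
  const encoder = new TextEncoder();
  // Two "network" chunks whose boundary splits the second JSON record.
  const chunks = [
    encoder.encode('{"response":"Hel"}\n{"respo'),
    encoder.encode('nse":"lo","done":true}\n'),
  ];
  const records = [];
  for await (const record of parseNdjson(chunks)) {
    records.push(record);
  }
  return records;
}
```

Without the `extra` carry-over, the record split across the chunk boundary would be parsed as two invalid JSON fragments and dropped, which is exactly the failure mode the original buffering avoided.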