@langchain/langgraph 0.0.10-rc.1 → 0.0.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,12 +10,11 @@ It is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache B
10
10
  The current interface exposed is one inspired by [NetworkX](https://networkx.org/documentation/latest/).
11
11
 
12
12
  The main use is for adding **cycles** to your LLM application.
13
- Crucially, this is NOT a **DAG** framework.
13
+ Crucially, LangGraph is NOT optimized for only **DAG** workflows.
14
14
  If you want to build a DAG, you should just use [LangChain Expression Language](https://js.langchain.com/docs/expression_language/).
15
15
 
16
16
  Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
17
17
 
18
-
19
18
  > Looking for the Python version? Click [here](https://github.com/langchain-ai/langgraph).
20
19
 
21
20
  ## Installation
@@ -24,18 +23,257 @@ Cycles are important for agent-like behaviors, where you call an LLM in a loop,
24
23
  npm install @langchain/langgraph
25
24
  ```
26
25
 
27
- ## Quick Start
26
+ ## Quick start
27
+
28
+ One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes as they execute, and each node updates this internal state with its return value. How the graph applies these updates is determined by the type of graph chosen or by a custom function.
29
+
30
+ State in LangGraph can be quite general, but to keep things simple to start, we'll show an example where the graph's state is limited to a list of chat messages using the built-in `MessageGraph` class. This is convenient when using LangGraph with LangChain chat models because we can return chat model output directly.
31
+
32
+ First, install the LangChain OpenAI integration package:
33
+
34
+ ```shell
35
+ npm i @langchain/openai
36
+ ```
37
+
38
+ We also need to export some environment variables:
39
+
40
+ ```shell
41
+ export OPENAI_API_KEY=sk-...
42
+ ```
43
+
44
+ And now we're ready! The graph below contains a single node called `"oracle"` that executes a chat model, then returns the result:
45
+
46
+ ```ts
47
+ import { ChatOpenAI } from "@langchain/openai";
48
+ import { HumanMessage, BaseMessage } from "@langchain/core/messages";
49
+ import { END, MessageGraph } from "@langchain/langgraph";
50
+
51
+ const model = new ChatOpenAI({ temperature: 0 });
52
+
53
+ const graph = new MessageGraph();
54
+
55
+ graph.addNode("oracle", async (state: BaseMessage[]) => {
56
+ return model.invoke(state);
57
+ });
58
+
59
+ graph.addEdge("oracle", END);
60
+
61
+ graph.setEntryPoint("oracle");
62
+
63
+ const runnable = graph.compile();
64
+ ```
65
+
66
+ Let's run it!
67
+
68
+ ```ts
69
+ // For MessageGraph, the input should always be a message or a list of messages.
70
+ const res = await runnable.invoke(
71
+ new HumanMessage("What is 1 + 1?")
72
+ );
73
+ ```
74
+
75
+ ```ts
76
+ [
77
+ HumanMessage {
78
+ content: 'What is 1 + 1?',
79
+ additional_kwargs: {}
80
+ },
81
+ AIMessage {
82
+ content: '1 + 1 equals 2.',
83
+ additional_kwargs: { function_call: undefined, tool_calls: undefined }
84
+ }
85
+ ]
86
+ ```
87
+
88
+ So what did we do here? Let's break it down step by step:
89
+
90
+ 1. First, we initialize our model and a `MessageGraph`.
91
+ 2. Next, we add a single node to the graph, called `"oracle"`, which simply calls the model with the given input.
92
+ 3. We add an edge from this `"oracle"` node to the special value `END`. This means that execution will end after the current node.
93
+ 4. We set `"oracle"` as the entrypoint to the graph.
94
+ 5. We compile the graph, ensuring that no more modifications to it can be made.
95
+
96
+ Then, when we execute the graph:
97
+
98
+ 1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, `"oracle"`.
99
+ 2. The `"oracle"` node executes, invoking the chat model.
100
+ 3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
101
+ 4. Execution progresses to the special `END` value and outputs the final state.
102
+
103
+ And as a result, we get a list of two chat messages as output.
104
+
105
+ ### Interaction with LCEL
106
+
107
+ As an aside for those already familiar with LangChain - `addNode` actually takes any runnable as input. In the above example, the passed function is automatically converted, but we could also have passed the model directly:
108
+
109
+ ```ts
110
+ graph.addNode("oracle", model);
111
+ ```
112
+
113
+ In that case, the `.invoke()` method will be called when the graph executes.
114
+
115
+ Just be mindful that the input to the runnable is the entire current state. So this will fail:
116
+
117
+ ```ts
118
+ // This will NOT work with MessageGraph!
119
+ import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
120
+
121
+ const prompt = ChatPromptTemplate.fromMessages([
122
+ ["system", "You are a helpful assistant who always speaks in pirate dialect"],
123
+ new MessagesPlaceholder("messages"),
124
+ ]);
125
+
126
+ const chain = prompt.pipe(model);
127
+
128
+ // State is a list of messages, but our chain expects an object input:
129
+ //
130
+ // { messages: [] }
131
+ //
132
+ // Therefore, the graph will throw an exception when it executes here.
133
+ graph.addNode("oracle", chain);
134
+ ```
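
One way to make a chain like this usable is to wrap it in a function that adapts the message-list state into the object input the chain expects. Here is a sketch with simplified stand-in types (`Message` and `Chain` below are hypothetical placeholders for `BaseMessage` and the LCEL chain, not LangChain types):

```ts
// Sketch: adapt MessageGraph's list-of-messages state into the
// { messages } object a prompt chain expects. Types are simplified
// stand-ins for BaseMessage and the chain, for illustration only.
type Message = { content: string };
type Chain = {
  invoke: (input: { messages: Message[] }) => Promise<Message>;
};

const adaptForMessageGraph =
  (chain: Chain) =>
  async (state: Message[]): Promise<Message> =>
    chain.invoke({ messages: state });

// With the real objects, the node registration would look like:
// graph.addNode("oracle", adaptForMessageGraph(chain));
```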
135
+
136
+ ## Conditional edges
137
+
138
+ Now, let's move onto something a little bit less trivial. Because math can be difficult for LLMs, let's allow the LLM to conditionally call a calculator node using tool calling.
139
+
140
+ ```bash
141
+ npm i langchain @langchain/openai
142
+ ```
143
+
144
+ We'll recreate our graph with an additional `"calculator"` node that will take the most recent message, if it is a math expression, and calculate the result.
145
+ We'll also bind the calculator to the OpenAI model as a tool, allowing the model to use it if it deems that necessary:
146
+
147
+ ```ts
148
+ import {
149
+ ToolMessage,
150
+ } from "@langchain/core/messages";
151
+ import { Calculator } from "langchain/tools/calculator";
152
+ import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
153
+
154
+ const model = new ChatOpenAI({
155
+ temperature: 0,
156
+ }).bind({
157
+ tools: [convertToOpenAITool(new Calculator())],
158
+ tool_choice: "auto",
159
+ });
160
+
161
+ const graph = new MessageGraph();
162
+
163
+ graph.addNode("oracle", async (state: BaseMessage[]) => {
164
+ return model.invoke(state);
165
+ });
166
+
167
+ graph.addNode("calculator", async (state: BaseMessage[]) => {
168
+ const tool = new Calculator();
169
+ const toolCalls =
170
+ state[state.length - 1].additional_kwargs.tool_calls ?? [];
171
+ const calculatorCall = toolCalls.find(
172
+ (toolCall) => toolCall.function.name === "calculator"
173
+ );
174
+ if (calculatorCall === undefined) {
175
+ throw new Error("No calculator input found.");
176
+ }
177
+ const result = await tool.invoke(
178
+ JSON.parse(calculatorCall.function.arguments)
179
+ );
180
+ return new ToolMessage({
181
+ tool_call_id: calculatorCall.id,
182
+ content: result,
183
+ });
184
+ });
185
+
186
+ graph.addEdge("calculator", END);
187
+
188
+ graph.setEntryPoint("oracle");
189
+ ```
190
+
191
+ Now let's think - what do we want to have happen?
192
+
193
+ - If the `"oracle"` node returns a message expecting a tool call, we want to execute the `"calculator"` node
194
+ - If not, we can just end execution
195
+
196
+ We can achieve this using **conditional edges**, which route execution to a node based on the current state using a function.
197
+
198
+ Here's what that looks like:
199
+
200
+ ```ts
201
+ const router = (state: BaseMessage[]) => {
202
+ const toolCalls =
203
+ state[state.length - 1].additional_kwargs.tool_calls ?? [];
204
+ if (toolCalls.length) {
205
+ return "calculator";
206
+ } else {
207
+ return "end";
208
+ }
209
+ };
210
+
211
+ graph.addConditionalEdges("oracle", router, {
212
+ calculator: "calculator",
213
+ end: END,
214
+ });
215
+ ```
216
+
217
+ If the model output contains a tool call, we move to the `"calculator"` node. Otherwise, we end.
218
+
219
+ Great! Now all that's left is to compile the graph and try it out. Math-related questions are routed to the calculator tool:
220
+
221
+ ```ts
222
+ const runnable = graph.compile();
223
+ const mathResponse = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
224
+ ```
225
+
226
+ ```ts
227
+ [
228
+ HumanMessage {
229
+ content: 'What is 1 + 1?',
230
+ additional_kwargs: {}
231
+ },
232
+ AIMessage {
233
+ content: '',
234
+ additional_kwargs: { function_call: undefined, tool_calls: [Array] }
235
+ },
236
+ ToolMessage {
237
+ content: '2',
238
+ name: undefined,
239
+ additional_kwargs: {},
240
+ tool_call_id: 'call_P7KWQoftVsj6fgsqKyolWp91'
241
+ }
242
+ ]
243
+ ```
244
+
245
+ While conversational responses are output directly:
246
+
247
+ ```ts
248
+ const otherResponse = await runnable.invoke(new HumanMessage("What is your name?"));
249
+ ```
250
+
251
+ ```ts
252
+ [
253
+ HumanMessage {
254
+ content: 'What is your name?',
255
+ additional_kwargs: {}
256
+ },
257
+ AIMessage {
258
+ content: 'My name is Assistant. How can I assist you today?',
259
+ additional_kwargs: { function_call: undefined, tool_calls: undefined }
260
+ }
261
+ ]
262
+ ```
263
+
264
+ ## Cycles
265
+
266
+ Now, let's go over a more general example with a cycle. We will recreate the [`AgentExecutor`](https://js.langchain.com/docs/modules/agents/concepts#agentexecutor) class from LangChain.
28
267
 
29
- Here we will go over an example of recreating the [`AgentExecutor`](https://js.langchain.com/docs/modules/agents/concepts#agentexecutor) class from LangChain.
30
268
  The benefit of creating it with LangGraph is that it is more modifiable.
31
269
 
32
- We will also want to install some LangChain packages:
270
+ We will need to install some LangChain packages:
33
271
 
34
272
  ```shell
35
273
  npm install langchain @langchain/core @langchain/community @langchain/openai
36
274
  ```
37
275
 
38
- We also need to export some environment variables needed for our agent.
276
+ We also need additional environment variables.
39
277
 
40
278
  ```shell
41
279
  export OPENAI_API_KEY=sk-...
@@ -52,7 +290,7 @@ export LANGCHAIN_ENDPOINT=https://api.langchain.com
52
290
 
53
291
  ### Set up the tools
54
292
 
55
- We will first define the tools we want to use.
293
+ As above, we will first define the tools we want to use.
56
294
  For this simple example, we will use a built-in search tool via Tavily.
57
295
  However, it is really easy to create your own tools - see documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do that.
58
296
 
@@ -62,8 +300,7 @@ import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
62
300
  const tools = [new TavilySearchResults({ maxResults: 1 })];
63
301
  ```
64
302
 
65
- We can now wrap these tools in a simple ToolExecutor.
66
- This is a real simple class that takes in a ToolInvocation and calls that tool, returning the output.
303
+ We can now wrap these tools in a ToolExecutor, which simply takes in a ToolInvocation and calls that tool, returning the output.
67
304
 
68
305
  A ToolInvocation is any type with `tool` and `toolInput` attributes.
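
A sketch of that shape (`ToolInvocationLike` and the example values are hypothetical names for illustration; the actual type is used by `ToolExecutor`):

```ts
// Minimal shape matching the description above: any object with
// `tool` and `toolInput` fields qualifies as a ToolInvocation.
interface ToolInvocationLike {
  tool: string; // name of the tool to call
  toolInput: unknown; // input to pass to that tool
}

const invocation: ToolInvocationLike = {
  tool: "calculator", // hypothetical tool name
  toolInput: "1 + 1",
};
```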
69
306
 
@@ -71,25 +308,18 @@ A ToolInvocation is any type with `tool` and `toolInput` attribute.
71
308
  ```typescript
72
309
  import { ToolExecutor } from "@langchain/langgraph/prebuilt";
73
310
 
74
- const toolExecutor = new ToolExecutor({
75
- tools
76
- });
311
+ const toolExecutor = new ToolExecutor({ tools });
77
312
  ```
78
313
 
79
314
  ### Set up the model
80
315
 
81
316
  Now we need to load the chat model we want to use.
82
- Importantly, this should satisfy two criteria:
83
-
84
- 1. It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
85
- 2. It should work with OpenAI function calling. This means it should either be an OpenAI model or a model that exposes a similar interface.
86
-
87
- Note: these model requirements are not requirements for using LangGraph - they are just requirements for this one example.
317
+ This time, we'll use the older function calling interface. This walkthrough will use OpenAI, but we can choose any model that supports OpenAI function calling.
88
318
 
89
319
  ```typescript
90
320
  import { ChatOpenAI } from "@langchain/openai";
91
321
 
92
- // We will set streaming=True so that we can stream tokens
322
+ // We will set streaming: true so that we can stream tokens
93
323
  // See the streaming section for more information on this.
94
324
  const model = new ChatOpenAI({
95
325
  temperature: 0,
@@ -113,9 +343,9 @@ const newModel = model.bind({
113
343
 
114
344
  ### Define the agent state
115
345
 
116
- The main type of graph in `langgraph` is the `StatefulGraph`.
346
+ This time, we'll use the more general `StateGraph`.
117
347
  This graph is parameterized by a state object that it passes around to each node.
118
- Each node then returns operations to update that state.
348
+ Remember that each node then returns operations to update that state.
119
349
  These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.
120
350
  Whether to set or add is denoted by annotating the state object you construct the graph with.
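
As an illustrative sketch (hypothetical reducers mimicking these semantics, not the library's implementation), SET and ADD can be thought of as two reducer styles:

```ts
// Illustrative only: SET overwrites, ADD appends.
type Reducer<T> = (current: T, update: T) => T;

// SET: a node's return value overwrites the existing attribute.
const setValue: Reducer<string[]> = (_current, update) => update;

// ADD: a node's return value is appended to the existing attribute,
// like a `(x, y) => x.concat(y)` reducer.
const addValue: Reducer<string[]> = (current, update) =>
  current.concat(update);

const afterSet = setValue(["first"], ["second"]); // ["second"]
const afterAdd = addValue(["first"], ["second"]); // ["first", "second"]
```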
121
351
 
@@ -133,13 +363,17 @@ const agentState = {
133
363
  value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
134
364
  default: () => [],
135
365
  }
136
- }
366
+ };
137
367
  ```
138
368
 
369
+ You can think of the `MessageGraph` used in the initial example as a preconfigured version of this graph. The difference is that the state is directly a list of messages,
370
+ instead of an object containing a key called `"messages"` whose value is a list of messages.
371
+ The `MessageGraph` update step is similar to the one above: we always append a node's returned values to the internal state.
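
In other words (an illustrative model, not the library's actual code), the `MessageGraph` update rule is just list concatenation:

```ts
// Hypothetical model of the MessageGraph update step: whatever a node
// returns (one message or a list) is appended to the message-list state.
type Msg = { content: string };

const applyMessageUpdate = (state: Msg[], nodeOutput: Msg | Msg[]): Msg[] =>
  state.concat(nodeOutput);

const updated = applyMessageUpdate(
  [{ content: "What is 1 + 1?" }],
  { content: "1 + 1 equals 2." }
);
// updated now holds both the human message and the model reply
```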
372
+
139
373
  ### Define the nodes
140
374
 
141
375
  We now need to define a few different nodes in our graph.
142
- In `langgraph`, a node can be either a function or a [runnable](https://js.langchain.com/docs/expression_language/).
376
+ In LangGraph, a node can be either a function or a [runnable](https://js.langchain.com/docs/expression_language/).
143
377
  There are two main nodes we need for this:
144
378
 
145
379
  1. The agent: responsible for deciding what (if any) actions to take.
@@ -160,6 +394,10 @@ Let's define the nodes, as well as a function to decide how what conditional edg
160
394
  ```typescript
161
395
  import { FunctionMessage } from "@langchain/core/messages";
162
396
  import { AgentAction } from "@langchain/core/agents";
397
+ import {
398
+ ChatPromptTemplate,
399
+ MessagesPlaceholder
400
+ } from "@langchain/core/prompts";
163
401
 
164
402
  // Define the function that determines whether to continue or not
165
403
  const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
@@ -191,7 +429,7 @@ const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
191
429
  // We construct an AgentAction from the function_call
192
430
  return {
193
431
  tool: lastMessage.additional_kwargs.function_call.name,
194
- toolInput: JSON.stringify(
432
+ toolInput: JSON.parse(
195
433
  lastMessage.additional_kwargs.function_call.arguments
196
434
  ),
197
435
  log: "",
@@ -200,11 +438,18 @@ const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
200
438
 
201
439
  // Define the function that calls the model
202
440
  const callModel = async (
203
- state: { messages: Array<BaseMessage> },
204
- config?: RunnableConfig
441
+ state: { messages: Array<BaseMessage> }
205
442
  ) => {
206
443
  const { messages } = state;
207
- const response = await newModel.invoke(messages, config);
444
+ // You can use a prompt here to tweak model behavior.
445
+ // You can also just pass messages to the model directly.
446
+ const prompt = ChatPromptTemplate.fromMessages([
447
+ ["system", "You are a helpful assistant."],
448
+ new MessagesPlaceholder("messages"),
449
+ ]);
450
+ const response = await prompt
451
+ .pipe(newModel)
452
+ .invoke({ messages });
208
453
  // We return a list, because this will get added to the existing list
209
454
  return {
210
455
  messages: [response],
@@ -212,12 +457,11 @@ const callModel = async (
212
457
  };
213
458
 
214
459
  const callTool = async (
215
- state: { messages: Array<BaseMessage> },
216
- config?: RunnableConfig
460
+ state: { messages: Array<BaseMessage> }
217
461
  ) => {
218
462
  const action = _getAction(state);
219
463
  // We call the tool_executor and get back a response
220
- const response = await toolExecutor.invoke(action, config);
464
+ const response = await toolExecutor.invoke(action);
221
465
  // We use the response to create a FunctionMessage
222
466
  const functionMessage = new FunctionMessage({
223
467
  content: response,
@@ -242,8 +486,8 @@ const workflow = new StateGraph({
242
486
  });
243
487
 
244
488
  // Define the two nodes we will cycle between
245
- workflow.addNode("agent", new RunnableLambda({ func: callModel }));
246
- workflow.addNode("action", new RunnableLambda({ func: callTool }));
489
+ workflow.addNode("agent", callModel);
490
+ workflow.addNode("action", callTool);
247
491
 
248
492
  // Set the entrypoint as `agent`
249
493
  // This means that this node is the first one called
@@ -251,23 +495,23 @@ workflow.setEntryPoint("agent");
251
495
 
252
496
  // We now add a conditional edge
253
497
  workflow.addConditionalEdges(
254
- // First, we define the start node. We use `agent`.
255
- // This means these are the edges taken after the `agent` node is called.
256
- "agent",
257
- // Next, we pass in the function that will determine which node is called next.
258
- shouldContinue,
259
- // Finally we pass in a mapping.
260
- // The keys are strings, and the values are other nodes.
261
- // END is a special node marking that the graph should finish.
262
- // What will happen is we will call `should_continue`, and then the output of that
263
- // will be matched against the keys in this mapping.
264
- // Based on which one it matches, that node will then be called.
265
- {
266
- // If `tools`, then we call the tool node.
267
- continue: "action",
268
- // Otherwise we finish.
269
- end: END
270
- }
498
+ // First, we define the start node. We use `agent`.
499
+ // This means these are the edges taken after the `agent` node is called.
500
+ "agent",
501
+ // Next, we pass in the function that will determine which node is called next.
502
+ shouldContinue,
503
+ // Finally we pass in a mapping.
504
+ // The keys are strings, and the values are other nodes.
505
+ // END is a special node marking that the graph should finish.
506
+ // What will happen is we will call `should_continue`, and then the output of that
507
+ // will be matched against the keys in this mapping.
508
+ // Based on which one it matches, that node will then be called.
509
+ {
510
+ // If `tools`, then we call the tool node.
511
+ continue: "action",
512
+ // Otherwise we finish.
513
+ end: END
514
+ }
271
515
  );
272
516
 
273
517
  // We now add a normal edge from `tools` to `agent`.
@@ -295,7 +539,7 @@ const inputs = {
295
539
  const result = await app.invoke(inputs);
296
540
  ```
297
541
 
298
- See a LangSmith trace of this run [here](https://smith.langchain.com/public/2562d46e-da94-4c9d-9b14-3759a26aec9b/r).
542
+ See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).
299
543
 
300
544
  This may take a little bit - it's making a few calls behind the scenes.
301
545
  In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.
@@ -318,7 +562,13 @@ for await (const output of await app.stream(inputs)) {
318
562
  }
319
563
  ```
320
564
 
321
- See a LangSmith trace of this run [here](https://smith.langchain.com/public/9afacb13-b9dc-416e-abbe-6ed2a0811afe/r).
565
+ See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
566
+
567
+ ## Running Examples
568
+
569
+ You can find more example notebooks covering different use cases in the `examples/` folder of this repo. These example notebooks use the [Deno runtime](https://deno.land/).
570
+
571
+ To pull in environment variables, you can create a `.env` file at the **root** of this repo (not in the `examples/` folder itself).
322
572
 
323
573
  ## When to Use
324
574
 
@@ -338,19 +588,19 @@ All agent state is represented as a list of messages.
338
588
  This specifically uses OpenAI function calling.
339
589
  This is the recommended agent executor for newer chat-based models that support function calling.
340
590
 
341
- - [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/blob/main/examples/chat_agent_executor_with_function_calling/base.ipynb): Walks through creating this type of executor from scratch
591
+ - [Getting Started Notebook](/examples/chat_agent_executor_with_function_calling/base.ipynb): Walks through creating this type of executor from scratch
342
592
 
343
593
  ### AgentExecutor
344
594
 
345
595
  This agent executor uses existing LangChain agents.
346
596
 
347
- - [Getting Started Notebook](https://github.com/langchain-ai/langgraphjs/blob/main/examples/agent_executor/base.ipynb): Walks through creating this type of executor from scratch
597
+ - [Getting Started Notebook](/examples/agent_executor/base.ipynb): Walks through creating this type of executor from scratch
348
598
 
349
599
  ### Multi-agent Examples
350
600
 
351
- - [Multi-agent collaboration](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/multi-agent-collaboration.ipynb): how to create two agents that work together to accomplish a task
352
- - [Multi-agent with supervisor](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/agent_supervisor.ipynb): how to orchestrate individual agents by using an LLM as a "supervisor" to distribute work
353
- - [Hierarchical agent teams](https://github.com/langchain-ai/langgraphjs/blob/main/examples/multi_agent/hierarchical_agent_teams.ipynb): how to orchestrate "teams" of agents as nested graphs that can collaborate to solve a problem
601
+ - [Multi-agent collaboration](/examples/multi_agent/multi_agent_collaboration.ipynb): how to create two agents that work together to accomplish a task
602
+ - [Multi-agent with supervisor](/examples/multi_agent/agent_supervisor.ipynb): how to orchestrate individual agents by using an LLM as a "supervisor" to distribute work
603
+ - [Hierarchical agent teams](/examples/multi_agent/hierarchical_agent_teams.ipynb): how to orchestrate "teams" of agents as nested graphs that can collaborate to solve a problem
354
604
 
355
605
  ## Documentation
356
606
 
@@ -37,19 +37,17 @@ async function createCheckpoint(checkpoint, channels) {
37
37
  channelVersions: { ...checkpoint.channelVersions },
38
38
  versionsSeen: { ...checkpoint.versionsSeen },
39
39
  };
40
- for (const k in channels) {
41
- if (newCheckpoint.channelValues[k] === undefined) {
42
- try {
43
- newCheckpoint.channelValues[k] = await channels[k].checkpoint();
44
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
40
+ for (const k of Object.keys(channels)) {
41
+ try {
42
+ newCheckpoint.channelValues[k] = await channels[k].checkpoint();
43
+ // eslint-disable-next-line @typescript-eslint/no-explicit-any
44
+ }
45
+ catch (error) {
46
+ if (error.name === EmptyChannelError.name) {
47
+ // no-op
45
48
  }
46
- catch (error) {
47
- if ("name" in error && error.name === EmptyChannelError.name) {
48
- // no-op
49
- }
50
- else {
51
- throw error; // Rethrow unexpected errors
52
- }
49
+ else {
50
+ throw error; // Rethrow unexpected errors
53
51
  }
54
52
  }
55
53
  }
@@ -1,11 +1,13 @@
1
1
  import { Checkpoint } from "../checkpoint/index.js";
2
- export declare abstract class BaseChannel<Value = unknown, Update = unknown, C = unknown> {
2
+ export declare abstract class BaseChannel<Value = unknown, Update = unknown, // Expected type of the parameter `update` is called with.
3
+ C = unknown> {
3
4
  /**
4
5
  * The name of the channel.
5
6
  */
6
7
  abstract lc_graph_name: string;
7
8
  /**
8
9
  * Return a new identical channel, optionally initialized from a checkpoint.
10
+ * Can be thought of as a "restoration" from a checkpoint which is a "snapshot" of the channel's state.
9
11
  *
10
12
  * @param {C | undefined} checkpoint
11
13
  * @param {C | undefined} initialValue
@@ -30,19 +30,17 @@ export async function createCheckpoint(checkpoint, channels) {
30
30
  channelVersions: { ...checkpoint.channelVersions },
31
31
  versionsSeen: { ...checkpoint.versionsSeen },
32
32
  };
33
- for (const k in channels) {
34
- if (newCheckpoint.channelValues[k] === undefined) {
35
- try {
36
- newCheckpoint.channelValues[k] = await channels[k].checkpoint();
37
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
33
+ for (const k of Object.keys(channels)) {
34
+ try {
35
+ newCheckpoint.channelValues[k] = await channels[k].checkpoint();
36
+ // eslint-disable-next-line @typescript-eslint/no-explicit-any
37
+ }
38
+ catch (error) {
39
+ if (error.name === EmptyChannelError.name) {
40
+ // no-op
38
41
  }
39
- catch (error) {
40
- if ("name" in error && error.name === EmptyChannelError.name) {
41
- // no-op
42
- }
43
- else {
44
- throw error; // Rethrow unexpected errors
45
- }
42
+ else {
43
+ throw error; // Rethrow unexpected errors
46
44
  }
47
45
  }
48
46
  }
@@ -36,8 +36,11 @@ class BinaryOperatorAggregate extends index_js_1.BaseChannel {
36
36
  this.initialValueFactory = initialValueFactory;
37
37
  this.value = initialValueFactory?.();
38
38
  }
39
- empty(_) {
39
+ empty(checkpoint) {
40
40
  const empty = new BinaryOperatorAggregate(this.operator, this.initialValueFactory);
41
+ if (checkpoint) {
42
+ empty.value = checkpoint;
43
+ }
41
44
  return empty;
42
45
  }
43
46
  update(values) {
@@ -61,7 +64,7 @@ class BinaryOperatorAggregate extends index_js_1.BaseChannel {
61
64
  return this.value;
62
65
  }
63
66
  checkpoint() {
64
- if (!this.value) {
67
+ if (this.value === undefined) {
65
68
  throw new index_js_1.EmptyChannelError();
66
69
  }
67
70
  return this.value;
@@ -9,7 +9,7 @@ export declare class BinaryOperatorAggregate<Value> extends BaseChannel<Value, V
9
9
  operator: BinaryOperator<Value>;
10
10
  initialValueFactory?: () => Value;
11
11
  constructor(operator: BinaryOperator<Value>, initialValueFactory?: () => Value);
12
- empty(_?: Value): BinaryOperatorAggregate<Value>;
12
+ empty(checkpoint?: Value): BinaryOperatorAggregate<Value>;
13
13
  update(values: Value[]): void;
14
14
  get(): Value;
15
15
  checkpoint(): Value;
@@ -33,8 +33,11 @@ export class BinaryOperatorAggregate extends BaseChannel {
33
33
  this.initialValueFactory = initialValueFactory;
34
34
  this.value = initialValueFactory?.();
35
35
  }
36
- empty(_) {
36
+ empty(checkpoint) {
37
37
  const empty = new BinaryOperatorAggregate(this.operator, this.initialValueFactory);
38
+ if (checkpoint) {
39
+ empty.value = checkpoint;
40
+ }
38
41
  return empty;
39
42
  }
40
43
  update(values) {
@@ -58,7 +61,7 @@ export class BinaryOperatorAggregate extends BaseChannel {
58
61
  return this.value;
59
62
  }
60
63
  checkpoint() {
61
- if (!this.value) {
64
+ if (this.value === undefined) {
62
65
  throw new EmptyChannelError();
63
66
  }
64
67
  return this.value;
@@ -1,6 +1,6 @@
1
1
  "use strict";
2
2
  Object.defineProperty(exports, "__esModule", { value: true });
3
- exports.BaseCheckpointSaver = exports.emptyCheckpoint = void 0;
3
+ exports.BaseCheckpointSaver = exports.copyCheckpoint = exports.emptyCheckpoint = void 0;
4
4
  function emptyCheckpoint() {
5
5
  return {
6
6
  v: 1,
@@ -11,6 +11,16 @@ function emptyCheckpoint() {
11
11
  };
12
12
  }
13
13
  exports.emptyCheckpoint = emptyCheckpoint;
14
+ function copyCheckpoint(checkpoint) {
15
+ return {
16
+ v: checkpoint.v,
17
+ ts: checkpoint.ts,
18
+ channelValues: { ...checkpoint.channelValues },
19
+ channelVersions: { ...checkpoint.channelVersions },
20
+ versionsSeen: { ...checkpoint.versionsSeen },
21
+ };
22
+ }
23
+ exports.copyCheckpoint = copyCheckpoint;
14
24
  class BaseCheckpointSaver {
15
25
  constructor() {
16
26
  Object.defineProperty(this, "at", {
@@ -35,6 +35,7 @@ export interface Checkpoint {
35
35
  versionsSeen: Record<string, Record<string, number>>;
36
36
  }
37
37
  export declare function emptyCheckpoint(): Checkpoint;
38
+ export declare function copyCheckpoint(checkpoint: Checkpoint): Checkpoint;
38
39
  export declare const enum CheckpointAt {
39
40
  END_OF_STEP = "end_of_step",
40
41
  END_OF_RUN = "end_of_run"