@langchain/langgraph 0.0.27-rc.0 → 0.0.28

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,21 +1,26 @@
  # 🦜🕸️LangGraph.js

+ [![Docs](https://img.shields.io/badge/docs-latest-blue)](https://langchain-ai.github.io/langgraphjs/)
+ ![Version](https://img.shields.io/npm/v/@langchain/langgraph?logo=npm)
+ [![Downloads](https://img.shields.io/npm/dm/@langchain/langgraph)](https://www.npmjs.com/package/@langchain/langgraph)
+ [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langgraphjs)](https://github.com/langchain-ai/langgraphjs/issues)
+ [![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.com/channels/1038097195422978059/1170024642245832774)
+
  ⚡ Building language agents as graphs ⚡

  ## Overview

- LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) [LangChain.js](https://github.com/langchain-ai/langchainjs).
- It extends the [LangChain Expression Language](https://js.langchain.com/docs/expression_language/) with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
- It is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/).
- The current interface exposed is one inspired by [NetworkX](https://networkx.org/documentation/latest/).
+ [LangGraph.js](https://langchain-ai.github.io/langgraphjs/) is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Built on top of [LangChain.js](https://github.com/langchain-ai/langchainjs), it offers these core benefits compared to other LLM frameworks: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your application, crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.

- The main use is for adding **cycles** to your LLM application.
- Crucially, LangGraph is NOT optimized for only **DAG** workflows.
- If you want to build a DAG, you should use just use [LangChain Expression Language](https://js.langchain.com/docs/expression_language/).
+ LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc., the creators of LangChain, but can be used without LangChain.

- Cycles are important for agent-like behaviors, where you call an LLM in a loop, asking it what action to take next.
+ ### Key Features

- > Looking for the Python version? Click [here](https://github.com/langchain-ai/langgraph).
+ - **Cycles and Branching**: Implement loops and conditionals in your apps.
+ - **Persistence**: Automatically save state after each step in the graph. Pause and resume the graph execution at any point to support error recovery, human-in-the-loop workflows, time travel, and more.
+ - **Human-in-the-Loop**: Interrupt graph execution to approve or edit the next action planned by the agent.
+ - **Streaming Support**: Stream outputs as they are produced by each node (including token streaming).
+ - **Integration with LangChain**: LangGraph integrates seamlessly with [LangChain](https://github.com/langchain-ai/langchainjs/) and [LangSmith](https://docs.smith.langchain.com/) (but does not require them).

  ## Installation

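The agent-style cycle named in the Key Features above (call the model, run a tool, hand control back to the model) can be sketched without any LangGraph APIs. Every name below (`Step`, `fakeModel`, `runLoop`) is hypothetical and only illustrates the control flow, not the library:

```typescript
// Library-free sketch of the agent <-> tools cycle that LangGraph models as an
// edge from the tools node back to the agent node. All names are hypothetical.
type Step = { toolRequest?: string; answer?: string };

// A stand-in "model": requests a tool on the first turn, then answers.
function fakeModel(history: string[]): Step {
  return history.length === 0 ? { toolRequest: "search" } : { answer: "done" };
}

function runLoop(): string[] {
  const history: string[] = [];
  // The loop IS the cycle: model -> tool -> model, until no tool is requested.
  for (;;) {
    const step = fakeModel(history);
    if (step.toolRequest) {
      history.push(`tool:${step.toolRequest}`);
      continue; // go back to the "agent"
    }
    history.push(`answer:${step.answer}`);
    return history;
  }
}
```

In a real graph, the hard-coded `fakeModel` is an LLM call and the `continue` is the edge from the tools node back to the agent.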
@@ -23,820 +28,210 @@ Cycles are important for agent-like behaviors, where you call an LLM in a loop,
  npm install @langchain/langgraph
  ```

- ## Quick start
+ ## Example

  One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in the graph as they execute, and each node updates this internal state with its return value after it executes. The way that the graph updates its internal state is defined by either the type of graph chosen or a custom function.

- State in LangGraph can be pretty general, but to keep things simpler to start, we'll show off an example where the graph's state is limited to a list of chat messages using the built-in `MessageGraph` class. This is convenient when using LangGraph with LangChain chat models because we can return chat model output directly.
-
- First, install the LangChain OpenAI integration package:
-
- ```shell
- npm i @langchain/openai
- ```
-
- We also need to export some environment variables:
-
- ```shell
- export OPENAI_API_KEY=sk-...
- ```
-
- And now we're ready! The graph below contains a single node called `"oracle"` that executes a chat model, then returns the result:
-
- ```ts
- import { ChatOpenAI } from "@langchain/openai";
- import { HumanMessage, BaseMessage } from "@langchain/core/messages";
- import { START, END, MessageGraph } from "@langchain/langgraph";
-
- const model = new ChatOpenAI({ temperature: 0 });
-
- const graph = new MessageGraph();
-
- graph.addNode("oracle", async (state: BaseMessage[]) => {
-   return model.invoke(state);
- });
-
- graph.addEdge("oracle", END);
-
- graph.addEdge(START, "oracle");
-
- const runnable = graph.compile();
- ```
+ Let's take a look at a simple example of an agent that can use a search tool.

- Let's run it!
-
- ```ts
- // For Message graph, input should always be a message or list of messages.
- const res = await runnable.invoke(
-   new HumanMessage("What is 1 + 1?")
- );
- ```
-
- ```ts
- [
-   HumanMessage {
-     content: 'What is 1 + 1?',
-     additional_kwargs: {}
-   },
-   AIMessage {
-     content: '1 + 1 equals 2.',
-     additional_kwargs: { function_call: undefined, tool_calls: undefined }
-   }
- ]
- ```
-
- So what did we do here? Let's break it down step by step:
-
- 1. First, we initialize our model and a `MessageGraph`.
- 2. Next, we add a single node to the graph, called `"oracle"`, which simply calls the model with the given input.
- 3. We add an edge from this `"oracle"` node to the special value `END`. This means that execution will end after current node.
- 4. We set `"oracle"` as the entrypoint to the graph by adding an edge from the special `START` value to it.
- 5. We compile the graph, ensuring that no more modifications to it can be made.
-
- Then, when we execute the graph:
-
- 1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, `"oracle"`.
- 2. The `"oracle"` node executes, invoking the chat model.
- 3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
- 4. Execution progresses to the special `END` value and outputs the final state.
-
- And as a result, we get a list of two chat messages as output.
-
- ### Interaction with LCEL
-
- As an aside for those already familiar with LangChain - `addNode` actually takes any runnable as input. In the above example, the passed function is automatically converted, but we could also have passed the model directly:
-
- ```ts
- graph.addNode("oracle", model);
- ```
-
- In which case the `.invoke()` method will be called when the graph executes.
-
- Just make sure you are mindful of the fact that the input to the runnable is the entire current state. So this will fail:
-
- ```ts
- // This will NOT work with MessageGraph!
- import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
-
- const prompt = ChatPromptTemplate.fromMessages([
-   ["system", "You are a helpful assistant who always speaks in pirate dialect"],
-   MessagesPlaceholder("messages"),
- ]);
-
- const chain = prompt.pipe(model);
-
- // State is a list of messages, but our chain expects an object input:
- //
- // { messages: [] }
- //
- // Therefore, the graph will throw an exception when it executes here.
- graph.addNode("oracle", chain);
- ```
-
- ## Conditional edges
-
- Now, let's move onto something a little bit less trivial. Because math can be difficult for LLMs, let's allow the LLM to conditionally call a calculator node using tool calling.
+ First install the required dependencies:

  ```bash
- npm i langchain @langchain/openai
- ```
-
- We'll recreate our graph with an additional `"calculator"` that will take the result of the most recent message, if it is a math expression, and calculate the result.
- We'll also bind the calculator to the OpenAI model as a tool to allow the model to optionally use the tool if it deems necessary:
-
- ```ts
- import {
-   ToolMessage,
- } from "@langchain/core/messages";
- import { Calculator } from "langchain/tools/calculator";
- import { convertToOpenAITool } from "@langchain/core/utils/function_calling";
-
- const model = new ChatOpenAI({
-   temperature: 0,
- }).bind({
-   tools: [convertToOpenAITool(new Calculator())],
-   tool_choice: "auto",
- });
-
- const graph = new MessageGraph();
-
- graph.addNode("oracle", async (state: BaseMessage[]) => {
-   return model.invoke(state);
- });
-
- graph.addNode("calculator", async (state: BaseMessage[]) => {
-   const tool = new Calculator();
-   const toolCalls =
-     state[state.length - 1].additional_kwargs.tool_calls ?? [];
-   const calculatorCall = toolCalls.find(
-     (toolCall) => toolCall.function.name === "calculator"
-   );
-   if (calculatorCall === undefined) {
-     throw new Error("No calculator input found.");
-   }
-   const result = await tool.invoke(
-     JSON.parse(calculatorCall.function.arguments)
-   );
-   return new ToolMessage({
-     tool_call_id: calculatorCall.id,
-     content: result,
-   });
- });
-
- graph.addEdge("calculator", END);
-
- graph.addEdge(START, "oracle");
- ```
-
- Now let's think - what do we want to have happen?
-
- - If the `"oracle"` node returns a message expecting a tool call, we want to execute the `"calculator"` node
- - If not, we can just end execution
-
- We can achieve this using **conditional edges**, which routes execution to a node based on the current state using a function.
-
- Here's what that looks like:
-
- ```ts
- const router = (state: BaseMessage[]) => {
-   const toolCalls =
-     state[state.length - 1].additional_kwargs.tool_calls ?? [];
-   if (toolCalls.length) {
-     return "calculator";
-   } else {
-     return "end";
-   }
- };
-
- graph.addConditionalEdges("oracle", router, {
-   calculator: "calculator",
-   end: END,
- });
- ```
-
- If the model output contains a tool call, we move to the `"calculator"` node. Otherwise, we end.
-
- Great! Now all that's left is to compile the graph and try it out. Math-related questions are routed to the calculator tool:
-
- ```ts
- const runnable = graph.compile();
- const mathResponse = await runnable.invoke(new HumanMessage("What is 1 + 1?"));
- ```
-
- ```ts
- [
-   HumanMessage {
-     content: 'What is 1 + 1?',
-     additional_kwargs: {}
-   },
-   AIMessage {
-     content: '',
-     additional_kwargs: { function_call: undefined, tool_calls: [Array] }
-   },
-   ToolMessage {
-     content: '2',
-     name: undefined,
-     additional_kwargs: {},
-     tool_call_id: 'call_P7KWQoftVsj6fgsqKyolWp91'
-   }
- ]
- ```
-
- While conversational responses are outputted directly:
-
- ```ts
- const otherResponse = await runnable.invoke(new HumanMessage("What is your name?"));
- ```
-
- ```ts
- [
-   HumanMessage {
-     content: 'What is your name?',
-     additional_kwargs: {}
-   },
-   AIMessage {
-     content: 'My name is Assistant. How can I assist you today?',
-     additional_kwargs: { function_call: undefined, tool_calls: undefined }
-   }
- ]
+ npm install @langchain/anthropic
  ```

- ## Cycles
+ Then set the required environment variables:

- Now, let's go over a more general example with a cycle. We will recreate the [`AgentExecutor`](https://js.langchain.com/docs/modules/agents/concepts#agentexecutor) class from LangChain.
-
- The benefits of creating it with LangGraph is that it is more modifiable.
-
- We will need to install some LangChain packages:
-
- ```shell
- npm install langchain @langchain/core @langchain/community @langchain/openai
- ```
-
- We also need additional environment variables.
-
- ```shell
- export OPENAI_API_KEY=sk-...
- export TAVILY_API_KEY=tvly-...
+ ```bash
+ export ANTHROPIC_API_KEY=sk-...
  ```

- Optionally, we can set up [LangSmith](https://docs.smith.langchain.com/) for best-in-class observability.
+ Optionally, set up [LangSmith](https://docs.smith.langchain.com/) for best-in-class observability:

- ```shell
- export LANGCHAIN_TRACING_V2="true"
+ ```bash
+ export LANGCHAIN_TRACING_V2=true
  export LANGCHAIN_API_KEY=ls__...
- export LANGCHAIN_ENDPOINT=https://api.langchain.com
  ```

- ### Set up the tools
-
- As above, we will first define the tools we want to use.
- For this simple example, we will use a built-in search tool via Tavily.
- However, it is really easy to create your own tools - see documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do that.
-
- ```typescript
- import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
-
- const tools = [new TavilySearchResults({ maxResults: 1 })];
- ```
-
- We can now wrap these tools in a ToolExecutor, which simply takes in a ToolInvocation and calls that tool, returning the output.
-
- A ToolInvocation is any type with `tool` and `toolInput` attribute.
-
+ Now let's define our agent:

  ```typescript
- import { ToolExecutor } from "@langchain/langgraph/prebuilt";
+ import { HumanMessage, AIMessage } from "@langchain/core/messages";
+ import { DynamicStructuredTool } from "@langchain/core/tools";
+ import { z } from "zod";
+ import { ChatAnthropic } from "@langchain/anthropic";
+ import { END, START, StateGraph, StateGraphArgs } from "@langchain/langgraph";
+ import { MemorySaver } from "@langchain/langgraph";
+ import { ToolNode } from "@langchain/langgraph/prebuilt";

- const toolExecutor = new ToolExecutor({ tools });
- ```
-
- ### Set up the model
-
- Now we need to load the chat model we want to use.
- This time, we'll use the older function calling interface. This walkthrough will use OpenAI, but we can choose any model that supports OpenAI function calling.
-
- ```typescript
- import { ChatOpenAI } from "@langchain/openai";
-
- // We will set streaming: true so that we can stream tokens
- // See the streaming section for more information on this.
- const model = new ChatOpenAI({
-   temperature: 0,
-   streaming: true
- });
- ```
-
- After we've done this, we should make sure the model knows that it has these tools available to call.
- We can do this by converting the LangChain tools into the format for OpenAI function calling, and then bind them to the model class.
-
- ```typescript
- import { convertToOpenAIFunction } from "@langchain/core/utils/function_calling";
-
- const toolsAsOpenAIFunctions = tools.map((tool) =>
-   convertToOpenAIFunction(tool)
- );
- const newModel = model.bind({
-   functions: toolsAsOpenAIFunctions,
- });
- ```
-
- ### Define the agent state
-
- This time, we'll use the more general `StateGraph`.
- This graph is parameterized by a state object that it passes around to each node.
- Remember that each node then returns operations to update that state.
- These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.
- Whether to set or add is denoted by annotating the state object you construct the graph with.
-
- For this example, the state we will track will just be a list of messages.
- We want each node to just add messages to that list.
- Therefore, we will use an object with one key (`messages`) with the value as an object: `{ value: Function, default?: () => any }`
-
- The `default` key must be a factory that returns the default value for that attribute.
-
- ```typescript
- import { BaseMessage } from "@langchain/core/messages";
+ // Define the state interface
+ interface AgentState {
+   messages: HumanMessage[];
+ }

- const agentState = {
+ // Define the graph state
+ const graphState: StateGraphArgs<AgentState>["channels"] = {
    messages: {
-     value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
+     value: (x: HumanMessage[], y: HumanMessage[]) => x.concat(y),
      default: () => [],
-   }
+   },
  };
- ```
-
- You can think of the `MessageGraph` used in the initial example as a preconfigured version of this graph. The difference is that the state is directly a list of messages,
- instead of an object containing a key called `"messages"` whose value is a list of messages.
- The `MessageGraph` update step is similar to the one above where we always append the returned values of a node to the internal state.
-
- ### Define the nodes

- We now need to define a few different nodes in our graph.
- In LangGraph, a node can be either a function or a [runnable](https://js.langchain.com/docs/expression_language/).
- There are two main nodes we need for this:
-
- 1. The agent: responsible for deciding what (if any) actions to take.
- 2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action.
-
- We will also need to define some edges.
- Some of these edges may be conditional.
- The reason they are conditional is that based on the output of a node, one of several paths may be taken.
- The path that is taken is not known until that node is run (the LLM decides).
-
- 1. Conditional Edge: after the agent is called, we should either:
-    a. If the agent said to take an action, then the function to invoke tools should be called
-    b. If the agent said that it was finished, then it should finish
- 2. Normal Edge: after the tools are invoked, it should always go back to the agent to decide what to do next
+ // Define the tools for the agent to use
+
+ const searchTool = new DynamicStructuredTool({
+   name: "search",
+   description: "Call to surf the web.",
+   schema: z.object({
+     query: z.string().describe("The query to use in your search."),
+   }),
+   func: async ({ query }: { query: string }) => {
+     // This is a placeholder for the actual implementation
+     if (query.toLowerCase().includes("sf") || query.toLowerCase().includes("san francisco")) {
+       return "It's 60 degrees and foggy.";
+     }
+     return "It's 90 degrees and sunny.";
+   },
+ });

- Let's define the nodes, as well as a function to decide how what conditional edge to take.
+ const tools = [searchTool];
+ const toolNode = new ToolNode<AgentState>(tools);

- ```typescript
- import { FunctionMessage } from "@langchain/core/messages";
- import { AgentAction } from "@langchain/core/agents";
- import {
-   ChatPromptTemplate,
-   MessagesPlaceholder
- } from "@langchain/core/prompts";
+ const model = new ChatAnthropic({
+   model: "claude-3-sonnet-20240229",
+   temperature: 0,
+ }).bindTools(tools);

  // Define the function that determines whether to continue or not
- const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
-   const { messages } = state;
-   const lastMessage = messages[messages.length - 1];
-   // If there is no function call, then we finish
-   if (
-     !("function_call" in lastMessage.additional_kwargs) ||
-     !lastMessage.additional_kwargs.function_call
-   ) {
-     return "end";
-   }
-   // Otherwise if there is, we continue
-   return "continue";
- };
+ function shouldContinue(state: AgentState): "tools" | typeof END {
+   const messages = state.messages;
+   const lastMessage = messages[messages.length - 1] as AIMessage;

- // Define the function to execute tools
- const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
-   const { messages } = state;
-   // Based on the continue condition
-   // we know the last message involves a function call
-   const lastMessage = messages[messages.length - 1];
-   if (!lastMessage) {
-     throw new Error("No messages found.");
-   }
-   if (!lastMessage.additional_kwargs.function_call) {
-     throw new Error("No function call found in message.");
+   // If the LLM makes a tool call, then we route to the "tools" node
+   if (lastMessage.tool_calls?.length) {
+     return "tools";
    }
-   // We construct an AgentAction from the function_call
-   return {
-     tool: lastMessage.additional_kwargs.function_call.name,
-     toolInput: JSON.parse(
-       lastMessage.additional_kwargs.function_call.arguments
-     ),
-     log: "",
-   };
- };
+   // Otherwise, we stop (reply to the user)
+   return END;
+ }

  // Define the function that calls the model
- const callModel = async (
-   state: { messages: Array<BaseMessage> }
- ) => {
-   const { messages } = state;
-   // You can use a prompt here to tweak model behavior.
-   // You can also just pass messages to the model directly.
-   const prompt = ChatPromptTemplate.fromMessages([
-     ["system", "You are a helpful assistant."],
-     new MessagesPlaceholder("messages"),
-   ]);
-   const response = await prompt
-     .pipe(newModel)
-     .invoke({ messages });
-   // We return a list, because this will get added to the existing list
-   return {
-     messages: [response],
-   };
- };
+ async function callModel(state: AgentState) {
+   const messages = state.messages;
+   const response = await model.invoke(messages);

- const callTool = async (
-   state: { messages: Array<BaseMessage> }
- ) => {
-   const action = _getAction(state);
-   // We call the tool_executor and get back a response
-   const response = await toolExecutor.invoke(action);
-   // We use the response to create a FunctionMessage
-   const functionMessage = new FunctionMessage({
-     content: response,
-     name: action.tool,
-   });
    // We return a list, because this will get added to the existing list
-   return { messages: [functionMessage] };
- };
- ```
-
- ### Define the graph
-
- We can now put it all together and define the graph!
-
- ```typescript
- import { StateGraph, START, END } from "@langchain/langgraph";
- import { RunnableLambda } from "@langchain/core/runnables";
+   return { messages: [response] };
+ }

  // Define a new graph
484
- const workflow = new StateGraph({
485
- channels: agentState,
486
- });
487
-
488
- // Define the two nodes we will cycle between
489
- workflow.addNode("agent", callModel);
490
- workflow.addNode("action", callTool);
491
-
492
- // Set the entrypoint as `agent`
493
- // This means that this node is the first one called
494
- workflow.addEdge(START, "agent");
495
-
496
- // We now add a conditional edge
497
- workflow.addConditionalEdges(
498
- // First, we define the start node. We use `agent`.
499
- // This means these are the edges taken after the `agent` node is called.
500
- "agent",
501
- // Next, we pass in the function that will determine which node is called next.
502
- shouldContinue,
503
- // Finally we pass in a mapping.
504
- // The keys are strings, and the values are other nodes.
505
- // END is a special node marking that the graph should finish.
506
- // What will happen is we will call `should_continue`, and then the output of that
507
- // will be matched against the keys in this mapping.
508
- // Based on which one it matches, that node will then be called.
509
- {
510
- // If `tools`, then we call the tool node.
511
- continue: "action",
512
- // Otherwise we finish.
513
- end: END
514
- }
515
- );
129
+ const workflow = new StateGraph<AgentState>({ channels: graphState })
130
+ .addNode("agent", callModel)
131
+ .addNode("tools", toolNode)
132
+ .addEdge(START, "agent")
133
+ .addConditionalEdges("agent", shouldContinue)
134
+ .addEdge("tools", "agent");
516
135
 
517
- // We now add a normal edge from `tools` to `agent`.
518
- // This means that after `tools` is called, `agent` node is called next.
519
- workflow.addEdge("action", "agent");
136
+ // Initialize memory to persist state between graph runs
137
+ const checkpointer = new MemorySaver();
520
138
 
521
139
  // Finally, we compile it!
522
- // This compiles it into a LangChain Runnable,
523
- // meaning you can use it as you would any other runnable
524
- const app = workflow.compile();
525
- ```
526
-
527
- ### Use it!
528
-
529
- We can now use it!
530
- This now exposes the [same interface](https://js.langchain.com/docs/expression_language/) as all other LangChain runnables.
531
- This runnable accepts a list of messages.
532
-
533
- ```typescript
534
- import { HumanMessage } from "@langchain/core/messages";
535
-
536
- const inputs = {
537
- messages: [new HumanMessage("what is the weather in sf")]
538
- }
539
- const result = await app.invoke(inputs);
540
- ```
541
-
542
- See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).
543
-
544
- This may take a little bit - it's making a few calls behind the scenes.
545
- In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.
546
-
547
- ## Streaming
548
-
549
- LangGraph has support for several different types of streaming.
550
-
551
- ### Streaming Node Output
552
-
553
- One of the benefits of using LangGraph is that it is easy to stream output as it's produced by each node.
554
-
555
- ```typescript
556
- const inputs = {
557
- messages: [new HumanMessage("what is the weather in sf")]
558
- };
559
- for await (const output of await app.stream(inputs)) {
560
- console.log("output", output);
561
- console.log("-----\n");
562
- }
563
- ```
564
-
565
- See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
566
-
567
- ## Running Examples
568
-
569
- You can find some more example notebooks of different use-cases in the `examples/` folder in this repo. These example notebooks use the [Deno runtime](https://deno.land/).
570
-
571
- To pull in environment variables, you can create a `.env` file at the **root** of this repo (not in the `examples/` folder itself).
572
-
573
- ## When to Use
574
-
575
- When should you use this versus [LangChain Expression Language](https://js.langchain.com/docs/expression_language/)?
576
-
577
- If you need cycles.
578
-
579
- Langchain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding in cycles.
580
- `langgraph` adds that syntax.
581
-
582
- ## Examples
583
-
584
- ### ChatAgentExecutor: with function calling
585
-
586
- This agent executor takes a list of messages as input and outputs a list of messages.
587
- All agent state is represented as a list of messages.
588
- This specifically uses OpenAI function calling.
589
- This is recommended agent executor for newer chat based models that support function calling.
590
-
591
- - [Getting Started Notebook](/examples/chat_agent_executor_with_function_calling/base.ipynb): Walks through creating this type of executor from scratch
592
-
593
- ### AgentExecutor
594
-
595
- This agent executor uses existing LangChain agents.
596
-
597
- - [Getting Started Notebook](/examples/agent_executor/base.ipynb): Walks through creating this type of executor from scratch
598
-
599
- ### Multi-agent Examples
600
-
601
- - [Multi-agent collaboration](/examples/multi_agent/multi_agent_collaboration.ipynb): how to create two agents that work together to accomplish a task
602
- - [Multi-agent with supervisor](/examples/multi_agent/agent_supervisor.ipynb): how to orchestrate individual agents by using an LLM as a "supervisor" to distribute work
603
- - [Hierarchical agent teams](/examples/multi_agent/hierarchical_agent_teams.ipynb): how to orchestrate "teams" of agents as nested graphs that can collaborate to solve a problem
604
-
605
- ## Documentation
606
-
607
- There are only a few new APIs to use.
608
-
609
- ### StateGraph
610
-
611
- The main entrypoint is `StateGraph`.
612
-
613
- ```typescript
614
- import { StateGraph } from "@langchain/langgraph";
615
- ```
616
-
617
- This class is responsible for constructing the graph.
618
- It exposes an interface inspired by [NetworkX](https://networkx.org/documentation/latest/).
619
- This graph is parameterized by a state object that it passes around to each node.
620
-
621
-
622
- #### `constructor`
623
-
624
- ```typescript
625
- interface StateGraphArgs<T = any> {
626
- channels: Record<
627
- string,
628
- {
629
- value: BinaryOperator<T> | null;
630
- default?: () => T;
631
- }
632
- >;
633
- }
634
-
635
- class StateGraph<T> extends Graph {
636
- constructor(fields: StateGraphArgs<T>) {}
637
- ```
638
-
639
- When constructing the graph, you need to pass in a schema for a state.
640
- Each node then returns operations to update that state.
641
- These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute.
642
- Whether to set or add is denoted by annotating the state object you construct the graph with.
643
-
644
-
645
- Let's take a look at an example:
646
-
647
- ```typescript
648
- import { BaseMessage } from "@langchain/core/messages";
649
-
650
- const schema = {
651
- input: {
652
- value: null,
653
- },
654
- agentOutcome: {
655
- value: null,
656
- },
657
- steps: {
658
- value: (x: Array<BaseMessage>, y: Array<BaseMessage>) => x.concat(y),
659
- default: () => [],
660
- },
661
- };
662
- ```
-
- We can then use this like:
-
- ```typescript
- // Initialize the StateGraph with this state
- const graph = new StateGraph({ channels: schema })
- // Create nodes and edges
- ...
- // Compile the graph
- const app = graph.compile()
-
- // The inputs should be an object, because the schema is an object
- const inputs = {
-   // Let's assume this the input
-   input: "hi"
-   // Let's assume agent_outcome is set by the graph as some point
-   // It doesn't need to be provided, and it will be null by default
- }
+ // This compiles it into a LangChain Runnable.
+ // Note that we're (optionally) passing the memory when compiling the graph
+ const app = workflow.compile({ checkpointer });
+
+ // Use the Runnable
+ const finalState = await app.invoke(
+   { messages: [new HumanMessage("what is the weather in sf")] },
+   { configurable: { thread_id: "42" } }
+ );
+ console.log(finalState.messages[finalState.messages.length - 1].content);
  ```

- ### `.addNode`
+ This will output:

- ```typescript
- addNode(key: string, action: RunnableLike<RunInput, RunOutput>): void
  ```
-
- This method adds a node to the graph.
- It takes two arguments:
-
- - `key`: A string representing the name of the node. This must be unique.
- - `action`: The action to take when this node is called. This should either be a function or a runnable.
-
- ### `.addEdge`
-
- ```typescript
- addEdge(startKey: string, endKey: string): void
+ Based on the search results, I can tell you that the current weather in San Francisco is:\n\nTemperature: 60 degrees Fahrenheit\nConditions: Foggy\n\nSan Francisco is known for its microclimates and frequent fog, especially during the summer months. The temperature of 60°F (about 15.5°C) is quite typical for the city, which tends to have mild temperatures year-round. The fog, often referred to as "Karl the Fog" by locals, is a characteristic feature of San Francisco\'s weather, particularly in the mornings and evenings.\n\nIs there anything else you\'d like to know about the weather in San Francisco or any other location?
  ```

- Creates an edge from one node to the next.
- This means that output of the first node will be passed to the next node.
- It takes two arguments.
-
- - `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
- - `endKey`: A string representing the name of the end node. This key must have already been registered in the graph.
-
- ### `.addConditionalEdges`
+ Now when we pass the same `"thread_id"`, the conversation context is retained via the saved state (i.e. stored list of messages):

  ```typescript
- addConditionalEdges(
-   startKey: string,
-   condition: CallableFunction,
-   conditionalEdgeMapping: Record<string, string>
- ): void
+ const nextState = await app.invoke(
+   { messages: [new HumanMessage("what about ny")] },
+   { configurable: { thread_id: "42" } }
+ );
+ console.log(nextState.messages[nextState.messages.length - 1].content);
  ```

- This method adds conditional edges.
- What this means is that only one of the downstream edges will be taken, and which one that is depends on the results of the start node.
- This takes three arguments:
-
- - `startKey`: A string representing the name of the start node. This key must have already been registered in the graph.
- - `condition`: A function to call to decide what to do next. The input will be the output of the start node. It should return a string that is present in `conditionalEdgeMapping` and represents the edge to take.
- - `conditionalEdgeMapping`: A mapping of string to string. The keys should be strings that may be returned by `condition`. The values should be the downstream node to call if that condition is returned.
-
- ### `START`
-
- ```typescript
- import { START } from "@langchain/langgraph";
  ```
-
- This is a special node representing the start of the graph.
- This means that anything with an edge from this node will be the entrypoint of the graph.
-
- ### `END`
-
- ```typescript
- import { END } from "@langchain/langgraph";
+ Based on the search results, I can tell you that the current weather in New York City is:\n\nTemperature: 90 degrees Fahrenheit (approximately 32.2 degrees Celsius)\nConditions: Sunny\n\nThis weather is quite different from what we just saw in San Francisco. New York is experiencing much warmer temperatures right now. Here are a few points to note:\n\n1. The temperature of 90°F is quite hot, typical of summer weather in New York City.\n2. The sunny conditions suggest clear skies, which is great for outdoor activities but also means it might feel even hotter due to direct sunlight.\n3. This kind of weather in New York often comes with high humidity, which can make it feel even warmer than the actual temperature suggests.\n\nIt's interesting to see the stark contrast between San Francisco's mild, foggy weather and New York's hot, sunny conditions. This difference illustrates how varied weather can be across different parts of the United States, even on the same day.\n\nIs there anything else you'd like to know about the weather in New York or any other location?
  ```
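Conceptually, this thread-scoped memory behaves like a per-`thread_id` store of saved state that each invocation loads from and saves back to. The `TinyMemorySaver` below is a hypothetical stand-in for `MemorySaver`, not the real checkpointer API:

```typescript
// Illustrative sketch: state is saved per thread_id, so a second invoke with
// the same id starts from the accumulated messages.
type State = { messages: string[] };

class TinyMemorySaver {
  private store = new Map<string, State>();
  load(threadId: string): State {
    return this.store.get(threadId) ?? { messages: [] };
  }
  save(threadId: string, state: State): void {
    this.store.set(threadId, state);
  }
}

function invoke(saver: TinyMemorySaver, threadId: string, input: string): State {
  const prior = saver.load(threadId); // restore any saved state for this thread
  const next = { messages: [...prior.messages, input] }; // reducer appends the new message
  saver.save(threadId, next); // persist for the next call
  return next;
}

const saver = new TinyMemorySaver();
invoke(saver, "42", "what is the weather in sf");
const state = invoke(saver, "42", "what about ny"); // same thread retains context
console.log(state.messages.length); // → 2
```

A different `thread_id` would start from an empty state, which is why the second question above can rely on the first.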

- This is a special node representing the end of the graph.
- This means that anything passed to this node will be the final output of the graph.
- It can be used in two places:
-
- - As the `endKey` in `addEdge`
- - As a value in `conditionalEdgeMapping` as passed to `addConditionalEdges`
-
- ## When to Use
-
- When should you use this versus [LangChain Expression Language](https://js.langchain.com/docs/expression_language/)?
-
- If you need cycles.
+ ### Step-by-step Breakdown

- Langchain Expression Language allows you to easily define chains (DAGs) but does not have a good mechanism for adding in cycles.
- `langgraph` adds that syntax.
+ 1. <details>
+    <summary>Initialize the model and tools.</summary>

- ## Examples
+    - We use `ChatAnthropic` as our LLM. **NOTE:** We need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for Anthropic tool calling using the `.bindTools()` method.
+    - We define the tools we want to use -- a search tool in our case. It is really easy to create your own tools - see documentation [here](https://js.langchain.com/docs/modules/agents/tools/dynamic) on how to do that.
+    </details>

- ### AgentExecutor
+ 2. <details>
+    <summary>Initialize graph with state.</summary>

- See the above Quick Start for an example of re-creating the LangChain [`AgentExecutor`](https://js.langchain.com/docs/modules/agents/concepts#agentexecutor) class.
+    - We initialize the graph (`StateGraph`) by passing the state interface (`AgentState`).
+    - The `graphState` object defines how updates from each node should be merged into the graph's state.
+    </details>

- ### Forced Function Calling
+ 3. <details>
+    <summary>Define graph nodes.</summary>

- One simple modification of the above Graph is to modify it such that a certain tool is always called first.
- This can be useful if you want to enforce a certain tool is called, but still want to enable agentic behavior after the fact.
+    There are two main nodes we need:

- Assuming you have done the above Quick Start, you can build off it like:
+    - The `agent` node: responsible for deciding what (if any) actions to take.
+    - The `tools` node that invokes tools: if the agent decides to take an action, this node will then execute that action.
+    </details>

- #### Define the first tool call
+ 4. <details>
+    <summary>Define entry point and graph edges.</summary>

- Here, we manually define the first tool call that we will make.
- Notice that it does that same thing as `agent` would have done (adds the `agentOutcome` key).
- This is so that we can easily plug it in.
+    First, we need to set the entry point for graph execution - the `agent` node.

- ```typescript
- import { AgentStep, AgentAction, AgentFinish } from "@langchain/core/agents";
-
- // Define the data type that the agent will return.
- type AgentData = {
-   input: string;
-   steps: Array<AgentStep>;
-   agentOutcome?: AgentAction | AgentFinish;
- };
-
- const firstAgent = (inputs: AgentData) => {
-   const newInputs = inputs;
-   const action = {
-     // We force call this tool
-     tool: "tavily_search_results_json",
-     // We just pass in the `input` key to this tool
-     toolInput: newInputs.input,
-     log: ""
-   };
-   newInputs.agentOutcome = action;
-   return newInputs;
- };
- ```
-
- #### Create the graph
-
- We can now create a new graph with this new node
+    Then we define one normal and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (`AgentState`). In our case, the destination is not known until the agent (LLM) decides.

- ```typescript
- const workflow = new Graph();
-
- // Add the same nodes as before, plus this "first agent"
- workflow.addNode("firstAgent", firstAgent);
- workflow.addNode("agent", agent);
- workflow.addNode("tools", executeTools);
+    - Conditional edge: after the agent is called, we should either:
+      - a. Run tools if the agent said to take an action, OR
+      - b. Finish (respond to the user) if the agent did not ask to run tools
+    - Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next
+    </details>

- // We now set the entry point to be this first agent
- workflow.addEdge(START, "firstAgent");
+ 5. <details>
+    <summary>Compile the graph.</summary>

- // We define the same edges as before
- workflow.addConditionalEdges("agent", shouldContinue, {
-   continue: "tools",
-   exit: END
- });
- workflow.addEdge("tools", "agent");
+    - When we compile the graph, we turn it into a LangChain [Runnable](https://js.langchain.com/docs/expression_language/), which automatically enables calling `.invoke()`, `.stream()` and `.batch()` with your inputs.
+    - We can also optionally pass a checkpointer object for persisting state between graph runs, enabling memory, human-in-the-loop workflows, time travel and more. In our case we use `MemorySaver` - a simple in-memory checkpointer.
+    </details>

- // We also define a new edge, from the "first agent" to the tools node
- // This is so that we can call the tool
- workflow.addEdge("firstAgent", "tools");
+ 6. <details>
+    <summary>Execute the graph.</summary>

- // We now compile the graph as before
- const chain = workflow.compile();
- ```
+    1. LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, `"agent"`.
+    2. The `"agent"` node executes, invoking the chat model.
+    3. The chat model returns an `AIMessage`. LangGraph adds this to the state.
+    4. The graph cycles through the following steps until there are no more `tool_calls` on the `AIMessage`:

- #### Use it!
+       - If `AIMessage` has `tool_calls`, the `"tools"` node executes.
+       - The `"agent"` node executes again and returns an `AIMessage`.

- We can now use it as before!
- Depending on whether or not the first tool call is actually useful, this may save you an LLM call or two.
+    5. Execution progresses to the special `END` value and outputs the final state.
+       As a result, we get a list of all our chat messages as output.
+    </details>
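The agent → tools cycle described in the breakdown above can be sketched with plain objects standing in for LangChain messages. All names here (`shouldContinue`, `agentNode`, `toolsNode`, the hard-coded replies) are illustrative stand-ins, not the real API:

```typescript
// Illustrative sketch of the graph's control flow: agent → (tools → agent)* → END.
type ToolCall = { name: string; args: Record<string, string> };
type Message = { role: "human" | "ai" | "tool"; content: string; tool_calls?: ToolCall[] };
type AgentState = { messages: Message[] };

// The conditional edge: route to "tools" while the last AI message has tool_calls.
function shouldContinue(state: AgentState): "tools" | "__end__" {
  const last = state.messages[state.messages.length - 1];
  return last.tool_calls && last.tool_calls.length > 0 ? "tools" : "__end__";
}

// Fake "agent" node: asks for a tool call first, then answers from the tool result.
function agentNode(state: AgentState): AgentState {
  const usedTool = state.messages.some((m) => m.role === "tool");
  const reply: Message = usedTool
    ? { role: "ai", content: "It's 60°F and foggy in SF." }
    : { role: "ai", content: "", tool_calls: [{ name: "search", args: { query: "weather sf" } }] };
  return { messages: [...state.messages, reply] };
}

// Fake "tools" node: executes the requested action and records the result.
function toolsNode(state: AgentState): AgentState {
  return { messages: [...state.messages, { role: "tool", content: "60°F, foggy" }] };
}

// The loop the compiled graph runs for us.
let state: AgentState = { messages: [{ role: "human", content: "what is the weather in sf" }] };
state = agentNode(state);
while (shouldContinue(state) === "tools") {
  state = toolsNode(state);
  state = agentNode(state);
}
console.log(state.messages[state.messages.length - 1].content); // → It's 60°F and foggy in SF.
```

The real graph replaces the `while` loop with the conditional and normal edges defined in step 4, but the termination condition is the same: stop when the latest AI message contains no tool calls.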

- ```typescript
- const result = await chain.invoke({
-   input: "what is the weather in sf",
-   steps: []
- });
- ```
+ ## Documentation

- You can see a LangSmith trace of this chain [here](https://smith.langchain.com/public/2e0a089f-8c05-405a-8404-b0a60b79a84a/r).
+ - [Tutorials](https://langchain-ai.github.io/langgraphjs/tutorials/): Learn to build with LangGraph through guided examples.
+ - [How-to Guides](https://langchain-ai.github.io/langgraphjs/how-tos/): Accomplish specific things within LangGraph, from streaming, to adding memory & persistence, to common design patterns (branching, subgraphs, etc.). These are the place to go if you want to copy and run a specific code snippet.
+ - [Conceptual Guides](https://langchain-ai.github.io/langgraphjs/concepts/): In-depth explanations of the key concepts and principles behind LangGraph, such as nodes, edges, state and more.
+ - [API Reference](https://langchain-ai.github.io/langgraphjs/reference/graphs/): Review important classes and methods, simple examples of how to use the graph and checkpointing APIs, higher-level prebuilt components and more.