@langchain/langgraph 0.2.55 → 0.2.56

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -5,101 +5,26 @@
5
5
  [![Downloads](https://img.shields.io/npm/dm/@langchain/langgraph)](https://www.npmjs.com/package/@langchain/langgraph)
6
6
  [![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langgraphjs)](https://github.com/langchain-ai/langgraphjs/issues)
7
7
 
8
- ⚡ Building language agents as graphs ⚡
9
-
10
8
  > [!NOTE]
11
9
  > Looking for the Python version? See the [Python repo](https://github.com/langchain-ai/langgraph) and the [Python docs](https://langchain-ai.github.io/langgraph/).
12
10
 
13
- ## Overview
14
-
15
- [LangGraph](https://langchain-ai.github.io/langgraphjs/) is a library for building
16
- stateful, multi-actor applications with LLMs, used to create agent and multi-agent
17
- workflows. Check out an introductory tutorial [here](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/).
18
-
19
-
20
- LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
21
-
22
- ### Why use LangGraph?
23
-
24
- LangGraph powers [production-grade agents](https://www.langchain.com/built-with-langgraph), trusted by Linkedin, Uber, Klarna, GitLab, and many more. LangGraph provides fine-grained control over both the flow and state of your agent applications. It implements a central [persistence layer](https://langchain-ai.github.io/langgraphjs/concepts/persistence/), enabling features that are common to most agent architectures:
25
-
26
- - **Memory**: LangGraph persists arbitrary aspects of your application's state,
27
- supporting memory of conversations and other updates within and across user
28
- interactions;
29
- - **Human-in-the-loop**: Because state is checkpointed, execution can be interrupted
30
- and resumed, allowing for decisions, validation, and corrections at key stages via
31
- human input.
32
-
33
- Standardizing these components allows individuals and teams to focus on the behavior
34
- of their agent, instead of its supporting infrastructure.
35
-
36
- Through [LangGraph Platform](#langgraph-platform), LangGraph also provides tooling for
37
- the development, deployment, debugging, and monitoring of your applications.
38
-
39
- LangGraph integrates seamlessly with
40
- [LangChain](https://js.langchain.com/docs/introduction/) and
41
- [LangSmith](https://docs.smith.langchain.com/) (but does not require them).
42
-
43
- To learn more about LangGraph, check out our first LangChain Academy
44
- course, *Introduction to LangGraph*, available for free
45
- [here](https://academy.langchain.com/courses/intro-to-langgraph).
11
+ LangGraph — used by Replit, Uber, LinkedIn, GitLab, and more — is a low-level orchestration framework for building controllable agents. While LangChain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop to reliably handle complex tasks.
46
12
 
47
- ### LangGraph Platform
48
-
49
- [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_platform) is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several components that work together to support the development, deployment, debugging, and monitoring of LangGraph applications: [LangGraph Server](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_server) (APIs), [LangGraph SDKs](https://langchain-ai.github.io/langgraphjs/concepts/sdk) (clients for the APIs), [LangGraph CLI](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_cli) (command line tool for building the server), and [LangGraph Studio](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_studio) (UI/debugger).
50
-
51
- See deployment options [here](https://langchain-ai.github.io/langgraphjs/concepts/deployment_options/)
52
- (includes a free tier).
53
-
54
- Here are some common issues that arise in complex deployments, which LangGraph Platform addresses:
55
-
56
- - **Streaming support**: LangGraph Server provides [multiple streaming modes](https://langchain-ai.github.io/langgraphjs/concepts/streaming) optimized for various application needs
57
- - **Background runs**: Runs agents asynchronously in the background
58
- - **Support for long running agents**: Infrastructure that can handle long running processes
59
- - **[Double texting](https://langchain-ai.github.io/langgraphjs/concepts/double_texting)**: Handle the case where you get two messages from the user before the agent can respond
60
- - **Handle burstiness**: Task queue for ensuring requests are handled consistently without loss, even under heavy loads
61
-
62
- ## Installation
63
-
64
- ```shell
13
+ ```bash
65
14
  npm install @langchain/langgraph @langchain/core
66
15
  ```
67
16
 
68
- ## Example
69
-
70
- Let's build a tool-calling [ReAct-style](https://langchain-ai.github.io/langgraphjs/concepts/agentic_concepts/#react-implementation) agent that uses a search tool!
71
-
72
- ```shell
73
- npm install @langchain/anthropic zod
74
- ```
75
-
76
- ```shell
77
- export ANTHROPIC_API_KEY=sk-...
78
- ```
79
-
80
- Optionally, we can set up [LangSmith](https://docs.smith.langchain.com/) for best-in-class observability.
81
-
82
- ```shell
83
- export LANGSMITH_TRACING=true
84
- export LANGSMITH_API_KEY=lsv2_sk_...
85
- ```
86
-
87
- The simplest way to create a tool-calling agent in LangGraph is to use [`createReactAgent`](https://langchain-ai.github.io/langgraphjs/reference/functions/langgraph_prebuilt.createReactAgent.html):
88
-
89
- <details open>
90
- <summary>High-level implementation</summary>
17
+ To learn more about how to use LangGraph, check out [the docs](https://langchain-ai.github.io/langgraphjs/). We show a simple example below of how to create a ReAct agent.
91
18
 
92
19
  ```ts
20
+ // npm install @langchain/anthropic
93
21
  import { createReactAgent } from "@langchain/langgraph/prebuilt";
94
- import { MemorySaver } from "@langchain/langgraph";
95
22
  import { ChatAnthropic } from "@langchain/anthropic";
96
23
  import { tool } from "@langchain/core/tools";
97
24
 
98
25
  import { z } from "zod";
99
26
 
100
- // Define the tools for the agent to use
101
27
  const search = tool(async ({ query }) => {
102
- // This is a placeholder, but don't tell the LLM that...
103
28
  if (query.toLowerCase().includes("sf") || query.toLowerCase().includes("san francisco")) {
104
29
  return "It's 60 degrees and foggy."
105
30
  }
@@ -112,256 +37,67 @@ const search = tool(async ({ query }) => {
112
37
  }),
113
38
  });
114
39
 
115
- const tools = [search];
116
40
  const model = new ChatAnthropic({
117
- model: "claude-3-5-sonnet-latest"
41
+ model: "claude-3-7-sonnet-latest"
118
42
  });
119
43
 
120
- // Initialize memory to persist state between graph runs
121
- const checkpointer = new MemorySaver();
122
-
123
- const app = createReactAgent({
44
+ const agent = createReactAgent({
124
45
  llm: model,
125
- tools,
126
- checkpointSaver: checkpointer,
46
+ tools: [search],
127
47
  });
128
48
 
129
- // Use the agent
130
- const result = await app.invoke(
49
+ const result = await agent.invoke(
131
50
  {
132
51
  messages: [{
133
52
  role: "user",
134
53
  content: "what is the weather in sf"
135
54
  }]
136
- },
137
- { configurable: { thread_id: 42 } }
138
- );
139
- console.log(result.messages.at(-1)?.content);
140
- ```
141
- ```
142
- "Based on the search results, it's currently 60 degrees Fahrenheit and foggy in San Francisco, which is quite typical weather for the city."
143
- ```
144
-
145
- Now when we pass the same <code>"thread_id"</code>, the conversation context is retained via the saved state (i.e. stored list of messages)
146
-
147
- ```ts
148
- const followup = await app.invoke(
149
- {
150
- messages: [{
151
- role: "user",
152
- content: "what about ny"
153
- }]
154
- },
155
- { configurable: { thread_id: 42 } }
156
- );
157
-
158
- console.log(followup.messages.at(-1)?.content);
159
- ```
160
-
161
- ```
162
- "According to the search results, it's currently 90 degrees Fahrenheit and sunny in New York City. That's quite a warm day for New York!"
163
- ```
164
- </details>
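The thread-scoped persistence shown in the example above can be sketched with a toy in-memory checkpointer. This is a simplified stand-in for illustration only, not the actual `MemorySaver` API: state is saved per `thread_id`, so a later invocation on the same thread sees the earlier messages.

```typescript
// Toy sketch of checkpointing keyed by thread_id: state persists per thread,
// so a follow-up turn on the same thread sees the earlier conversation.
// Simplified stand-in; not LangGraph's actual MemorySaver implementation.
type Message = { role: string; content: string };

const checkpoints = new Map<string, Message[]>();

function invokeWithMemory(threadId: string, userMessage: Message): Message[] {
  const history = checkpoints.get(threadId) ?? [];
  const updated = [...history, userMessage];
  checkpoints.set(threadId, updated); // persist for the next turn
  return updated;
}

invokeWithMemory("42", { role: "user", content: "what is the weather in sf" });
const secondTurn = invokeWithMemory("42", { role: "user", content: "what about ny" });
console.log(secondTurn.length); // 2: the follow-up turn sees the earlier message
```

A different `thread_id` starts from an empty history, which is why the README's follow-up question only works when the same `thread_id` is passed.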
165
-
166
- > [!TIP]
167
- > LangGraph is a **low-level** framework that allows you to implement any custom agent
168
- architectures. Click on the low-level implementation below to see how to implement a
169
- tool-calling agent from scratch.
170
-
171
- <details>
172
- <summary>Low-level implementation</summary>
173
-
174
- ```ts
175
- import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
176
- import { tool } from "@langchain/core/tools";
177
- import { z } from "zod";
178
- import { ChatAnthropic } from "@langchain/anthropic";
179
- import { StateGraph } from "@langchain/langgraph";
180
- import { MemorySaver, Annotation, messagesStateReducer } from "@langchain/langgraph";
181
- import { ToolNode } from "@langchain/langgraph/prebuilt";
182
-
183
- // Define the graph state
184
- // See here for more info: https://langchain-ai.github.io/langgraphjs/how-tos/define-state/
185
- const StateAnnotation = Annotation.Root({
186
- messages: Annotation<BaseMessage[]>({
187
- // `messagesStateReducer` function defines how `messages` state key should be updated
188
- // (in this case it appends new messages to the list and overwrites messages with the same ID)
189
- reducer: messagesStateReducer,
190
- }),
191
- });
192
-
193
- // Define the tools for the agent to use
194
- const weatherTool = tool(async ({ query }) => {
195
- // This is a placeholder for the actual implementation
196
- if (query.toLowerCase().includes("sf") || query.toLowerCase().includes("san francisco")) {
197
- return "It's 60 degrees and foggy."
198
55
  }
199
- return "It's 90 degrees and sunny."
200
- }, {
201
- name: "weather",
202
- description:
203
- "Call to get the current weather for a location.",
204
- schema: z.object({
205
- query: z.string().describe("The query to use in your search."),
206
- }),
207
- });
208
-
209
- const tools = [weatherTool];
210
- const toolNode = new ToolNode(tools);
211
-
212
- const model = new ChatAnthropic({
213
- model: "claude-3-5-sonnet-20240620",
214
- temperature: 0,
215
- }).bindTools(tools);
216
-
217
- // Define the function that determines whether to continue or not
218
- // We can extract the state typing via `StateAnnotation.State`
219
- function shouldContinue(state: typeof StateAnnotation.State) {
220
- const messages = state.messages;
221
- const lastMessage = messages[messages.length - 1] as AIMessage;
222
-
223
- // If the LLM makes a tool call, then we route to the "tools" node
224
- if (lastMessage.tool_calls?.length) {
225
- return "tools";
226
- }
227
- // Otherwise, we stop (reply to the user)
228
- return "__end__";
229
- }
230
-
231
- // Define the function that calls the model
232
- async function callModel(state: typeof StateAnnotation.State) {
233
- const messages = state.messages;
234
- const response = await model.invoke(messages);
235
-
236
- // We return a list, because this will get added to the existing list
237
- return { messages: [response] };
238
- }
239
-
240
- // Define a new graph
241
- const workflow = new StateGraph(StateAnnotation)
242
- .addNode("agent", callModel)
243
- .addNode("tools", toolNode)
244
- .addEdge("__start__", "agent")
245
- .addConditionalEdges("agent", shouldContinue)
246
- .addEdge("tools", "agent");
247
-
248
- // Initialize memory to persist state between graph runs
249
- const checkpointer = new MemorySaver();
250
-
251
- // Finally, we compile it!
252
- // This compiles it into a LangChain Runnable.
253
- // Note that we're (optionally) passing the memory when compiling the graph
254
- const app = workflow.compile({ checkpointer });
255
-
256
- // Use the Runnable
257
- const finalState = await app.invoke(
258
- { messages: [new HumanMessage("what is the weather in sf")] },
259
- { configurable: { thread_id: "42" } }
260
56
  );
261
-
262
- console.log(finalState.messages[finalState.messages.length - 1].content);
263
57
  ```
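Under the hood, the ReAct loop that `createReactAgent` runs boils down to a routing decision after each model call. A minimal sketch of that decision, using a simplified message type rather than the library's actual `AIMessage` class:

```typescript
// Sketch of the ReAct routing step: after each model call, route to the
// tools node if the model requested a tool call, otherwise end the run.
// `ToolCallMessage` is a simplified stand-in for the library's AIMessage.
type ToolCallMessage = { content: string; tool_calls?: { name: string }[] };

function shouldContinue(messages: ToolCallMessage[]): "tools" | "__end__" {
  const last = messages[messages.length - 1];
  return last.tool_calls?.length ? "tools" : "__end__";
}

console.log(shouldContinue([{ content: "", tool_calls: [{ name: "search" }] }])); // "tools"
console.log(shouldContinue([{ content: "It's 60 degrees and foggy." }])); // "__end__"
```

After the tools node runs, control returns to the model node, and the loop repeats until the model stops requesting tool calls.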
264
58
 
265
- <b>Step-by-step Breakdown</b>:
266
-
267
- <details>
268
- <summary>Initialize the model and tools.</summary>
269
- <ul>
270
- <li>
271
- We use <code>ChatAnthropic</code> as our LLM. <strong>NOTE:</strong> we need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI tool calling using the <code>.bindTools()</code> method.
272
- </li>
273
- <li>
274
- We define the tools we want to use - a search tool in our case. It is really easy to create your own tools - see documentation here on how to do that <a href="https://js.langchain.com/docs/how_to/custom_tools">here</a>.
275
- </li>
276
- </ul>
277
- </details>
278
-
279
- <details>
280
- <summary>Initialize graph with state.</summary>
281
-
282
- <ul>
283
- <li>We initialize the graph (<code>StateGraph</code>) by passing state schema with a reducer that defines how the state should be updated. In our case, we want to append new messages to the list and overwrite messages with the same ID, so we use the prebuilt <code>messagesStateReducer</code>.</li>
284
- </ul>
285
- </details>
286
-
287
- <details>
288
- <summary>Define graph nodes.</summary>
289
-
290
- There are two main nodes we need:
291
-
292
- <ul>
293
- <li>The <code>agent</code> node: responsible for deciding what (if any) actions to take.</li>
294
- <li>The <code>tools</code> node that invokes tools: if the agent decides to take an action, this node will then execute that action.</li>
295
- </ul>
296
- </details>
59
+ ## Why use LangGraph?
297
60
 
298
- <details>
299
- <summary>Define entry point and graph edges.</summary>
61
+ LangGraph is designed for developers who want to build powerful, adaptable AI agents. Developers choose LangGraph for:
300
62
 
301
- First, we need to set the entry point for graph execution - <code>agent</code> node.
63
+ - **Reliability and controllability.** Steer agent actions with moderation checks and human-in-the-loop approvals. LangGraph persists context for long-running workflows, keeping your agents on course.
64
+ - **Low-level and extensible.** Build custom agents with fully descriptive, low-level primitives – free from rigid abstractions that limit customization. Design scalable multi-agent systems, with each agent serving a specific role tailored to your use case.
65
+ - **First-class streaming support.** With token-by-token streaming and streaming of intermediate steps, LangGraph gives users clear visibility into agent reasoning and actions as they unfold in real time.
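The token-by-token streaming idea can be sketched with a plain generator. This is a conceptual illustration only (the generator stands in for the chunks an agent's streaming call would emit), not LangGraph's streaming API:

```typescript
// Conceptual sketch of token-by-token streaming: chunks are yielded to the
// caller as they are produced, instead of being returned all at once.
// Stand-in for illustration; not LangGraph's actual streaming interface.
function* streamTokens(text: string): Generator<string> {
  for (const token of text.split(" ")) {
    yield token; // each chunk becomes visible to the consumer immediately
  }
}

const seen: string[] = [];
for (const token of streamTokens("It's 60 degrees and foggy.")) {
  seen.push(token); // e.g. render each token in a UI as it arrives
}
console.log(seen.length); // 5 tokens, streamed one at a time
```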
302
66
 
303
- Then we define one normal and one conditional edge. Conditional edge means that the destination depends on the contents of the graph's state. In our case, the destination is not known until the agent (LLM) decides.
67
+ LangGraph is trusted in production and powers agents for companies like:
304
68
 
305
- <ul>
306
- <li>Conditional edge: after the agent is called, we should either:
307
- <ul>
308
- <li>a. Run tools if the agent said to take an action, OR</li>
309
- <li>b. Finish (respond to the user) if the agent did not ask to run tools</li>
310
- </ul>
311
- </li>
312
- <li>Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next</li>
313
- </ul>
314
- </details>
69
+ - [Klarna](https://blog.langchain.dev/customers-klarna/): Customer support bot for 85 million active users
70
+ - [Elastic](https://www.elastic.co/blog/elastic-security-generative-ai-features): Security AI assistant for threat detection
71
+ - [Uber](https://dpe.org/sessions/ty-smith-adam-huda/this-year-in-ubers-ai-driven-developer-productivity-revolution/): Automated unit test generation
72
+ - [Replit](https://www.langchain.com/breakoutagents/replit): Code generation
73
+ - And many more ([see list here](https://www.langchain.com/built-with-langgraph))
315
74
 
316
- <details>
317
- <summary>Compile the graph.</summary>
75
+ ## LangGraph’s ecosystem
318
76
 
319
- <ul>
320
- <li>
321
- When we compile the graph, we turn it into a LangChain
322
- <a href="https://js.langchain.com/docs/concepts/runnables">Runnable</a>,
323
- which automatically enables calling <code>.invoke()</code>, <code>.stream()</code> and <code>.batch()</code>
324
- with your inputs
325
- </li>
326
- <li>
327
- We can also optionally pass checkpointer object for persisting state between graph runs, and enabling memory,
328
- human-in-the-loop workflows, time travel and more. In our case we use <code>MemorySaver</code> -
329
- a simple in-memory checkpointer
330
- </li>
331
- </ul>
332
- </details>
77
+ While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:
333
78
 
334
- <details>
335
- <summary>Execute the graph.</summary>
79
+ - [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility into production, and improve performance over time.
80
+ - [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/#langgraph-platform) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_studio/).
336
81
 
337
- <ol>
338
- <li>LangGraph adds the input message to the internal state, then passes the state to the entrypoint node, <code>"agent"</code>.</li>
339
- <li>The <code>"agent"</code> node executes, invoking the chat model.</li>
340
- <li>The chat model returns an <code>AIMessage</code>. LangGraph adds this to the state.</li>
341
- <li>Graph cycles the following steps until there are no more <code>tool_calls</code> on <code>AIMessage</code>:
342
- <ul>
343
- <li>If <code>AIMessage</code> has <code>tool_calls</code>, <code>"tools"</code> node executes</li>
344
- <li>The <code>"agent"</code> node executes again and returns <code>AIMessage</code></li>
345
- </ul>
346
- </li>
347
- <li>Execution progresses to the special <code>END</code> value and outputs the final state. And as a result, we get a list of all our chat messages as output.</li>
348
- </ol>
349
- </details>
82
+ ## Pairing with LangGraph Platform
350
83
 
351
- </details>
84
+ While LangGraph is our open-source agent orchestration framework, enterprises that need scalable agent deployment can benefit from [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_platform/).
352
85
 
353
- ## Documentation
86
+ LangGraph Platform can help engineering teams:
354
87
 
355
- * [Tutorials](https://langchain-ai.github.io/langgraphjs/tutorials/): Learn to build with LangGraph through guided examples.
356
- * [How-to Guides](https://langchain-ai.github.io/langgraphjs/how-tos/): Accomplish specific things within LangGraph, from streaming, to adding memory & persistence, to common design patterns (branching, subgraphs, etc.), these are the place to go if you want to copy and run a specific code snippet.
357
- * [Conceptual Guides](https://langchain-ai.github.io/langgraphjs/concepts/high_level/): In-depth explanations of the key concepts and principles behind LangGraph, such as nodes, edges, state and more.
358
- * [API Reference](https://langchain-ai.github.io/langgraphjs/reference/): Review important classes and methods, simple examples of how to use the graph and checkpointing APIs, higher-level prebuilt components and more.
359
- * [LangGraph Platform](https://langchain-ai.github.io/langgraphjs/concepts/#langgraph-platform): LangGraph Platform is a commercial solution for deploying agentic applications in production, built on the open-source LangGraph framework.
88
+ - **Accelerate agent development**: Quickly create agent UXs with configurable templates and [LangGraph Studio](https://langchain-ai.github.io/langgraphjs/concepts/langgraph_studio/) for visualizing and debugging agent interactions.
89
+ - **Deploy seamlessly**: We handle the complexity of deploying your agent. LangGraph Platform includes robust APIs for memory, threads, and cron jobs, plus auto-scaling task queues and servers.
90
+ - **Centralize agent management & reusability**: Discover, reuse, and manage agents across the organization. Business users can also modify agents without coding.
360
91
 
361
- ## Resources
92
+ ## Additional resources
362
93
 
363
- * [Built with LangGraph](https://www.langchain.com/built-with-langgraph): Hear how industry leaders use LangGraph to ship powerful, production-ready AI applications.
94
+ - [LangChain Academy](https://academy.langchain.com/courses/intro-to-langgraph): Learn the basics of LangGraph in our free, structured course.
95
+ - [Tutorials](https://langchain-ai.github.io/langgraphjs/tutorials/): Simple walkthroughs with guided examples on getting started with LangGraph.
96
+ - [Templates](https://langchain-ai.github.io/langgraphjs/concepts/template_applications/): Pre-built reference apps for common agentic workflows (e.g. ReAct agent, memory, retrieval etc.) that can be cloned and adapted.
97
+ - [How-to Guides](https://langchain-ai.github.io/langgraphjs/how-tos/): Quick, actionable code snippets for topics such as streaming, adding memory & persistence, and design patterns (e.g. branching, subgraphs, etc.).
98
+ - [API Reference](https://langchain-ai.github.io/langgraphjs/reference/): Detailed reference on core classes, methods, how to use the graph and checkpointing APIs, and higher-level prebuilt components.
99
+ - [Built with LangGraph](https://www.langchain.com/built-with-langgraph): Hear how industry leaders use LangGraph to ship powerful, production-ready AI applications.
364
100
 
365
- ## Contributing
101
+ ## Acknowledgements
366
102
 
367
- For more information on how to contribute, see [here](https://github.com/langchain-ai/langgraphjs/blob/main/CONTRIBUTING.md).
103
+ LangGraph is inspired by [Pregel](https://research.google/pubs/pub37252/) and [Apache Beam](https://beam.apache.org/). The public interface draws inspiration from [NetworkX](https://networkx.org/documentation/latest/). LangGraph is built by LangChain Inc., the creators of LangChain, but can be used without LangChain.
@@ -15,6 +15,10 @@ const constants_js_1 = require("./constants.cjs");
15
15
  * 2. Otherwise, it throws a `GraphInterrupt` with the provided value
16
16
  * 3. The graph can be resumed by passing a `Command` with a `resume` value
17
17
  *
18
+ * Because the `interrupt` function propagates by throwing a special `GraphInterrupt` error,
19
+ * you should avoid using `try/catch` blocks around the `interrupt` function,
20
+ * or if you do, ensure that the `GraphInterrupt` error is thrown again within your `catch` block.
21
+ *
18
22
  * @param value - The value to include in the interrupt. This will be available in task.interrupts[].value
19
23
  * @returns The `resume` value provided when the graph is re-invoked with a Command
20
24
  *
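The doc comment added above can be illustrated with a small sketch of why a `catch` block around `interrupt` must rethrow. The classes below are simplified stand-ins, not LangGraph's actual internals: the point is only that the runtime pauses a run by catching a special error, so swallowing it in user code prevents the pause.

```typescript
// Sketch: interrupt() propagates by throwing a special error that the graph
// runtime catches to pause the run. A node's own try/catch must rethrow it.
// FakeGraphInterrupt / interruptLike are stand-ins for illustration only.
class FakeGraphInterrupt extends Error {
  constructor(public value: unknown) {
    super("GraphInterrupt");
  }
}

function interruptLike(value: unknown): never {
  throw new FakeGraphInterrupt(value);
}

function nodeBody(): string {
  try {
    interruptLike({ question: "Approve this step?" });
  } catch (err) {
    if (err instanceof FakeGraphInterrupt) {
      throw err; // rethrow so the runtime can pause and checkpoint the run
    }
    return "recovered from an ordinary error";
  }
  return "unreachable";
}

let runtimeSawInterrupt = false;
try {
  nodeBody(); // the "runtime" invoking the node
} catch (err) {
  runtimeSawInterrupt = err instanceof FakeGraphInterrupt;
}
console.log(runtimeSawInterrupt); // true: the interrupt reached the runtime
```

If the `catch` block had returned instead of rethrowing, the error would never reach the runtime and the graph could not pause for human input.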
@@ -9,6 +9,10 @@
9
9
  * 2. Otherwise, it throws a `GraphInterrupt` with the provided value
10
10
  * 3. The graph can be resumed by passing a `Command` with a `resume` value
11
11
  *
12
+ * Because the `interrupt` function propagates by throwing a special `GraphInterrupt` error,
13
+ * you should avoid using `try/catch` blocks around the `interrupt` function,
14
+ * or if you do, ensure that the `GraphInterrupt` error is thrown again within your `catch` block.
15
+ *
12
16
  * @param value - The value to include in the interrupt. This will be available in task.interrupts[].value
13
17
  * @returns The `resume` value provided when the graph is re-invoked with a Command
14
18
  *
package/dist/interrupt.js CHANGED
@@ -12,6 +12,10 @@ import { CONFIG_KEY_CHECKPOINT_NS, CONFIG_KEY_SCRATCHPAD, CONFIG_KEY_SEND, CHECK
12
12
  * 2. Otherwise, it throws a `GraphInterrupt` with the provided value
13
13
  * 3. The graph can be resumed by passing a `Command` with a `resume` value
14
14
  *
15
+ * Because the `interrupt` function propagates by throwing a special `GraphInterrupt` error,
16
+ * you should avoid using `try/catch` blocks around the `interrupt` function,
17
+ * or if you do, ensure that the `GraphInterrupt` error is thrown again within your `catch` block.
18
+ *
15
19
  * @param value - The value to include in the interrupt. This will be available in task.interrupts[].value
16
20
  * @returns The `resume` value provided when the graph is re-invoked with a Command
17
21
  *
@@ -1 +1 @@
1
- {"version":3,"file":"interrupt.js","sourceRoot":"","sources":["../src/interrupt.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,kCAAkC,EAAE,MAAM,4BAA4B,CAAC;AAGhF,OAAO,EAAE,cAAc,EAAE,MAAM,aAAa,CAAC;AAC7C,OAAO,EACL,wBAAwB,EACxB,qBAAqB,EACrB,eAAe,EACf,8BAA8B,EAC9B,MAAM,GACP,MAAM,gBAAgB,CAAC;AAGxB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GAsCG;AACH,8DAA8D;AAC9D,MAAM,UAAU,SAAS,CAAuB,KAAQ;IACtD,MAAM,MAAM,GACV,kCAAkC,CAAC,iBAAiB,EAAE,CAAC;IACzD,IAAI,CAAC,MAAM,EAAE;QACX,MAAM,IAAI,KAAK,CAAC,oDAAoD,CAAC,CAAC;KACvE;IAED,MAAM,IAAI,GAAG,MAAM,CAAC,YAAY,CAAC;IACjC,IAAI,CAAC,IAAI,EAAE;QACT,MAAM,IAAI,KAAK,CAAC,iCAAiC,CAAC,CAAC;KACpD;IAED,wBAAwB;IACxB,MAAM,UAAU,GAAqB,IAAI,CAAC,qBAAqB,CAAC,CAAC;IACjE,UAAU,CAAC,gBAAgB,IAAI,CAAC,CAAC;IACjC,MAAM,GAAG,GAAG,UAAU,CAAC,gBAAgB,CAAC;IAExC,8BAA8B;IAC9B,IAAI,UAAU,CAAC,MAAM,CAAC,MAAM,GAAG,CAAC,IAAI,GAAG,GAAG,UAAU,CAAC,MAAM,CAAC,MAAM,EAAE;QAClE,OAAO,UAAU,CAAC,MAAM,CAAC,GAAG,CAAM,CAAC;KACpC;IAED,4BAA4B;IAC5B,IAAI,UAAU,CAAC,UAAU,KAAK,SAAS,EAAE;QACvC,IAAI,UAAU,CAAC,MAAM,CAAC,MAAM,KAAK,GAAG,EAAE;YACpC,MAAM,IAAI,KAAK,CACb,2BAA2B,UAAU,CAAC,MAAM,CAAC,MAAM,QAAQ,GAAG,EAAE,CACjE,CAAC;SACH;QACD,MAAM,CAAC,GAAG,UAAU,CAAC,iBAAiB,EAAE,CAAC;QACzC,UAAU,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;QAC1B,MAAM,IAAI,GAAG,IAAI,CAAC,eAAe,CAAC,CAAC;QACnC,IAAI,IAAI,EAAE;YACR,IAAI,CAAC,CAAC,CAAC,MAAM,EAAE,UAAU,CAAC,MAAM,CAAiB,CAAC,CAAC,CAAC;SACrD;QACD,OAAO,CAAM,CAAC;KACf;IAED,wBAAwB;IACxB,MAAM,IAAI,cAAc,CAAC;QACvB;YACE,KAAK;YACL,IAAI,EAAE,QAAQ;YACd,SAAS,EAAE,IAAI;YACf,EAAE,EAAE,IAAI,CAAC,wBAAwB,CAAC,EAAE,KAAK,CAAC,8BAA8B,CAAC;SAC1E;KACF,CAAC,CAAC;AACL,CAAC"}
1
+ {"version":3,"file":"interrupt.js","sourceRoot":"","sources":["../src/interrupt.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,kCAAkC,EAAE,MAAM,4BAA4B,CAAC;AAGhF,OAAO,EAAE,cAAc,EAAE,MAAM,aAAa,CAAC;AAC7C,OAAO,EACL,wBAAwB,EACxB,qBAAqB,EACrB,eAAe,EACf,8BAA8B,EAC9B,MAAM,GACP,MAAM,gBAAgB,CAAC;AAGxB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA0CG;AACH,8DAA8D;AAC9D,MAAM,UAAU,SAAS,CAAuB,KAAQ;IACtD,MAAM,MAAM,GACV,kCAAkC,CAAC,iBAAiB,EAAE,CAAC;IACzD,IAAI,CAAC,MAAM,EAAE;QACX,MAAM,IAAI,KAAK,CAAC,oDAAoD,CAAC,CAAC;KACvE;IAED,MAAM,IAAI,GAAG,MAAM,CAAC,YAAY,CAAC;IACjC,IAAI,CAAC,IAAI,EAAE;QACT,MAAM,IAAI,KAAK,CAAC,iCAAiC,CAAC,CAAC;KACpD;IAED,wBAAwB;IACxB,MAAM,UAAU,GAAqB,IAAI,CAAC,qBAAqB,CAAC,CAAC;IACjE,UAAU,CAAC,gBAAgB,IAAI,CAAC,CAAC;IACjC,MAAM,GAAG,GAAG,UAAU,CAAC,gBAAgB,CAAC;IAExC,8BAA8B;IAC9B,IAAI,UAAU,CAAC,MAAM,CAAC,MAAM,GAAG,CAAC,IAAI,GAAG,GAAG,UAAU,CAAC,MAAM,CAAC,MAAM,EAAE;QAClE,OAAO,UAAU,CAAC,MAAM,CAAC,GAAG,CAAM,CAAC;KACpC;IAED,4BAA4B;IAC5B,IAAI,UAAU,CAAC,UAAU,KAAK,SAAS,EAAE;QACvC,IAAI,UAAU,CAAC,MAAM,CAAC,MAAM,KAAK,GAAG,EAAE;YACpC,MAAM,IAAI,KAAK,CACb,2BAA2B,UAAU,CAAC,MAAM,CAAC,MAAM,QAAQ,GAAG,EAAE,CACjE,CAAC;SACH;QACD,MAAM,CAAC,GAAG,UAAU,CAAC,iBAAiB,EAAE,CAAC;QACzC,UAAU,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;QAC1B,MAAM,IAAI,GAAG,IAAI,CAAC,eAAe,CAAC,CAAC;QACnC,IAAI,IAAI,EAAE;YACR,IAAI,CAAC,CAAC,CAAC,MAAM,EAAE,UAAU,CAAC,MAAM,CAAiB,CAAC,CAAC,CAAC;SACrD;QACD,OAAO,CAAM,CAAC;KACf;IAED,wBAAwB;IACxB,MAAM,IAAI,cAAc,CAAC;QACvB;YACE,KAAK;YACL,IAAI,EAAE,QAAQ;YACd,SAAS,EAAE,IAAI;YACf,EAAE,EAAE,IAAI,CAAC,wBAAwB,CAAC,EAAE,KAAK,CAAC,8BAA8B,CAAC;SAC1E;KACF,CAAC,CAAC;AACL,CAAC"}