langchain-dev-utils 1.2.5__py3-none-any.whl → 1.2.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (31)
  1. langchain_dev_utils/__init__.py +1 -1
  2. langchain_dev_utils/_utils.py +42 -0
  3. langchain_dev_utils/agents/__init__.py +0 -1
  4. langchain_dev_utils/agents/factory.py +2 -10
  5. langchain_dev_utils/agents/file_system.py +1 -1
  6. langchain_dev_utils/agents/middleware/__init__.py +2 -0
  7. langchain_dev_utils/agents/middleware/model_fallback.py +1 -1
  8. langchain_dev_utils/agents/middleware/model_router.py +37 -46
  9. langchain_dev_utils/agents/middleware/plan.py +17 -18
  10. langchain_dev_utils/agents/middleware/summarization.py +6 -4
  11. langchain_dev_utils/agents/middleware/tool_call_repair.py +96 -0
  12. langchain_dev_utils/agents/middleware/tool_emulator.py +3 -3
  13. langchain_dev_utils/agents/middleware/tool_selection.py +3 -3
  14. langchain_dev_utils/agents/plan.py +1 -1
  15. langchain_dev_utils/agents/wrap.py +8 -20
  16. langchain_dev_utils/chat_models/adapters/openai_compatible.py +33 -17
  17. langchain_dev_utils/chat_models/base.py +38 -50
  18. langchain_dev_utils/chat_models/types.py +0 -1
  19. langchain_dev_utils/embeddings/base.py +40 -46
  20. langchain_dev_utils/message_convert/__init__.py +0 -1
  21. langchain_dev_utils/message_convert/content.py +8 -11
  22. langchain_dev_utils/message_convert/format.py +2 -2
  23. langchain_dev_utils/pipeline/parallel.py +10 -41
  24. langchain_dev_utils/pipeline/sequential.py +6 -21
  25. langchain_dev_utils/tool_calling/human_in_the_loop.py +6 -6
  26. langchain_dev_utils/tool_calling/utils.py +3 -3
  27. {langchain_dev_utils-1.2.5.dist-info → langchain_dev_utils-1.2.7.dist-info}/METADATA +28 -120
  28. langchain_dev_utils-1.2.7.dist-info/RECORD +37 -0
  29. langchain_dev_utils-1.2.5.dist-info/RECORD +0 -35
  30. {langchain_dev_utils-1.2.5.dist-info → langchain_dev_utils-1.2.7.dist-info}/WHEEL +0 -0
  31. {langchain_dev_utils-1.2.5.dist-info → langchain_dev_utils-1.2.7.dist-info}/licenses/LICENSE +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: langchain-dev-utils
- Version: 1.2.5
+ Version: 1.2.7
  Summary: A practical utility library for LangChain and LangGraph development
  Project-URL: Source Code, https://github.com/TBice123123/langchain-dev-utils
  Project-URL: repository, https://github.com/TBice123123/langchain-dev-utils
@@ -11,6 +11,7 @@ Requires-Python: >=3.11
  Requires-Dist: langchain>=1.1.0
  Requires-Dist: langgraph>=1.0.0
  Provides-Extra: standard
+ Requires-Dist: json-repair>=0.53.1; extra == 'standard'
  Requires-Dist: langchain-openai; extra == 'standard'
  Description-Content-Type: text/markdown

@@ -57,24 +58,6 @@ Mainly consists of the following two functions:
  - `register_model_provider`: Register a chat model provider
  - `load_chat_model`: Load a chat model

- **`register_model_provider` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `provider_name` | str | Yes | - | The name of the model provider, used as an identifier for loading models later. |
- | `chat_model` | ChatModel \| str | Yes | - | The chat model, which can be either a `ChatModel` instance or a string (currently only `"openai-compatible"` is supported). |
- | `base_url` | str | No | - | The API endpoint URL of the model provider (applicable to both `chat_model` types, but primarily used when `chat_model` is a string with value `"openai-compatible"`). |
- | `model_profiles` | dict | No | - | Declares the capabilities and parameters supported by each model provided by this provider. The configuration corresponding to the `model_name` will be loaded and assigned to `model.profile` (e.g., fields such as `max_input_tokens`, `tool_calling` etc.). |
- | `compatibility_options` | dict | No | - | Compatibility options for the model provider (only effective when `chat_model` is a string with value `"openai-compatible"`). Used to declare support for OpenAI-compatible features (e.g., `tool_choice` strategies, JSON mode, etc.) to ensure correct functional adaptation. |
-
- **`load_chat_model` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `model` | str | Yes | - | Chat model name |
- | `model_provider` | str | No | - | Chat model provider name |
- | `kwargs` | dict | No | - | Additional parameters passed to the chat model class, e.g., temperature, top_p, etc. |
-
  Example for integrating a qwen3-4b model deployed using `vllm`:

  ```python
@@ -102,22 +85,6 @@ Mainly consists of the following two functions:
  - `register_embeddings_provider`: Register an embedding model provider
  - `load_embeddings`: Load an embedding model

- **`register_embeddings_provider` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `provider_name` | str | Yes | - | Embedding model provider name, used as an identifier for subsequent model loading |
- | `embeddings_model` | Embeddings \| str | Yes | - | Embedding model, can be Embeddings or a string (currently supports "openai-compatible") |
- | `base_url` | str | No | - | The API address of the Embedding model provider (valid for both types of `embeddings_model`, but mainly used when `embeddings_model` is a string and is "openai-compatible") |
-
- **`load_embeddings` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `model` | str | Yes | - | Embedding model name |
- | `provider` | str | No | - | Embedding model provider name |
- | `kwargs` | dict | No | - | Other additional parameters |
-
  Example for integrating a qwen3-embedding-4b model deployed using `vllm`:

  ```python
@@ -150,13 +117,9 @@ Includes the following features:

  For stream responses obtained using `stream()` and `astream()`, you can use `merge_ai_message_chunk` to merge them into a final AIMessage.

- **`merge_ai_message_chunk` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `chunks` | List[AIMessageChunk] | Yes | - | List of AIMessageChunk objects |
-
  ```python
+ from langchain_dev_utils.message_convert import merge_ai_message_chunk
+
  chunks = list(model.stream("Hello"))
  merged = merge_ai_message_chunk(chunks)
  ```
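The snippet in the hunk above covers `stream()`; since the README sentence also names `astream()`, the following is a minimal async sketch under that reading (the `model` variable and the `collect_and_merge` helper are illustrative, not part of the package):

```python
# Minimal async sketch; assumes the same helper accepts chunks gathered from astream().
from langchain_dev_utils.message_convert import merge_ai_message_chunk


async def collect_and_merge(model, prompt: str):
    # Gather the streamed AIMessageChunk objects, then merge them into one AIMessage.
    chunks = [chunk async for chunk in model.astream(prompt)]
    return merge_ai_message_chunk(chunks)
```

Once a chat model has been loaded, this can be driven with `asyncio.run(collect_and_merge(model, "Hello"))`.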
@@ -165,15 +128,8 @@ merged = merge_ai_message_chunk(chunks)

  For a list, you can use `format_sequence` to format it.

- **`format_sequence` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `inputs` | List | Yes | - | A list containing any of the following types: langchain_core.messages, langchain_core.documents.Document, str |
- | `separator` | str | No | "-" | String used to join the content |
- | `with_num` | bool | No | False | If True, add a numeric prefix to each item (e.g., "1. Hello") |
-
  ```python
+ from langchain_dev_utils.message_convert import format_sequence
  text = format_sequence([
      "str1",
      "str2",
@@ -194,19 +150,6 @@ Includes the following features:

  `has_tool_calling` and `parse_tool_calling` are used to check and parse tool calls.

- **`has_tool_calling` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `message` | AIMessage | Yes | - | AIMessage object |
-
- **`parse_tool_calling` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `message` | AIMessage | Yes | - | AIMessage object |
- | `first_tool_call_only` | bool | No | False | Whether to only check the first tool call |
-
  ```python
  import datetime
  from langchain_core.tools import tool
@@ -234,7 +177,7 @@ if has_tool_calling(response):
  Both can accept a `handler` parameter for custom breakpoint return and response handling logic.

  ```python
- from langchain_dev_utils import human_in_the_loop
+ from langchain_dev_utils.tool_calling import human_in_the_loop
  from langchain_core.tools import tool
  import datetime

@@ -256,14 +199,7 @@ Includes the following features:

  #### 4.1 Agent Factory Functions

- In LangChain v1, the officially provided `create_agent` function can be used to create a single agent, where the model parameter supports passing a BaseChatModel instance or a specific string (when passing a string, it is limited to the models supported by `init_chat_model`). To extend the flexibility of specifying models via strings, this library provides a functionally identical `create_agent` function, allowing you to directly use models supported by `load_chat_model` (requires prior registration).
-
- **`create_agent` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `model` | str \| BaseChatModel | Yes | - | Model name or model instance. Can be a string identifier for a model registered with `register_model_provider` or a BaseChatModel instance. |
- | Other parameters | Various | No | - | All other parameters are the same as in `langchain.agents.create_agent` |
+ In LangChain v1, the official `create_agent` function can be used to create a single agent; its `model` parameter accepts either a BaseChatModel instance or a specific string (when a string is provided, only models supported by `init_chat_model` are allowed). To extend the flexibility of specifying models via string, this library provides an equivalent `create_agent` function that lets you designate any model supported by `load_chat_model` (registration required beforehand).

  Usage example:

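The package README's own usage example falls outside this hunk. Purely as an illustrative sketch of the flow described above, reusing the `register_model_provider` call and the `vllm:qwen3-4b` identifier that appear later in this diff (the import paths below are assumptions, not confirmed by the diff):

```python
# Illustrative sketch only; import paths are assumed, not taken from the diff.
from langchain_dev_utils.chat_models import register_model_provider  # assumed path
from langchain_dev_utils.agents import create_agent  # assumed path

# Register an OpenAI-compatible endpoint (taken from the vllm example later in this diff).
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Pass a "provider:model" string instead of a BaseChatModel instance.
agent = create_agent("vllm:qwen3-4b", name="demo-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "Hello"}]})
print(response)
```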
@@ -278,24 +214,26 @@ print(response)

  #### 4.2 Middleware

- Provides some commonly used middleware components. Below are examples of `SummarizationMiddleware` and `PlanMiddleware`.
+ Provides some commonly used middleware components. Below, we illustrate with `ToolCallRepairMiddleware` and `PlanMiddleware`.

- `SummarizationMiddleware` is used for agent summarization.
+ `ToolCallRepairMiddleware` is used to repair `invalid_tool_calls` generated by large language models.

  `PlanMiddleware` is used for agent planning.

  ```python
  from langchain_dev_utils.agents.middleware import (
-     SummarizationMiddleware,
+     ToolCallRepairMiddleware,
      PlanMiddleware,
  )

- agent=create_agent(
+ agent = create_agent(
      "vllm:qwen3-4b",
      name="plan-agent",
-     middleware=[PlanMiddleware(), SummarizationMiddleware(model="vllm:qwen3-4b")]
+     middleware=[ToolCallRepairMiddleware(), PlanMiddleware(
+         use_read_plan_tool=False
+     )]
  )
- response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]}))
+ response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
  print(response)
  ```

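The new `tool_call_repair.py` module itself is not shown in this diff. As a rough illustration of the underlying idea, repairing a malformed tool-call arguments string with the newly added `json-repair` dependency, here is a hypothetical `repair_tool_call_args` helper that is not the package's actual implementation:

```python
# Hypothetical illustration of the repair idea; not the package's actual code.
import json

from json_repair import repair_json  # dependency added in this release


def repair_tool_call_args(raw_args: str) -> dict:
    """Best-effort repair of a malformed JSON arguments string from an LLM tool call."""
    fixed = repair_json(raw_args)  # returns a syntactically valid JSON string
    return json.loads(fixed)


# A truncated arguments payload of the kind that ends up in invalid_tool_calls:
print(repair_tool_call_args('{"city": "New York", "days": 3'))
```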
@@ -303,29 +241,14 @@ print(response)

  ### 5. **State Graph Orchestration**

- Includes the following features:
+ Includes the following capabilities:

  - Sequential graph orchestration
  - Parallel graph orchestration

  #### 5.1 Sequential Graph Orchestration

- Sequential graph orchestration:
- Uses `create_sequential_pipeline`, supported parameters:
-
- **`create_sequential_pipeline` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `sub_graphs` | List[StateGraph] | Yes | - | List of state graphs to combine (must be StateGraph instances) |
- | `state_schema` | type | Yes | - | State Schema for the final generated graph |
- | `graph_name` | str | No | - | Name of the final generated graph |
- | `context_schema` | type | No | - | Context Schema for the final generated graph |
- | `input_schema` | type | No | - | Input Schema for the final generated graph |
- | `output_schema` | type | No | - | Output Schema for the final generated graph |
- | `checkpoint` | BaseCheckpointSaver | No | - | LangGraph persistence Checkpoint |
- | `store` | BaseStore | No | - | LangGraph persistence Store |
- | `cache` | BaseCache | No | - | LangGraph Cache |
+ Use `create_sequential_pipeline` to orchestrate multiple subgraphs in sequential order:

  ```python
  from langchain.agents import AgentState
@@ -340,25 +263,25 @@ register_model_provider(
      base_url="http://localhost:8000/v1",
  )

- # Build sequential pipeline (all sub-graphs execute sequentially)
+ # Build a sequential pipeline (all subgraphs executed in order)
  graph = create_sequential_pipeline(
      sub_graphs=[
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_time],
-             system_prompt="You are a time query assistant, can only answer the current time. If the question is unrelated to time, please directly answer that you cannot answer.",
+             system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
              name="time_agent",
          ),
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_weather],
-             system_prompt="You are a weather query assistant, can only answer the current weather. If the question is unrelated to weather, please directly answer that you cannot answer.",
+             system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
              name="weather_agent",
          ),
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_user],
-             system_prompt="You are a user query assistant, can only answer the current user. If the question is unrelated to user, please directly answer that you cannot answer.",
+             system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
              name="user_agent",
          ),
      ],
@@ -371,51 +294,36 @@ print(response)

  #### 5.2 Parallel Graph Orchestration

- Parallel graph orchestration:
- Uses `create_parallel_pipeline`, supported parameters:
-
- **`create_parallel_pipeline` Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `sub_graphs` | List[StateGraph] | Yes | - | List of state graphs to combine |
- | `state_schema` | type | Yes | - | State Schema for the final generated graph |
- | `branches_fn` | Callable | Yes | - | Parallel branch function, returns a list of Send objects to control parallel execution |
- | `graph_name` | str | No | - | Name of the final generated graph |
- | `context_schema` | type | No | - | Context Schema for the final generated graph |
- | `input_schema` | type | No | - | Input Schema for the final generated graph |
- | `output_schema` | type | No | - | Output Schema for the final generated graph |
- | `checkpoint` | BaseCheckpointSaver | No | - | LangGraph persistence Checkpoint |
- | `store` | BaseStore | No | - | LangGraph persistence Store |
- | `cache` | BaseCache | No | - | LangGraph Cache |
+ Use `create_parallel_pipeline` to orchestrate multiple subgraphs in parallel:

  ```python
  from langchain_dev_utils.pipeline import create_parallel_pipeline

- # Build parallel pipeline (all sub-graphs execute in parallel)
+ # Build a parallel pipeline (all subgraphs executed concurrently)
  graph = create_parallel_pipeline(
      sub_graphs=[
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_time],
-             system_prompt="You are a time query assistant, can only answer the current time. If the question is unrelated to time, please directly answer that you cannot answer.",
+             system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
              name="time_agent",
          ),
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_weather],
-             system_prompt="You are a weather query assistant, can only answer the current weather. If the question is unrelated to weather, please directly answer that you cannot answer.",
+             system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
              name="weather_agent",
          ),
          create_agent(
              model="vllm:qwen3-4b",
              tools=[get_current_user],
-             system_prompt="You are a user query assistant, can only answer the current user. If the question is unrelated to user, please directly answer that you cannot answer.",
+             system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
              name="user_agent",
          ),
      ],
      state_schema=AgentState,
  )
+
  response = graph.invoke({"messages": [HumanMessage("Hello")]})
  print(response)
  ```
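The 1.2.5 parameter table removed above documented a `branches_fn` that returns `Send` objects to control which subgraphs run in parallel. Whether 1.2.7 still accepts that parameter is not visible in this diff; the following is only a sketch of that general LangGraph pattern, with hypothetical targeting logic:

```python
# Hypothetical branch function in the style described by the removed branches_fn row.
from langgraph.types import Send


def branches_fn(state):
    # Fan the same state out to every sub-agent; each Send names a subgraph node
    # from the example above and carries the state it should receive.
    return [
        Send("time_agent", state),
        Send("weather_agent", state),
        Send("user_agent", state),
    ]
```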
@@ -426,4 +334,4 @@ print(response)

  - [GitHub Repository](https://github.com/TBice123123/langchain-dev-utils) — Browse source code, submit Pull Requests
  - [Issue Tracker](https://github.com/TBice123123/langchain-dev-utils/issues) — Report bugs or suggest improvements
- - We welcome contributions in all forms — whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
+ - We welcome contributions in all forms — whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
@@ -0,0 +1,37 @@
+ langchain_dev_utils/__init__.py,sha256=49prCLbE3fFzLfxem5rd2dr1iV4_L-bN0N4J7jxU5yA,22
+ langchain_dev_utils/_utils.py,sha256=8Y8qzE9tWuF2UoDGa6xrTyEZRWtOmrGvXNroIF0SOCU,1207
+ langchain_dev_utils/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ langchain_dev_utils/agents/__init__.py,sha256=PJ-lSDZv_AXMYA3H4fx-HzJa14tPbkGmq1HX8LNfaPo,125
+ langchain_dev_utils/agents/factory.py,sha256=XdGjktksfTDys7X4SgfPrQz10HUo5fTNAWESDQenIlE,3728
+ langchain_dev_utils/agents/file_system.py,sha256=Yk3eetREE26WNrnTWLoiDUpOyCJ-rhjlfFDk6foLa1E,8468
+ langchain_dev_utils/agents/plan.py,sha256=WwhoiJBmVYVI9bT8HfjCzTJ_SIp9WFil0gOeznv2omQ,6497
+ langchain_dev_utils/agents/wrap.py,sha256=RuchoH_VotPmKFuYEn2SXoSgNxZhSA9jKM0Iv_8oHLk,4718
+ langchain_dev_utils/agents/middleware/__init__.py,sha256=sAd0gehREpt0MB5deX5_YmTJPBoqmeSsjNQ-ta6R3EM,768
+ langchain_dev_utils/agents/middleware/model_fallback.py,sha256=nivtXXF4cwyOBv6p7RW12nXtNg87wjTWxO3BKIYiroI,1674
+ langchain_dev_utils/agents/middleware/model_router.py,sha256=pOK-4PNTLrmjaQA9poHoQnsaVwoX0JeJrLVysulv9iU,7631
+ langchain_dev_utils/agents/middleware/plan.py,sha256=0qDCmenxgY_zrwMfOyYlgLfhYNw-HszNLeeOkfj14NA,16002
+ langchain_dev_utils/agents/middleware/summarization.py,sha256=IoZ2PM1OC3AXwf0DWpfreuPOAipeiYu0KPmAABWXuY0,3087
+ langchain_dev_utils/agents/middleware/tool_call_repair.py,sha256=oZF0Oejemqs9kSn8xbW79FWyVVarL4IGCz0gpqYBkFM,3529
+ langchain_dev_utils/agents/middleware/tool_emulator.py,sha256=OgtPhqturaWzF4fRSJ3f_IXvIrYrrAjlpOC5zmLtrkY,2031
+ langchain_dev_utils/agents/middleware/tool_selection.py,sha256=dRH5ejR6N02Djwxt6Gd63MYkg6SV5pySlzaRt53OoZk,3113
+ langchain_dev_utils/chat_models/__init__.py,sha256=YSLUyHrWEEj4y4DtGFCOnDW02VIYZdfAH800m4Klgeg,224
+ langchain_dev_utils/chat_models/base.py,sha256=CVMfgqMRnIKv8z4babusa2c4RKVuiWTL39mPD8cHAf4,11880
+ langchain_dev_utils/chat_models/types.py,sha256=ch0t30oqjR-nePXWt_U5ybTohKeBZ1snTIOeQUEEBa8,510
+ langchain_dev_utils/chat_models/adapters/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+ langchain_dev_utils/chat_models/adapters/openai_compatible.py,sha256=sG_qvdGyYN3bRFN28OyuR57tiqMv5QCq1gNA1QHAWco,20273
+ langchain_dev_utils/embeddings/__init__.py,sha256=zbEOaV86TUi9Zrg_dH9dpdgacWg31HMJTlTQknA9EKk,244
+ langchain_dev_utils/embeddings/base.py,sha256=l4uCB5ecr3GAkfYGpYxqamOPIM6fkP1H_QK-277YEic,9295
+ langchain_dev_utils/message_convert/__init__.py,sha256=ZGrHGXPKMrZ_p9MqfIVZ4jgbEyb7aC4Q7X-muuThIYU,457
+ langchain_dev_utils/message_convert/content.py,sha256=LhrFXL1zYkkpp4ave6SBorDLig5xnllQ2VYCgFz-eR4,7681
+ langchain_dev_utils/message_convert/format.py,sha256=1TOcJ09atH7LRtn_IIuBshKDXAyqoy3Q9b0Po-S-F9g,2377
+ langchain_dev_utils/pipeline/__init__.py,sha256=eE6WktaLHDkqMeXDIDaLtm-OPTwtsX_Av8iK9uYrceo,186
+ langchain_dev_utils/pipeline/parallel.py,sha256=nwZWbdSNeyanC9WufoJBTceotgT--UnPOfStXjgNMOc,5271
+ langchain_dev_utils/pipeline/sequential.py,sha256=sYJXQzVHDKUc-UV-HMv38JTPnse1A7sRM0vqSdpHK0k,3850
+ langchain_dev_utils/pipeline/types.py,sha256=T3aROKKXeWvd0jcH5XkgMDQfEkLfPaiOhhV2q58fDHs,112
+ langchain_dev_utils/tool_calling/__init__.py,sha256=mu_WxKMcu6RoTf4vkTPbA1WSBSNc6YIqyBtOQ6iVQj4,322
+ langchain_dev_utils/tool_calling/human_in_the_loop.py,sha256=7Z_QO5OZUR6K8nLoIcafc6osnvX2IYNorOJcbx6bVso,9672
+ langchain_dev_utils/tool_calling/utils.py,sha256=W2ZRRMhn7SHHZxFfCXVaPIh2uFkY2XkO6EWrdRuv6VE,2757
+ langchain_dev_utils-1.2.7.dist-info/METADATA,sha256=roKP7w_tdYwiRQiELlz1Bt_0U5F3uaE-cwa8iqk46-8,13100
+ langchain_dev_utils-1.2.7.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
+ langchain_dev_utils-1.2.7.dist-info/licenses/LICENSE,sha256=AWAOzNEcsvCEzHOF0qby5OKxviVH_eT9Yce1sgJTico,1084
+ langchain_dev_utils-1.2.7.dist-info/RECORD,,
@@ -1,35 +0,0 @@
- langchain_dev_utils/__init__.py,sha256=jBmZf3HLbiQlWiolOsAA6J5-BbxXD2bqFqEqDH3lfqo,22
- langchain_dev_utils/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- langchain_dev_utils/agents/__init__.py,sha256=e17SMQdJIQngbUCr2N1tY-yw0tD3tEnH7PSvyDmVPeQ,127
- langchain_dev_utils/agents/factory.py,sha256=JjdJwPTJpQwAlwQlBalbuGej5Jcpy2Fz6lH3EwEaxQo,3979
- langchain_dev_utils/agents/file_system.py,sha256=S6RUEmQI2eerW0gBQp0IP0X5ak5FwvqgIGRiycr2iyw,8468
- langchain_dev_utils/agents/plan.py,sha256=ydJuJLlNydheQvLPl2uCc3TBVv42YxGzPhKgtldIdIk,6497
- langchain_dev_utils/agents/wrap.py,sha256=4BWksU9DRz8c3ZHQiUi4GHwGhNysDLNs8pmLWV7BeAI,5165
- langchain_dev_utils/agents/middleware/__init__.py,sha256=cjrb8Rue5uukl9pKPF7CjSrHtcYsUBj3Mdvv2szlp7E,679
- langchain_dev_utils/agents/middleware/model_fallback.py,sha256=pXdraahOMukLgvjX70LwhrjIoEhLYQfNEwJMQHG2WPk,1673
- langchain_dev_utils/agents/middleware/model_router.py,sha256=Qb_s_FoREp11yKHdmp_ZTRxB1whsFrj86awUNR0fpCk,8461
- langchain_dev_utils/agents/middleware/plan.py,sha256=saRXhzkC2pd7LNiNclSmGJelmisbTXhhTrbSUkSkf9g,16220
- langchain_dev_utils/agents/middleware/summarization.py,sha256=BtWPJcQBssGAT0nb1c0xsGEOsb8x5sAAE6xqujYjHhY,3027
- langchain_dev_utils/agents/middleware/tool_emulator.py,sha256=u9rV24yUB-dyc1uUfUe74B1wOGVI3TZRwxkE1bvGm18,2025
- langchain_dev_utils/agents/middleware/tool_selection.py,sha256=ZqdyK4Yhp2u3GM6B_D6U7Srca9vy1o7s6N_LrV24-dQ,3107
- langchain_dev_utils/chat_models/__init__.py,sha256=YSLUyHrWEEj4y4DtGFCOnDW02VIYZdfAH800m4Klgeg,224
- langchain_dev_utils/chat_models/base.py,sha256=MeVFt16ytJMTISnW1YbzNvaW_iesxH1nWr8FHG-8CL8,12550
- langchain_dev_utils/chat_models/types.py,sha256=M0iCGWgXmX1f1vkymH-jNGdFQlsJS5JqpmgHctUS9jw,512
- langchain_dev_utils/chat_models/adapters/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
- langchain_dev_utils/chat_models/adapters/openai_compatible.py,sha256=4Q8ySa7jS2_AFo0oxLoqeY_aQyPppvV-DAMLt2rmGoE,20192
- langchain_dev_utils/embeddings/__init__.py,sha256=zbEOaV86TUi9Zrg_dH9dpdgacWg31HMJTlTQknA9EKk,244
- langchain_dev_utils/embeddings/base.py,sha256=25ebUEaf7075h8NARgTHjAvwK6JddhHor5upiucJqu0,9686
- langchain_dev_utils/message_convert/__init__.py,sha256=xwjaQ1oJoc80xy70oQI4uW3gAmgV5JymJd5hgnA6s3g,458
- langchain_dev_utils/message_convert/content.py,sha256=ApmQ7fUUBO3Ihjm2hYSWd4GrU_CvrjbWla-MA7DAFRc,7758
- langchain_dev_utils/message_convert/format.py,sha256=fh4GyyuZBTMrHeCEwdu9fOh5n8tdli1vDF44jK1i-tI,2373
- langchain_dev_utils/pipeline/__init__.py,sha256=eE6WktaLHDkqMeXDIDaLtm-OPTwtsX_Av8iK9uYrceo,186
- langchain_dev_utils/pipeline/parallel.py,sha256=fp-DZmQ470GurLcrYBKCarHM1gyCJuuT33IkgoRkFPc,7586
- langchain_dev_utils/pipeline/sequential.py,sha256=TYv0Fs8o2FsgRkWmcM3p2vMg806DopUONVMC_9yeQgk,5041
- langchain_dev_utils/pipeline/types.py,sha256=T3aROKKXeWvd0jcH5XkgMDQfEkLfPaiOhhV2q58fDHs,112
- langchain_dev_utils/tool_calling/__init__.py,sha256=mu_WxKMcu6RoTf4vkTPbA1WSBSNc6YIqyBtOQ6iVQj4,322
- langchain_dev_utils/tool_calling/human_in_the_loop.py,sha256=nbaON9806pv5tpMRQUA_Ch3HJA5HBFgzZR7kQRf6PiY,9819
- langchain_dev_utils/tool_calling/utils.py,sha256=3cNv_Zx32KxdsGn8IkxjWUzxYEEwVJeJgTZTbfSg0pA,2751
- langchain_dev_utils-1.2.5.dist-info/METADATA,sha256=LB3TTtW0Hb6vopD23t0wiziRUsSFroD0-wsp1r8ipjg,18822
- langchain_dev_utils-1.2.5.dist-info/WHEEL,sha256=WLgqFyCfm_KASv4WHyYy0P3pM_m7J5L9k2skdKLirC8,87
- langchain_dev_utils-1.2.5.dist-info/licenses/LICENSE,sha256=AWAOzNEcsvCEzHOF0qby5OKxviVH_eT9Yce1sgJTico,1084
- langchain_dev_utils-1.2.5.dist-info/RECORD,,