trulens-apps-langgraph 1.5.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,95 @@
1
+ Metadata-Version: 2.1
2
+ Name: trulens-apps-langgraph
3
+ Version: 1.5.2
4
+ Summary: Library to systematically track and evaluate LangGraph-based applications.
5
+ Home-page: https://trulens.org/
6
+ License: MIT
7
+ Author: Snowflake Inc.
8
+ Author-email: ml-observability-wg-dl@snowflake.com
9
+ Requires-Python: >=3.9,<4.0
10
+ Classifier: Development Status :: 3 - Alpha
11
+ Classifier: License :: OSI Approved :: MIT License
12
+ Classifier: Operating System :: OS Independent
13
+ Classifier: Programming Language :: Python :: 3
14
+ Classifier: Programming Language :: Python :: 3.9
15
+ Classifier: Programming Language :: Python :: 3.10
16
+ Classifier: Programming Language :: Python :: 3.11
17
+ Classifier: Programming Language :: Python :: 3.12
18
+ Classifier: Programming Language :: Python :: 3.13
19
+ Requires-Dist: langgraph (>=0.5.2)
20
+ Requires-Dist: pydantic (>=2.4.2,<3.0.0)
21
+ Requires-Dist: trulens-apps-langchain (>=1.0.0,<2.0.0)
22
+ Requires-Dist: trulens-core (>=1.0.0,<2.0.0)
23
+ Project-URL: Documentation, https://trulens.org/getting_started/
24
+ Project-URL: Repository, https://github.com/truera/trulens
25
+ Description-Content-Type: text/markdown
26
+
27
+ # trulens-apps-langgraph
28
+
29
+ TruLens integration for LangGraph applications. This package provides comprehensive instrumentation and evaluation capabilities for LangGraph-based multi-agent workflows.
30
+
31
+ ## Features
32
+
33
+ - **Automatic Detection**: TruGraph automatically detects LangGraph applications
34
+ - **Combined Instrumentation**: Inherits all LangChain instrumentation plus LangGraph-specific methods
35
+ - **Multi-Agent Evaluation**: Comprehensive evaluation capabilities for complex workflows
36
+ - **Automatic @task Instrumentation**: Automatically detects and instruments functions decorated with `@task`
37
+ - **Smart Attribute Extraction**: Intelligently extracts information from function arguments
38
+
39
+ ## Installation
40
+
41
+ ```bash
42
+ pip install trulens-apps-langgraph
43
+ ```
44
+
45
+ ## Quick Start
46
+
47
+ ```python
48
+ from langgraph.graph import StateGraph, MessagesState, END
49
+ from langchain_core.messages import HumanMessage
50
+ from trulens.apps.langgraph import TruGraph
51
+
52
+ # Create your LangGraph application
53
+ workflow = StateGraph(MessagesState)
54
+ workflow.add_node("agent", your_agent_function)
55
+ workflow.add_edge("agent", END)
56
+ workflow.set_entry_point("agent")
57
+ graph = workflow.compile()
58
+
59
+ # Automatically instrument with TruGraph
60
+ tru_app = TruGraph(graph, app_name="MyLangGraphApp")
61
+
62
+ # Use normally - all interactions are automatically logged
63
+ with tru_app as recording:
64
+ result = graph.invoke({"messages": [HumanMessage(content="Hello!")]})
65
+ ```
66
+
67
+ ## Automatic @task Instrumentation
68
+
69
+ TruGraph automatically instruments functions decorated with LangGraph's `@task` decorator by monkey-patching the decorator itself. This follows TruLens instrumentation patterns, so existing `@task` functions are traced without any code changes:
70
+
71
+ ```python
72
+ from langgraph.func import task
73
+
74
+ @task # Automatically instrumented by TruGraph when TruGraph is imported
75
+ def my_agent_function(state, config):
76
+ # Your agent logic here
77
+ return updated_state
78
+ ```
79
+
80
+ ### How it works
81
+
82
+ 1. **Decorator Monkey-Patching**: TruGraph patches the `@task` decorator at import time
83
+ 2. **Intelligent Attribute Extraction**: Automatically extracts information from function arguments:
84
+ - Handles `BaseChatModel` and `BaseModel` objects
85
+ - Extracts data from dataclasses and Pydantic models
86
+ - Skips non-serializable objects like LLM pools
87
+ - Captures return values and exceptions
88
+ 3. **No Code Changes Required**: Works with existing `@task` decorated functions
89
+
90
+ This approach follows TruLens conventions and is more robust than scanning `sys.modules`.
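The patching flow above can be sketched in plain Python. Everything below is illustrative (the `task` stand-in, `my_task`, and the `calls` log are not part of LangGraph or TruLens); it only shows the rebind-the-public-name pattern with a `_trulens_patched` guard:

```python
import functools

def task(func):            # stand-in for the library's bare @task decorator
    return func

_original_task = task      # keep a reference to the original decorator
calls = []                 # stand-in for the instrumentation sink

def instrumented_task(func):
    decorated = _original_task(func)

    @functools.wraps(decorated)
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)  # record the invocation
        return decorated(*args, **kwargs)

    return wrapper

instrumented_task._trulens_patched = True  # guard against double-patching
task = instrumented_task   # the monkey-patch: rebind the public name

@task
def my_task(state):
    return state + 1
```

Any function decorated after the rebind is transparently wrapped, which is why the README can promise "no code changes required".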
91
+
92
+ ## Usage
93
+
94
+ See the [TruLens documentation](https://trulens.org/getting_started/) for complete usage instructions.
95
+
@@ -0,0 +1,68 @@
1
+ # trulens-apps-langgraph
2
+
3
+ TruLens integration for LangGraph applications. This package provides comprehensive instrumentation and evaluation capabilities for LangGraph-based multi-agent workflows.
4
+
5
+ ## Features
6
+
7
+ - **Automatic Detection**: TruGraph automatically detects LangGraph applications
8
+ - **Combined Instrumentation**: Inherits all LangChain instrumentation plus LangGraph-specific methods
9
+ - **Multi-Agent Evaluation**: Comprehensive evaluation capabilities for complex workflows
10
+ - **Automatic @task Instrumentation**: Automatically detects and instruments functions decorated with `@task`
11
+ - **Smart Attribute Extraction**: Intelligently extracts information from function arguments
12
+
13
+ ## Installation
14
+
15
+ ```bash
16
+ pip install trulens-apps-langgraph
17
+ ```
18
+
19
+ ## Quick Start
20
+
21
+ ```python
22
+ from langgraph.graph import StateGraph, MessagesState, END
23
+ from langchain_core.messages import HumanMessage
24
+ from trulens.apps.langgraph import TruGraph
25
+
26
+ # Create your LangGraph application
27
+ workflow = StateGraph(MessagesState)
28
+ workflow.add_node("agent", your_agent_function)
29
+ workflow.add_edge("agent", END)
30
+ workflow.set_entry_point("agent")
31
+ graph = workflow.compile()
32
+
33
+ # Automatically instrument with TruGraph
34
+ tru_app = TruGraph(graph, app_name="MyLangGraphApp")
35
+
36
+ # Use normally - all interactions are automatically logged
37
+ with tru_app as recording:
38
+ result = graph.invoke({"messages": [HumanMessage(content="Hello!")]})
39
+ ```
40
+
41
+ ## Automatic @task Instrumentation
42
+
43
+ TruGraph automatically instruments functions decorated with LangGraph's `@task` decorator by monkey-patching the decorator itself. This follows TruLens instrumentation patterns, so existing `@task` functions are traced without any code changes:
44
+
45
+ ```python
46
+ from langgraph.func import task
47
+
48
+ @task # Automatically instrumented by TruGraph when TruGraph is imported
49
+ def my_agent_function(state, config):
50
+ # Your agent logic here
51
+ return updated_state
52
+ ```
53
+
54
+ ### How it works
55
+
56
+ 1. **Decorator Monkey-Patching**: TruGraph patches the `@task` decorator at import time
57
+ 2. **Intelligent Attribute Extraction**: Automatically extracts information from function arguments:
58
+ - Handles `BaseChatModel` and `BaseModel` objects
59
+ - Extracts data from dataclasses and Pydantic models
60
+ - Skips non-serializable objects like LLM pools
61
+ - Captures return values and exceptions
62
+ 3. **No Code Changes Required**: Works with existing `@task` decorated functions
63
+
64
+ This approach follows TruLens conventions and is more robust than scanning `sys.modules`.
65
+
66
+ ## Usage
67
+
68
+ See the [TruLens documentation](https://trulens.org/getting_started/) for complete usage instructions.
@@ -0,0 +1,38 @@
1
+ [build-system]
2
+ build-backend = "poetry.core.masonry.api"
3
+ requires = [
4
+ "poetry-core",
5
+ ]
6
+
7
+ [tool.poetry]
8
+ name = "trulens-apps-langgraph"
9
+ version = "1.5.2"
10
+ description = "Library to systematically track and evaluate LangGraph-based applications."
11
+ authors = [
12
+ "Snowflake Inc. <ml-observability-wg-dl@snowflake.com>",
13
+ ]
14
+ license = "MIT"
15
+ readme = "README.md"
16
+ packages = [
17
+ { include = "trulens" },
18
+ ]
19
+ homepage = "https://trulens.org/"
20
+ documentation = "https://trulens.org/getting_started/"
21
+ repository = "https://github.com/truera/trulens"
22
+ classifiers = [
23
+ "Programming Language :: Python :: 3",
24
+ "Operating System :: OS Independent",
25
+ "Development Status :: 3 - Alpha",
26
+ "License :: OSI Approved :: MIT License",
27
+ ]
28
+
29
+ [tool.poetry.dependencies]
30
+ python = "^3.9"
31
+ trulens-core = { version = "^1.0.0" }
32
+ trulens-apps-langchain = { version = "^1.0.0" }
33
+ langgraph = ">=0.5.2"
34
+ pydantic = "^2.4.2"
35
+
36
+ [tool.poetry.group.dev.dependencies]
37
+ trulens-core = { path = "../../core" }
38
+ trulens-apps-langchain = { path = "../langchain" }
@@ -0,0 +1,27 @@
1
+ """
2
+ !!! note "Additional Dependency Required"
3
+
4
+ To use this module, you must have the `trulens-apps-langgraph` package installed.
5
+
6
+ ```bash
7
+ pip install trulens-apps-langgraph
8
+ ```
9
+ """
10
+
11
+ # WARNING: This file does not follow the no-init aliases import standard.
12
+
13
+ from importlib.metadata import version
14
+
15
+ from trulens.apps.langgraph.tru_graph import LangGraphInstrument
16
+ from trulens.apps.langgraph.tru_graph import TruGraph
17
+ from trulens.core.utils import imports as import_utils
18
+
19
+ __version__ = version(
20
+ import_utils.safe_importlib_package_name(__package__ or __name__)
21
+ )
22
+
23
+
24
+ __all__ = [
25
+ "TruGraph",
26
+ "LangGraphInstrument",
27
+ ]
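The `__version__` lookup in this `__init__` assumes the distribution metadata is installed; a hedged sketch of the same `importlib.metadata` pattern with an explicit fallback (the `safe_version` helper and its default are assumptions for illustration, not TruLens behavior):

```python
from importlib.metadata import PackageNotFoundError, version

def safe_version(package_name: str, default: str = "0.0.0") -> str:
    """Return the installed version of a distribution, or a default.

    version() raises PackageNotFoundError when the distribution's
    metadata is not installed (e.g. when running from a source tree).
    """
    try:
        return version(package_name)
    except PackageNotFoundError:
        return default
```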
@@ -0,0 +1,587 @@
1
+ """LangGraph app instrumentation."""
2
+
3
+ from inspect import BoundArguments
4
+ from inspect import Signature
5
+ import logging
6
+ from typing import (
7
+ Any,
8
+ Callable,
9
+ ClassVar,
10
+ Dict,
11
+ List,
12
+ Optional,
13
+ Union,
14
+ )
15
+
16
+ from pydantic import Field
17
+ from trulens.apps.langchain.tru_chain import TruChain
18
+ from trulens.core import app as core_app
19
+ from trulens.core import instruments as core_instruments
20
+ from trulens.core.instruments import InstrumentedMethod
21
+ from trulens.core.session import TruSession
22
+ from trulens.core.utils import pyschema as pyschema_utils
23
+
24
+ logger = logging.getLogger(__name__)
25
+
26
+ # LangGraph imports with optional import handling
27
+ try:
28
+ from langgraph.graph import CompiledStateGraph
29
+ from langgraph.graph import StateGraph
30
+ from langgraph.graph.state import StateDefinition
31
+ from langgraph.pregel import Pregel
32
+ from langgraph.types import Command
33
+
34
+ LANGGRAPH_AVAILABLE = True
35
+ except ImportError:
36
+ # Create mock classes when langgraph is not available
37
+ StateGraph = type("StateGraph", (), {})
38
+ CompiledStateGraph = type("CompiledStateGraph", (), {})
39
+ Pregel = type("Pregel", (), {})
40
+ Command = type("Command", (), {})
41
+ StateDefinition = type("StateDefinition", (), {})
42
+ LANGGRAPH_AVAILABLE = False
43
+
44
+ try:
45
+ from langgraph.func import task
46
+
47
+ LANGGRAPH_TASK_AVAILABLE = True
48
+ except ImportError:
49
+ task = None
50
+ LANGGRAPH_TASK_AVAILABLE = False
51
+
52
+
53
+ class LangGraphInstrument(core_instruments.Instrument):
54
+ """Instrumentation for LangGraph apps."""
55
+
56
+ class Default:
57
+ """Instrumentation specification for LangGraph apps."""
58
+
59
+ MODULES = {"langgraph"}
60
+ """Filter for module name prefix for modules to be instrumented."""
61
+
62
+ CLASSES = (
63
+ lambda: {
64
+ CompiledStateGraph,
65
+ Pregel,
66
+ StateGraph,
67
+ }
68
+ if LANGGRAPH_AVAILABLE
69
+ else set()
70
+ )
71
+ """Filter for classes to be instrumented."""
72
+
73
+ # Instrument only methods with these names and of these classes.
74
+ METHODS: List[InstrumentedMethod] = (
75
+ [
76
+ InstrumentedMethod("invoke", CompiledStateGraph),
77
+ InstrumentedMethod("ainvoke", CompiledStateGraph),
78
+ InstrumentedMethod("stream", CompiledStateGraph),
79
+ InstrumentedMethod("astream", CompiledStateGraph),
80
+ InstrumentedMethod("stream_mode", CompiledStateGraph),
81
+ InstrumentedMethod("astream_mode", CompiledStateGraph),
82
+ InstrumentedMethod("invoke", Pregel),
83
+ InstrumentedMethod("ainvoke", Pregel),
84
+ InstrumentedMethod("stream", Pregel),
85
+ InstrumentedMethod("astream", Pregel),
86
+ InstrumentedMethod("stream_mode", Pregel),
87
+ InstrumentedMethod("astream_mode", Pregel),
88
+ ]
89
+ if LANGGRAPH_AVAILABLE
90
+ else []
91
+ )
92
+ """Methods to be instrumented.
93
+
94
+ Each entry pairs a method name with the class whose instances should
95
+ have that method instrumented."""
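Conceptually, an `include_methods` list like the one above drives instrumentation by rebinding each named method to a recording wrapper. A simplified, hypothetical sketch of that mechanism (`wrap_methods`, `FakeGraph`, and `calls` are illustrative, not the actual TruLens implementation):

```python
import functools

def wrap_methods(cls, method_names, log):
    """Rebind each named method of cls to a wrapper that records calls."""
    for name in method_names:
        original = getattr(cls, name, None)
        if original is None:
            continue  # tolerate names the class does not define
        @functools.wraps(original)
        def wrapper(self, *args, _orig=original, _name=name, **kwargs):
            log.append(_name)  # record which instrumented method ran
            return _orig(self, *args, **kwargs)
        setattr(cls, name, wrapper)

class FakeGraph:  # illustrative stand-in for a compiled graph
    def invoke(self, state):
        return {"messages": state}

calls = []
wrap_methods(FakeGraph, ["invoke", "stream"], calls)
```

The default-argument captures (`_orig`, `_name`) pin each loop iteration's values, which is why one wrapper body can serve every method name.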
96
+
97
+ def __init__(self, *args, **kwargs):
98
+ super().__init__(
99
+ include_modules=LangGraphInstrument.Default.MODULES,
100
+ include_classes=LangGraphInstrument.Default.CLASSES(),
101
+ include_methods=LangGraphInstrument.Default.METHODS,
102
+ *args,
103
+ **kwargs,
104
+ )
105
+
106
+ # Monkey-patch the @task decorator to automatically instrument functions
107
+ self._patch_task_decorator()
108
+
109
+ def _patch_task_decorator(self):
110
+ """Monkey-patch the @task decorator to automatically instrument decorated functions."""
111
+ if not LANGGRAPH_AVAILABLE:
112
+ return
113
+
114
+ try:
115
+ import langgraph.func as langgraph_func
116
+
117
+ if hasattr(langgraph_func.task, "_trulens_patched"):
118
+ return
119
+
120
+ # Store the original decorator
121
+ original_task = langgraph_func.task
122
+
123
+ def instrumented_task(*task_args, **task_kwargs):
124
+ """Instrumented version of the @task decorator."""
125
+
126
+ def decorator(func: Callable) -> Callable:
127
+ task_decorated_func = original_task(
128
+ *task_args, **task_kwargs
129
+ )(func)
130
+
131
+ try:
132
+ from trulens.core.otel.instrument import instrument
133
+
134
+ # Create a wrapper that extracts attributes using our heuristics
135
+ def trulens_wrapper(*args, **kwargs):
136
+ ret = None
137
+ exc = None
138
+
139
+ try:
140
+ ret = task_decorated_func(*args, **kwargs)
141
+ return ret
142
+ except Exception as e:
143
+ exc = e
144
+ raise
145
+ finally:
146
+ # Extract attributes using our heuristics
147
+ try:
148
+ attributes = _extract_task_attributes(
149
+ func, ret, exc, *args, **kwargs
150
+ )
151
+
152
+ logger.debug(
153
+ f"Extracted @task attributes for {func.__name__}: {list(attributes.keys())}"
154
+ )
155
+ except Exception as attr_exc:
156
+ logger.debug(
157
+ f"Failed to extract @task attributes for {func.__name__}: {attr_exc}"
158
+ )
159
+
160
+ # Apply the instrument decorator to get full TruLens tracing
161
+ instrumented_func = instrument(
162
+ span_type=f"TASK_{func.__name__.upper()}"
163
+ )(trulens_wrapper)
164
+
165
+ # Preserve the original function's metadata
166
+ instrumented_func.__name__ = func.__name__
167
+ instrumented_func.__qualname__ = getattr(
168
+ func, "__qualname__", func.__name__
169
+ )
170
+
171
+ logger.debug(
172
+ f"Successfully instrumented @task function: {func.__name__}"
173
+ )
174
+ return instrumented_func
175
+
176
+ except ImportError:
177
+ logger.debug(
178
+ f"TruLens instrumentation not available for @task function {func.__name__}"
179
+ )
180
+ return task_decorated_func
181
+
182
+ return decorator
183
+
184
+ langgraph_func.task = instrumented_task
185
+ langgraph_func.task._trulens_patched = True
186
+
187
+ logger.debug(
188
+ "Successfully monkey-patched LangGraph @task decorator for TruLens instrumentation"
189
+ )
190
+
191
+ except ImportError:
192
+ logger.debug(
193
+ "LangGraph @task decorator not available for monkey-patching"
194
+ )
195
+ except Exception as e:
196
+ logger.warning(
197
+ f"Failed to monkey-patch LangGraph @task decorator: {e}"
198
+ )
199
+
200
+
201
+ class TruGraph(TruChain):
202
+ """Recorder for _LangGraph_ applications.
203
+
204
+ This recorder is designed for LangGraph apps, providing a way to instrument,
205
+ log, and evaluate their behavior while inheriting all LangChain instrumentation
206
+ capabilities.
207
+
208
+ Example: "Creating a LangGraph multi-agent application"
209
+
210
+ Consider an example LangGraph multi-agent application:
211
+
212
+ ```python
213
+ from langgraph.graph import StateGraph, MessagesState
214
+ from langgraph.prebuilt import create_react_agent
215
+ from langchain_openai import ChatOpenAI
216
+ from langchain_community.tools.tavily_search import TavilySearchResults
217
+
218
+ # Create agents
219
+ llm = ChatOpenAI(model="gpt-4")
220
+ search_tool = TavilySearchResults()
221
+ research_agent = create_react_agent(llm, [search_tool])
222
+
223
+ # Build graph
224
+ workflow = StateGraph(MessagesState)
225
+ workflow.add_node("researcher", research_agent)
226
+ workflow.add_edge("researcher", END)
227
+ workflow.set_entry_point("researcher")
228
+
229
+ graph = workflow.compile()
230
+ ```
231
+
232
+ The application can be wrapped in a `TruGraph` recorder to provide logging
233
+ and evaluation upon the application's use.
234
+
235
+ Example: "Using the `TruGraph` recorder"
236
+
237
+ ```python
238
+ from trulens.apps.langgraph import TruGraph
239
+
240
+ # Wrap application
241
+ tru_recorder = TruGraph(
242
+ graph,
243
+ app_name="MultiAgentApp",
244
+ app_version="v1",
245
+ feedbacks=[f_context_relevance]
246
+ )
247
+
248
+ # Record application runs
249
+ with tru_recorder as recording:
250
+ result = graph.invoke({"messages": [("user", "What is the weather?")]})
251
+ ```
252
+
253
+ Args:
254
+ app: A LangGraph application (compiled StateGraph).
255
+
256
+ **kwargs: Additional arguments to pass to [App][trulens.core.app.App]
257
+ and [AppDefinition][trulens.core.schema.app.AppDefinition].
258
+ """
259
+
260
+ app: Union[CompiledStateGraph, Pregel, StateGraph]
261
+ """The langgraph app to be instrumented."""
262
+
263
+ root_callable: ClassVar[Optional[pyschema_utils.FunctionOrMethod]] = Field(
264
+ None
265
+ )
266
+ """The root callable of the wrapped app."""
267
+
268
+ def __init__(
269
+ self,
270
+ app: Union[CompiledStateGraph, Pregel, StateGraph],
271
+ main_method: Optional[Callable] = None,
272
+ **kwargs: Dict[str, Any],
273
+ ):
274
+ if not LANGGRAPH_AVAILABLE:
275
+ raise ImportError(
276
+ "LangGraph is not installed. Please install it with 'pip install langgraph' "
277
+ "to use TruGraph."
278
+ )
279
+
280
+ # For LangGraph apps, we need to check if it's a compiled graph
281
+ # compile if it's a StateGraph
282
+ if isinstance(app, StateGraph):
283
+ logger.warning(
284
+ "Received uncompiled StateGraph. Compiling it for instrumentation. "
285
+ "For better control, consider compiling the graph yourself before wrapping with TruGraph."
286
+ )
287
+ app = app.compile()
288
+
289
+ kwargs["app"] = app
290
+
291
+ if "connector" in kwargs:
292
+ TruSession(connector=kwargs["connector"])
293
+ else:
294
+ TruSession()
295
+
296
+ if main_method is not None:
297
+ kwargs["main_method"] = main_method
298
+ kwargs["root_class"] = pyschema_utils.Class.of_object(app)
299
+
300
+ # Create combined instrumentation for both LangChain and LangGraph
301
+ from trulens.apps.langchain.tru_chain import LangChainInstrument
302
+
303
+ class CombinedInstrument(core_instruments.Instrument):
304
+ def __init__(self, *args, **kwargs):
305
+ # Initialize with both LangChain and LangGraph settings
306
+ langchain_default = LangChainInstrument.Default
307
+ langgraph_default = LangGraphInstrument.Default
308
+
309
+ combined_modules = langchain_default.MODULES.union(
310
+ langgraph_default.MODULES
311
+ )
312
+ combined_classes = langchain_default.CLASSES().union(
313
+ langgraph_default.CLASSES()
314
+ )
315
+ combined_methods = (
316
+ langchain_default.METHODS + langgraph_default.METHODS
317
+ )
318
+
319
+ super().__init__(
320
+ include_modules=combined_modules,
321
+ include_classes=combined_classes,
322
+ include_methods=combined_methods,
323
+ *args,
324
+ **kwargs,
325
+ )
326
+
327
+ kwargs["instrument"] = CombinedInstrument(app=self)
328
+
329
+ # Call TruChain's parent (core_app.App) __init__ directly to avoid TruChain's specific initialization
330
+ core_app.App.__init__(self, **kwargs)
331
+
332
+ def main_input(
333
+ self, func: Callable, sig: Signature, bindings: BoundArguments
334
+ ) -> str:
335
+ """
336
+ Determine the main input string for the given function `func` with
337
+ signature `sig` if it is to be called with the given bindings
338
+ `bindings`.
339
+ """
340
+ # For LangGraph, the main input is typically the initial state
341
+ # which can be a dict with "messages" key or direct input
342
+ if "input" in bindings.arguments:
343
+ temp = bindings.arguments["input"]
344
+ if isinstance(temp, dict):
345
+ # For LangGraph, common patterns are:
346
+ # {"messages": [HumanMessage(content="...")]}
347
+ # or {"query": "..."}
348
+ if "messages" in temp:
349
+ messages = temp["messages"]
350
+ if isinstance(messages, list) and len(messages) > 0:
351
+ last_message = messages[-1]
352
+ if hasattr(last_message, "content"):
353
+ return last_message.content
354
+ elif (
355
+ isinstance(last_message, tuple)
356
+ and len(last_message) > 1
357
+ ):
358
+ return last_message[1] # (role, content) tuple
359
+ else:
360
+ return str(last_message)
361
+ elif "query" in temp:
362
+ return temp["query"]
363
+ else:
364
+ # Try to get any string-like value from the input dict
365
+ for _, value in temp.items():
366
+ if isinstance(value, str):
367
+ return value
368
+ return str(temp)
369
+ elif isinstance(temp, str):
370
+ return temp
371
+ else:
372
+ return str(temp)
373
+
374
+ # Fall back to TruChain's main_input method
375
+ return super().main_input(func, sig, bindings)
376
+
377
+ def main_output(
378
+ self, func: Callable, sig: Signature, bindings: BoundArguments, ret: Any
379
+ ) -> str:
380
+ """
381
+ Determine the main output string for the given function `func` with
382
+ signature `sig` after it is called with the given `bindings` and has
383
+ returned `ret`.
384
+ """
385
+ # For LangGraph, the output is typically the final state
386
+ # which can be a dict with "messages" key
387
+ if isinstance(ret, dict):
388
+ if "messages" in ret:
389
+ messages = ret["messages"]
390
+ if isinstance(messages, list) and len(messages) > 0:
391
+ last_message = messages[-1]
392
+ if hasattr(last_message, "content"):
393
+ return last_message.content
394
+ elif (
395
+ isinstance(last_message, tuple)
396
+ and len(last_message) > 1
397
+ ):
398
+ return last_message[1] # (role, content) tuple
399
+ else:
400
+ return str(last_message)
401
+ else:
402
+ # Try to get any string-like value from the output dict
403
+ for _, value in ret.items():
404
+ if isinstance(value, str):
405
+ return value
406
+ return str(ret)
407
+ elif isinstance(ret, str):
408
+ return ret
409
+ else:
410
+ # Fall back to TruChain's main_output method
411
+ return super().main_output(func, sig, bindings, ret)
412
+
413
+ def main_call(self, human: str):
414
+ """
415
+ A single text to a single text invocation of this app.
416
+ """
417
+ # Most LangGraph apps expect a dict with "messages" key
418
+ try:
419
+ # Try the common LangGraph pattern first
420
+ result = self.app.invoke({"messages": [("user", human)]})
421
+ return self._extract_output_from_result(result)
422
+ except Exception:
423
+ try:
424
+ result = self.app.invoke(human)
425
+ return self._extract_output_from_result(result)
426
+ except Exception:
427
+ return super().main_call(human)
428
+
429
+ async def main_acall(self, human: str):
430
+ """
431
+ A single text to a single text async invocation of this app.
432
+ """
433
+ try:
434
+ result = await self.app.ainvoke({"messages": [("user", human)]})
435
+ return self._extract_output_from_result(result)
436
+ except Exception:
437
+ try:
438
+ result = await self.app.ainvoke(human)
439
+ return self._extract_output_from_result(result)
440
+ except Exception:
441
+ return await super().main_acall(human)
442
+
443
+ def _extract_output_from_result(self, result: Any) -> str:
444
+ """
445
+ Helper method to extract string output from LangGraph result.
446
+ """
447
+ if isinstance(result, dict) and "messages" in result:
448
+ messages = result["messages"]
449
+ if isinstance(messages, list) and len(messages) > 0:
450
+ last_message = messages[-1]
451
+ if hasattr(last_message, "content"):
452
+ return last_message.content
453
+ elif isinstance(last_message, tuple) and len(last_message) > 1:
454
+ return last_message[1] # (role, content) tuple
455
+ else:
456
+ return str(last_message)
457
+ if isinstance(result, str):
458
+ return result
459
+ # Also covers dicts whose "messages" list is empty
460
+ return str(result)
461
+
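The result-extraction heuristic described above (dict with a `messages` list, objects with `.content`, `(role, content)` tuples, plain strings, `str()` fallback) can be exercised standalone. This mirror is a sketch for illustration; `extract_output` is not part of the package's public API:

```python
def extract_output(result):
    """Mirror of the LangGraph result-extraction heuristic."""
    if isinstance(result, dict) and "messages" in result:
        messages = result["messages"]
        if isinstance(messages, list) and messages:
            last = messages[-1]
            if hasattr(last, "content"):
                return last.content        # e.g. an AIMessage
            if isinstance(last, tuple) and len(last) > 1:
                return last[1]             # (role, content) tuple
            return str(last)
    if isinstance(result, str):
        return result
    return str(result)                     # fallback for anything else
```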
462
+
463
+ # Helper function to extract attributes from @task functions
464
+ def _extract_task_attributes(
465
+ func: Callable,
466
+ ret: Any,
467
+ exc: Exception,
468
+ *args,
469
+ ignore_args: Optional[set] = None,
470
+ extract_fields: Optional[Dict[str, str]] = None,
471
+ **kwargs,
472
+ ) -> Dict[str, Any]:
473
+ """
474
+ Extract attributes from @task function calls using intelligent heuristics.
475
+
476
+ This function automatically extracts relevant information from function arguments,
477
+ handling special cases for LLM models, Pydantic models, dataclasses, etc.
478
+ """
479
+ import dataclasses
480
+ import inspect
481
+ import json
482
+
483
+ if ignore_args is None:
484
+ ignore_args = set()
485
+ if extract_fields is None:
486
+ extract_fields = {}
487
+
488
+ try:
489
+ from langchain_core.language_models.chat_models import BaseChatModel
490
+ except ImportError:
491
+ BaseChatModel = type("BaseChatModel", (), {})
492
+
493
+ try:
494
+ from pydantic import BaseModel
495
+ except ImportError:
496
+ BaseModel = type("BaseModel", (), {}) # noqa: F841
497
+
498
+ attributes = {}
499
+
500
+ try:
501
+ # Get the BASE_SCOPE for attributes. TODO: should we allow users to override this?
502
+ try:
503
+ from trulens.otel.semconv.trace import BASE_SCOPE
504
+ except ImportError:
505
+ BASE_SCOPE = "trulens.task"
506
+
507
+ sig = inspect.signature(func)
508
+
509
+ # Merge args and kwargs to avoid duplicates
510
+ all_kwargs = {}
511
+ bound_args = sig.bind_partial(*args)
512
+ all_kwargs.update(bound_args.arguments)
513
+ all_kwargs.update(kwargs)
514
+ bound = sig.bind(**all_kwargs)
515
+ bound.apply_defaults()
516
+
517
+ for name, value in bound.arguments.items():
518
+ if name in ignore_args:
519
+ continue
520
+
521
+ # Skip LLM-related objects as they're not serializable
522
+ try:
523
+ if isinstance(value, BaseChatModel):
524
+ continue
525
+ except (TypeError, NameError):
526
+ # Handle case where types are mock objects
527
+ pass
528
+
529
+ # Extract only a specific field if specified
530
+ if name in extract_fields:
531
+ attr_path = extract_fields[name]
532
+ for attr in attr_path.split("."):
533
+ value = getattr(value, attr, None)
534
+ val = json.dumps(value, default=str, indent=2)
535
+ else:
536
+ # Handle different data types intelligently
537
+ if dataclasses.is_dataclass(value):
538
+ val = json.dumps(
539
+ dataclasses.asdict(value), default=str, indent=2
540
+ )
541
+ elif hasattr(value, "model_dump") or hasattr(value, "dict"):
542
+ # Handle Pydantic models (both v1 and v2)
543
+ try:
544
+ model_data = (
545
+ value.model_dump()
546
+ if hasattr(value, "model_dump")
547
+ else value.dict()
548
+ )
549
+ val = json.dumps(model_data, default=str, indent=2)
550
+ except (TypeError, AttributeError):
551
+ val = json.dumps(value, default=str, indent=2)
552
+ else:
553
+ val = json.dumps(value, default=str, indent=2)
554
+
555
+ attributes[f"{BASE_SCOPE}.{name}"] = val
556
+
557
+ # Add return value information
558
+ if dataclasses.is_dataclass(ret):
559
+ ret_val = dataclasses.asdict(ret)
560
+ elif hasattr(ret, "model_dump") or hasattr(ret, "dict"):
561
+ # Handle Pydantic models (both v1 and v2)
562
+ try:
563
+ ret_val = (
564
+ ret.model_dump()
565
+ if hasattr(ret, "model_dump")
566
+ else ret.dict()
567
+ )
568
+ except (TypeError, AttributeError):
569
+ ret_val = ret
570
+ else:
571
+ ret_val = ret
572
+
573
+ attributes[f"{BASE_SCOPE}.return"] = json.dumps(
574
+ ret_val, default=str, indent=2
575
+ )
576
+
577
+ attributes[f"{BASE_SCOPE}.exception"] = str(exc) if exc else ""
578
+
579
+ except Exception as e:
580
+ logger.exception(
581
+ f"Exception occurred during TruLens @task instrumentation: {e}"
582
+ )
583
+
584
+ return attributes
585
+
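The bind-and-serialize approach used above (`inspect.signature(...).bind` plus `json.dumps(..., default=str)`, with dataclasses expanded via `asdict`) can be shown standalone. The `State`/`research` types and the `demo.` scope prefix are arbitrary placeholders, not TruLens names:

```python
import dataclasses
import inspect
import json

@dataclasses.dataclass
class State:  # illustrative argument type
    query: str
    depth: int

def extract_attributes(func, *args, scope="demo", **kwargs):
    """Serialize a call's bound arguments into flat span-style attributes."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()  # include unsupplied defaults, as the code above does
    attributes = {}
    for name, value in bound.arguments.items():
        if dataclasses.is_dataclass(value):
            value = dataclasses.asdict(value)  # expand dataclasses first
        attributes[f"{scope}.{name}"] = json.dumps(value, default=str)
    return attributes

def research(state: State, limit: int = 3):
    return state.query

attrs = extract_attributes(research, State(query="weather", depth=1))
```

`default=str` keeps serialization from raising on arbitrary objects, which matches the defensive posture of the extraction code above.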
586
+
587
+ TruGraph.model_rebuild()