hivetrace 1.3.5__py3-none-any.whl → 1.3.7__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,929 @@
1
+ Metadata-Version: 2.1
2
+ Name: hivetrace
3
+ Version: 1.3.7
4
+ Summary: Hivetrace SDK for monitoring LLM applications
5
+ Home-page: http://hivetrace.ai
6
+ Author: Raft
7
+ Author-email: sales@raftds.com
8
+ Keywords: SDK,monitoring,logging,LLM,AI,Hivetrace
9
+ Classifier: License :: OSI Approved :: Apache Software License
10
+ Requires-Python: >=3.8
11
+ Description-Content-Type: text/markdown
12
+ License-File: LICENSE
13
+ Requires-Dist: httpx >=0.28.1
14
+ Requires-Dist: python-dotenv >=1.0.1
15
+ Provides-Extra: all
16
+ Requires-Dist: crewai >=0.95.0 ; extra == 'all'
17
+ Requires-Dist: httpx >=0.28.1 ; extra == 'all'
18
+ Requires-Dist: langchain-community ==0.3.18 ; extra == 'all'
19
+ Requires-Dist: langchain-openai ==0.2.5 ; extra == 'all'
20
+ Requires-Dist: langchain ==0.3.19 ; extra == 'all'
21
+ Requires-Dist: langchain-experimental ==0.3.4 ; extra == 'all'
22
+ Requires-Dist: openai-agents >=0.1.0 ; extra == 'all'
23
+ Requires-Dist: python-dotenv >=1.0.1 ; extra == 'all'
24
+ Provides-Extra: base
25
+ Requires-Dist: httpx >=0.28.1 ; extra == 'base'
26
+ Requires-Dist: python-dotenv >=1.0.1 ; extra == 'base'
27
+ Provides-Extra: crewai
28
+ Requires-Dist: crewai >=0.95.0 ; extra == 'crewai'
29
+ Provides-Extra: langchain
30
+ Requires-Dist: langchain-community ==0.3.18 ; extra == 'langchain'
31
+ Requires-Dist: langchain-openai ==0.2.5 ; extra == 'langchain'
32
+ Requires-Dist: langchain ==0.3.19 ; extra == 'langchain'
33
+ Requires-Dist: langchain-experimental ==0.3.4 ; extra == 'langchain'
34
+ Provides-Extra: openai_agents
35
+ Requires-Dist: openai-agents >=0.1.0 ; extra == 'openai_agents'
36
+
37
+ # Hivetrace SDK
38
+
39
+ ## Overview
40
+
41
+ The Hivetrace SDK lets you integrate with the Hivetrace service to monitor user prompts and LLM responses. It supports both synchronous and asynchronous workflows and can be configured via environment variables.
42
+
43
+ ---
44
+
45
+ ## Installation
46
+
47
+ Install from PyPI:
48
+
49
+ ```bash
50
+ pip install "hivetrace[base]"
51
+ ```
52
+
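+ The package also declares optional extras for the framework integrations documented below (see the `Provides-Extra` entries in the metadata above). They can be installed the same way; for example (brackets quoted so the shell does not expand them):
+
+ ```bash
+ # Pick the extra you need; "all" pulls in every integration dependency
+ pip install "hivetrace[crewai]"
+ pip install "hivetrace[langchain]"
+ pip install "hivetrace[openai_agents]"
+ pip install "hivetrace[all]"
+ ```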
53
+ ---
54
+
55
+ ## Quick Start
56
+
57
+ ```python
58
+ from hivetrace import SyncHivetraceSDK, AsyncHivetraceSDK
59
+ ```
60
+
61
+ You can use either the synchronous client (`SyncHivetraceSDK`) or the asynchronous client (`AsyncHivetraceSDK`). Choose the one that fits your runtime.
62
+
63
+ ---
64
+
65
+ ## Synchronous Client
66
+
67
+ ### Initialize (Sync)
68
+
69
+ ```python
70
+ # The sync client reads configuration from environment variables or accepts an explicit config
71
+ client = SyncHivetraceSDK()
72
+ ```
73
+
74
+ ### Send a user prompt (input)
75
+
76
+ ```python
77
+ response = client.input(
78
+ application_id="your-application-id", # Obtained after registering the application in the UI
79
+ message="User prompt here",
80
+ )
81
+ ```
82
+
83
+ ### Send an LLM response (output)
84
+
85
+ ```python
86
+ response = client.output(
87
+ application_id="your-application-id",
88
+ message="LLM response here",
89
+ )
90
+ ```
91
+
92
+ ---
93
+
94
+ ## Asynchronous Client
95
+
96
+ ### Initialize (Async)
97
+
98
+ ```python
99
+ # The async client reads the same configuration and can also be used as an async context manager (see below)
100
+ client = AsyncHivetraceSDK()
101
+ ```
102
+
103
+ ### Send a user prompt (input)
104
+
105
+ ```python
106
+ response = await client.input(
107
+ application_id="your-application-id",
108
+ message="User prompt here",
109
+ )
110
+ ```
111
+
112
+ ### Send an LLM response (output)
113
+
114
+ ```python
115
+ response = await client.output(
116
+ application_id="your-application-id",
117
+ message="LLM response here",
118
+ )
119
+ ```
120
+
121
+ ---
122
+
123
+ ## Example with Additional Parameters
124
+
125
+ ```python
126
+ response = client.input(
127
+ application_id="your-application-id",
128
+ message="User prompt here",
129
+ additional_parameters={
130
+ "session_id": "your-session-id",
131
+ "user_id": "your-user-id",
132
+ "agents": {
133
+ "agent-1-id": {"name": "Agent 1", "description": "Agent description"},
134
+ "agent-2-id": {"name": "Agent 2"},
135
+ "agent-3-id": {}
136
+ }
137
+ }
138
+ )
139
+ ```
140
+
141
+ > **Note:** `session_id`, `user_id`, and all agent IDs must be valid UUIDs.
142
+
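+ If you do not already have stable identifiers, Python's standard `uuid` module is a minimal way to produce valid values:
+
+ ```python
+ import uuid
+
+ # uuid.uuid4() returns a random RFC 4122 UUID; str() gives the string form expected above
+ session_id = str(uuid.uuid4())
+ user_id = str(uuid.uuid4())
+ ```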
143
+ ---
144
+
145
+ ## API
146
+
147
+ ### `input`
148
+
149
+ ```python
150
+ # Sync
151
+ def input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
152
+
153
+ # Async
154
+ async def input(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
155
+ ```
156
+
157
+ Sends a **user prompt** to Hivetrace.
158
+
159
+ * `application_id` — Application identifier (must be a valid UUID, created in the UI)
160
+ * `message` — The user prompt
161
+ * `additional_parameters` — Optional dictionary with extra context (session, user, agents, etc.)
162
+
163
+ **Response example:**
164
+
165
+ ```json
166
+ {
167
+ "status": "processed",
168
+ "monitoring_result": {
169
+ "is_toxic": false,
170
+ "type_of_violation": "benign",
171
+ "token_count": 9,
172
+ "token_usage_warning": false,
173
+ "token_usage_unbounded": false
174
+ }
175
+ }
176
+ ```
177
+
178
+ ---
179
+
180
+ ### `output`
181
+
182
+ ```python
183
+ # Sync
184
+ def output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
185
+
186
+ # Async
187
+ async def output(application_id: str, message: str, additional_parameters: dict | None = None) -> dict: ...
188
+ ```
189
+
190
+ Sends an **LLM response** to Hivetrace.
191
+
192
+ * `application_id` — Application identifier (must be a valid UUID, created in the UI)
193
+ * `message` — The LLM response
194
+ * `additional_parameters` — Optional dictionary with extra context (session, user, agents, etc.)
195
+
196
+ **Response example:**
197
+
198
+ ```json
199
+ {
200
+ "status": "processed",
201
+ "monitoring_result": {
202
+ "is_toxic": false,
203
+ "type_of_violation": "safe",
204
+ "token_count": 21,
205
+ "token_usage_warning": false,
206
+ "token_usage_unbounded": false
207
+ }
208
+ }
209
+ ```
210
+
211
+ ---
212
+
213
+ ## Sending Requests in Sync Mode
214
+
215
+ ```python
216
+ def main():
217
+ # option 1: context manager
218
+ with SyncHivetraceSDK() as client:
219
+ response = client.input(
220
+ application_id="your-application-id",
221
+ message="User prompt here",
222
+ )
223
+
224
+ # option 2: manual close
225
+ client = SyncHivetraceSDK()
226
+ try:
227
+ response = client.input(
228
+ application_id="your-application-id",
229
+ message="User prompt here",
230
+ )
231
+ finally:
232
+ client.close()
233
+
234
+ main()
235
+ ```
236
+
237
+ ---
238
+
239
+ ## Sending Requests in Async Mode
240
+
241
+ ```python
242
+ import asyncio
243
+
244
+ async def main():
245
+ # option 1: context manager
246
+ async with AsyncHivetraceSDK() as client:
247
+ response = await client.input(
248
+ application_id="your-application-id",
249
+ message="User prompt here",
250
+ )
251
+
252
+ # option 2: manual close
253
+ client = AsyncHivetraceSDK()
254
+ try:
255
+ response = await client.input(
256
+ application_id="your-application-id",
257
+ message="User prompt here",
258
+ )
259
+ finally:
260
+ await client.close()
261
+
262
+ asyncio.run(main())
263
+ ```
264
+
265
+ ### Closing the Async Client
266
+
267
+ ```python
268
+ await client.close()
269
+ ```
270
+
271
+ ---
272
+
273
+ ## Configuration
274
+
275
+ The SDK reads configuration from environment variables:
276
+
277
+ * `HIVETRACE_URL` — Base URL of your Hivetrace instance.
278
+ * `HIVETRACE_ACCESS_TOKEN` — API token used for authentication.
279
+
280
+ These are loaded automatically when you create a client.
281
+
282
+
283
+ ### Configuration Sources
284
+
285
+ Hivetrace SDK can retrieve configuration from the following sources:
286
+
287
+ **.env File:**
288
+
289
+ ```bash
290
+ HIVETRACE_URL=https://your-hivetrace-instance.com
291
+ HIVETRACE_ACCESS_TOKEN=your-access-token # obtained in the UI (API Tokens page)
292
+ ```
293
+
294
+ The SDK will automatically load these settings.
295
+
296
+ You can also pass a config dict explicitly when creating a client instance.
297
+ ```python
298
+ client = SyncHivetraceSDK(
299
+ config={
300
+ "HIVETRACE_URL": HIVETRACE_URL,
301
+ "HIVETRACE_ACCESS_TOKEN": HIVETRACE_ACCESS_TOKEN,
302
+ },
303
+ )
304
+ ```
305
+
306
+ ## Environment Variables
307
+
308
+ Set up your environment variables for easier configuration:
309
+
310
+ ```bash
311
+ # .env file
312
+ HIVETRACE_URL=https://your-hivetrace-instance.com
313
+ HIVETRACE_ACCESS_TOKEN=your-access-token
314
+ HIVETRACE_APP_ID=your-application-id
315
+ ```
316
+
317
+ # CrewAI Integration
318
+
319
+ **Demo repository**
320
+
321
+ [https://github.com/anntish/multiagents-crew-forge](https://github.com/anntish/multiagents-crew-forge)
322
+
323
+ ## Step 1: Install the dependency
324
+
325
+ **What to do:** Add the HiveTrace SDK to your project
326
+
327
+ **Where:** In `requirements.txt` or via pip
328
+
329
+ ```bash
330
+ # Via pip (for quick testing)
331
+ pip install "hivetrace[crewai]>=1.3.5"
332
+
333
+ # Or add to requirements.txt (recommended)
334
+ echo "hivetrace[crewai]>=1.3.3" >> requirements.txt
335
+ pip install -r requirements.txt
336
+ ```
337
+
338
+ **Why:** The HiveTrace SDK provides decorators and clients for sending agent activity data to the monitoring platform.
339
+
340
+ ---
341
+
342
+ ## Step 2: **ADD** unique IDs for each agent
343
+
344
+ **Example:** In `src/config.py`
345
+
346
+ ```python
347
+ PLANNER_ID = "333e4567-e89b-12d3-a456-426614174001"
348
+ WRITER_ID = "444e4567-e89b-12d3-a456-426614174002"
349
+ EDITOR_ID = "555e4567-e89b-12d3-a456-426614174003"
350
+ ```
351
+
352
+ **Why agents need IDs:** HiveTrace tracks each agent individually. A UUID ensures the agent can be uniquely identified in the monitoring system.
353
+
354
+ ---
355
+
356
+ ## Step 3: Create an agent mapping
357
+
358
+ **What to do:** Map agent roles to their HiveTrace IDs
359
+
360
+ **Example:** In `src/agents.py` (where your agents are defined)
361
+
362
+ ```python
363
+ from crewai import Agent
364
+ # ADD: import agent IDs
365
+ from src.config import EDITOR_ID, PLANNER_ID, WRITER_ID
366
+
367
+ # ADD: mapping for HiveTrace (REQUIRED!)
368
+ agent_id_mapping = {
369
+ "Content Planner": { # ← Exactly the same as Agent(role="Content Planner")
370
+ "id": PLANNER_ID,
371
+ "description": "Creates content plans"
372
+ },
373
+ "Content Writer": { # ← Exactly the same as Agent(role="Content Writer")
374
+ "id": WRITER_ID,
375
+ "description": "Writes high-quality articles"
376
+ },
377
+ "Editor": { # ← Exactly the same as Agent(role="Editor")
378
+ "id": EDITOR_ID,
379
+ "description": "Edits and improves articles"
380
+ },
381
+ }
382
+
383
+ # Your existing agents (NO CHANGES)
384
+ planner = Agent(
385
+ role="Content Planner", # ← Must match key in agent_id_mapping
386
+ goal="Create a structured content plan for the given topic",
387
+ backstory="You are an experienced analyst...",
388
+ verbose=True,
389
+ )
390
+
391
+ writer = Agent(
392
+ role="Content Writer", # ← Must match key in agent_id_mapping
393
+ goal="Write an informative and engaging article",
394
+ backstory="You are a talented writer...",
395
+ verbose=True,
396
+ )
397
+
398
+ editor = Agent(
399
+ role="Editor", # ← Must match key in agent_id_mapping
400
+ goal="Improve the article",
401
+ backstory="You are an experienced editor...",
402
+ verbose=True,
403
+ )
404
+ ```
405
+
406
+ **Important:** The keys in `agent_id_mapping` must **exactly** match the `role` of your agents. Otherwise, HiveTrace will not be able to associate activity with the correct agent.
407
+
408
+ ---
409
+
410
+ ## Step 4: Integrate with tools (if used)
411
+
412
+ **What to do:** Add HiveTrace support to tools
413
+
414
+ **Example:** In `src/tools.py`
415
+
416
+ ```python
417
+ from crewai.tools import BaseTool
418
+ from typing import Optional
419
+
420
+ class WordCountTool(BaseTool):
421
+ name: str = "WordCountTool"
422
+ description: str = "Count the number of words in a text"
423
+ # ADD: HiveTrace field (REQUIRED!)
424
+ agent_id: Optional[str] = None
425
+
426
+ def _run(self, text: str) -> str:
427
+ word_count = len(text.split())
428
+ return f"Word count: {word_count}"
429
+ ```
430
+
431
+ **Example:** In `src/agents.py`
432
+
433
+ ```python
434
+ from src.tools import WordCountTool
435
+ from src.config import PLANNER_ID, WRITER_ID, EDITOR_ID
436
+
437
+ # ADD: create tools for each agent
438
+ planner_tools = [WordCountTool()]
439
+ writer_tools = [WordCountTool()]
440
+ editor_tools = [WordCountTool()]
441
+
442
+ # ADD: assign tools to agents
443
+ for tool in planner_tools:
444
+ tool.agent_id = PLANNER_ID
445
+
446
+ for tool in writer_tools:
447
+ tool.agent_id = WRITER_ID
448
+
449
+ for tool in editor_tools:
450
+ tool.agent_id = EDITOR_ID
451
+
452
+ # Use tools in agents
453
+ planner = Agent(
454
+ role="Content Planner",
455
+ tools=planner_tools, # ← Agent-specific tools
456
+ # ... other parameters
457
+ )
458
+ ```
459
+
460
+ **Why:** HiveTrace tracks tool usage. The `agent_id` field in the tool class and its assignment let HiveTrace know which agent used which tool.
461
+
462
+ ---
463
+
464
+ ## Step 5: Initialize HiveTrace in FastAPI (if used)
465
+
466
+ **What to do:** Add the HiveTrace client to the application lifecycle
467
+
468
+ **Example:** In `main.py`
469
+
470
+ ```python
471
+ from contextlib import asynccontextmanager
472
+ from fastapi import FastAPI
473
+ # ADD: import HiveTrace SDK
474
+ from hivetrace import SyncHivetraceSDK
475
+ from src.config import HIVETRACE_ACCESS_TOKEN, HIVETRACE_URL
476
+
477
+ @asynccontextmanager
478
+ async def lifespan(app: FastAPI):
479
+ # ADD: initialize HiveTrace client
480
+ hivetrace = SyncHivetraceSDK(
481
+ config={
482
+ "HIVETRACE_URL": HIVETRACE_URL,
483
+ "HIVETRACE_ACCESS_TOKEN": HIVETRACE_ACCESS_TOKEN,
484
+ }
485
+ )
486
+ # Store client in app state
487
+ app.state.hivetrace = hivetrace
488
+ try:
489
+ yield
490
+ finally:
491
+ # IMPORTANT: close connection on shutdown
492
+ hivetrace.close()
493
+
494
+ app = FastAPI(lifespan=lifespan)
495
+ ```
496
+
497
+ ---
498
+
499
+ ## Step 6: Integrate into business logic
500
+
501
+ **What to do:** Wrap Crew creation with the HiveTrace decorator
502
+
503
+ **Example:** In `src/services/topic_service.py`
504
+
505
+ ```python
506
+ import uuid
507
+ from typing import Optional
508
+ from crewai import Crew
509
+ # ADD: HiveTrace imports
510
+ from hivetrace import SyncHivetraceSDK
511
+ from hivetrace import crewai_trace as trace
512
+
513
+ from src.agents import agent_id_mapping, planner, writer, editor
514
+ from src.tasks import plan_task, write_task, edit_task
515
+ from src.config import HIVETRACE_APP_ID
516
+
517
+ def process_topic(
518
+ topic: str,
519
+ hivetrace: SyncHivetraceSDK, # ← ADD parameter
520
+ user_id: Optional[str] = None,
521
+ session_id: Optional[str] = None,
522
+ ):
523
+ # ADD: generate unique conversation ID
524
+ agent_conversation_id = str(uuid.uuid4())
525
+
526
+ # ADD: common trace parameters
527
+ common_params = {
528
+ "agent_conversation_id": agent_conversation_id,
529
+ "user_id": user_id,
530
+ "session_id": session_id,
531
+ }
532
+
533
+ # ADD: log user request
534
+ hivetrace.input(
535
+ application_id=HIVETRACE_APP_ID,
536
+ message=f"Requesting information from agents on topic: {topic}",
537
+ additional_parameters={
538
+ **common_params,
539
+ "agents": agent_id_mapping, # ← pass agent mapping
540
+ },
541
+ )
542
+
543
+ # ADD: @trace decorator for monitoring Crew
544
+ @trace(
545
+ hivetrace=hivetrace,
546
+ application_id=HIVETRACE_APP_ID,
547
+ agent_id_mapping=agent_id_mapping, # ← REQUIRED!
548
+ )
549
+ def create_crew():
550
+ return Crew(
551
+ agents=[planner, writer, editor],
552
+ tasks=[plan_task, write_task, edit_task],
553
+ verbose=True,
554
+ )
555
+
556
+ # Execute with monitoring
557
+ crew = create_crew()
558
+ result = crew.kickoff(
559
+ inputs={"topic": topic},
560
+ **common_params # ← pass common parameters
561
+ )
562
+
563
+ return {
564
+ "result": result.raw,
565
+ "execution_details": {**common_params, "status": "completed"},
566
+ }
567
+ ```
568
+
569
+ **How it works:**
570
+
571
+ 1. **`agent_conversation_id`** — unique ID for grouping all actions under a single request
572
+ 2. **`hivetrace.input()`** — sends the user’s request to HiveTrace for inspection
573
+ 3. **`@trace`**:
574
+
575
+ * Intercepts all agent actions inside the Crew
576
+ * Sends data about each step to HiveTrace
577
+ * Associates actions with specific agents via `agent_id_mapping`
578
+ 4. **`**common_params`** — passes metadata into `crew.kickoff()` so all events are linked
579
+
580
+ **Critical:** The `@trace` decorator must be applied to the function that creates and returns the `Crew`, **not** the function that calls `kickoff()`.
581
+
582
+ ---
583
+
584
+ ## Step 7: Update FastAPI endpoints (if used)
585
+
586
+ **What to do:** Pass the HiveTrace client to the business logic
587
+
588
+ **Example:** In `src/routers/topic_router.py`
589
+
590
+ ```python
591
+ from fastapi import APIRouter, Body, Request
592
+ # ADD: import HiveTrace type
593
+ from hivetrace import SyncHivetraceSDK
594
+
595
+ from src.services.topic_service import process_topic
596
+ from src.config import SESSION_ID, USER_ID
597
+
598
+ router = APIRouter(prefix="/api")
599
+
600
+ @router.post("/process-topic")
601
+ async def api_process_topic(request: Request, request_body: dict = Body(...)):
602
+ # ADD: get HiveTrace client from app state
603
+ hivetrace: SyncHivetraceSDK = request.app.state.hivetrace
604
+
605
+ return process_topic(
606
+ topic=request_body["topic"],
607
+ hivetrace=hivetrace, # ← pass client
608
+ user_id=USER_ID,
609
+ session_id=SESSION_ID,
610
+ )
611
+ ```
612
+
613
+ **Why:** The API endpoint must pass the HiveTrace client to the business logic so monitoring data can be sent.
614
+
615
+ ---
616
+
617
+ ## 🚨 Common mistakes
618
+
619
+ 1. **Role mismatch** — make sure keys in `agent_id_mapping` exactly match `role` in agents
620
+ 2. **Missing `agent_id_mapping`** — the `@trace` decorator must receive the mapping
621
+ 3. **Decorator on wrong function** — `@trace` must be applied to the Crew creation function, not `kickoff`
622
+ 4. **Client not closed** — remember to call `hivetrace.close()` in the lifespan
623
+ 5. **Invalid credentials** — check your HiveTrace environment variables
624
+
625
+
626
+ # LangChain Integration
627
+
628
+ **Demo repository**
629
+
630
+ [https://github.com/anntish/multiagents-langchain-forge](https://github.com/anntish/multiagents-langchain-forge)
631
+
632
+ This project implements monitoring of a multi-agent system in LangChain via the HiveTrace SDK.
633
+
634
+ ### Step 1. Install Dependencies
635
+
636
+ ```bash
637
+ pip install "hivetrace[langchain]>=1.3.5"
638
+ # optional: add to requirements.txt and install
639
+ echo "hivetrace[langchain]>=1.3.3" >> requirements.txt
640
+ pip install -r requirements.txt
641
+ ```
642
+
643
+ What the package provides: SDK clients (sync/async), a universal callback for LangChain agents, and ready-to-use calls for sending inputs/logs/outputs to HiveTrace.
644
+
645
+ ### Step 2. Configure Environment Variables
646
+
647
+ * `HIVETRACE_URL`: HiveTrace address
648
+ * `HIVETRACE_ACCESS_TOKEN`: HiveTrace access token
649
+ * `HIVETRACE_APP_ID`: your application ID in HiveTrace
650
+ * `OPENAI_API_KEY`: key for the LLM provider (example with OpenAI)
651
+ * Additionally: `OPENAI_MODEL`, `USER_ID`, `SESSION_ID` (a sample `.env` is sketched below)
652
+
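+ A minimal `.env` covering these variables might look like this (all values are placeholders):
+
+ ```bash
+ HIVETRACE_URL=https://your-hivetrace-instance.com
+ HIVETRACE_ACCESS_TOKEN=your-access-token
+ HIVETRACE_APP_ID=your-application-id
+ OPENAI_API_KEY=sk-...
+ OPENAI_MODEL=gpt-4o-mini
+ USER_ID=your-user-id
+ SESSION_ID=your-session-id
+ ```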
653
+ ### Step 3. Assign Fixed UUIDs to Your Agents
654
+
655
+ Create a dictionary of fixed UUIDs for all "agent nodes" (e.g., orchestrator, specialized agents). This ensures unambiguous identification in tracing.
656
+
657
+ Example: file `src/core/constants.py`:
658
+
659
+ ```python
660
+ PREDEFINED_AGENT_IDS = {
661
+ "MainHub": "111e1111-e89b-12d3-a456-426614174099",
662
+ "text_agent": "222e2222-e89b-12d3-a456-426614174099",
663
+ "math_agent": "333e3333-e89b-12d3-a456-426614174099",
664
+ "pre_text_agent": "444e4444-e89b-12d3-a456-426614174099",
665
+ "pre_math_agent": "555e5555-e89b-12d3-a456-426614174099",
666
+ }
667
+ ```
668
+
669
+ Tip: dictionary keys must match the actual node names appearing in logs (`tool`/agent name in LangChain calls).
670
+
671
+ ### Step 4. Attach the Callback to Executors and Tools
672
+
673
+ Create and use `AgentLoggingCallback` — it should be passed:
674
+
675
+ * as a callback in `AgentExecutor` (orchestrator), and
676
+ * as `callback_handler` in your tools/agent wrappers (`BaseTool`).
677
+
678
+ Example: file `src/core/orchestrator.py` (fragment):
679
+
680
+ ```python
681
+ from hivetrace.adapters.langchain import AgentLoggingCallback
682
+ from langchain.agents import AgentExecutor, create_openai_tools_agent
683
+ from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
684
+
685
+ class OrchestratorAgent:
686
+ def __init__(self, llm, predefined_agent_ids=None):
687
+ self.llm = llm
688
+ self.logging_callback = AgentLoggingCallback(
689
+ default_root_name="MainHub",
690
+ predefined_agent_ids=predefined_agent_ids,
691
+ )
692
+ # Example: wrapper agents as tools
693
+ # MathAgentTool/TextAgentTool internally pass self.logging_callback further
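+ # NOTE: self.tools (the list of these wrapper tools) is assumed to be assembled elsewhere; it is omitted in this fragment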
694
+ agent = create_openai_tools_agent(self.llm, self.tools, ChatPromptTemplate.from_messages([
695
+ ("system", "You are the orchestrator agent of a multi-agent system."),
696
+ MessagesPlaceholder(variable_name="chat_history", optional=True),
697
+ ("human", "{input}"),
698
+ MessagesPlaceholder(variable_name="agent_scratchpad"),
699
+ ]))
700
+ self.executor = AgentExecutor(
701
+ agent=agent,
702
+ tools=self.tools,
703
+ verbose=True,
704
+ callbacks=[self.logging_callback],
705
+ )
706
+ ```
707
+
708
+ Important: all nested agents/tools that create their own `AgentExecutor` or inherit from `BaseTool` must also receive this `callback_handler` so their steps are included in tracing.
709
+
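+ A minimal sketch of such a wrapper, assuming a hypothetical `TextAgentTool` that holds its own inner `AgentExecutor` (the class name and fields here are illustrative, not part of the SDK):
+
+ ```python
+ from typing import Any, Optional
+
+ from langchain_core.tools import BaseTool
+
+ class TextAgentTool(BaseTool):
+     name: str = "text_agent"  # must match the corresponding key in PREDEFINED_AGENT_IDS
+     description: str = "Delegates text-related requests to the text agent"
+     executor: Any = None                    # inner AgentExecutor, built elsewhere
+     callback_handler: Optional[Any] = None  # the shared AgentLoggingCallback
+
+     def _run(self, query: str) -> str:
+         # Forward the shared callback so the inner agent's steps also show up in the trace
+         config = {"callbacks": [self.callback_handler]} if self.callback_handler else None
+         return self.executor.invoke({"input": query}, config=config)["output"]
+ ```
+
+ In the orchestrator, such a wrapper would then be constructed with the shared callback, e.g. `TextAgentTool(executor=..., callback_handler=self.logging_callback)`.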
710
+ ### Step 5. One-Line Integration in a Business Method
711
+
712
+ Use the `run_with_tracing` helper from `hivetrace/adapters/langchain/api.py`. It:
713
+
714
+ * logs the input with agent mapping and metadata;
715
+ * calls your orchestrator;
716
+ * collects and sends accumulated logs/final answer.
717
+
718
+ Minimal example (script):
719
+
720
+ ```python
721
+ import os, uuid
722
+ from langchain_openai import ChatOpenAI
723
+ from src.core.orchestrator import OrchestratorAgent
724
+ from src.core.constants import PREDEFINED_AGENT_IDS
725
+ from hivetrace.adapters.langchain import run_with_tracing
726
+
727
+ llm = ChatOpenAI(model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"), temperature=0.2, streaming=False)
728
+ orchestrator = OrchestratorAgent(llm, predefined_agent_ids=PREDEFINED_AGENT_IDS)
729
+
730
+ result = run_with_tracing(
731
+ orchestrator=orchestrator,
732
+ query="Format this text and count the number of words",
733
+ application_id=os.getenv("HIVETRACE_APP_ID"),
734
+ user_id=os.getenv("USER_ID"),
735
+ session_id=os.getenv("SESSION_ID"),
736
+ conversation_id=str(uuid.uuid4()),
737
+ )
738
+ print(result)
739
+ ```
740
+
741
+ FastAPI variant (handler fragment):
742
+
743
+ ```python
744
+ from fastapi import APIRouter, Request
745
+ from hivetrace.adapters.langchain import run_with_tracing
746
+ import uuid
747
+
748
+ router = APIRouter()
749
+
750
+ @router.post("/query")
751
+ async def process_query(payload: dict, request: Request):
752
+ orchestrator = request.app.state.orchestrator
753
+ conv_id = str(uuid.uuid4()) # always create a new agent_conversation_id for each request to group agent work for the same question
754
+ result = run_with_tracing(
755
+ orchestrator=orchestrator,
756
+ query=payload["query"],
757
+ application_id=request.app.state.HIVETRACE_APP_ID,
758
+ user_id=request.app.state.USER_ID,
759
+ session_id=request.app.state.SESSION_ID,
760
+ conversation_id=conv_id,
761
+ )
762
+ return {"status": "success", "result": result}
763
+ ```
764
+
765
+ ### Step 6. Reusing the HiveTrace Client (Optional)
766
+
767
+ Helpers automatically create a short-lived client if none is provided. If you want to reuse a client — create it once during the application's lifecycle and pass it to helpers.
768
+
769
+ FastAPI (lifespan):
770
+
771
+ ```python
772
+ from contextlib import asynccontextmanager
773
+ from fastapi import FastAPI
774
+ from hivetrace import SyncHivetraceSDK
775
+
776
+ @asynccontextmanager
777
+ async def lifespan(app: FastAPI):
778
+ hivetrace = SyncHivetraceSDK()
779
+ app.state.hivetrace = hivetrace
780
+ try:
781
+ yield
782
+ finally:
783
+ hivetrace.close()
784
+
785
+ app = FastAPI(lifespan=lifespan)
786
+ ```
787
+
788
+ Then:
789
+
790
+ ```python
791
+ result = run_with_tracing(
792
+ orchestrator=orchestrator,
793
+ query=payload.query,
794
+ hivetrace=request.app.state.hivetrace, # pass your own client
795
+ application_id=request.app.state.HIVETRACE_APP_ID,
796
+ )
797
+ ```
798
+
799
+ ### How Logs Look in HiveTrace
800
+
801
+ * **Agent nodes**: orchestrator nodes and specialized "agent wrappers" (`text_agent`, `math_agent`, etc.).
802
+ * **Actual tools**: low-level tools (e.g., `text_analyzer`, `text_formatter`) are logged on start/end events.
803
+ * **Service records**: automatically added `return_result` (returning result to parent) and `final_answer` (final answer of the root node) steps.
804
+
805
+ This gives a clear call graph with data flow direction and the final answer.
806
+
807
+ ### Common Mistakes and How to Avoid Them
808
+
809
+ * **Name mismatch**: key in `PREDEFINED_AGENT_IDS` must match the node/tool name in logs.
810
+ * **No agent mapping**: either pass `agents_mapping` to `run_with_tracing` or define `predefined_agent_ids` in `AgentLoggingCallback` — the SDK will build the mapping automatically.
811
+ * **Callback not attached**: add `AgentLoggingCallback` to all `AgentExecutor` and `BaseTool` wrappers via the `callback_handler` parameter.
812
+ * **Client not closed**: use lifespan/context manager for `SyncHivetraceSDK`.
813
+
814
+
815
+ # OpenAI Agents Integration
816
+
817
+ **Demo repository**
818
+
819
+ [https://github.com/anntish/openai-agents-forge](https://github.com/anntish/openai-agents-forge)
820
+
821
+ ### 1. Installation
822
+
823
+ ```bash
824
+ pip install "hivetrace[openai_agents]==1.3.5"
825
+ ```
826
+
827
+ ---
828
+
829
+ ### 2. Environment Setup
830
+
831
+ Set the environment variables (via `.env` or export):
832
+
833
+ ```bash
834
+ HIVETRACE_URL=http://localhost:8000 # Your HiveTrace URL
835
+ HIVETRACE_ACCESS_TOKEN=ht_... # Your HiveTrace access token
836
+ HIVETRACE_APPLICATION_ID=00000000-...-0000 # Your HiveTrace application ID
837
+
838
+ SESSION_ID=
839
+ USER_ID=
840
+
841
+ OPENAI_API_KEY=
842
+ OPENAI_BASE_URL=https://api.openai.com/v1
843
+ OPENAI_MODEL=gpt-4o-mini
844
+ ```
845
+
846
+ ---
847
+
848
+ ### 3. Attach the Trace Processor in Code
849
+
850
+ Add 3 lines before creating/using your agents:
851
+
852
+ ```python
853
+ from agents import set_trace_processors
854
+ from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor
855
+
856
+ set_trace_processors([
857
+ HivetraceOpenAIAgentProcessor() # will take config from env
858
+ ])
859
+ ```
860
+
861
+ Alternative (explicit configuration if you don’t want to rely on env):
862
+
863
+ ```python
864
+ from agents import set_trace_processors
865
+ from hivetrace import SyncHivetraceSDK
866
+ from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor
867
+
868
+ hivetrace = SyncHivetraceSDK(config={
869
+ "HIVETRACE_URL": "http://localhost:8000",
870
+ "HIVETRACE_ACCESS_TOKEN": "ht_...",
871
+ })
872
+
873
+ set_trace_processors([
874
+ HivetraceOpenAIAgentProcessor(
875
+ application_id="00000000-0000-0000-0000-000000000000",
876
+ hivetrace_instance=hivetrace,
877
+ )
878
+ ])
879
+ ```
880
+
881
+ Important:
882
+
883
+ * Register the processor only once at app startup.
884
+ * Attach it before the first agent run (`Runner.run(...)` / `Runner.run_sync(...)`).
885
+
886
+ ---
887
+
888
+ ### 4. Minimal "Before/After" Example
889
+
890
+ Before:
891
+
892
+ ```python
893
+ from agents import Agent, Runner
894
+
895
+ assistant = Agent(name="Assistant", instructions="Be helpful.")
896
+ print(Runner.run_sync(assistant, "Hi!"))
897
+ ```
898
+
899
+ After (with HiveTrace monitoring):
900
+
901
+ ```python
902
+ from agents import Agent, Runner, set_trace_processors
903
+ from hivetrace.adapters.openai_agents.tracing import HivetraceOpenAIAgentProcessor
904
+
905
+ set_trace_processors([HivetraceOpenAIAgentProcessor()])
906
+
907
+ assistant = Agent(name="Assistant", instructions="Be helpful.")
908
+ print(Runner.run_sync(assistant, "Hi!"))
909
+ ```
910
+
911
+ From this moment, all agent calls, handoffs, and tool invocations will be logged in HiveTrace.
912
+
913
+ ---
914
+
915
+ ### 5. Tool Tracing
916
+
917
+ If you use tools, decorate them with `@function_tool` so their calls are automatically traced:
918
+
919
+ ```python
920
+ from agents import function_tool
921
+
922
+ @function_tool(description_override="Adds two numbers")
923
+ def calculate_sum(a: int, b: int) -> int:
924
+ return a + b
925
+ ```
926
+
927
+ Add this tool to your agent’s `tools=[...]` — and its calls will appear in HiveTrace with inputs/outputs.
928
+
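+ For example, attaching it to an agent (a minimal sketch; assumes the HiveTrace trace processor from section 3 has already been registered):
+
+ ```python
+ from agents import Agent, Runner, function_tool
+
+ @function_tool(description_override="Adds two numbers")
+ def calculate_sum(a: int, b: int) -> int:
+     return a + b
+
+ assistant = Agent(
+     name="Assistant",
+     instructions="Be helpful.",
+     tools=[calculate_sum],  # the tool call and its inputs/outputs are traced with the run
+ )
+ print(Runner.run_sync(assistant, "What is 2 + 3?"))
+ ```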
929
+ ---