openjck 0.3.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- openjck-0.3.0/PKG-INFO +593 -0
- openjck-0.3.0/README.md +567 -0
- openjck-0.3.0/openjck/__init__.py +16 -0
- openjck-0.3.0/openjck/cli.py +9 -0
- openjck-0.3.0/openjck/client.py +58 -0
- openjck-0.3.0/openjck/collector.py +176 -0
- openjck-0.3.0/openjck/decorators.py +319 -0
- openjck-0.3.0/openjck/integrations/__init__.py +3 -0
- openjck-0.3.0/openjck/integrations/langchain.py +216 -0
- openjck-0.3.0/openjck/intelligence.py +225 -0
- openjck-0.3.0/openjck/server.py +99 -0
- openjck-0.3.0/openjck/storage.py +209 -0
- openjck-0.3.0/openjck/ui/index.html +1193 -0
- openjck-0.3.0/openjck.egg-info/PKG-INFO +593 -0
- openjck-0.3.0/openjck.egg-info/SOURCES.txt +20 -0
- openjck-0.3.0/openjck.egg-info/dependency_links.txt +1 -0
- openjck-0.3.0/openjck.egg-info/entry_points.txt +2 -0
- openjck-0.3.0/openjck.egg-info/requires.txt +8 -0
- openjck-0.3.0/openjck.egg-info/top_level.txt +3 -0
- openjck-0.3.0/pyproject.toml +56 -0
- openjck-0.3.0/setup.cfg +4 -0
- openjck-0.3.0/tests/test_intelligence.py +39 -0
openjck-0.3.0/PKG-INFO
ADDED
@@ -0,0 +1,593 @@
Metadata-Version: 2.4
Name: openjck
Version: 0.3.0
Summary: Visual debugger for AI agents. See every step, every decision, every failure.
License: MIT
Project-URL: Homepage, https://github.com/RavaniRoshan/openjck
Project-URL: Documentation, https://github.com/RavaniRoshan/openjck#readme
Project-URL: Issues, https://github.com/RavaniRoshan/openjck/issues
Keywords: ai,agents,debugging,observability,llm,tracing,langchain,crewai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Debuggers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Provides-Extra: server
Requires-Dist: fastapi>=0.110.0; extra == "server"
Requires-Dist: uvicorn>=0.29.0; extra == "server"
Provides-Extra: all
Requires-Dist: fastapi>=0.110.0; extra == "all"
Requires-Dist: uvicorn>=0.29.0; extra == "all"
<div align="center">

<img width="900" height="220" alt="OpenJCK — Visual debugger for AI agent loops" src="https://github.com/user-attachments/assets/9c5847ff-0702-44ef-88f8-2a1f5e514543" />

<br />
<br />

[](https://www.npmjs.com/package/openjck)
[](https://pypi.org/project/openjck/)
[](https://pypi.org/project/openjck/)
[](https://nodejs.org)
[](LICENSE)
[](https://github.com/RavaniRoshan/openjck)

<br />

[**Quick Start**](#quick-start) · [**How It Works**](#how-it-works) · [**CLI Commands**](#cli-commands) · [**API Reference**](#api-reference) · [**Frameworks**](#framework-support) · [**Roadmap**](#roadmap)

<br />

<img width="1920" height="1080" alt="image" src="https://github.com/user-attachments/assets/982bafc7-ffac-4582-8166-0f9028a0b7bd" />

</div>

---
## The Problem

You built an AI agent. It runs 15 steps. Something breaks at step 9.

You have no idea why.

The LLM got a bad prompt? A tool returned garbage? A file permission failed silently? You add `print()` everywhere. You re-run it. You grep through 300 lines of logs. Forty minutes later, you find the bug.

**This is the debugging dark age for AI agents.** No step-by-step visibility. No tool call inspector. No way to see what the LLM was actually thinking at each decision point.

OpenJCK fixes this.

---
## Quick Start

**Two packages. One shared purpose.**

```
pip install openjck     ← instruments your Python agent
npx openjck             ← opens the visual trace viewer
```

**Step 1 — Instrument your agent** (add 3 decorators, nothing else changes):

```python
from openjck import trace, trace_llm, trace_tool
import ollama

@trace(name="research_agent")
def run_agent(task: str):
    response = call_llm([{"role": "user", "content": task}])
    results = web_search(response.message.content)
    write_file("output.md", results)

@trace_llm
def call_llm(messages: list):
    return ollama.chat(model="qwen2.5:7b", messages=messages)

@trace_tool
def web_search(query: str) -> str:
    ...

@trace_tool
def write_file(path: str, content: str) -> None:
    ...
```

**Step 2 — Run your agent normally:**

```
[OpenJCK] Run complete → COMPLETED
[OpenJCK] 8 steps | 2840 tokens | 4.2s
[OpenJCK] View trace → http://localhost:7823/trace/a3f9c1b2
```

**Step 3 — Open the viewer:**

```bash
npx openjck
```

You see this:

```
●  ────  ●  ────  ●  ────  ●  ────  ●  ────  ●  ────  ●  ────  ✕
1        2        3        4        5        6        7        8
                                                        ERROR ↑

STEP 8   write_file                            [FAILED]   12ms
─────────────────────────────────────────────────
INPUT
  path: "output.md"
  content: "# Research Summary..."

ERROR
  PermissionError: cannot write to output.md
  File is open in another process

← Step 7: LLM decided to write the summary
→ Step 9: never reached
```

Bug found. Fixed in 30 seconds.
## Dashboard

```bash
npx openjck
```

Open http://localhost:7823 to see:

- All agent runs — live as they happen
- Costs, success rate, avg duration at a glance
- Automatic root cause analysis on every failure
- Time filters: last 24h / 7 days / all time

### Failure Intelligence

When an agent run fails, OpenJCK automatically identifies the root cause:

- Which step made the run unrecoverable
- Why that step's output caused the downstream failure
- The last recovery point before the failure chain began
- Recurring failure patterns across multiple runs

No configuration. No API keys. Fires automatically on every failed run.

---
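
The recovery-point idea above can be sketched as a single pass over recorded steps. This is an illustrative toy, not OpenJCK's actual intelligence engine; the step dicts and field names (`index`, `status`, `error`) are assumptions made for the sketch:

```python
# Illustrative sketch only, NOT OpenJCK's real intelligence engine.
def analyze_failure(steps):
    """steps: ordered dicts with "index", "name", "status" ("ok"/"failed"), "error"."""
    failed = next((s for s in steps if s["status"] == "failed"), None)
    if failed is None:
        return None  # run completed; nothing to analyze
    # Candidate recovery point: last step that succeeded before the failure.
    prior_ok = [s for s in steps if s["index"] < failed["index"] and s["status"] == "ok"]
    return {
        "root_cause_step": failed["index"],
        "error": failed["error"],
        "recovery_point": prior_ok[-1]["index"] if prior_ok else None,
    }

run = [
    {"index": 7, "name": "call_llm", "status": "ok", "error": None},
    {"index": 8, "name": "write_file", "status": "failed", "error": "PermissionError"},
]
print(analyze_failure(run))
# → {'root_cause_step': 8, 'error': 'PermissionError', 'recovery_point': 7}
```

The real engine adds pattern detection across runs, but the core shape of the output (failing step, error, recovery point) matches what the dashboard surfaces.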
## How It Works

```
Your Agent Code
      │
      │  @trace / @trace_llm / @trace_tool   (3 decorators)
      ▼
TraceCollector        captures every event in-memory, per-thread
      │
      ▼
~/.openjck/traces/    one JSON file per run — never leaves your machine
      │
      ▼
Express server        localhost:7823 (Node.js · npx openjck)
      │
      ▼
Visual UI             timeline + step inspector + token counts
```

**Everything is local.** No cloud. No accounts. No API keys. No data leaves your machine.

Both the Python library and the npm CLI read from the **same folder** — `~/.openjck/traces/`. Run your agent from Python, view traces from any terminal with `npx`. Zero config between them.

---
## CLI Commands

```bash
npx openjck             # start UI viewer (default)
npx openjck ui          # start UI viewer
npx openjck traces      # list all traces in terminal
npx openjck clear       # delete all traces
npx openjck --version   # show version
npx openjck --help      # show help
```

**Global install** (optional — skip `npx` every time):

```bash
npm install -g openjck
openjck ui
openjck traces
```

**What `openjck traces` looks like:**

```
OpenJCK — Recorded Runs

ID        Name            Status     Steps  Duration  Tokens
────────────────────────────────────────────────────────────────────────
a3f9c1b2  research_agent  completed  8      4.20s     2840
9c4b1e3f  failing_agent   failed     6      2.41s     1345
          ✕ FileNotFoundError: File not found: config.txt

2 runs total · npx openjck ui to view in browser
```

---
## What Gets Captured

### For every `@trace_llm` call

| Field | Description |
|---|---|
| Full message history | Every message sent to the model |
| Model name | Which model + version was called |
| Response content | What the model replied |
| Tokens in / out | Prompt + completion token counts |
| Cost (USD) | Per-step cost based on model pricing |
| Latency | Execution time in ms |
| Error | Full traceback if the call failed |

### For every `@trace_tool` call

| Field | Description |
|---|---|
| Function arguments | Exact values passed in |
| Return value | What the tool returned |
| Latency | Execution time in ms |
| Error | Full traceback including line number |

---
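
Cost capture boils down to simple arithmetic over the token counts in the table above. A minimal sketch, with made-up per-model prices (real pricing varies by provider and changes over time):

```python
# Per-step USD cost from token counts.
# Prices below are made up for illustration; real pricing varies.
PRICING_PER_MTOK = {"gpt-4o": {"in": 2.50, "out": 10.00}}  # USD per 1M tokens

def step_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    p = PRICING_PER_MTOK[model]
    return (tokens_in * p["in"] + tokens_out * p["out"]) / 1_000_000

print(step_cost("gpt-4o", 2000, 500))  # → 0.01
```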
## API Reference

### `@trace`

Marks the agent entry point. Starts a new trace for the entire run.

```python
@trace                              # uses function name
@trace(name="my_agent")             # explicit run name
@trace(name="agent", metadata={})   # attach custom metadata
@trace(auto_open=False)             # don't auto-start UI server
```

Supports `def` and `async def`.

---

### `@trace_llm`

Wraps an LLM call. Captures prompt, response, tokens, model, latency, cost.

```python
@trace_llm                  # auto-detects model from arguments
@trace_llm(model="gpt-4o")  # explicit model label
```

Auto-detects token counts from **Ollama**, **OpenAI**, and **Anthropic** response formats.

---
### `@trace_tool`

Wraps a tool call. Captures input arguments, return value, and any exception.

```python
@trace_tool                          # uses function name
@trace_tool(name="filesystem.write") # explicit name in the UI
```

---

### `EventCapture` — manual instrumentation

For wrapping third-party code or dynamic dispatch:

```python
from openjck import EventCapture

with EventCapture("tool_call", "database.query", input={"sql": query}) as cap:
    result = db.execute(query)
    cap.output = result.fetchall()
    cap.metadata = {"rows": len(result)}
```

---

### `TraceStorage` — programmatic access

```python
from openjck import TraceStorage

traces = TraceStorage.list_all()                   # all trace summaries
trace = TraceStorage.load("a3f9c1b2")              # full trace with all steps
TraceStorage.delete("a3f9c1b2")                    # remove one trace
TraceStorage.search(q="research", status="failed") # filter traces
```

---
## Framework Support

OpenJCK is **framework-agnostic**. Wrap the functions. That's it.

```python
# ✅ Raw Python agents
# ✅ LangChain
# ✅ LlamaIndex
# ✅ CrewAI
# ✅ AutoGen
# ✅ Smolagents
# ✅ Async agents (asyncio / anyio)
# ✅ Any custom agent loop
```

### LangChain — zero decorators via auto-patch

```python
import openjck
from openjck import trace

openjck.patch_langchain()  # instruments all LangChain LLM + tool calls

@trace(name="my_chain")
def run():
    chain.invoke({"question": "..."})  # automatically traced
```

### CrewAI

```python
from crewai import Crew
from crewai_tools import SerperDevTool
from openjck import trace, trace_tool

@trace(name="crewai_research")
def run_crew(topic: str):
    crew = Crew(agents=[researcher, writer], tasks=[...])
    return crew.kickoff(inputs={"topic": topic})

@trace_tool(name="search.web")
def search_tool(query: str) -> str:
    return SerperDevTool().run(query)
```

### Ollama

```python
import ollama
from openjck import trace_llm

@trace_llm
def call_llm(messages):
    return ollama.chat(model="qwen2.5-coder:7b", messages=messages)
```

---
## Async Support

All decorators work on `async def` with zero changes:

```python
from openjck import trace, trace_llm

@trace(name="async_agent")
async def run_agent(task: str):
    response = await call_llm(...)
    result = await fetch_data(...)

@trace_llm
async def call_llm(messages):
    return await async_client.chat(model="qwen2.5:7b", messages=messages)
```

---
## Trace Storage

All traces are plain JSON at `~/.openjck/traces/<trace_id>.json`.

```
~/.openjck/
└── traces/
    ├── a3f9c1b2.json   # completed — 8 steps, 2840 tokens
    ├── 9c4b1e3f.json   # failed — error at step 6
    └── ...
```

Both the Python library and the npm CLI read and write to this same location. No sync needed.

---
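
Since traces are plain JSON files, the standard library is enough to script over them. A small sketch; the `status` field read here is an assumption, so check a real trace file for the exact schema your version writes:

```python
# Traces are plain JSON files, so stdlib is enough to inspect them.
# The "status" field below is illustrative; inspect a real trace file
# for the exact schema.
import json
from pathlib import Path

def list_trace_files(root: Path) -> list[Path]:
    """Trace files newest-first, like the dashboard's run list."""
    return sorted(root.glob("*.json"), key=lambda p: p.stat().st_mtime, reverse=True)

traces_dir = Path.home() / ".openjck" / "traces"
if traces_dir.exists():
    for path in list_trace_files(traces_dir):
        summary = json.loads(path.read_text())
        print(path.stem, summary.get("status", "?"))
```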
## Installation

### Python library (required — for agent instrumentation)

```bash
pip install openjck             # core library only
pip install "openjck[server]"   # includes FastAPI UI server (alternative to npx)
```

Requires: **Python 3.10+**

### npm CLI (recommended — for the visual UI viewer)

```bash
# No install — always runs latest:
npx openjck

# Or install once globally:
npm install -g openjck
```

Requires: **Node.js 18+**

---
## Why Not Just Use...

| | OpenJCK | LangSmith | Helicone | Print statements |
|---|---|---|---|---|
| Step-by-step visibility | ✅ | ✅ | ❌ | ❌ |
| Works with any framework | ✅ | ❌ | ✅ | ✅ |
| 100% local | ✅ | ❌ | ❌ | ✅ |
| Free forever | ✅ | Partial | Partial | ✅ |
| Visual UI | ✅ | ✅ | ✅ | ❌ |
| Token tracking | ✅ | ✅ | ✅ | ❌ |
| Cost tracking | ✅ | ✅ | ✅ | ❌ |
| Zero config | ✅ | ❌ | ❌ | ✅ |

OpenJCK is the only tool built specifically to debug **agentic loops** — the multi-step, tool-using, decision-making flows that break in ways traditional logging cannot explain.

---
## Changelog

### v0.2.1 (2026-03-19)

**New Features:**
- Live dashboard with real-time updates at http://localhost:7823
- Agent drill-down view with patterns analysis
- Trace detail view with step timeline
- Failure Intelligence Engine — automatic root cause analysis
- Recovery point detection
- Dependency chain tracing
- SQLite database for persistent storage
- Non-blocking HTTP emit client
- SSE (Server-Sent Events) for live updates
- Mobile responsive design
- Time filters (24h, 7d, all)
- Dashboard documentation pages

**Bug Fixes:**
- Fixed intelligence endpoint subprocess handling
- Fixed FOREIGN KEY constraint on intelligence table
- Fixed version display inconsistencies

**Python Package:** `openjck` on PyPI
**npm Package:** `openjck` on npm

### v0.2.0 (2026-03-15)

- Initial v0.2 release

### v0.1.0 (2026-03-10)

- Initial release
- Core decorators: `@trace`, `@trace_llm`, `@trace_tool`
- JSON trace persistence
- Visual timeline UI
- Token/cost tracking
- npm CLI

---
## Roadmap

**v0.2.1** *(current)*
- [x] Live dashboard with real-time updates
- [x] Failure Intelligence Engine
- [x] SQLite database
- [x] Agent drill-down view
- [x] Trace detail view
- [x] Mobile responsive design

**v0.3** *(next)*
- [ ] Side-by-side run comparison
- [ ] Token waterfall chart
- [ ] CrewAI auto-instrumentation
- [ ] LlamaIndex auto-instrumentation
- [ ] CI/CD integration — fail build on regression
- [ ] VS Code extension

**v1.0** *(horizon)*
- [ ] OpenJCK Cloud — share traces across your team
- [ ] Team dashboards + run history
- [ ] Slack / Discord alerts on agent failure
- [ ] Export trace as shareable HTML report

---
## Contributing

Built because debugging agents was making us insane.

```bash
git clone https://github.com/RavaniRoshan/openjck
cd openjck

# Python library
pip install -e ".[server]"
python examples/basic_agent.py   # generates sample traces

# npm CLI
cd openjck-npm
npm install
node bin/openjck.js traces       # verify traces from above
node bin/openjck.js ui           # open UI at localhost:7823
```

Before opening a PR:
- Open an issue first for non-trivial changes
- Add an example for new features
- Keep `collector.py` and `decorators.py` dependency-free (stdlib only)
- Keep `bin/openjck.js` working without any build step

---
## Repository Structure

```
OpenJCK/
├── openjck/                 ← Python library (pip install openjck)
│   ├── collector.py         ← core event capture, thread-safe
│   ├── decorators.py        ← @trace @trace_llm @trace_tool
│   ├── intelligence.py      ← failure intelligence engine
│   ├── client.py            ← HTTP emit client
│   ├── storage.py           ← JSON persistence
│   ├── server.py            ← FastAPI server (Python alternative)
│   ├── cli.py               ← Python CLI entry point
│   └── ui/                  ← web viewer UI
├── openjck-npm/             ← npm package (npx openjck)
│   ├── bin/openjck.js       ← CLI entrypoint
│   ├── src/
│   │   ├── server.js        ← Express server
│   │   ├── db.js            ← SQLite database
│   │   ├── commands/        ← ui, traces, clear
│   │   └── ui/index.html    ← dashboard UI
│   └── package.json
├── openjck-site/            ← docs site (Astro + Starlight)
├── examples/
│   ├── basic_agent.py       ← demo agent
│   └── dashboard_demo.py    ← dashboard demo
├── tests/
│   └── test_intelligence.py ← intelligence tests
└── README.md
```

---
## License

[MIT](LICENSE) — use it, fork it, ship it.

---

<div align="center">

<br />

**If this saved you an hour of debugging — [star the repo](https://github.com/RavaniRoshan/openjck).**

That's the only metric that matters right now.

<br />

Made with frustration and Python + Node.js · [GitHub](https://github.com/RavaniRoshan/openjck) · [npm](https://www.npmjs.com/package/openjck) · [PyPI](https://pypi.org/project/openjck/) · [Docs](https://openjck.dev)

<br />

</div>