@lumin-io/openclaw-diagnostics 0.1.0 → 0.1.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +111 -0
- package/package.json +20 -2
package/README.md
ADDED
@@ -0,0 +1,111 @@
# @lumin-io/openclaw-diagnostics

> Full-fidelity Lumin observability for [OpenClaw](https://github.com/openclaw/openclaw) — every prompt, response, tool I/O, and reasoning trace, captured per turn.

[npm](https://www.npmjs.com/package/@lumin-io/openclaw-diagnostics)
[license](https://github.com/amitbidlan/zistica-lumin/blob/main/LICENSE)

OpenClaw ships native OpenTelemetry through the bundled [`diagnostics-otel`](https://docs.openclaw.ai/gateway/opentelemetry) plugin. That gets you provider, model, token counts, and span structure — but **not the actual prompt or reply text**. The plugin exposes a `captureContent` flag, but as of OpenClaw 2026.5.x the runtime never populates the underlying event fields the exporter tries to read, so the flag is effectively a no-op.

This plugin uses a different surface — OpenClaw's typed-hook API (`api.on('llm_input', …)` / `api.on('llm_output', …)`) — which **does** carry full content at runtime. On every turn it builds a Lumin `SpanInput` and POSTs it to your local Lumin instance. The agent never blocks on Lumin; failures are swallowed after a short timeout.
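A rough sketch of that per-turn flow follows. The event shape, field names, and `SpanInput` layout below are illustrative assumptions, not the plugin's actual types; only the `POST /v1/spans` endpoint and the fire-and-forget behavior come from this README.

```typescript
// Illustrative only: a hypothetical llm_output hook payload and the span
// built from it. Real OpenClaw event types will differ.
interface LlmOutputEvent {
  runId: string;
  model: string;
  provider: string;
  text: string;
  usage: { inputTokens: number; outputTokens: number };
}

interface SpanInput {
  name: string;
  output: string;
  model: string;
  metadata: Record<string, unknown>;
}

function buildSpan(event: LlmOutputEvent): SpanInput {
  return {
    name: "openclaw.turn",
    output: event.text,
    model: event.model,
    metadata: {
      provider: event.provider,
      run_id: event.runId,
      usage: event.usage,
    },
  };
}

async function postSpan(host: string, span: SpanInput): Promise<void> {
  // Fire-and-forget: a short timeout plus a swallowed error means the
  // agent never blocks on Lumin, matching the behavior described above.
  try {
    await fetch(`${host}/v1/spans`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(span),
      signal: AbortSignal.timeout(5000),
    });
  } catch {
    /* ignored by design */
  }
}
```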
## What you get

Per turn:

- Prompt text (just this turn's user message — not the whole replayed history)
- Assistant reply text
- **Reasoning trace** for thinking-emitting models (gpt-oss, o-series, Claude with extended thinking) — surfaced under `metadata.openclaw.content.thinking`
- Token usage (input / output)
- Model + provider + harness ID
- Trace ID stitched from OpenClaw's diagnostic context (so the typed-hook span fuses with whatever else you've ingested via OTel for the same run)
- Lightweight summary in metadata: `history_message_count`, `system_prompt_chars`, `images_count`

## Installation

```bash
openclaw plugins install @lumin-io/openclaw-diagnostics
```

Then enable conversation access for the plugin in your `~/.openclaw/openclaw.json` — non-bundled plugins are conversation-gated by default, so without this the typed hooks register silently and you'll see nothing:

```json
{
  "plugins": {
    "entries": {
      "lumin-diagnostics": {
        "hooks": { "allowConversationAccess": true }
      }
    }
  }
}
```

Restart the gateway:

```bash
openclaw daemon restart
```

You should see this line in the gateway log on startup:

```
lumin-diagnostics: subscribed to llm_input + llm_output → http://localhost:8000/v1/spans (project=openclaw)
```

## Configuration

All keys are optional; set them under `plugins.entries.lumin-diagnostics.config` in `openclaw.json`:

| Key | Default | Description |
|---|---|---|
| `host` | `http://localhost:8000` (or `LUMIN_HOST` env) | Lumin API base URL. |
| `project` | `openclaw` | Sent as `X-Lumin-Project` so the agent grid groups OpenClaw runs together. |
| `captureSystemPrompt` | `false` | Whether to write the full system prompt to `metadata.openclaw.content.system_prompt`. Off by default — system prompts are often large and rarely actionable. The character count is captured either way. |
| `maxContentChars` | `32768` | Per-attribute content cap. Truncated values are tagged `…(truncated)`. |
| `timeoutMs` | `5000` | HTTP timeout for the POST to Lumin. |
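The `maxContentChars` cap described above can be sketched as a one-line helper (the function name is hypothetical, not the plugin's actual internal API):

```typescript
// Hypothetical helper illustrating the maxContentChars cap: values over
// the limit are cut and tagged with the "…(truncated)" marker the
// configuration table describes.
function capContent(value: string, maxChars = 32768): string {
  if (value.length <= maxChars) return value;
  return value.slice(0, maxChars) + "…(truncated)";
}
```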
Example:

```json
{
  "plugins": {
    "entries": {
      "lumin-diagnostics": {
        "hooks": { "allowConversationAccess": true },
        "config": {
          "host": "http://my-lumin-host:8000",
          "captureSystemPrompt": true,
          "maxContentChars": 65536
        }
      }
    }
  }
}
```

## Why a plugin instead of just OTel?

Two reasons:

1. **Content fidelity.** The OTel exporter is fine for structure and sizes but never delivers prompt or reply text (see the upstream bug note above). The typed-hook API does, and stays compatible across OpenClaw releases.
2. **Composability.** This plugin runs *alongside* `diagnostics-otel` — both can be enabled at the same time. OTel ships your spans to Honeycomb / Datadog / etc. with structure + sizes; this plugin ships full-content spans to your local Lumin for debugging. They don't conflict.

## Compatibility

- **OpenClaw**: ≥ 2026.4.25 (uses the typed-hook API surface). On older releases that predate `api.on(...)`, the plugin registers silently with a single warn line — no errors.
- **Lumin API**: any version with `POST /v1/spans` — i.e. all current versions.

## Caveats

- **Trace ID stitching.** OpenClaw's typed hooks and its OTel exporter sometimes run under different `runWithDiagnosticTraceContext` envelopes, so the trace IDs Lumin sees from the two rails *may* not match. The plugin always emits a deterministic trace ID derived from the OpenClaw `runId`, so two ingests of the same run idempotently land on the same trace.
- **History is summarized, not embedded.** Each turn captures only the current user prompt — the full conversation history (which OpenClaw replays to the model on every turn) is referenced by count, not embedded, so trace size doesn't grow linearly with conversation length. If you need the full conversation, use Lumin's `/sessions` view, which already groups turns by `session_id`.
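The deterministic trace-ID derivation mentioned in the caveats can be sketched as below. The README doesn't document the actual hash the plugin uses, so SHA-256 truncated to 128 bits is an assumption that merely illustrates the idempotency property: the same `runId` always yields the same trace ID.

```typescript
import { createHash } from "node:crypto";

// Sketch of deterministic trace-ID derivation: hashing the runId means two
// ingests of the same run always land on the same 128-bit (32 hex char)
// trace ID. The plugin's exact derivation may differ.
function traceIdForRun(runId: string): string {
  return createHash("sha256").update(runId).digest("hex").slice(0, 32);
}
```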
## Source + issues

- Repo: <https://github.com/amitbidlan/zistica-lumin/tree/main/packages/integrations/openclaw-diagnostics>
- Issues: <https://github.com/amitbidlan/zistica-lumin/issues>

## License

Apache 2.0
package/package.json
CHANGED
@@ -1,7 +1,25 @@
 {
   "name": "@lumin-io/openclaw-diagnostics",
-  "version": "0.1.0",
-  "description": "Full-fidelity Lumin observability for OpenClaw — captures prompts, responses, tool I/O via OpenClaw's
+  "version": "0.1.1",
+  "description": "Full-fidelity Lumin observability for OpenClaw — captures prompts, responses, tool I/O, and reasoning traces via OpenClaw's typed-hook API.",
+  "homepage": "https://github.com/amitbidlan/zistica-lumin/tree/main/packages/integrations/openclaw-diagnostics",
+  "repository": {
+    "type": "git",
+    "url": "git+https://github.com/amitbidlan/zistica-lumin.git",
+    "directory": "packages/integrations/openclaw-diagnostics"
+  },
+  "bugs": {
+    "url": "https://github.com/amitbidlan/zistica-lumin/issues"
+  },
+  "keywords": [
+    "openclaw",
+    "lumin",
+    "observability",
+    "tracing",
+    "opentelemetry",
+    "ai-agent",
+    "llm-monitoring"
+  ],
   "license": "Apache-2.0",
   "type": "module",
   "main": "dist/index.js",