@steadwing/openalerts 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -14,7 +14,7 @@
14
14
  <p align="center">
15
15
  <a href="#quickstart">Quickstart</a> &middot;
16
16
  <a href="#alert-rules">Alert Rules</a> &middot;
17
- <a href="#configuration">Configuration</a> &middot;
17
+ <a href="#llm-enriched-alerts">LLM Enrichment</a> &middot;
18
18
  <a href="#dashboard">Dashboard</a> &middot;
19
19
  <a href="#commands">Commands</a>
20
20
  </p>
@@ -37,7 +37,9 @@ openclaw plugins install @steadwing/openalerts
37
37
 
38
38
  ### 2. Configure
39
39
 
40
- Add to your `openclaw.json`:
40
+ If you already have a channel paired with OpenClaw (e.g. Telegram via `openclaw pair`), **no config is needed** — OpenAlerts auto-detects where to send alerts.
41
+
42
+ Otherwise, set it explicitly in `openclaw.json`:
41
43
 
42
44
  ```jsonc
43
45
  {
@@ -55,69 +57,105 @@ Add to your `openclaw.json`:
55
57
  }
56
58
  ```
57
59
 
60
+ **Auto-detection priority:** explicit config > static `allowFrom` in channel config > pairing store.
61
+
58
62
  ### 3. Restart & verify
59
63
 
60
64
  ```bash
61
65
  openclaw gateway stop && openclaw gateway run
62
66
  ```
63
67
 
68
+
64
69
  Send `/health` to your bot. You should get a live status report back — zero LLM tokens consumed.
65
70
 
66
71
  That's it. OpenAlerts is now watching your agent.
67
72
 
68
- ## Alert Rules
73
+ ## Dashboard
74
+
75
+ A real-time web dashboard is embedded in the gateway at:
76
+
77
+ ```
78
+ http://127.0.0.1:18789/openalerts
79
+ ```
80
+
81
+ - **Activity** — Live event timeline with session flows, tool calls, LLM usage
82
+ - **System Logs** — Filtered, structured logs with search
83
+ - **Health** — Rule status, alert history, system stats
69
84
 
70
- Seven rules run against every event in real-time:
85
+ ## Alert Rules
71
86
 
72
- | Rule | Watches for | Severity |
73
- |---|---|---|
74
- | **llm-errors** | 3+ LLM failures in 5 minutes | ERROR |
75
- | **infra-errors** | 3+ infrastructure errors in 5 minutes | ERROR |
76
- | **gateway-down** | No heartbeat for 90+ seconds | CRITICAL |
77
- | **session-stuck** | Session idle for 120+ seconds | WARN |
78
- | **high-error-rate** | 50%+ of last 20 messages failed | ERROR |
79
- | **queue-depth** | 10+ items queued | WARN |
80
- | **heartbeat-fail** | 3 consecutive heartbeat failures | ERROR |
87
+ Eight rules run against every event in real time. All thresholds and cooldowns are configurable.
81
88
 
82
- All thresholds and cooldowns are [configurable per-rule](#configuration).
89
+ | Rule | Watches for | Severity | Threshold (default) |
90
+ |---|---|---|---|
91
+ | `llm-errors` | LLM/agent failures in 1 min window | ERROR | `1` error |
92
+ | `infra-errors` | Infrastructure errors in 1 min window | ERROR | `1` error |
93
+ | `gateway-down` | No heartbeat received | CRITICAL | `30000` ms (30s) |
94
+ | `session-stuck` | Session idle too long | WARN | `120000` ms (2 min) |
95
+ | `high-error-rate` | Message failure rate over last 20 | ERROR | `50`% |
96
+ | `queue-depth` | Queued items piling up | WARN | `10` items |
97
+ | `tool-errors` | Tool failures in 1 min window | WARN | `1` error |
98
+ | `heartbeat-fail` | Consecutive heartbeat failures | ERROR | `3` failures |
83
99
 
84
- ## Configuration
100
+ Every rule also accepts:
101
+ - **`enabled`** — `false` to disable the rule (default: `true`)
102
+ - **`cooldownMinutes`** — minutes before the same rule can fire again (default: `15`)
85
103
 
86
- Full config reference under `plugins.entries.openalerts.config`:
104
+ To tune rules, add a `rules` object in your plugin config:
87
105
 
88
106
  ```jsonc
89
107
  {
90
- "alertChannel": "telegram", // telegram | discord | slack | whatsapp | signal
91
- "alertTo": "YOUR_CHAT_ID", // chat/user ID on that channel
92
- "cooldownMinutes": 15, // minutes between repeated alerts (default: 15)
93
- "quiet": false, // true = log only, no messages sent
94
-
95
- "rules": {
96
- "gateway-down": {
97
- "threshold": 120000 // override: 2 min instead of 90s
98
- },
99
- "high-error-rate": {
100
- "enabled": false // disable a rule entirely
101
- },
102
- "llm-errors": {
103
- "threshold": 5, // require 5 errors instead of 3
104
- "cooldownMinutes": 30 // longer cooldown for this rule
108
+ "plugins": {
109
+ "entries": {
110
+ "openalerts": {
111
+ "config": {
112
+ "cooldownMinutes": 10,
113
+ "rules": {
114
+ "llm-errors": { "threshold": 5 },
115
+ "infra-errors": { "cooldownMinutes": 30 },
116
+ "high-error-rate": { "enabled": false },
117
+ "gateway-down": { "threshold": 60000 }
118
+ }
119
+ }
120
+ }
105
121
  }
106
122
  }
107
123
  }
108
124
  ```
109
125
 
110
- ## Dashboard
126
+ Set `"quiet": true` at the config level for log-only mode (no messages sent).
111
127
 
112
- A real-time web dashboard is embedded in the gateway at:
128
+ ## LLM-Enriched Alerts
113
129
 
130
+ OpenAlerts can optionally use your configured LLM to enrich alerts with a human-friendly summary and an actionable suggestion. **This feature is disabled by default** — opt in by setting `"llmEnriched": true` in your plugin config:
131
+
132
+ ```jsonc
133
+ {
134
+ "plugins": {
135
+ "entries": {
136
+ "openalerts": {
137
+ "config": {
138
+ "llmEnriched": true
139
+ }
140
+ }
141
+ }
142
+ }
143
+ }
114
144
  ```
115
- http://127.0.0.1:18789/openalerts
145
+
146
+ When enabled, alerts include an LLM-generated summary and action:
147
+
116
148
  ```
149
+ 1 agent error(s) on unknown in the last minute. Last: 401 Incorrect API key...
117
150
 
118
- **Activity** — Live event timeline with session flows, tool calls, LLM usage
119
- **System Logs** — Filtered, structured logs with search
120
- - **Health** — Rule status, alert history, system stats
151
+ Summary: Your OpenAI API key is invalid or expired; the agent cannot make LLM calls.
152
+ Action: Update your API key in ~/.openclaw/.env with a valid key from platform.openai.com/api-keys
153
+ ```
154
+
155
+ - **Model**: reads from `agents.defaults.model.primary` in your `openclaw.json` (e.g. `"openai/gpt-4o-mini"`)
156
+ - **API key**: reads from the corresponding environment variable (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GROQ_API_KEY`, etc.)
157
+ - **Supported providers**: OpenAI, Anthropic, Groq, Together, DeepSeek (and any OpenAI-compatible API)
158
+ - **Graceful fallback**: if the LLM call fails or times out (10s), the original alert is sent unchanged
121
159
 
122
160
  ## Commands
123
161
 
@@ -129,17 +167,10 @@ Zero-token chat commands available in any connected channel:
129
167
  | `/alerts` | Recent alert history with severity and timestamps |
130
168
  | `/dashboard` | Returns the dashboard URL |
131
169
 
132
- ## Architecture
133
-
134
- ```
135
- src/core/ Framework-agnostic engine, zero dependencies
136
- Rules engine, evaluator, event bus, state store, formatter
137
-
138
- src/plugin/ OpenClaw adapter plugin
139
- Event translation, alert routing, dashboard, chat commands
140
- ```
170
+ ## Roadmap
141
171
 
142
- Everything ships as a single `@steadwing/openalerts` package. The core is completely framework-agnostic — adding monitoring for a new framework only requires writing an adapter.
172
+ - [ ] [nanobot](https://github.com/HKUDS/nanobot) adapter
173
+ - [ ] [OpenManus](https://github.com/FoundationAgents/OpenManus) adapter
143
174
 
144
175
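The README hunks above show rule tuning and LLM enrichment as separate snippets; combined, a single plugin config might look like this (a sketch using only the keys documented above — `cooldownMinutes`, `rules`, `llmEnriched`, `quiet`):

```jsonc
{
  "plugins": {
    "entries": {
      "openalerts": {
        "config": {
          "cooldownMinutes": 10,       // global cooldown between repeated alerts
          "llmEnriched": true,         // opt in to LLM summaries (off by default)
          "quiet": false,              // true = log only, no messages sent
          "rules": {
            "llm-errors": { "threshold": 5 },
            "high-error-rate": { "enabled": false }
          }
        }
      }
    }
  }
}
```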
  ## Development
145
176
 
@@ -13,11 +13,14 @@ export declare class OpenAlertsEngine {
13
13
  private stateDir;
14
14
  private dispatcher;
15
15
  private platform;
16
+ private enricher;
16
17
  private logger;
17
18
  private logPrefix;
18
19
  private watchdogTimer;
19
20
  private pruneTimer;
20
21
  private running;
22
+ private eventRing;
23
+ private static readonly RING_MAX;
21
24
  constructor(options: OpenAlertsInitOptions);
22
25
  /** Start the engine: warm from history, start timers. */
23
26
  start(): void;
@@ -30,12 +33,16 @@ export declare class OpenAlertsEngine {
30
33
  readonly name: string;
31
34
  send(alert: AlertEvent, formatted: string): Promise<void> | void;
32
35
  }): void;
36
+ /** Fire a test alert to verify delivery. */
37
+ sendTestAlert(): void;
33
38
  /** Whether the platform sync is connected. */
34
39
  get platformConnected(): boolean;
35
40
  /** Whether the engine is running. */
36
41
  get isRunning(): boolean;
37
42
  /** Read recent stored events (for /alerts command). */
38
43
  getRecentEvents(limit?: number): StoredEvent[];
44
+ /** Get recent full events from the in-memory ring buffer (for dashboard history). */
45
+ getRecentLiveEvents(limit?: number): OpenAlertsEvent[];
39
46
  private handleEvent;
40
47
  private fireAlert;
41
48
  }
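The new `eventRing`/`RING_MAX` members implement a simple bounded event history. The push-and-trim pattern used in `handleEvent` and read back by `getRecentLiveEvents` can be sketched standalone (names here are illustrative, not the package's API):

```typescript
// Minimal bounded ring: keeps at most `max` most-recent items.
class BoundedRing<T> {
  private items: T[] = [];
  constructor(private readonly max: number) {}

  push(item: T): void {
    this.items.push(item);
    if (this.items.length > this.max) {
      // Drop oldest entries so memory stays bounded.
      this.items = this.items.slice(-this.max);
    }
  }

  recent(limit: number): T[] {
    // Mirrors getRecentLiveEvents: the last `limit` items, oldest first.
    return this.items.slice(-limit);
  }
}

const ring = new BoundedRing<number>(3);
[1, 2, 3, 4, 5].forEach((n) => ring.push(n));
console.log(ring.recent(2)); // → [4, 5]
```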
@@ -17,14 +17,18 @@ export class OpenAlertsEngine {
17
17
  stateDir;
18
18
  dispatcher;
19
19
  platform = null;
20
+ enricher;
20
21
  logger;
21
22
  logPrefix;
22
23
  watchdogTimer = null;
23
24
  pruneTimer = null;
24
25
  running = false;
26
+ eventRing = [];
27
+ static RING_MAX = 500;
25
28
  constructor(options) {
26
29
  this.config = options.config;
27
30
  this.stateDir = options.stateDir;
31
+ this.enricher = options.enricher ?? null;
28
32
  this.logger = options.logger ?? console;
29
33
  this.logPrefix = options.logPrefix ?? "openalerts";
30
34
  this.bus = new OpenAlertsEventBus();
@@ -64,7 +68,9 @@ export class OpenAlertsEngine {
64
68
  this.watchdogTimer = setInterval(() => {
65
69
  const alerts = processWatchdogTick(this.state, this.config);
66
70
  for (const alert of alerts) {
67
- this.fireAlert(alert);
71
+ void this.fireAlert(alert).catch((err) => {
72
+ this.logger.error(`${this.logPrefix}: watchdog alert failed: ${String(err)}`);
73
+ });
68
74
  }
69
75
  }, DEFAULTS.watchdogIntervalMs);
70
76
  // Prune timer (cleans old log entries every 6h)
@@ -82,7 +88,7 @@ export class OpenAlertsEngine {
82
88
  const channelNames = this.dispatcher.hasChannels
83
89
  ? `${this.dispatcher.channelCount} channel(s)`
84
90
  : "log-only (no alert channels)";
85
- this.logger.info(`${this.logPrefix}: started, ${channelNames}, 7 rules active`);
91
+ this.logger.info(`${this.logPrefix}: started, ${channelNames}, 8 rules active`);
86
92
  }
87
93
  /** Ingest a universal event. Can be called directly or via the event bus. */
88
94
  ingest(event) {
@@ -109,6 +115,21 @@ export class OpenAlertsEngine {
109
115
  addChannel(channel) {
110
116
  this.dispatcher.addChannel(channel);
111
117
  }
118
+ /** Fire a test alert to verify delivery. */
119
+ sendTestAlert() {
120
+ void this.fireAlert({
121
+ type: "alert",
122
+ id: `test:manual:${Date.now()}`,
123
+ ruleId: "test",
124
+ severity: "info",
125
+ title: "Test alert — delivery verified",
126
+ detail: "This is a test alert from /test_alert. If you see this, alert delivery is working.",
127
+ ts: Date.now(),
128
+ fingerprint: "test:manual",
129
+ }).catch((err) => {
130
+ this.logger.error(`${this.logPrefix}: test alert failed: ${String(err)}`);
131
+ });
132
+ }
112
133
  /** Whether the platform sync is connected. */
113
134
  get platformConnected() {
114
135
  return this.platform?.isConnected() ?? false;
@@ -121,8 +142,17 @@ export class OpenAlertsEngine {
121
142
  getRecentEvents(limit = 100) {
122
143
  return readRecentEvents(this.stateDir, limit);
123
144
  }
145
+ /** Get recent full events from the in-memory ring buffer (for dashboard history). */
146
+ getRecentLiveEvents(limit = 200) {
147
+ return this.eventRing.slice(-limit);
148
+ }
124
149
  // ─── Internal ──────────────────────────────────────────────────────────────
125
150
  handleEvent(event) {
151
+ // Add to in-memory ring buffer
152
+ this.eventRing.push(event);
153
+ if (this.eventRing.length > OpenAlertsEngine.RING_MAX) {
154
+ this.eventRing = this.eventRing.slice(-OpenAlertsEngine.RING_MAX);
155
+ }
126
156
  // Persist as diagnostic snapshot
127
157
  const snapshot = {
128
158
  type: "diagnostic",
@@ -141,13 +171,15 @@ export class OpenAlertsEngine {
141
171
  // Run through evaluator
142
172
  const alerts = processEvent(this.state, this.config, event);
143
173
  for (const alert of alerts) {
144
- this.fireAlert(alert);
174
+ void this.fireAlert(alert).catch((err) => {
175
+ this.logger.error(`${this.logPrefix}: alert fire failed: ${String(err)}`);
176
+ });
145
177
  }
146
178
  // Forward to platform
147
179
  this.platform?.enqueue(snapshot);
148
180
  }
149
- fireAlert(alert) {
150
- // Persist alert
181
+ async fireAlert(alert) {
182
+ // Persist alert (original, before enrichment)
151
183
  try {
152
184
  appendEvent(this.stateDir, alert);
153
185
  }
@@ -156,9 +188,21 @@ export class OpenAlertsEngine {
156
188
  }
157
189
  // Forward to platform
158
190
  this.platform?.enqueue(alert);
191
+ // Enrich with LLM if enricher is available
192
+ let enriched = alert;
193
+ if (this.enricher) {
194
+ try {
195
+ const result = await this.enricher(alert);
196
+ if (result)
197
+ enriched = result;
198
+ }
199
+ catch (err) {
200
+ this.logger.warn(`${this.logPrefix}: llm enrichment failed, using original: ${String(err)}`);
201
+ }
202
+ }
159
203
  // Dispatch to channels (unless quiet mode)
160
204
  if (!this.config.quiet) {
161
- void this.dispatcher.dispatch(alert).catch((err) => {
205
+ void this.dispatcher.dispatch(enriched).catch((err) => {
162
206
  this.logger.error(`${this.logPrefix}: alert dispatch failed: ${String(err)}`);
163
207
  });
164
208
  }
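`fireAlert` now awaits an optional enricher and falls back to the original alert on any failure. That fallback pattern, isolated from the engine (the types here are simplified stand-ins, not the package's):

```typescript
type Alert = { title: string; detail: string };
type Enricher = (a: Alert) => Promise<Alert | null>;

// Try to enrich; on failure or a null result, keep the original alert.
async function enrichOrOriginal(
  alert: Alert,
  enricher: Enricher | null,
): Promise<Alert> {
  if (!enricher) return alert;
  try {
    return (await enricher(alert)) ?? alert;
  } catch {
    return alert; // graceful fallback, as in fireAlert
  }
}

const failing: Enricher = async () => {
  throw new Error("timeout");
};
enrichOrOriginal({ title: "t", detail: "d" }, failing).then((a) =>
  console.log(a.title), // → "t" (original preserved)
);
```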
@@ -65,7 +65,7 @@ export function processEvent(state, config, event) {
65
65
  state.stats.totalCostUsd = 0;
66
66
  state.stats.lastResetTs = now;
67
67
  }
68
- // Track event types in stats
68
+ // Track event types in stats (independent of rule enabled state)
69
69
  if (event.type === "infra.error") {
70
70
  state.stats.webhookErrors++;
71
71
  }
@@ -83,6 +83,16 @@ export function processEvent(state, config, event) {
83
83
  if (event.type === "session.start") {
84
84
  state.stats.sessionsStarted++;
85
85
  }
86
+ if (event.type === "session.stuck") {
87
+ state.stats.stuckSessions++;
88
+ }
89
+ if (event.type === "llm.call" || event.type === "llm.error" || event.type === "agent.error") {
90
+ state.stats.messagesProcessed++;
91
+ if (event.type === "llm.error" || event.type === "agent.error" ||
92
+ event.outcome === "error" || event.outcome === "timeout") {
93
+ state.stats.messageErrors++;
94
+ }
95
+ }
86
96
  if (event.type === "llm.token_usage") {
87
97
  if (typeof event.tokenCount === "number")
88
98
  state.stats.totalTokens += event.tokenCount;
@@ -103,7 +113,14 @@ export function processEvent(state, config, event) {
103
113
  const ctx = { state, config, now };
104
114
  const fired = [];
105
115
  for (const rule of ALL_RULES) {
106
- const alert = rule.evaluate(event, ctx);
116
+ let alert;
117
+ try {
118
+ alert = rule.evaluate(event, ctx);
119
+ }
120
+ catch {
121
+ // One broken rule must never block the rest
122
+ continue;
123
+ }
107
124
  if (!alert)
108
125
  continue;
109
126
  // Check cooldown
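The `high-error-rate` rule fires on the failure rate over the last 20 messages. A standalone sketch of such a sliding-window check (illustrative only — the package's evaluator keeps its window in `state`, and the fire-only-when-full choice is my assumption):

```typescript
// Sliding window over the last `size` message outcomes.
function makeErrorRateCheck(size: number, thresholdPct: number) {
  const window: boolean[] = []; // true = message failed
  return (failed: boolean): boolean => {
    window.push(failed);
    if (window.length > size) window.shift();
    // Assumption: only fire once the window is full, to avoid noisy early alerts.
    if (window.length < size) return false;
    const errors = window.filter(Boolean).length;
    return (errors / size) * 100 >= thresholdPct;
  };
}

const check = makeErrorRateCheck(20, 50);
let fired = false;
for (let i = 0; i < 20; i++) fired = check(i % 2 === 0); // 10 of 20 fail
console.log(fired); // → true (50% >= 50% threshold)
```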
@@ -1,4 +1,4 @@
1
- export type { AlertChannel, AlertEvent, AlertRuleDefinition, AlertSeverity, AlertTarget, DiagnosticSnapshot, EvaluatorState, HeartbeatSnapshot, MonitorConfig, RuleContext, RuleOverride, OpenAlertsEvent, OpenAlertsEventType, OpenAlertsInitOptions, OpenAlertsLogger, StoredEvent, WindowEntry, } from "./types.js";
1
+ export type { AlertChannel, AlertEnricher, AlertEvent, AlertRuleDefinition, AlertSeverity, AlertTarget, DiagnosticSnapshot, EvaluatorState, HeartbeatSnapshot, MonitorConfig, RuleContext, RuleOverride, OpenAlertsEvent, OpenAlertsEventType, OpenAlertsInitOptions, OpenAlertsLogger, StoredEvent, WindowEntry, } from "./types.js";
2
2
  export { DEFAULTS, LOG_FILENAME, STORE_DIR_NAME } from "./types.js";
3
3
  export { OpenAlertsEngine } from "./engine.js";
4
4
  export { OpenAlertsEventBus } from "./event-bus.js";
@@ -6,6 +6,7 @@ export { AlertDispatcher } from "./alert-channel.js";
6
6
  export { createEvaluatorState, processEvent, processWatchdogTick, warmFromHistory, } from "./evaluator.js";
7
7
  export { ALL_RULES } from "./rules.js";
8
8
  export { appendEvent, pruneLog, readAllEvents, readRecentEvents, } from "./store.js";
9
+ export { createLlmEnricher, type LlmEnricherOptions } from "./llm-enrichment.js";
9
10
  export { formatAlertMessage, formatAlertsOutput, formatHealthOutput, } from "./formatter.js";
10
11
  export { createPlatformSync, type PlatformSync } from "./platform.js";
11
12
  export { BoundedMap, type BoundedMapOptions, type BoundedMapStats, } from "./bounded-map.js";
@@ -13,6 +13,8 @@ export { createEvaluatorState, processEvent, processWatchdogTick, warmFromHistor
13
13
  export { ALL_RULES } from "./rules.js";
14
14
  // Store
15
15
  export { appendEvent, pruneLog, readAllEvents, readRecentEvents, } from "./store.js";
16
+ // LLM Enrichment
17
+ export { createLlmEnricher } from "./llm-enrichment.js";
16
18
  // Formatter
17
19
  export { formatAlertMessage, formatAlertsOutput, formatHealthOutput, } from "./formatter.js";
18
20
  // Platform
@@ -0,0 +1,21 @@
1
+ import type { AlertEnricher, OpenAlertsLogger } from "./types.js";
2
+ export type LlmEnricherOptions = {
3
+ /** Model string from config, e.g. "openai/gpt-5-nano" */
4
+ modelString: string;
5
+ /** Pre-resolved API key (caller reads from env to avoid env+fetch in same file) */
6
+ apiKey: string;
7
+ /** Logger for debug/warn messages */
8
+ logger?: OpenAlertsLogger;
9
+ /** Timeout in ms (default: 10000) */
10
+ timeoutMs?: number;
11
+ };
12
+ /**
13
+ * Resolve the environment variable name for a given model string's provider.
14
+ * Returns null if the model string is invalid or the provider is unknown.
15
+ */
16
+ export declare function resolveApiKeyEnvVar(modelString: string): string | null;
17
+ /**
18
+ * Create an AlertEnricher that calls an LLM to add a summary + action to alerts.
19
+ * Returns null if provider can't be resolved.
20
+ */
21
+ export declare function createLlmEnricher(opts: LlmEnricherOptions): AlertEnricher | null;
@@ -0,0 +1,180 @@
1
+ const PROVIDER_MAP = {
2
+ openai: {
3
+ type: "openai-compatible",
4
+ baseUrl: "https://api.openai.com/v1",
5
+ apiKeyEnvVar: "OPENAI_API_KEY",
6
+ },
7
+ groq: {
8
+ type: "openai-compatible",
9
+ baseUrl: "https://api.groq.com/openai/v1",
10
+ apiKeyEnvVar: "GROQ_API_KEY",
11
+ },
12
+ together: {
13
+ type: "openai-compatible",
14
+ baseUrl: "https://api.together.xyz/v1",
15
+ apiKeyEnvVar: "TOGETHER_API_KEY",
16
+ },
17
+ deepseek: {
18
+ type: "openai-compatible",
19
+ baseUrl: "https://api.deepseek.com/v1",
20
+ apiKeyEnvVar: "DEEPSEEK_API_KEY",
21
+ },
22
+ anthropic: {
23
+ type: "anthropic",
24
+ baseUrl: "https://api.anthropic.com/v1",
25
+ apiKeyEnvVar: "ANTHROPIC_API_KEY",
26
+ },
27
+ };
28
+ // ─── Prompt ─────────────────────────────────────────────────────────────────
29
+ function buildPrompt(alert) {
30
+ return `You are a concise DevOps alert analyst. Given this monitoring alert, provide:
31
+ 1. A brief human-friendly summary (1 sentence, plain language)
32
+ 2. One actionable suggestion to resolve it
33
+
34
+ Alert:
35
+ - Rule: ${alert.ruleId}
36
+ - Severity: ${alert.severity}
37
+ - Title: ${alert.title}
38
+ - Detail: ${alert.detail}
39
+
40
+ Reply in exactly this format (2 lines only):
41
+ Summary: <your summary>
42
+ Action: <your suggestion>`;
43
+ }
44
+ // ─── Response Parsing ───────────────────────────────────────────────────────
45
+ function parseEnrichment(text) {
46
+ const lines = text.trim().split("\n");
47
+ let summary = "";
48
+ let action = "";
49
+ for (const line of lines) {
50
+ const trimmed = line.trim();
51
+ if (trimmed.toLowerCase().startsWith("summary:")) {
52
+ summary = trimmed.slice("summary:".length).trim();
53
+ }
54
+ else if (trimmed.toLowerCase().startsWith("action:")) {
55
+ action = trimmed.slice("action:".length).trim();
56
+ }
57
+ }
58
+ if (!summary && !action)
59
+ return null;
60
+ return { summary, action };
61
+ }
62
+ // ─── HTTP Calls ─────────────────────────────────────────────────────────────
63
+ async function callOpenAICompatible(baseUrl, apiKey, model, prompt, timeoutMs) {
64
+ const controller = new AbortController();
65
+ const timer = setTimeout(() => controller.abort(), timeoutMs);
66
+ try {
67
+ const res = await fetch(`${baseUrl}/chat/completions`, {
68
+ method: "POST",
69
+ headers: {
70
+ "Content-Type": "application/json",
71
+ Authorization: `Bearer ${apiKey}`,
72
+ },
73
+ body: JSON.stringify({
74
+ model,
75
+ messages: [{ role: "user", content: prompt }],
76
+ max_tokens: 200,
77
+ temperature: 0.3,
78
+ }),
79
+ signal: controller.signal,
80
+ });
81
+ if (!res.ok)
82
+ return null;
83
+ const data = (await res.json());
84
+ return data.choices?.[0]?.message?.content ?? null;
85
+ }
86
+ catch {
87
+ return null;
88
+ }
89
+ finally {
90
+ clearTimeout(timer);
91
+ }
92
+ }
93
+ async function callAnthropic(baseUrl, apiKey, model, prompt, timeoutMs) {
94
+ const controller = new AbortController();
95
+ const timer = setTimeout(() => controller.abort(), timeoutMs);
96
+ try {
97
+ const res = await fetch(`${baseUrl}/messages`, {
98
+ method: "POST",
99
+ headers: {
100
+ "Content-Type": "application/json",
101
+ "x-api-key": apiKey,
102
+ "anthropic-version": "2023-06-01",
103
+ },
104
+ body: JSON.stringify({
105
+ model,
106
+ max_tokens: 200,
107
+ messages: [{ role: "user", content: prompt }],
108
+ }),
109
+ signal: controller.signal,
110
+ });
111
+ if (!res.ok)
112
+ return null;
113
+ const data = (await res.json());
114
+ const textBlock = data.content?.find((b) => b.type === "text");
115
+ return textBlock?.text ?? null;
116
+ }
117
+ catch {
118
+ return null;
119
+ }
120
+ finally {
121
+ clearTimeout(timer);
122
+ }
123
+ }
124
+ // ─── Factory ────────────────────────────────────────────────────────────────
125
+ /**
126
+ * Resolve the environment variable name for a given model string's provider.
127
+ * Returns null if the model string is invalid or the provider is unknown.
128
+ */
129
+ export function resolveApiKeyEnvVar(modelString) {
130
+ const slashIdx = modelString.indexOf("/");
131
+ if (slashIdx < 1)
132
+ return null;
133
+ const providerKey = modelString.slice(0, slashIdx).toLowerCase();
134
+ return PROVIDER_MAP[providerKey]?.apiKeyEnvVar ?? null;
135
+ }
136
+ /**
137
+ * Create an AlertEnricher that calls an LLM to add a summary + action to alerts.
138
+ * Returns null if provider can't be resolved.
139
+ */
140
+ export function createLlmEnricher(opts) {
141
+ const { modelString, apiKey, logger, timeoutMs = 10_000 } = opts;
142
+ // Parse "provider/model-name" format
143
+ const slashIdx = modelString.indexOf("/");
144
+ if (slashIdx < 1) {
145
+ logger?.warn(`openalerts: llm-enrichment skipped — invalid model string "${modelString}"`);
146
+ return null;
147
+ }
148
+ const providerKey = modelString.slice(0, slashIdx).toLowerCase();
149
+ const model = modelString.slice(slashIdx + 1);
150
+ const providerConfig = PROVIDER_MAP[providerKey];
151
+ if (!providerConfig) {
152
+ logger?.warn(`openalerts: llm-enrichment skipped — unknown provider "${providerKey}"`);
153
+ return null;
154
+ }
155
+ logger?.info(`openalerts: llm-enrichment enabled (${providerKey}/${model})`);
156
+ return async (alert) => {
157
+ const prompt = buildPrompt(alert);
158
+ let responseText = null;
159
+ if (providerConfig.type === "anthropic") {
160
+ responseText = await callAnthropic(providerConfig.baseUrl, apiKey, model, prompt, timeoutMs);
161
+ }
162
+ else {
163
+ responseText = await callOpenAICompatible(providerConfig.baseUrl, apiKey, model, prompt, timeoutMs);
164
+ }
165
+ if (!responseText)
166
+ return null;
167
+ const parsed = parseEnrichment(responseText);
168
+ if (!parsed)
169
+ return null;
170
+ // Append enrichment to the original detail
171
+ let enrichedDetail = alert.detail;
172
+ if (parsed.summary) {
173
+ enrichedDetail += `\n\nSummary: ${parsed.summary}`;
174
+ }
175
+ if (parsed.action) {
176
+ enrichedDetail += `\nAction: ${parsed.action}`;
177
+ }
178
+ return { ...alert, detail: enrichedDetail };
179
+ };
180
+ }
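The `parseEnrichment` logic above is easy to verify in isolation. A self-contained sketch of the same two-line parse (re-implemented here rather than imported from the package):

```typescript
type Enrichment = { summary: string; action: string };

// Parse "Summary: ..." / "Action: ..." lines, case-insensitively,
// mirroring the source: null only when neither field was found.
function parseEnrichment(text: string): Enrichment | null {
  let summary = "";
  let action = "";
  for (const raw of text.trim().split("\n")) {
    const line = raw.trim();
    if (line.toLowerCase().startsWith("summary:")) {
      summary = line.slice("summary:".length).trim();
    } else if (line.toLowerCase().startsWith("action:")) {
      action = line.slice("action:".length).trim();
    }
  }
  return summary || action ? { summary, action } : null;
}

console.log(parseEnrichment("Summary: key invalid\nAction: rotate the key"));
// → { summary: "key invalid", action: "rotate the key" }
```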