@vainplex/openclaw-cortex 0.2.0 → 0.2.1

package/README.md CHANGED
@@ -17,6 +17,16 @@
 
 Works **alongside** `memory-core` (OpenClaw's built-in memory) — doesn't replace it.
 
+### Regex + LLM Hybrid (v0.2.0)
+
+By default, Cortex uses fast regex patterns (zero cost, instant). Optionally, you can plug in **any OpenAI-compatible LLM** for deeper analysis:
+
+- **Ollama** (local, free): `mistral:7b`, `qwen2.5:7b`, `llama3.1:8b`
+- **OpenAI**: `gpt-4o-mini`, `gpt-4o`
+- **OpenRouter / vLLM / any OpenAI-compatible API**
+
+The LLM runs **on top of regex** — it enhances, never replaces. If the LLM is down, Cortex falls back silently to regex-only.
+
 ## 🎬 Demo
 
 Try the interactive demo — it simulates a real bilingual dev conversation and shows every Cortex feature in action:
@@ -236,6 +246,52 @@ Add to your OpenClaw config:
 }
 ```
 
+### LLM Enhancement (optional)
+
+Add an `llm` section to enable AI-powered analysis on top of regex:
+
+```json
+{
+  "plugins": {
+    "openclaw-cortex": {
+      "enabled": true,
+      "llm": {
+        "enabled": true,
+        "endpoint": "http://localhost:11434/v1",
+        "model": "mistral:7b",
+        "apiKey": "",
+        "timeoutMs": 15000,
+        "batchSize": 3
+      }
+    }
+  }
+}
+```
+
+| Setting | Default | Description |
+|---------|---------|-------------|
+| `enabled` | `false` | Enable LLM enhancement |
+| `endpoint` | `http://localhost:11434/v1` | Any OpenAI-compatible API endpoint |
+| `model` | `mistral:7b` | Model identifier |
+| `apiKey` | `""` | API key (optional, for cloud providers) |
+| `timeoutMs` | `15000` | Timeout per LLM call |
+| `batchSize` | `3` | Messages to buffer before calling the LLM |
+
+**Examples:**
+
+```jsonc
+// Ollama (local, free)
+{ "endpoint": "http://localhost:11434/v1", "model": "mistral:7b" }
+
+// OpenAI
+{ "endpoint": "https://api.openai.com/v1", "model": "gpt-4o-mini", "apiKey": "sk-..." }
+
+// OpenRouter
+{ "endpoint": "https://openrouter.ai/api/v1", "model": "meta-llama/llama-3.1-8b-instruct", "apiKey": "sk-or-..." }
+```
+
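+To sanity-check that an endpoint actually speaks the OpenAI chat API before enabling it, a plain `curl` probe works (a one-off test call, not part of Cortex — adjust host and model to your setup):
+
+```bash
+curl http://localhost:11434/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{"model": "mistral:7b", "messages": [{"role": "user", "content": "ping"}]}'
+```
+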
+The LLM receives batches of messages and returns structured JSON: detected threads, decisions, closures, and mood. Results are merged with regex findings — the LLM can catch things regex misses (nuance, implicit decisions, context-dependent closures).
+
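+For illustration, a response of roughly this shape — the exact keys are an assumption here, reconstructed from the field list under "LLM Enhancement Flow" below:
+
+```json
+{
+  "threads": [{ "title": "auth refactor", "status": "open", "summary": "weighing JWT vs. sessions" }],
+  "decisions": [{ "decision": "use JWT", "who": "user", "impact": "high" }],
+  "closures": ["flaky-test thread"],
+  "mood": "productive"
+}
+```
+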
 Restart OpenClaw after configuring.
 
 ## How It Works
@@ -273,28 +329,49 @@ Thread and decision detection supports English, German, or both:
 - **Topic patterns**: "back to", "now about", "jetzt zu", "bzgl."
 - **Mood detection**: frustrated, excited, tense, productive, exploratory
 
+### LLM Enhancement Flow
+
+When `llm.enabled: true`:
+
+```
+message_received → regex analysis (instant, always)
+                 → buffer message
+                 → batch full? → LLM call (async, fire-and-forget)
+                                → merge LLM results into threads + decisions
+                                → LLM down? → silent fallback to regex-only
+```
+
+The LLM sees a conversation snippet (configurable batch size) and returns:
+- **Threads**: title, status (open/closed), summary
+- **Decisions**: what was decided, who, impact level
+- **Closures**: which threads were resolved
+- **Mood**: overall conversation mood
+
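+In code terms, the flow is a buffered, fire-and-forget call whose errors are swallowed. A minimal TypeScript sketch — the names (`runRegexAnalysis`, `analyzeWithLLM`, `Findings`) are illustrative stand-ins, not Cortex's actual internals:
+
+```typescript
+type Findings = { threads: string[]; decisions: string[] };
+
+const BATCH_SIZE = 3;          // mirrors the `batchSize` default
+const buffer: string[] = [];
+
+function runRegexAnalysis(msg: string, state: Findings): void {
+  // stand-in for the instant regex pass
+  if (/decided/i.test(msg)) state.decisions.push(msg);
+}
+
+async function analyzeWithLLM(batch: string[]): Promise<Findings> {
+  // stand-in for the OpenAI-compatible call; may reject on timeout
+  return { threads: [], decisions: [] };
+}
+
+function onMessage(msg: string, state: Findings): void {
+  runRegexAnalysis(msg, state); // instant, always runs
+  buffer.push(msg);
+  if (buffer.length < BATCH_SIZE) return;
+  const batch = buffer.splice(0); // drain the buffer
+  // fire-and-forget: not awaited, so the hook never blocks
+  analyzeWithLLM(batch)
+    .then((llm) => state.decisions.push(...llm.decisions))
+    .catch(() => { /* LLM down → silently keep regex-only results */ });
+}
+```
+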
 ### Graceful Degradation
 
 - Read-only workspace → runs in-memory, skips writes
 - Corrupt JSON → starts fresh, next write recovers (sketch below)
 - Missing directories → creates them automatically
 - Hook errors → caught and logged, never crashes the gateway
+- LLM timeout/error → falls back to regex-only, no data loss
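+
+The corrupt-JSON case is the usual load-or-start-fresh pattern. A minimal sketch — `StateShape` and the recovery shape are assumptions, not Cortex's actual types:
+
+```typescript
+import { readFileSync } from "node:fs";
+
+type StateShape = { threads: unknown[] };
+
+function loadState(path: string): StateShape {
+  try {
+    return JSON.parse(readFileSync(path, "utf8")) as StateShape;
+  } catch {
+    // missing or corrupt file → start fresh; the next write recovers it
+    return { threads: [] };
+  }
+}
+```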
 
 ## Development
 
 ```bash
 npm install
-npm test # 270 tests
+npm test # 288 tests
 npm run typecheck # TypeScript strict mode
 npm run build # Compile to dist/
 ```
 
 ## Performance
 
-- Zero runtime dependencies (Node built-ins only)
-- All hook handlers are non-blocking (fire-and-forget)
+- Zero runtime dependencies (Node built-ins only — even LLM calls use `node:http`)
+- Regex analysis: instant, runs on every message
+- LLM enhancement: async, batched, fire-and-forget (never blocks hooks)
 - Atomic file writes via `.tmp` + rename (sketch below)
-- Tested with 270 unit + integration tests
+- Noise filter prevents garbage threads from polluting state
+- Tested with 288 unit + integration tests
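+
+The tmp-then-rename recipe works because `rename` within one filesystem is atomic on POSIX, so readers see either the old file or the new one, never a partial write. A minimal sketch (illustrative, not Cortex's actual code):
+
+```typescript
+import { renameSync, writeFileSync } from "node:fs";
+
+function atomicWriteJson(path: string, data: unknown): void {
+  const tmp = `${path}.tmp`;
+  writeFileSync(tmp, JSON.stringify(data, null, 2)); // write the temp file first
+  renameSync(tmp, path); // atomic swap into place
+}
+```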
 
 ## Architecture
 
@@ -148,6 +148,47 @@
           "description": "Language for regex pattern matching: English, German, or both"
         }
       }
+    },
+    "llm": {
+      "type": "object",
+      "additionalProperties": false,
+      "description": "Optional LLM enhancement — any OpenAI-compatible API (Ollama, OpenAI, OpenRouter, vLLM, etc.)",
+      "properties": {
+        "enabled": {
+          "type": "boolean",
+          "default": false,
+          "description": "Enable LLM-powered analysis on top of regex patterns"
+        },
+        "endpoint": {
+          "type": "string",
+          "default": "http://localhost:11434/v1",
+          "description": "OpenAI-compatible API endpoint"
+        },
+        "model": {
+          "type": "string",
+          "default": "mistral:7b",
+          "description": "Model identifier (e.g. mistral:7b, gpt-4o-mini)"
+        },
+        "apiKey": {
+          "type": "string",
+          "default": "",
+          "description": "API key (optional, for cloud providers)"
+        },
+        "timeoutMs": {
+          "type": "integer",
+          "minimum": 1000,
+          "maximum": 60000,
+          "default": 15000,
+          "description": "Timeout per LLM call in milliseconds"
+        },
+        "batchSize": {
+          "type": "integer",
+          "minimum": 1,
+          "maximum": 20,
+          "default": 3,
+          "description": "Number of messages to buffer before calling the LLM"
+        }
+      }
+    }
     }
   }
 }
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@vainplex/openclaw-cortex",
-  "version": "0.2.0",
+  "version": "0.2.1",
   "description": "OpenClaw plugin: conversation intelligence — thread tracking, decision extraction, boot context, pre-compaction snapshots",
   "type": "module",
   "main": "dist/index.js",