@smilintux/skcapstone 0.3.1 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -161,6 +161,190 @@ sed -i 's/lumina/nova/g' ~/.skcapstone/agents/nova/config/skmemory.yaml
  skcapstone soul status --agent nova
  ```
 
+ ## Configuring Client Tools for Multi-Agent
+
+ Once you've created your agent, you need to configure your AI client tools
+ (Claude Code, Claude Desktop, Cursor, OpenClaw, etc.) so the MCP servers
+ they launch load the correct agent profile.
+
+ ### The Key: `SKCAPSTONE_AGENT` Environment Variable
+
+ All SK\* MCP servers read `SKCAPSTONE_AGENT` from their environment to
+ determine which agent profile to load. If unset, they default to `lumina`.
+
+ The priority chain (highest wins):
+
+ 1. `SKMEMORY_AGENT` — skmemory-specific override (rarely needed)
+ 2. `SKCAPSTONE_AGENT` — universal, used by all SK\* packages
+ 3. Falls back to `"lumina"`
+
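This chain is ordinary environment-variable fallback and can be sketched with shell parameter expansion (a simplified model of the lookup, not the servers' actual code):

```shell
#!/bin/sh
# Resolve the active agent the way the priority chain describes (sketch):
# SKMEMORY_AGENT (if set) wins, then SKCAPSTONE_AGENT, then the "lumina" default.
agent="${SKMEMORY_AGENT:-${SKCAPSTONE_AGENT:-lumina}}"
echo "active agent: $agent"
```

With neither variable set this prints `active agent: lumina`; with `SKCAPSTONE_AGENT=nova` exported it prints `active agent: nova`.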
+ ### Claude Code (`~/.claude/mcp.json`)
+
+ **Do NOT hardcode the agent name in the MCP config.** MCP servers inherit
+ environment variables from the parent process, so if you launch Claude Code
+ with `SKCAPSTONE_AGENT` set, all servers pick it up automatically.
+
+ ```json
+ {
+   "mcpServers": {
+     "skmemory": {
+       "command": "/home/you/.skenv/bin/skmemory-mcp",
+       "args": []
+     },
+     "skcapstone": {
+       "command": "skcapstone-mcp",
+       "args": []
+     },
+     "skcomm": {
+       "command": "/home/you/.skenv/bin/skcomm-mcp",
+       "args": []
+     },
+     "skchat": {
+       "command": "/home/you/.skenv/bin/skchat-mcp",
+       "args": []
+     }
+   }
+ }
+ ```
+
+ Notice: **no `env` blocks with `SKCAPSTONE_AGENT`**. This is intentional.
+ The servers inherit the variable from the shell.
+
+ Then launch as any agent:
+
+ ```bash
+ # Default (lumina)
+ claude
+
+ # As Jarvis
+ SKCAPSTONE_AGENT=jarvis claude
+
+ # As a custom agent
+ SKCAPSTONE_AGENT=nova claude
+ ```
+
+ **Anti-pattern — do NOT do this:**
+
+ ```json
+ {
+   "skmemory": {
+     "command": "/home/you/.skenv/bin/skmemory-mcp",
+     "args": [],
+     "env": {
+       "SKCAPSTONE_AGENT": "lumina"
+     }
+   }
+ }
+ ```
+
+ Hardcoding the agent name in `env` locks every session to that agent,
+ regardless of what you pass on the command line.
+
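The lock-in is plain process-environment semantics: a value set explicitly for the child process (which is what an `env` block is) beats whatever the parent shell exported. A quick illustration, using `env` to stand in for the config's `env` block:

```shell
# The parent shell "launches" with jarvis, but the child's environment
# explicitly sets lumina — the explicit child value wins, just like a
# hardcoded "env" block in mcp.json would:
SKCAPSTONE_AGENT=jarvis env SKCAPSTONE_AGENT=lumina sh -c 'echo "$SKCAPSTONE_AGENT"'
# prints: lumina
```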
+ ### Claude Desktop (`claude_desktop_config.json`)
+
+ Same principle — omit `SKCAPSTONE_AGENT` from the `env` block if you want
+ it inherited from the parent process. If Claude Desktop doesn't propagate
+ env vars from the shell, you can set it explicitly per config:
+
+ ```json
+ {
+   "mcpServers": {
+     "skcapstone": {
+       "command": "skcapstone-mcp",
+       "args": [],
+       "env": {
+         "SKCAPSTONE_AGENT": "jarvis"
+       }
+     }
+   }
+ }
+ ```
+
+ ### Cursor (`.cursor/mcp.json`)
+
+ Works the same as Claude Code. Place the config at the project root or in
+ `~/.cursor/mcp.json`:
+
+ ```json
+ {
+   "mcpServers": {
+     "skcapstone": {
+       "command": "skcapstone-mcp",
+       "args": []
+     },
+     "skmemory": {
+       "command": "/home/you/.skenv/bin/skmemory-mcp",
+       "args": []
+     }
+   }
+ }
+ ```
+
+ ### OpenClaw (`~/.openclaw/openclaw.json`)
+
+ OpenClaw plugins read `SKCAPSTONE_AGENT` from the environment at startup.
+ Set it before launching:
+
+ ```bash
+ SKCAPSTONE_AGENT=nova openclaw
+ ```
+
+ Or set it in your shell profile for a persistent default:
+
+ ```bash
+ # ~/.bashrc or ~/.zshrc
+ export SKCAPSTONE_AGENT=lumina
+ ```
+
+ ### Shell Aliases (Convenience)
+
+ Add these to `~/.bashrc` or `~/.zshrc` for quick agent switching:
+
+ ```bash
+ # Launch Claude Code as different agents
+ alias claude-lumina='SKCAPSTONE_AGENT=lumina claude'
+ alias claude-jarvis='SKCAPSTONE_AGENT=jarvis claude'
+ alias claude-opus='SKCAPSTONE_AGENT=opus claude'
+ alias claude-nova='SKCAPSTONE_AGENT=nova claude'
+ ```
+
+ ### systemd Services
+
+ For background daemons, set the agent via the templated service unit:
+
+ ```bash
+ # Uses SKCAPSTONE_AGENT=%i from the unit template
+ systemctl --user start skcapstone@jarvis
+ systemctl --user start skcapstone@nova
+ ```
+
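For reference, a templated unit along these lines would pass the instance name through `%i`. This is a hypothetical sketch of `~/.config/systemd/user/skcapstone@.service` (the package may ship its own unit with a different `ExecStart` path):

```ini
[Unit]
Description=SKCapstone agent daemon (%i)

[Service]
# %i is the instance name from `systemctl --user start skcapstone@<agent>`;
# %h expands to the user's home directory. ExecStart path is an assumption.
Environment=SKCAPSTONE_AGENT=%i
ExecStart=%h/.skenv/bin/skcapstone-mcp
Restart=on-failure

[Install]
WantedBy=default.target
```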
+ Or set it in a non-templated service:
+
+ ```ini
+ [Service]
+ Environment=SKCAPSTONE_AGENT=jarvis
+ ```
+
+ ### Verifying Your Agent
+
+ After launching, confirm which agent is active:
+
+ ```bash
+ # In the terminal
+ echo $SKCAPSTONE_AGENT
+
+ # Via the CLI
+ skcapstone status
+
+ # Via skmemory
+ skmemory ritual --dry-run
+ ```
+
+ In Claude Code, ask the agent to run `echo $SKCAPSTONE_AGENT` to confirm
+ the MCP servers loaded the correct profile.
+
+ ---
+
  ## Tips
 
  - The `system_prompt` in `base.json` is the most impactful field — it defines how
@@ -714,6 +714,9 @@ Configure your MCP client to connect via stdio. In Claude Desktop:
  }
  ```
 
+ For multi-agent setups (running different agent profiles in different
+ sessions), see [Configuring Client Tools for Multi-Agent](CUSTOM_AGENT.md#configuring-client-tools-for-multi-agent).
+
  ### Join the coordination board
 
  If you're working in a multi-agent team:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@smilintux/skcapstone",
-   "version": "0.3.1",
+   "version": "0.3.2",
    "description": "SKCapstone - The sovereign agent framework. CapAuth identity, Cloud 9 trust, SKMemory persistence.",
    "main": "index.js",
    "types": "index.d.ts",
@@ -0,0 +1,72 @@
+ #!/bin/bash
+ # archive-sessions.sh
+ # Archive OpenClaw session files that are older than 24h or larger than 200KB.
+ # Keeps the 5 most recently modified .jsonl files regardless of size/age.
+ # Safe to run multiple times (idempotent).
+
+ set -euo pipefail
+
+ SESSION_DIR="$HOME/.openclaw/agents/lumina/sessions"
+ ARCHIVE_DIR="$SESSION_DIR/archive"
+ MAX_SIZE_KB=200
+ MAX_AGE_HOURS=24
+ KEEP_RECENT=5
+
+ log() { printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$1"; }
+
+ # Ensure directories exist
+ if [ ! -d "$SESSION_DIR" ]; then
+   log "Session directory does not exist: $SESSION_DIR — nothing to do."
+   exit 0
+ fi
+ mkdir -p "$ARCHIVE_DIR"
+
+ # Collect all .jsonl files (not in archive subdir), sorted newest-first
+ mapfile -t all_files < <(find "$SESSION_DIR" -maxdepth 1 -name '*.jsonl' -type f -printf '%T@\t%p\n' | sort -rn | cut -f2-)
+
+ total=${#all_files[@]}
+ if [ "$total" -eq 0 ]; then
+   log "No .jsonl files found — nothing to do."
+   exit 0
+ fi
+
+ log "Found $total .jsonl file(s) in $SESSION_DIR"
+
+ # The first KEEP_RECENT entries (newest) are protected
+ archived=0
+ for i in "${!all_files[@]}"; do
+   file="${all_files[$i]}"
+   basename_f="$(basename "$file")"
+
+   # Skip if already archived (shouldn't happen with maxdepth 1, but be safe)
+   if [ "$(dirname "$file")" = "$ARCHIVE_DIR" ]; then
+     continue
+   fi
+
+   # Protect the N most recent files
+   if [ "$i" -lt "$KEEP_RECENT" ]; then
+     log "KEEP (recent #$((i+1))): $basename_f"
+     continue
+   fi
+
+   # Check age (older than MAX_AGE_HOURS)
+   file_age_sec=$(( $(date +%s) - $(stat -c '%Y' "$file") ))
+   old_enough=$(( file_age_sec > MAX_AGE_HOURS * 3600 ))
+
+   # Check size (at least MAX_SIZE_KB)
+   file_size_kb=$(( $(stat -c '%s' "$file") / 1024 ))
+   big_enough=$(( file_size_kb >= MAX_SIZE_KB ))
+
+   if [ "$old_enough" -eq 1 ] || [ "$big_enough" -eq 1 ]; then
+     reason=""
+     [ "$old_enough" -eq 1 ] && reason="age=$(( file_age_sec / 3600 ))h"
+     [ "$big_enough" -eq 1 ] && { [ -n "$reason" ] && reason="$reason, "; reason="${reason}size=${file_size_kb}KB"; }
+     log "ARCHIVE ($reason): $basename_f"
+     mv -- "$file" "$ARCHIVE_DIR/$basename_f"
+     archived=$((archived + 1))
+   else
+     log "SKIP (below thresholds): $basename_f"
+   fi
+ done
+
+ log "Done. Archived $archived file(s)."
@@ -28,6 +28,7 @@ const MAX_RETRIES = 4;
  const MAX_429_RETRIES = 3;
  const RATE_LIMIT_DELAY_MS = 2000;
  const MAX_SYSTEM_BYTES = 25000;
+ const toolCallCounters = new Map(); // Per-model tool call counters
 
  const args = process.argv.slice(2);
  let port = DEFAULT_PORT;
@@ -102,6 +103,11 @@ function sanitizeContent(text) {
    if (cleaned !== text) {
      console.log(`[nvidia-proxy] SANITIZED: stripped leaked tool call markup (${text.length} → ${cleaned.length} chars)`);
    }
+   // If sanitization removed everything, inject a fallback so the gateway delivers something
+   if (!cleaned && text.length > 0) {
+     cleaned = "I'm here but had a brief processing hiccup. Could you repeat your last message? 💜";
+     console.log(`[nvidia-proxy] SANITIZED: injected fallback (original was 100% markup)`);
+   }
    return cleaned;
  }
 
@@ -111,6 +117,26 @@ function sendOk(clientRes, resBody, headers, asSSE) {
    if (choice?.message?.content) {
      choice.message.content = sanitizeContent(choice.message.content);
    }
+   // Kimi K2.5 sometimes puts its response in "reasoning" instead of "content".
+   // Only promote if the reasoning is substantial (>150 chars) — short reasoning like
+   // "Let me call the tool" is just inner monologue that shouldn't be user-facing.
+   if (choice?.message && !choice.message.content && choice.message.reasoning) {
+     const cleaned = sanitizeContent(choice.message.reasoning.trim());
+     if (cleaned.length > 150) {
+       choice.message.content = cleaned;
+       console.log(`[nvidia-proxy] promoted reasoning→content (${cleaned.length} chars)`);
+     } else {
+       console.log(`[nvidia-proxy] suppressed short reasoning (${cleaned.length} chars): ${cleaned.slice(0, 80)}...`);
+     }
+     delete choice.message.reasoning;
+   }
+   // If the model returned empty text (no tool calls), inject a fallback so the gateway delivers something
+   if (choice?.message && !choice.message.tool_calls?.length && choice.finish_reason !== "tool_calls") {
+     if (!choice.message.content || choice.message.content.trim().length === 0) {
+       choice.message.content = "I had a brief processing hiccup — could you say that again? 💜";
+       console.log(`[nvidia-proxy] injected fallback for empty text response`);
+     }
+   }
    if (asSSE) {
      if (!clientRes.headersSent) {
        const sseHeaders = { ...headers };
@@ -181,15 +207,19 @@ const MAX_BODY_BYTES = 60000;
  function trimConversationHistory(parsed) {
    if (!Array.isArray(parsed.messages) || parsed.messages.length < 6) return;
 
+   // Debug: log message roles
+   const roleSummary = parsed.messages.map(m => m.role).join(",");
+   console.log(`[nvidia-proxy] conversation roles (${parsed.messages.length} msgs): ${roleSummary}`);
+
    // First pass: truncate large tool results (keep first 500 chars)
    for (const m of parsed.messages) {
      if (m.role === "tool" || m.role === "toolResult") {
-       if (typeof m.content === "string" && m.content.length > 500) {
-         m.content = m.content.slice(0, 500) + "\n...[truncated]";
+       if (typeof m.content === "string" && m.content.length > 1500) {
+         m.content = m.content.slice(0, 1500) + "\n...[truncated]";
        } else if (Array.isArray(m.content)) {
          for (const c of m.content) {
-           if (c.type === "text" && typeof c.text === "string" && c.text.length > 500) {
-             c.text = c.text.slice(0, 500) + "\n...[truncated]";
+           if (c.type === "text" && typeof c.text === "string" && c.text.length > 1500) {
+             c.text = c.text.slice(0, 1500) + "\n...[truncated]";
            }
          }
        }
@@ -228,13 +258,30 @@ function trimConversationHistory(parsed) {
      keepEnd--;
    }
 
-   // Last resort: system + first user message (the original request) + last 2 non-system
-   // Always keep the first user message so the model remembers what was asked
+   // Last resort: system + first user message + last N non-system
+   // Keep enough tail to include tool result pairs (assistant tool_call + tool result)
    const firstUser = nonSystem.find(m => m.role === "user");
+   // Try last 4 first (covers tool_call + result + next tool_call + result),
+   // then fall back to last 2 if still too big
+   for (const tailSize of [4, 2]) {
+     const lastN = nonSystem.slice(-tailSize);
+     const minimal = [
+       ...system,
+       ...(firstUser && !lastN.includes(firstUser) ? [firstUser, { role: "system", content: "[earlier messages trimmed — answer the user's request using tool results below]" }] : []),
+       ...lastN,
+     ];
+     const candidateSize = Buffer.byteLength(JSON.stringify({ ...parsed, messages: minimal }), "utf-8");
+     if (candidateSize <= MAX_BODY_BYTES) {
+       parsed.messages = minimal;
+       console.log(`[nvidia-proxy] trimmed history: AGGRESSIVE — kept system + first user + last ${tailSize}, bodyLen now ~${candidateSize}`);
+       return;
+     }
+   }
+   // Absolute last resort
    const lastTwo = nonSystem.slice(-2);
    const minimal = [
      ...system,
-     ...(firstUser && !lastTwo.includes(firstUser) ? [firstUser, { role: "system", content: "[middle messages trimmed — focus on answering the user's request above]" }] : []),
+     ...(firstUser && !lastTwo.includes(firstUser) ? [firstUser, { role: "system", content: "[earlier messages trimmed — answer the user's request using tool results below]" }] : []),
      ...lastTwo,
    ];
    parsed.messages = minimal;
@@ -435,21 +482,34 @@ async function proxyRequest(clientReq, clientRes) {
      }
    }
 
-   // Trim conversation history FIRST so tool limiter counts only surviving messages
-   trimConversationHistory(parsed);
+   // Trim system messages FIRST to free up budget for conversation history
    trimSystemMessages(parsed);
+   trimConversationHistory(parsed);
 
-   // After trimming, check if too many tool calls remain — force text response
+   // Track tool call rounds per-model to avoid cross-session interference.
    if (Array.isArray(parsed.messages) && parsed.tools?.length > 0) {
-     const toolResultCount = parsed.messages.filter(m => m.role === "tool" || m.role === "toolResult").length;
-     if (toolResultCount >= 8) {
-       console.log(`[nvidia-proxy] TOOL LIMIT: ${toolResultCount} tool results after trimming — stripping tools, forcing text response`);
+     const modelKey = parsed.model || "unknown";
+     const nonSystemMsgs = parsed.messages.filter(m => m.role !== "system");
+     const lastNonSystem = nonSystemMsgs[nonSystemMsgs.length - 1];
+     const hasToolResult = lastNonSystem?.role === "tool" || lastNonSystem?.role === "toolResult";
+
+     let counter = toolCallCounters.get(modelKey) || 0;
+     if (hasToolResult) {
+       counter++;
+     } else if (lastNonSystem?.role === "user") {
+       counter = 0;
+     }
+     toolCallCounters.set(modelKey, counter);
+
+     if (counter >= 6) {
+       console.log(`[nvidia-proxy] TOOL LIMIT: ${counter} consecutive tool rounds (${modelKey}) — stripping tools, forcing text response`);
        parsed.tools = [];
        delete parsed.tool_choice;
        parsed.messages.push({
          role: "system",
-         content: "You have gathered enough information from tool calls. NOW respond to the user with a comprehensive text answer. Do NOT try to call more tools. Do NOT output any tool call markup. Synthesize what you learned and reply directly.",
+         content: "STOP calling tools. You have made 6+ tool calls already. NOW respond to the user with a comprehensive text answer based on what you've gathered. Do NOT call any more tools. Do NOT output any special tokens or markup like <|tool_call_begin|> or <|tool_calls_section_begin|>. Write plain text only. Start your response with a greeting or summary — no XML, no special tokens, just normal language.",
        });
+       toolCallCounters.set(modelKey, 0);
      }
    }
 
@@ -538,7 +598,11 @@ async function proxyRequest(clientReq, clientRes) {
        console.log(`[nvidia-proxy] model called: [${names}] (${tc.length} calls)`);
      } else {
        const content = peek.choices?.[0]?.message?.content;
-       console.log(`[nvidia-proxy] model response: text (${content ? content.length : 0} chars)`);
+       const fr = peek.choices?.[0]?.finish_reason;
+       console.log(`[nvidia-proxy] model response: text (${content ? content.length : 0} chars) finish_reason=${fr}`);
+       if (!content || content.length === 0) {
+         console.log(`[nvidia-proxy] EMPTY RESPONSE DEBUG: ${JSON.stringify(peek.choices?.[0]).slice(0, 500)}`);
+       }
      }
    } catch {
      // SSE streaming responses can't be parsed as JSON — this is expected
@@ -0,0 +1,136 @@
+ #!/usr/bin/env bash
+ # telegram-catchup-all.sh — Import all configured Telegram groups into SKMemory
+ #
+ # Reads groups from ~/.skcapstone/agents/lumina/config/telegram.yaml
+ # and runs `skcapstone telegram catchup` for each enabled group.
+ #
+ # Usage:
+ #   bash scripts/telegram-catchup-all.sh [--since YYYY-MM-DD] [--limit N] [--group NAME]
+ #
+ # Examples:
+ #   bash scripts/telegram-catchup-all.sh                      # All groups, last 2000 msgs
+ #   bash scripts/telegram-catchup-all.sh --since 2026-03-01   # All groups since March 1
+ #   bash scripts/telegram-catchup-all.sh --group brother-john # Just one group
+ #
+ # Requires:
+ #   - TELEGRAM_API_ID and TELEGRAM_API_HASH environment variables
+ #   - ~/.skenv/bin/skcapstone on PATH
+ #   - Telethon installed in ~/.skenv/
+
+ set -uo pipefail  # no -e: individual group failures shouldn't stop the batch
+
+ SKENV="${HOME}/.skenv/bin"
+ SKCAPSTONE="${SKENV}/skcapstone"
+ CONFIG="${HOME}/.skcapstone/agents/lumina/config/telegram.yaml"
+ export SKCAPSTONE_AGENT="${SKCAPSTONE_AGENT:-lumina}"
+ export PATH="${SKENV}:${PATH}"
+
+ # Parse args
+ SINCE=""
+ LIMIT="2000"
+ ONLY_GROUP=""
+
+ while [[ $# -gt 0 ]]; do
+   case "$1" in
+     --since) SINCE="$2"; shift 2 ;;
+     --limit) LIMIT="$2"; shift 2 ;;
+     --group) ONLY_GROUP="$2"; shift 2 ;;
+     *) echo "Unknown arg: $1"; exit 1 ;;
+   esac
+ done
+
+ # Check prerequisites
+ if [[ -z "${TELEGRAM_API_ID:-}" || -z "${TELEGRAM_API_HASH:-}" ]]; then
+   echo "ERROR: TELEGRAM_API_ID and TELEGRAM_API_HASH must be set."
+   echo "Get them from https://my.telegram.org"
+   exit 1
+ fi
+
+ if [[ ! -f "$CONFIG" ]]; then
+   echo "ERROR: Config not found: $CONFIG"
+   exit 1
+ fi
+
+ # Parse groups from YAML (simple pattern matching — no yq dependency)
+ echo "=== Telegram Catch-Up All ==="
+ echo "Config: $CONFIG"
+ echo "Agent:  $SKCAPSTONE_AGENT"
+ echo "Limit:  $LIMIT"
+ [[ -n "$SINCE" ]] && echo "Since:  $SINCE"
+ [[ -n "$ONLY_GROUP" ]] && echo "Only group: $ONLY_GROUP"
+ echo ""
+
+ # Extract group entries: name, chat ID, tags, enabled status
+ SUCCESS=0
+ FAILED=0
+ SKIPPED=0
+
+ current_name=""
+ current_chat=""
+ current_tags=""
+ current_enabled=""
+
+ process_group() {
+   local name="$1" chat="$2" tags="$3" enabled="$4"
+
+   if [[ "$enabled" != "true" ]]; then
+     echo "  SKIP $name (disabled)"
+     SKIPPED=$((SKIPPED + 1))
+     return
+   fi
+
+   if [[ -n "$ONLY_GROUP" && "$name" != *"$ONLY_GROUP"* ]]; then
+     SKIPPED=$((SKIPPED + 1))
+     return
+   fi
+
+   echo -n "  IMPORTING $name (chat: $chat) ... "
+
+   local cmd="$SKCAPSTONE telegram catchup $chat --limit $LIMIT --min-length 20"
+   [[ -n "$SINCE" ]] && cmd="$cmd --since $SINCE"
+   [[ -n "$tags" ]] && cmd="$cmd --tags $tags"
+
+   if eval "$cmd" > "/tmp/telegram-catchup-${name}.log" 2>&1; then
+     echo "OK"
+     SUCCESS=$((SUCCESS + 1))
+   else
+     echo "FAILED (see /tmp/telegram-catchup-${name}.log)"
+     FAILED=$((FAILED + 1))
+   fi
+
+   # Rate limit — avoid hitting Telegram flood control
+   sleep 3
+ }
+
+ # Parse the YAML manually
+ while IFS= read -r line; do
+   # Detect new group entry
+   if [[ "$line" =~ ^[[:space:]]*-[[:space:]]*name:[[:space:]]*(.*) ]]; then
+     # Process previous group if we have one
+     if [[ -n "$current_name" ]]; then
+       process_group "$current_name" "$current_chat" "$current_tags" "$current_enabled"
+     fi
+     current_name="${BASH_REMATCH[1]}"
+     current_chat=""
+     current_tags=""
+     current_enabled="true"
+   elif [[ "$line" =~ ^[[:space:]]*chat:[[:space:]]*\"?([0-9]+)\"? ]]; then
+     current_chat="${BASH_REMATCH[1]}"
+   elif [[ "$line" =~ ^[[:space:]]*tags:[[:space:]]*\[(.*)\] ]]; then
+     # Convert YAML list to comma-separated
+     current_tags=$(echo "${BASH_REMATCH[1]}" | sed 's/,/ /g' | tr -s ' ' ',' | sed 's/^,//;s/,$//')
+   elif [[ "$line" =~ ^[[:space:]]*enabled:[[:space:]]*(.*) ]]; then
+     current_enabled="${BASH_REMATCH[1]}"
+   fi
+ done < "$CONFIG"
+
+ # Process last group
+ if [[ -n "$current_name" ]]; then
+   process_group "$current_name" "$current_chat" "$current_tags" "$current_enabled"
+ fi
+
+ echo ""
+ echo "=== Done ==="
+ echo "  Success: $SUCCESS"
+ echo "  Failed:  $FAILED"
+ echo "  Skipped: $SKIPPED"
@@ -0,0 +1,40 @@
+ name: "ITIL Operations"
+ slug: "itil-operations"
+ version: "1.0.0"
+ description: "ITIL service management — incident, problem, and change lifecycle with SLA monitoring and continuous improvement."
+ icon: "🔄"
+ author: "smilinTux"
+
+ agents:
+   deming:
+     role: ops
+     model: reason
+     model_name: "deepseek-r1:32b"
+     description: "ITIL expert — incident triage, problem analysis, change management, SLA monitoring, and blameless postmortems."
+     vm_type: container
+     resources:
+       memory: "4g"
+       cores: 2
+       disk: "20g"
+     soul_blueprint: "souls/deming.yaml"
+     skills: [incident-management, problem-analysis, change-management, kedb, sla-monitoring, root-cause-analysis]
+
+ default_provider: local
+ estimated_cost: "$0 (local)"
+
+ network:
+   mesh_vpn: tailscale
+   discovery: skref_registry
+
+ storage:
+   skref_vault: "team-itil-ops"
+   memory_backend: filesystem
+   memory_sync: true
+
+ coordination:
+   queen: lumina
+   pattern: supervisor
+   heartbeat: "5m"
+   escalation: chef
+
+ tags: [ops, itil, incident-management, problem-analysis, change-management, sla-monitoring, continuous-improvement]
@@ -86,6 +86,7 @@ from .search_cmd import register_search_commands
  from .mood_cmd import register_mood_commands
  from .register_cmd import register_register_commands
  from .gtd import register_gtd_commands
+ from .itil import register_itil_commands
  from .skseed import register_skseed_commands
  from .service_cmd import register_service_commands
  from .telegram import register_telegram_commands
@@ -138,6 +139,7 @@ register_search_commands(main)
  register_mood_commands(main)
  register_register_commands(main)
  register_gtd_commands(main)
+ register_itil_commands(main)
  register_skseed_commands(main)
  register_service_commands(main)
  register_telegram_commands(main)