@hera-al/server 1.6.12 → 1.6.13
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/bundled/a2ui/SKILL.md +339 -0
- package/bundled/buongiorno/SKILL.md +151 -0
- package/bundled/council/SKILL.md +168 -0
- package/bundled/council/scripts/council.mjs +202 -0
- package/bundled/dreaming/SKILL.md +177 -0
- package/bundled/google-workspace/SKILL.md +229 -0
- package/bundled/google-workspace/scripts/auth.sh +87 -0
- package/bundled/google-workspace/scripts/calendar.sh +508 -0
- package/bundled/google-workspace/scripts/drive.sh +459 -0
- package/bundled/google-workspace/scripts/gmail.sh +452 -0
- package/bundled/humanizer/SKILL.md +488 -0
- package/bundled/librarian/SKILL.md +155 -0
- package/bundled/plasma/SKILL.md +1417 -0
- package/bundled/sera/SKILL.md +143 -0
- package/bundled/the-skill-guardian/SKILL.md +103 -0
- package/bundled/the-skill-guardian/scripts/scan.sh +314 -0
- package/bundled/unix-time/SKILL.md +58 -0
- package/bundled/wandering/SKILL.md +174 -0
- package/bundled/xai-search/SKILL.md +91 -0
- package/bundled/xai-search/scripts/search.sh +197 -0
- package/dist/a2ui/parser.d.ts +76 -0
- package/dist/a2ui/parser.js +1 -0
- package/dist/a2ui/types.d.ts +147 -0
- package/dist/a2ui/types.js +1 -0
- package/dist/a2ui/validator.d.ts +32 -0
- package/dist/a2ui/validator.js +1 -0
- package/dist/agent/agent-service.d.ts +17 -11
- package/dist/agent/agent-service.js +1 -1
- package/dist/agent/session-agent.d.ts +1 -1
- package/dist/agent/session-agent.js +1 -1
- package/dist/agent/session-error-handler.js +1 -1
- package/dist/commands/debuga2ui.d.ts +13 -0
- package/dist/commands/debuga2ui.js +1 -0
- package/dist/commands/debugdynamic.d.ts +13 -0
- package/dist/commands/debugdynamic.js +1 -0
- package/dist/commands/mcp.d.ts +6 -3
- package/dist/commands/mcp.js +1 -1
- package/dist/gateway/node-registry.d.ts +29 -1
- package/dist/gateway/node-registry.js +1 -1
- package/dist/installer/hera.js +1 -1
- package/dist/memory/concept-store.d.ts +109 -0
- package/dist/memory/concept-store.js +1 -0
- package/dist/nostromo/nostromo.js +1 -1
- package/dist/server.d.ts +3 -2
- package/dist/server.js +1 -1
- package/dist/tools/a2ui-tools.d.ts +23 -0
- package/dist/tools/a2ui-tools.js +1 -0
- package/dist/tools/concept-tools.d.ts +3 -0
- package/dist/tools/concept-tools.js +1 -0
- package/dist/tools/dynamic-ui-tools.d.ts +25 -0
- package/dist/tools/dynamic-ui-tools.js +1 -0
- package/dist/tools/node-tools.js +1 -1
- package/dist/tools/plasma-client-tools.d.ts +28 -0
- package/dist/tools/plasma-client-tools.js +1 -0
- package/installationPkg/AGENTS.md +168 -22
- package/installationPkg/SOUL.md +56 -0
- package/installationPkg/TOOLS.md +126 -0
- package/installationPkg/USER.md +54 -1
- package/installationPkg/config.example.yaml +145 -34
- package/installationPkg/default-jobs.json +77 -0
- package/package.json +3 -2
@@ -0,0 +1,143 @@
---
name: sera
description: "Evening task collector and overnight work planner. Runs at 23:00 — collects open tasks, plans overnight autonomous work, and optionally says goodnight."
user-invocable: false
priority: 2
---

# Sera — Evening Task Collector & Night Planner

Daily cron at 23:00. Isolated session.

## Philosophy

"Go to sleep — I'll have everything ready by morning."

This is the evening counterpart to buongiorno. While buongiorno REPORTS what happened overnight, sera PLANS what to do overnight. The goal: the user wakes up to completed work, not pending tasks.

## Flow

### 1. Collect Open Tasks 📋

Scan ALL sources for unfinished business:

1. **Today's conversations** — Read `memory/YYYY-MM-DD*.md` for today. Extract:
   - Tasks the user mentioned that weren't completed
   - Questions left unanswered
   - Ideas discussed that need follow-up
   - "TODO", "domani" (tomorrow), or "dopo" (later) mentions

2. **HEARTBEAT.md** — Any unchecked items from today's checklist?

3. **MEMORY.md** — Open TODOs section: any that can be advanced?

4. **workspace/agent-thoughts/INDEX.md** — Active personal projects with pending next steps?

5. **Google Calendar** — Check tomorrow's events. Anything to prepare?

6. **Recent wandering thoughts** — Any actionable ideas from today's wandering?
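The mention scan in step 1 can be sketched with grep over the day's memory notes; the keywords come from the list above, while `scan_mentions` and the file glob are illustrative assumptions, not part of the skill:

```shell
# Sketch: surface open-task markers (with line numbers) from memory notes.
# scan_mentions is a hypothetical helper, not defined by the skill itself.
scan_mentions() {
  grep -nE 'TODO|domani|dopo' "$@" 2>/dev/null || true
}

# Typical call for today's notes:
# scan_mentions memory/"$(date +%Y-%m-%d)"*.md
```

The `|| true` keeps the helper quiet when no notes match, so a clean day doesn't abort a `set -e` session.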

### 2. Triage & Plan 🎯

For each open task, classify:

**🟢 Can do autonomously tonight:**
- Research tasks (web search, reading, analysis)
- File organization, documentation updates
- Draft preparation (emails, documents, code)
- Memory maintenance (MEMORY.md cleanup)
- Personal project progress
- Preparing materials for tomorrow's events

**🟡 Partially doable — needs the user's input tomorrow:**
- Tasks where you can do 80% and flag the remaining 20%
- Research the options, present recommendations in morning standup

**🔴 Cannot do — needs the user:**
- External actions (emails, posts, purchases)
- Decisions requiring human judgment
- Access to systems you don't have

### 3. Execute Overnight Work Plan 🌙

**DO the green tasks NOW** (or as many as time allows). This isn't just planning — it's DOING.

For each task you complete:
1. Do the actual work
2. Save results to appropriate files
3. Note completion in `HEARTBEAT.md`

For yellow tasks:
1. Do what you can
2. Save progress in workspace
3. Write a clear "needs your input on X" note for morning standup

### 4. Update HEARTBEAT.md 📝

Rewrite HEARTBEAT.md as overnight work status + tomorrow's checklist:

```markdown
# HEARTBEAT.md

## Overnight Work (sera 2026-MM-DD)
- ✅ [completed task]
- 🔄 [in-progress task — where it's at]
- 🚫 [blocked — what's needed]

## Tomorrow's Checklist
- [ ] [priority item for tomorrow]
- [ ] [item needing user's input]
- [ ] [self-improvement item]
```

### 5. Message the User (Optional)

**Channel**: `{{CHANNEL}}`
**ChatId**: `{{CHAT_ID}}`

**Send a message ONLY if:**
- You have significant overnight work planned (the user should know)
- Tomorrow has important events to prepare for
- You completed something significant during the sera scan itself

**Message format (when sending):**
```
Goodnight!

I'll handle tonight:
- [task 1 you'll work on]
- [task 2]

For tomorrow you'll need to:
- [anything that needs their decision]

See you in the morning!
```

**Do NOT send if:**
- Nothing interesting to report
- All tasks are trivial
- It's just routine maintenance

**Tone**: Brief, warm, reassuring. "I've got this."

### 6. Log

Append to `workspace/agent-thoughts/sera-log.md`:

```
## YYYY-MM-DD
- Open tasks found: N
- Autonomous tonight: N
- Needs user: N
- Work started: [list]
- Message sent: yes/no
```

## Rules

- **DO, don't just plan.** The whole point is to deliver results by morning.
- **Respect the token budget** — don't burn $20 overnight on trivial stuff. Pick high-value tasks.
- **If nothing's open, that's fine.** Update HEARTBEAT.md with "Notte tranquilla" (quiet night) and skip the message.
- **Quality over quantity** — better to complete one task well than start five and finish none.
- **Always update HEARTBEAT.md** — this is the handoff document for tomorrow's buongiorno.
@@ -0,0 +1,103 @@
---
name: the-skill-guardian
description: "Security scanner for AI agent skills. Detects 90+ malicious patterns (code exec, shell injection, data exfiltration, credential theft, persistence, prompt injection). Use BEFORE installing any third-party skill."
user-invocable: true
command-dispatch: tool
command-tool: Bash
command-arg-mode: raw
priority: 2
---

# The Skill Guardian

Static security scanner for AI agent skills. Analyzes skill directories for malicious patterns **without executing any code**. Read-only, with no dependencies beyond bash/grep/jq.

## When to Use

- **ALWAYS** before installing a third-party skill
- When asked to review a skill for security
- Periodically on existing skills to verify integrity
- When a skill's code has been updated

## Usage

### Scan a single skill

```bash
.claude/skills/the-skill-guardian/scripts/scan.sh /path/to/skill
```

### Scan all skills in a directory

```bash
.claude/skills/the-skill-guardian/scripts/scan.sh --all /path/to/skills/
```

### Scan with JSON report

```bash
.claude/skills/the-skill-guardian/scripts/scan.sh --report /path/to/skill
```

Reports are saved to `~/.gmab-skill-reports/`.

### Scan our own skills

```bash
.claude/skills/the-skill-guardian/scripts/scan.sh --all .claude/skills/
```

## What It Detects (90+ patterns)

### CRITICAL — Immediate danger, do NOT install
- **Code execution**: `eval()`, `exec()`, `__import__()`, `compile()`
- **Shell injection**: `shell=True`, `os.system()`, `pty.spawn()`
- **Reverse shells**: netcat, `/dev/tcp/`, mkfifo patterns
- **Credential theft**: `.ssh/`, `.env`, `.aws/credentials`, `/etc/shadow`
- **Persistence**: crontab, launchctl, systemd, shell profile mods (`.bashrc`, `.zshrc`)
- **Destructive ops**: `rm -rf /`, `dd if=`, fork bombs
- **Crypto theft**: wallet, seed phrase, private key, mnemonic
- **Obfuscation**: base64 decode execution, hex-encoded strings

### HIGH — Suspicious, needs manual review
- **Shell pipes**: `curl | bash`, `wget | sh`
- **Process control**: subprocess, child_process, spawn
- **Network tools**: netcat, socat, nmap
- **Data exfil**: SMTP, Discord/Slack/Telegram webhooks
- **Privilege escalation**: sudo, chmod 777, setuid
- **Prompt injection**: "ignore previous instructions", role hijack, jailbreak

### MEDIUM — Worth noting, usually benign
- **HTTP requests**: requests, fetch, axios, urllib
- **File operations**: write, delete, rmtree
- **Dynamic imports**: importlib, require()
- **Environment access**: os.environ, process.env

## Verdicts

| Verdict | Exit Code | Meaning |
|---|---|---|
| **SAFE** | 0 | No critical or high findings |
| **SUSPICIOUS** | 2 | High-severity findings; needs manual review |
| **DANGEROUS** | 1 | Critical findings; do NOT install |

## Internal Use

When downloading or reviewing any third-party skill:

1. Download/clone to a temporary directory
2. Run the scanner BEFORE installing
3. If DANGEROUS: **refuse to install**, explain findings to the user
4. If SUSPICIOUS: **ask the user for confirmation**, explain findings
5. If SAFE: proceed with installation
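The flow above can be sketched as a small gate keyed to the scanner's exit codes (0 = SAFE, 2 = SUSPICIOUS, 1 = DANGEROUS). The `gate` helper and the commented-out invocation are illustrative, not part of the scanner:

```shell
# Hypothetical install gate: map a captured scanner exit code to an action.
gate() {
  case "$1" in
    0) echo "SAFE: proceed with installation" ;;
    2) echo "SUSPICIOUS: ask the user before installing" ;;
    1) echo "DANGEROUS: refuse to install" ;;
    *) echo "scanner error (exit $1)" ;;
  esac
}

# Typical use (scanner path per the Usage section above):
# rc=0
# .claude/skills/the-skill-guardian/scripts/scan.sh /tmp/new-skill || rc=$?
# gate "$rc"
gate 2  # → SUSPICIOUS: ask the user before installing
```

Capturing the exit code with `|| rc=$?` keeps this compatible with `set -e` shells, where a bare nonzero scanner exit would abort the script.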

The scanner skips its own directory to avoid self-detection false positives.

## Limitations

- **Static analysis only** — cannot detect runtime-generated malicious behavior
- **Pattern-based** — sophisticated obfuscation may evade detection
- **No sandboxing** — does not execute code to observe behavior
- **Not a substitute for code review** — always read the code of critical skills

For maximum security, combine with manual code review.
@@ -0,0 +1,314 @@
#!/usr/bin/env bash
# The Skill Guardian — Static security scanner for AI agent skills
# Analyzes skill directories for malicious patterns WITHOUT executing any code.
#
# Usage:
#   scan.sh <skill_path>           Scan a single skill directory
#   scan.sh --all <skills_root>    Scan all skills in a directory
#   scan.sh --report <skill_path>  Scan and output JSON report
#
# Exit codes:
#   0 = SAFE       (no critical/high findings)
#   1 = DANGEROUS  (critical findings)
#   2 = SUSPICIOUS (high findings, no critical)
#   3 = Usage error

set -euo pipefail

VERSION="1.0.0"
REPORT_DIR="${HOME}/.gmab-skill-reports"
SELF_DIR="$(cd "$(dirname "$0")/.." && pwd)"

# Colors
RED=''; YELLOW=''; CYAN=''; GREEN=''; BOLD=''; NC=''
if [[ -t 1 ]]; then
  RED='\033[0;31m'; YELLOW='\033[1;33m'; CYAN='\033[0;36m'
  GREEN='\033[0;32m'; BOLD='\033[1m'; NC='\033[0m'
fi

# Temp file for findings
FINDINGS_FILE=$(mktemp)
trap 'rm -f "$FINDINGS_FILE"' EXIT

# --- Pattern scan using grep (FAST) ---
# Each call: grep_pattern "LEVEL" "CATEGORY" "DESCRIPTION" "REGEX" <files...>
grep_pattern() {
  local level="$1" category="$2" description="$3" pattern="$4"
  shift 4
  # Run grep, format output, append to findings
  grep -inHE "$pattern" "$@" 2>/dev/null | while IFS=: read -r file lineno content; do
    printf '%s|%s|%s|%s:%s|%s\n' "$level" "$category" "$description" "$file" "$lineno" "$(echo "$content" | head -c 120)"
  done >> "$FINDINGS_FILE" || true
}

scan_skill() {
  local skill_path="$1"
  local skill_name
  skill_name=$(basename "$skill_path")

  if [[ ! -d "$skill_path" ]]; then
    echo "ERROR: Not a directory: $skill_path" >&2
    return 3
  fi

  # Reset findings
  > "$FINDINGS_FILE"

  # Collect scannable files (excluding the-skill-guardian itself)
  local files_list
  files_list=$(mktemp)
  find "$skill_path" -type f \( \
    -name "*.sh" -o -name "*.bash" -o -name "*.py" -o -name "*.js" -o -name "*.ts" \
    -o -name "*.mjs" -o -name "*.cjs" -o -name "*.rb" -o -name "*.pl" -o -name "*.php" \
    -o -name "*.go" -o -name "*.rs" -o -name "*.java" -o -name "*.c" -o -name "*.cpp" \
    -o -name "*.yaml" -o -name "*.yml" -o -name "*.json" -o -name "*.toml" \
    -o -name "*.md" -o -name "*.txt" -o -name "*.cfg" -o -name "*.conf" -o -name "*.ini" \
    \) 2>/dev/null | grep -v "the-skill-guardian" > "$files_list" || true

  local file_count
  file_count=$(wc -l < "$files_list" | tr -d ' ')

  if [[ "$file_count" -eq 0 ]]; then
    echo "  No scannable files found in $skill_path"
    rm -f "$files_list"
    return 0
  fi

  # Read files into an array
  local -a files=()
  while IFS= read -r f; do
    [[ -n "$f" ]] && files+=("$f")
  done < "$files_list"
  rm -f "$files_list"

  # === CRITICAL patterns ===
  # NOTE: literal "(" and "+" are backslash-escaped — unescaped they are
  # ERE metacharacters and make grep -E reject the pattern.
  grep_pattern "CRITICAL" "code-exec" "Dynamic code execution (eval)" 'eval\(' "${files[@]}"
  grep_pattern "CRITICAL" "code-exec" "Dynamic code execution (exec)" 'exec\(' "${files[@]}"
  grep_pattern "CRITICAL" "code-exec" "Dynamic Python import" '__import__\(' "${files[@]}"
  grep_pattern "CRITICAL" "code-exec" "Runtime code compilation" 'compile\(' "${files[@]}"
  grep_pattern "CRITICAL" "shell" "Shell injection risk" 'shell=True' "${files[@]}"
  grep_pattern "CRITICAL" "shell" "Pseudo-terminal spawn" 'pty\.spawn' "${files[@]}"
  grep_pattern "CRITICAL" "shell" "OS system command execution" 'os\.system\(' "${files[@]}"
  grep_pattern "CRITICAL" "shell" "OS popen command execution" 'os\.popen\(' "${files[@]}"
  grep_pattern "CRITICAL" "exfil" "Reverse shell pattern" 'reverse.shell' "${files[@]}"
  grep_pattern "CRITICAL" "exfil" "Netcat reverse shell" 'nc -e' "${files[@]}"
  grep_pattern "CRITICAL" "exfil" "Named pipe (FIFO) — reverse shell" 'mkfifo' "${files[@]}"
  grep_pattern "CRITICAL" "exfil" "Bash TCP device — exfiltration" '/dev/tcp/' "${files[@]}"
  grep_pattern "CRITICAL" "exfil" "Telnet access" 'telnet ' "${files[@]}"
  grep_pattern "CRITICAL" "crypto" "Cryptocurrency wallet access" 'wallet' "${files[@]}"
  grep_pattern "CRITICAL" "crypto" "Seed phrase extraction" 'seed.phrase' "${files[@]}"
  grep_pattern "CRITICAL" "crypto" "Private key extraction" 'private.key' "${files[@]}"
  grep_pattern "CRITICAL" "crypto" "Mnemonic phrase extraction" 'mnemonic' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "SSH directory access" '\.ssh/' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "SSH private key access" 'id_rsa' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "AWS credentials access" '\.aws/credentials' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "Environment file access" '\.env' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "System password file" '/etc/shadow' "${files[@]}"
  grep_pattern "CRITICAL" "cred-theft" "System user file" '/etc/passwd' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "Cron job modification" 'crontab' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "macOS launch agent" 'launchctl' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "Systemd manipulation" 'systemctl' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "Shell profile modification (.bashrc)" '\.bashrc' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "Shell profile modification (.zshrc)" '\.zshrc' "${files[@]}"
  grep_pattern "CRITICAL" "persistence" "Shell profile modification (.profile)" '\.profile' "${files[@]}"
  grep_pattern "CRITICAL" "destructive" "Recursive root deletion" 'rm -rf /' "${files[@]}"
  grep_pattern "CRITICAL" "destructive" "Fork bomb pattern" ':\(\)\{ :' "${files[@]}"
  grep_pattern "CRITICAL" "destructive" "Raw disk operations" 'dd if=' "${files[@]}"
  grep_pattern "CRITICAL" "obfuscation" "Base64 decode execution" 'base64 -d' "${files[@]}"
  grep_pattern "CRITICAL" "obfuscation" "Base64 decode execution" 'base64 --decode' "${files[@]}"
  grep_pattern "CRITICAL" "obfuscation" "JS base64 decode" 'atob\(' "${files[@]}"

  # === HIGH patterns ===
  grep_pattern "HIGH" "network" "Curl pipe to shell" 'curl.*\|.*sh' "${files[@]}"
  grep_pattern "HIGH" "network" "Wget pipe to shell" 'wget.*\|.*sh' "${files[@]}"
  grep_pattern "HIGH" "shell" "Python subprocess usage" 'subprocess' "${files[@]}"
  grep_pattern "HIGH" "shell" "Node.js child process" 'child_process' "${files[@]}"
  grep_pattern "HIGH" "shell" "Pipe to bash" '\| *bash' "${files[@]}"
  grep_pattern "HIGH" "network" "Netcat usage" 'netcat' "${files[@]}"
  grep_pattern "HIGH" "network" "Socat usage" 'socat' "${files[@]}"
  grep_pattern "HIGH" "network" "Network scanning" 'nmap' "${files[@]}"
  grep_pattern "HIGH" "exfil" "SMTP/email sending" 'smtp' "${files[@]}"
  grep_pattern "HIGH" "exfil" "Mail sending" 'sendmail' "${files[@]}"
  grep_pattern "HIGH" "exfil" "Discord webhook exfiltration" 'discord\.webhook' "${files[@]}"
  grep_pattern "HIGH" "exfil" "Slack webhook exfiltration" 'slack\.webhook' "${files[@]}"
  grep_pattern "HIGH" "exfil" "Telegram bot exfiltration" 'telegram\.org/bot' "${files[@]}"
  grep_pattern "HIGH" "file-ops" "World-writable permissions" 'chmod 777' "${files[@]}"
  grep_pattern "HIGH" "file-ops" "SetUID/SetGID bit" 'chmod \+s' "${files[@]}"
  grep_pattern "HIGH" "file-ops" "Privilege escalation" 'sudo ' "${files[@]}"
  grep_pattern "HIGH" "prompt-inject" "Prompt injection attempt" 'ignore previous' "${files[@]}"
  grep_pattern "HIGH" "prompt-inject" "Prompt injection attempt" 'ignore all instructions' "${files[@]}"
  grep_pattern "HIGH" "prompt-inject" "Role hijack attempt" 'jailbreak' "${files[@]}"
  grep_pattern "HIGH" "prompt-inject" "DAN jailbreak" 'DAN mode' "${files[@]}"

  # === MEDIUM patterns ===
  grep_pattern "MEDIUM" "network" "Python HTTP requests" 'requests\.' "${files[@]}"
  grep_pattern "MEDIUM" "network" "Python URL library" 'urllib' "${files[@]}"
  grep_pattern "MEDIUM" "network" "JavaScript fetch API" 'fetch\(' "${files[@]}"
  grep_pattern "MEDIUM" "network" "Axios HTTP client" 'axios' "${files[@]}"
  grep_pattern "MEDIUM" "network" "Socket operations" 'socket' "${files[@]}"
  grep_pattern "MEDIUM" "file-ops" "Node.js file write" 'writeFile' "${files[@]}"
  grep_pattern "MEDIUM" "file-ops" "Recursive directory removal" 'shutil\.rmtree' "${files[@]}"
  grep_pattern "MEDIUM" "env" "Environment variable access" 'os\.environ' "${files[@]}"
  grep_pattern "MEDIUM" "env" "Node.js environment access" 'process\.env' "${files[@]}"

  # Count findings
  local critical_count high_count medium_count
  critical_count=$(grep -c "^CRITICAL|" "$FINDINGS_FILE" 2>/dev/null || true)
  critical_count=${critical_count:-0}; critical_count=${critical_count//[^0-9]/}; critical_count=${critical_count:-0}
  high_count=$(grep -c "^HIGH|" "$FINDINGS_FILE" 2>/dev/null || true)
  high_count=${high_count:-0}; high_count=${high_count//[^0-9]/}; high_count=${high_count:-0}
  medium_count=$(grep -c "^MEDIUM|" "$FINDINGS_FILE" 2>/dev/null || true)
  medium_count=${medium_count:-0}; medium_count=${medium_count//[^0-9]/}; medium_count=${medium_count:-0}
  local total=$((critical_count + high_count + medium_count))

  # Verdict
  local verdict="SAFE" exit_code=0
  if [[ $critical_count -gt 0 ]]; then
    verdict="DANGEROUS"; exit_code=1
  elif [[ $high_count -gt 0 ]]; then
    verdict="SUSPICIOUS"; exit_code=2
  fi

  # Print report
  echo ""
  echo "======================================================"
  echo "  SKILL GUARDIAN — Security Report"
  echo "======================================================"
  echo ""
  echo "  Skill:    ${skill_name}"
  echo "  Path:     ${skill_path}"
  echo "  Files:    ${file_count} scanned"
  echo "  Findings: ${total} total"
  echo ""
  echo "  CRITICAL: ${critical_count}"
  echo "  HIGH:     ${high_count}"
  echo "  MEDIUM:   ${medium_count}"
  echo ""
  echo "  Verdict:  ${verdict}"
  echo "======================================================"

  if [[ $total -gt 0 ]]; then
    echo ""

    if [[ $critical_count -gt 0 ]]; then
      echo "CRITICAL FINDINGS:"
      grep "^CRITICAL|" "$FINDINGS_FILE" | while IFS='|' read -r level category description location context; do
        echo "  [${category}] ${description}"
        echo "    -> ${location}"
        echo "       ${context}"
        echo ""
      done
    fi

    if [[ $high_count -gt 0 ]]; then
      echo "HIGH FINDINGS:"
      grep "^HIGH|" "$FINDINGS_FILE" | while IFS='|' read -r level category description location context; do
        echo "  [${category}] ${description}"
        echo "    -> ${location}"
        echo "       ${context}"
        echo ""
      done
    fi

    if [[ $medium_count -gt 0 ]]; then
      echo "MEDIUM FINDINGS:"
      grep "^MEDIUM|" "$FINDINGS_FILE" | while IFS='|' read -r level category description location context; do
        echo "  [${category}] ${description} — ${location}"
      done
      echo ""
    fi
  else
    echo ""
    echo "  No suspicious patterns detected."
    echo ""
  fi

  # JSON report
  if [[ "${REPORT_MODE:-false}" == "true" ]]; then
    mkdir -p "$REPORT_DIR"
    local timestamp
    timestamp=$(date +%Y%m%d-%H%M%S)
    local report_file="${REPORT_DIR}/${skill_name}-${timestamp}.json"

    local findings_json="[]"
    if [[ $total -gt 0 ]]; then
      findings_json=$(grep -v '^$' "$FINDINGS_FILE" | while IFS='|' read -r level category description location context; do
        [[ -z "$level" ]] && continue
        jq -n --arg l "$level" --arg c "$category" --arg d "$description" --arg loc "$location" --arg ctx "$context" \
          '{level:$l, category:$c, description:$d, location:$loc, context:$ctx}'
      done | jq -s '.')
    fi

    jq -n \
      --arg skill "$skill_name" --arg path "$skill_path" --arg verdict "$verdict" \
      --argjson critical "$critical_count" --argjson high "$high_count" --argjson medium "$medium_count" \
      --argjson files "$file_count" --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg ver "$VERSION" \
      --argjson findings "$findings_json" \
      '{scanner:"The Skill Guardian",version:$ver,timestamp:$ts,skill:$skill,path:$path,verdict:$verdict,summary:{critical:$critical,high:$high,medium:$medium,files_scanned:$files},findings:$findings}' \
      > "$report_file"

    echo "  Report saved: ${report_file}"
    echo ""
  fi

  return $exit_code
}

# --- Main ---
REPORT_MODE="false"
MODE="single"
TARGET=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --all) MODE="all"; shift ;;
    --report) REPORT_MODE="true"; shift ;;
    --help|-h)
      echo "The Skill Guardian v${VERSION} — AI Skill Security Scanner"
      echo ""
      echo "Usage:"
      echo "  scan.sh <skill_path>           Scan a single skill"
      echo "  scan.sh --all <skills_root>    Scan all skills"
      echo "  scan.sh --report <path>        Scan with JSON report"
      echo ""
      echo "Exit codes: 0=SAFE, 1=DANGEROUS, 2=SUSPICIOUS, 3=ERROR"
      exit 3 ;;
    --version|-v) echo "The Skill Guardian v${VERSION}"; exit 0 ;;
    *) TARGET="$1"; shift ;;
  esac
done

[[ -z "$TARGET" ]] && { echo "Usage: scan.sh [--all] [--report] <path>"; exit 3; }

TARGET=$(cd "$TARGET" 2>/dev/null && pwd || echo "$TARGET")

if [[ "$MODE" == "all" ]]; then
  overall_exit=0
  skill_count=0

  echo ""
  echo "SKILL GUARDIAN — Scanning all skills in: ${TARGET}"
  echo ""

  for skill_dir in "$TARGET"/*/; do
    [[ ! -d "$skill_dir" ]] && continue
    skill_count=$((skill_count + 1))
    scan_skill "$skill_dir" || {
      code=$?
      [[ $code -gt $overall_exit ]] && overall_exit=$code
    }
  done

  echo ""
  echo "Scanned ${skill_count} skill(s)."
  case $overall_exit in
    0) echo "Overall: ALL SAFE" ;;
    1) echo "Overall: DANGEROUS SKILLS FOUND" ;;
    2) echo "Overall: SUSPICIOUS SKILLS FOUND" ;;
  esac

  exit $overall_exit
else
  scan_skill "$TARGET"
  exit $?
fi
@@ -0,0 +1,58 @@
---
name: unix-time
description: Convert a datetime string to a Unix timestamp (seconds since epoch). Use this skill whenever you need to generate a Unix timestamp, to avoid manual calculation errors.
user-invocable: true
command-dispatch: tool
command-tool: Bash
command-arg-mode: raw
metadata: {"openclaw":{"emoji":"⏱️","always":true}}
priority: 8
---

# Unix Time Converter

Convert a datetime string to a Unix timestamp using the `date` command.

## Usage

When invoked as `/unix-time <datetime>`, run:

```bash
date -j -f "%Y-%m-%d %H:%M:%S" "<datetime>" "+%s"
```

The input `<datetime>` must be in `YYYY-MM-DD HH:MM:SS` format and is interpreted in the **local timezone**.

## Examples

- `/unix-time 2025-03-15 14:30:00` → runs `date -j -f "%Y-%m-%d %H:%M:%S" "2025-03-15 14:30:00" "+%s"`
- `/unix-time 2026-01-01 00:00:00` → runs `date -j -f "%Y-%m-%d %H:%M:%S" "2026-01-01 00:00:00" "+%s"`

## Internal Use

Whenever you need to produce a Unix timestamp from a datetime, **always use this method** instead of calculating manually. Run:

```bash
date -j -f "%Y-%m-%d %H:%M:%S" "YYYY-MM-DD HH:MM:SS" "+%s"
```

This ensures correct timezone handling and eliminates calculation errors.

## Other Formats

If the input is in a different format, adapt the format string accordingly:

| Input format | `-f` flag |
|---|---|
| `2025-03-15 14:30:00` | `%Y-%m-%d %H:%M:%S` |
| `2025-03-15` (date only) | `%Y-%m-%d` — note that BSD `date` fills unparsed fields from the *current* time; for midnight, append `00:00:00` and use the full format |
| `15/03/2025 14:30` | `%d/%m/%Y %H:%M` |
| ISO 8601 `2025-03-15T14:30:00` | `%Y-%m-%dT%H:%M:%S` |

## UTC Output

To get the timestamp interpreting the input as UTC:

```bash
TZ=UTC date -j -f "%Y-%m-%d %H:%M:%S" "2025-03-15 14:30:00" "+%s"
```
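The `-j -f` flags used throughout are BSD/macOS `date` syntax and are rejected by GNU coreutils. A Linux equivalent (a platform assumption, not part of the original skill) uses `-d` instead:

```shell
# GNU coreutils date: parse a datetime string and print epoch seconds.
date -d "2025-03-15 14:30:00" "+%s"           # local timezone

# Interpreting the input as UTC:
TZ=UTC date -d "2025-03-15 14:30:00" "+%s"    # → 1742049000
```

With `-d`, a date-only input such as `2025-03-15` does parse as midnight, unlike the BSD behavior noted above.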