@agentprojectcontext/apx 1.12.0 → 1.13.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/skills/apc-context/SKILL.md +110 -0
- package/src/daemon/plugins/telegram.js +101 -0
- package/src/daemon/skills-loader.js +228 -0
- package/src/daemon/super-agent-tools/index.js +4 -0
- package/src/daemon/super-agent-tools/tools/add-project.js +35 -5
- package/src/daemon/super-agent-tools/tools/list-skills.js +32 -0
- package/src/daemon/super-agent-tools/tools/load-skill.js +31 -0
- package/src/daemon/super-agent-tools/tools/send-telegram.js +31 -7
- package/src/daemon/super-agent.js +24 -1
package/package.json
CHANGED

package/skills/apc-context/SKILL.md
ADDED
@@ -0,0 +1,110 @@
+---
+name: apc-context
+description: "ALWAYS activate when the project has a .apc/ directory or AGENTS.md file. Do not wait to be asked. Read .apc/ before making any assumption about agents, memory, or project structure. Activate on: .apc/, AGENTS.md, 'which agents', 'list agents', 'agent context', 'who are the agents', any question about agents or memory in this project. IMPORTANT: if .apc/migrate.md exists, open the conversation with a migration offer before answering anything else. If the user declines, delete .apc/migrate.md immediately so it is not shown again."
+homepage: https://github.com/agentprojectcontext/apx
+---
+
+# Agent Project Context
+
+This project uses APC. APC stores portable project context in `.apc/` and `AGENTS.md`.
+
+APC does not store raw runtime sessions. Sessions, conversations, messages, caches, provider
+threads, and private runtime memory stay in the IDE, CLI, daemon, or user-level store that created
+them.
+
+## FIRST: check for pending migration
+
+Before doing anything else, check if `.apc/migrate.md` exists:
+
+```bash
+cat .apc/migrate.md 2>/dev/null
+```
+
+If it exists, offer to migrate before answering anything else. Read detected files, separate durable
+project context from runtime/private state, and migrate only what belongs in APC.
+
+If the user says no or later, delete `.apc/migrate.md` so the offer is not repeated.
+
+## Migration rule: think, do not copy
+
+Classify content:
+
+| Content | Action |
+|---|---|
+| Agent definitions: role, model, skills, description | Put in `.apc/agents/<slug>.md` and/or `AGENTS.md` |
+| Shared project rules, stack notes, commands, testing policy | Keep in `AGENTS.md` |
+| Reusable instruction blocks | Move to `.apc/skills/<name>.md` |
+| Durable safe facts useful to all contributors | Add to `.apc/agents/<slug>/memory.md` only after curation |
+| MCP expectations without secrets | Add to `.apc/mcps.json` |
+| Raw sessions, transcripts, conversations, messages, tool logs | Do not move into `.apc/`; leave with source runtime |
+| Secrets, tokens, credentials, private headers | Do not store in repository |
+| IDE UI settings or personal aliases | Leave in IDE/user config |
+| Instructions to store sessions under `.apc/` | Drop as obsolete |
+
+## APC structure
+
+```text
+AGENTS.md                  ← root project contract
+.apc/
+  project.json             ← project metadata
+  .gitignore               ← safety guard
+  agents/<slug>.md         ← agent definition
+  agents/<slug>/memory.md  ← optional curated project memory
+  skills/<name>.md         ← reusable project instructions
+  mcps.json                ← MCP hints without secrets
+```
+
+Do not store:
+
+```text
+.apc/agents/<slug>/sessions/
+.apc/sessions/
+.apc/conversations/
+.apc/messages/
+.apc/project.db
+.apc/cache/
+.apc/tmp/
+.apc/private/
+.apc/secrets/
+```
+
+## Operating rules
+
+1. Read `AGENTS.md` and relevant `.apc/` files before assuming project context.
+2. Read agent definitions from `.apc/agents/<slug>.md` when present.
+3. Read curated project memory from `.apc/agents/<slug>/memory.md` when present.
+4. Write only durable, safe, curated facts to APC memory.
+5. Never write raw sessions, transcripts, messages, conversations, or tool logs into `.apc/`.
+6. Keep secrets out of APC and out of git.
+7. Treat `.apc/mcps.json` as MCP configuration hints, not as an MCP implementation.
+
+## Sessions
+
+Sessions belong to the runtime that created them.
+
+Examples:
+
+```text
+Codex runtime storage
+Claude Code runtime storage
+OpenCode runtime storage
+~/.apx/projects/<project-id>/agents/<slug>/sessions/
+```
+
+At task end, provide the user a concise result. If project memory should be updated, write a short
+sanitized fact to `.apc/agents/<slug>/memory.md` only when useful and safe.
+
+## APX
+
+APX can provide a local daemon, MCP management, Telegram bridge, routines, and runtime dispatch
+across Codex, Claude Code, OpenCode, Aider, Cursor Agent, Gemini CLI, Qwen Code, or direct LLM
+engines. Those are APX runtime features, not APC portable-core requirements.
+
+The APX super-agent uses `~/.apx/projects/default` for system-level work when no project is named.
+APX routines can run heartbeat, shell, Telegram, project agent, or super-agent tasks on a schedule.
+
+APX runtime state belongs outside the repository:
+
+```text
+~/.apx/projects/<project-id>/
+```
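The SKILL.md added above asks the agent to probe for `.apc/migrate.md` and the APC layout before answering anything. A minimal sketch of that detection in Node, assuming only the file names the skill text itself mentions (the function name `detectApcContext` is hypothetical):

```javascript
import fs from "node:fs";
import path from "node:path";

// Detect APC context the way the skill describes: AGENTS.md and/or .apc/
// mark the project, and a pending .apc/migrate.md must be surfaced first.
function detectApcContext(projectRoot) {
  const exists = (...p) => fs.existsSync(path.join(projectRoot, ...p));
  return {
    hasApc: exists(".apc"),
    hasAgentsMd: exists("AGENTS.md"),
    pendingMigration: exists(".apc", "migrate.md"),
  };
}

const ctx = detectApcContext(process.cwd());
if (ctx.pendingMigration) {
  console.log("offer migration before anything else");
} else if (ctx.hasApc || ctx.hasAgentsMd) {
  console.log("read AGENTS.md and .apc/ before assuming project context");
}
```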
package/src/daemon/plugins/telegram.js
CHANGED
@@ -131,6 +131,48 @@ export async function sendAudio(token, chatId, audio, { caption, title, performe
   return json.result;
 }
 
+/**
+ * Transcribe an audio file via OpenAI Whisper.
+ * Reads OPENAI_API_KEY from env or engines.openai.api_key in ~/.apx/config.json.
+ * Returns the transcribed text, or throws if no key / API failure.
+ */
+async function transcribeAudio(filePath) {
+  let apiKey = process.env.OPENAI_API_KEY;
+  if (!apiKey) {
+    try {
+      const { readConfig } = await import("../../core/config.js");
+      apiKey = readConfig()?.engines?.openai?.api_key || "";
+    } catch { /* ignore */ }
+  }
+  if (!apiKey) throw new Error("OPENAI_API_KEY not set (env or engines.openai.api_key)");
+
+  const fileBuf = fs.readFileSync(filePath);
+  const ext = path.extname(filePath).slice(1).toLowerCase() || "ogg";
+  const mimeMap = {
+    oga: "audio/ogg", ogg: "audio/ogg", opus: "audio/ogg",
+    mp3: "audio/mpeg", m4a: "audio/mp4", mp4: "audio/mp4",
+    wav: "audio/wav", webm: "audio/webm",
+  };
+  const mime = mimeMap[ext] || "audio/ogg";
+  const blob = new Blob([fileBuf], { type: mime });
+
+  const form = new FormData();
+  form.append("file", blob, `audio.${ext}`);
+  form.append("model", "whisper-1");
+
+  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
+    method: "POST",
+    headers: { Authorization: `Bearer ${apiKey}` },
+    body: form,
+  });
+  if (!res.ok) {
+    const err = await res.text().catch(() => "");
+    throw new Error(`Whisper ${res.status}: ${err.slice(0, 200)}`);
+  }
+  const json = await res.json();
+  return String(json.text || "").trim();
+}
+
 /**
  * Download a file from Telegram servers.
  * Returns the local file path where it was saved.
@@ -389,6 +431,65 @@ class ChannelPoller {
       if (!text) return;
     }
 
+    // ── Incoming voice / audio handling ──────────────────────────────────
+    // Telegram sends `voice` for the press-and-hold mic recording (.oga/opus)
+    // and `audio` for uploaded audio files (mp3/m4a/etc.). Either way we
+    // download, run it through Whisper, prefix the result with `[audio] `
+    // and let the rest of the message flow handle it as plain text.
+    const incomingAudio = msg.voice || msg.audio;
+    if (incomingAudio && incomingAudio.file_id) {
+      const token = resolveBotToken(this.channel);
+      const mediaDir = path.join(APX_HOME, "media");
+      fs.mkdirSync(mediaDir, { recursive: true });
+      let localPath = null;
+      let transcript = "";
+      let transcribeError = null;
+      try {
+        localPath = await downloadTelegramFile(token, incomingAudio.file_id, mediaDir);
+        this.log(`telegram[${this.channel.name}] audio saved: ${localPath}`);
+      } catch (e) {
+        this.log(`telegram[${this.channel.name}] audio download failed: ${e.message}`);
+      }
+      if (localPath) {
+        try {
+          transcript = await transcribeAudio(localPath);
+          this.log(`telegram[${this.channel.name}] audio transcribed (${transcript.length} chars)`);
+        } catch (e) {
+          transcribeError = e.message;
+          this.log(`telegram[${this.channel.name}] audio transcription failed: ${e.message}`);
+        }
+      }
+      const audioBody = transcript
+        ? `[audio] ${transcript}`
+        : `[audio] (transcription unavailable${transcribeError ? ": " + transcribeError : ""})`;
+
+      appendGlobalMessage({
+        channel: "telegram",
+        direction: "in",
+        type: "audio",
+        actor_id: msg.from?.id ? String(msg.from.id) : author,
+        external_id: String(u.update_id),
+        author,
+        body: audioBody,
+        meta: {
+          chat_id,
+          user_id: msg.from?.id || null,
+          message_id: msg.message_id,
+          tg_channel: this.channel.name,
+          local_path: localPath,
+          file_id: incomingAudio.file_id,
+          duration: incomingAudio.duration,
+          mime_type: incomingAudio.mime_type,
+          transcription_error: transcribeError,
+        },
+      });
+
+      // Inject the transcribed text into `text` so the rest of the agent
+      // pipeline treats it identically to a typed message. If there was a
+      // caption alongside the audio, prepend the audio marker to it.
+      text = text ? `${audioBody}\n${text}` : audioBody;
+    }
+
     // /reset or /new wipes the rolling context for this chat. We just
     // remember a marker timestamp; subsequent inbounds will only consider
     // history newer than this. Implemented by writing a synthetic message
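The `transcribeAudio` helper above keys the upload MIME type off the file extension, with unknown or missing extensions degrading to `audio/ogg` (Telegram's voice-note default). That mapping step can be exercised in isolation; this is a sketch lifting just that logic, not the full helper:

```javascript
import path from "node:path";

// Same extension → MIME fallback used by the diff's transcribeAudio():
// unknown or missing extensions fall back to audio/ogg.
const mimeMap = {
  oga: "audio/ogg", ogg: "audio/ogg", opus: "audio/ogg",
  mp3: "audio/mpeg", m4a: "audio/mp4", mp4: "audio/mp4",
  wav: "audio/wav", webm: "audio/webm",
};

function audioMime(filePath) {
  const ext = path.extname(filePath).slice(1).toLowerCase() || "ogg";
  return mimeMap[ext] || "audio/ogg";
}

console.log(audioMime("voice.oga"));     // audio/ogg
console.log(audioMime("song.mp3"));      // audio/mpeg
console.log(audioMime("clip.unknown")); // audio/ogg (fallback)
```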
package/src/daemon/skills-loader.js
ADDED
@@ -0,0 +1,228 @@
+// daemon/skills-loader.js
+// Discover and load APX skills on-demand for the super-agent.
+//
+// The super-agent reads skills from immutable INTERNAL sources under
+// src/core/ — they ship with apx and can never be deleted by the user. This
+// guarantees apx/apc/runtime knowledge is always available regardless of
+// what the user does to ~/.apx/skills/. Distribution copies under
+// <package>/skills/ are a separate concern (scaffold.js handles them) and
+// the loader does NOT read from there.
+//
+// Discovery order (priority high → low):
+//   1.  <projectPath>/.apc/skills/<slug>.md             ← project-scoped
+//   1b. <projectPath>/.apc/skills/<slug>/SKILL.md       ← same, dir-style
+//   2.  ~/.apx/skills/<slug>/SKILL.md                   ← user-installed global
+//   3.  <packageRoot>/src/core/runtime-skills/<slug>.md ← built-in runtime docs
+//                                                         (claude-code, codex-cli,
+//                                                          opencode-cli, openrouter)
+//   4.  <packageRoot>/src/core/apx-skill.md             ← built-in intrinsic apx
+//   4b. <packageRoot>/src/core/apc-context-skill.md     ← built-in intrinsic apc-context
+//
+// A slug found in a higher-priority location SHADOWS lower ones — so a user
+// who drops `~/.apx/skills/apx/SKILL.md` overrides the intrinsic one, but the
+// intrinsic stays in the package as a safety net.
+
+import fs from "node:fs";
+import path from "node:path";
+import os from "node:os";
+import { fileURLToPath } from "node:url";
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = path.dirname(__filename);
+const PACKAGE_ROOT = path.resolve(__dirname, "..", "..");
+
+const RUNTIME_SKILLS_DIR = path.join(PACKAGE_ROOT, "src", "core", "runtime-skills");
+const GLOBAL_DIR = path.join(os.homedir(), ".apx", "skills");
+const CORE_DIR = path.join(PACKAGE_ROOT, "src", "core");
+
+// Intrinsic built-in skills whose source files (src/core/*-skill.md) do NOT
+// carry frontmatter — the scaffold.js wrapper adds frontmatter when copying
+// these out to external IDE skill dirs. For the super-agent's catalog we
+// supply slug + description inline. Keep in sync with scaffold.js.
+const INTRINSIC = [
+  {
+    slug: "apx",
+    file: path.join(CORE_DIR, "apx-skill.md"),
+    description:
+      "APX CLI skill. Activate when: user asks to run or coordinate agents, " +
+      "use MCP tools from .apc/mcps.json, install agents from a team workspace, " +
+      "or explicitly mentions apx commands. Do NOT activate just because .apc/ exists — " +
+      "that is handled by the apc-context skill. Activate on: 'apx run', 'apx exec', " +
+      "'run an agent', 'coordinate agents', 'MCP not working', 'install agent', " +
+      "'team agents', 'apx memory', 'daemon'.",
+  },
+  {
+    slug: "apc-context",
+    file: path.join(CORE_DIR, "apc-context-skill.md"),
+    description:
+      "ALWAYS activate when the project has a .apc/ directory or AGENTS.md file. " +
+      "Do not wait to be asked. Read .apc/ before making any assumption about agents, " +
+      "memory, or project structure. Activate on: .apc/, AGENTS.md, 'which agents', " +
+      "'list agents', 'agent context', 'who are the agents', any question about agents " +
+      "or memory in this project. IMPORTANT: if .apc/migrate.md exists, open the " +
+      "conversation with a migration offer before answering anything else. If the user " +
+      "declines, delete .apc/migrate.md immediately so it is not shown again.",
+  },
+];
+
+// ---------------------------------------------------------------------------
+// Frontmatter parsing (minimal — handles the YAML we ship)
+// ---------------------------------------------------------------------------
+
+function parseFrontmatter(raw) {
+  if (!raw.startsWith("---")) return { fm: {}, body: raw };
+  const end = raw.indexOf("\n---", 3);
+  if (end < 0) return { fm: {}, body: raw };
+
+  const fmBlock = raw.slice(3, end).trim();
+  const body = raw.slice(end + 4).replace(/^\n/, "");
+
+  const fm = {};
+  for (const line of fmBlock.split("\n")) {
+    const m = line.match(/^([a-zA-Z_][a-zA-Z0-9_-]*)\s*:\s*(.*)$/);
+    if (!m) continue;
+    let val = m[2].trim();
+    if ((val.startsWith('"') && val.endsWith('"')) || (val.startsWith("'") && val.endsWith("'"))) {
+      val = val.slice(1, -1);
+    }
+    fm[m[1]] = val;
+  }
+  return { fm, body };
+}
+
+// ---------------------------------------------------------------------------
+// Directory scanners
+// ---------------------------------------------------------------------------
+
+/** Returns [{slug, source, file}] from a directory using <slug>/SKILL.md layout. */
+function scanDirStyle(baseDir, source) {
+  if (!baseDir || !fs.existsSync(baseDir)) return [];
+  const out = [];
+  let entries;
+  try { entries = fs.readdirSync(baseDir, { withFileTypes: true }); }
+  catch { return []; }
+  for (const e of entries) {
+    if (!e.isDirectory()) continue;
+    const file = path.join(baseDir, e.name, "SKILL.md");
+    if (fs.existsSync(file)) out.push({ slug: e.name, source, file });
+  }
+  return out;
+}
+
+/** Returns [{slug, source, file}] from a directory using <slug>.md layout. */
+function scanFlatStyle(baseDir, source) {
+  if (!baseDir || !fs.existsSync(baseDir)) return [];
+  const out = [];
+  let entries;
+  try { entries = fs.readdirSync(baseDir, { withFileTypes: true }); }
+  catch { return []; }
+  for (const e of entries) {
+    if (!e.isFile() || !e.name.endsWith(".md")) continue;
+    if (e.name === "README.md") continue;
+    out.push({
+      slug: e.name.replace(/\.md$/, ""),
+      source,
+      file: path.join(baseDir, e.name),
+    });
+  }
+  return out;
+}
+
+// ---------------------------------------------------------------------------
+// Public API
+// ---------------------------------------------------------------------------
+
+/**
+ * Discover available skills across all locations.
+ * Returns lightweight metadata only — body is NOT read.
+ *
+ * @param {object} opts
+ * @param {string=} opts.projectPath optional project root to also scan
+ * @returns {Array<{slug, source, description, file}>}
+ */
+export function listSkills({ projectPath } = {}) {
+  const found = [];
+
+  // priority 1: project-scoped
+  if (projectPath) {
+    const apcSkills = path.join(projectPath, ".apc", "skills");
+    found.push(...scanDirStyle(apcSkills, "project"));
+    found.push(...scanFlatStyle(apcSkills, "project"));
+  }
+
+  // priority 2: user-installed global
+  found.push(...scanDirStyle(GLOBAL_DIR, "global"));
+
+  // priority 3: built-in runtime docs (have frontmatter)
+  found.push(...scanFlatStyle(RUNTIME_SKILLS_DIR, "builtin"));
+
+  // priority 4: intrinsic built-ins (no frontmatter — descriptions hardcoded)
+  for (const it of INTRINSIC) {
+    if (fs.existsSync(it.file)) {
+      found.push({ slug: it.slug, source: "builtin", file: it.file, _description: it.description });
+    }
+  }
+
+  // dedupe by slug (first-wins = higher priority shadows lower)
+  const seen = new Set();
+  const result = [];
+  for (const entry of found) {
+    if (seen.has(entry.slug)) continue;
+    seen.add(entry.slug);
+
+    // Description: prefer inline (intrinsic) → frontmatter → empty
+    let description = entry._description || "";
+    if (!description) {
+      try {
+        const raw = fs.readFileSync(entry.file, "utf8");
+        const { fm } = parseFrontmatter(raw);
+        description = fm.description || "";
+      } catch { /* unreadable — skip description */ }
+    }
+
+    result.push({
+      slug: entry.slug,
+      source: entry.source,
+      description,
+      file: entry.file,
+    });
+  }
+  return result;
+}
+
+/**
+ * Load the full body of a skill (frontmatter stripped if present). Resolves
+ * via the same priority chain as listSkills().
+ *
+ * @param {string} slug
+ * @param {object} opts
+ * @param {string=} opts.projectPath
+ * @returns {{slug, source, file, description, body, frontmatter}}
+ */
+export function loadSkill(slug, { projectPath } = {}) {
+  if (!slug) throw new Error("loadSkill: slug required");
+
+  const list = listSkills({ projectPath });
+  const entry = list.find(s => s.slug === slug);
+  if (!entry) {
+    throw new Error(`skill "${slug}" not found. Available: ${list.map(s => s.slug).join(", ") || "(none)"}`);
+  }
+
+  const raw = fs.readFileSync(entry.file, "utf8");
+  const { fm, body } = parseFrontmatter(raw);
+  return {
+    slug: entry.slug,
+    source: entry.source,
+    file: entry.file,
+    description: entry.description || fm.description || "",
+    frontmatter: fm,
+    body: body.trim(),
+  };
+}
+
+// Useful for diagnostics
+export const SKILL_LOCATIONS = {
+  runtime_skills: RUNTIME_SKILLS_DIR,
+  intrinsic: CORE_DIR,
+  global: GLOBAL_DIR,
+};
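The loader's shadowing rule reduces to first-wins deduplication over a priority-ordered scan: entries are pushed project first, builtin last, so the first slug seen hides the rest. A standalone sketch of just that step:

```javascript
// First-wins dedupe, as in the diff's listSkills(): the scan pushes entries
// in priority order (project → global → builtin), so the first occurrence
// of a slug shadows every later one.
function dedupeBySlug(found) {
  const seen = new Set();
  const result = [];
  for (const entry of found) {
    if (seen.has(entry.slug)) continue;
    seen.add(entry.slug);
    result.push(entry);
  }
  return result;
}

const scanned = [
  { slug: "apx", source: "global" },   // user override in ~/.apx/skills/
  { slug: "apx", source: "builtin" },  // intrinsic safety net, shadowed
  { slug: "codex-cli", source: "builtin" },
];
console.log(dedupeBySlug(scanned).map(s => `${s.slug}:${s.source}`));
// apx resolves to the global copy; the builtin stays on disk as a fallback
```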
package/src/daemon/super-agent-tools/index.js
CHANGED
@@ -19,6 +19,8 @@ import sendTelegram from "./tools/send-telegram.js";
 import setIdentity from "./tools/set-identity.js";
 import setPermissionMode from "./tools/set-permission-mode.js";
 import searchFiles from "./tools/search-files.js";
+import listSkills from "./tools/list-skills.js";
+import loadSkill from "./tools/load-skill.js";
 import { createPermissionGuard } from "./helpers.js";
 import { buildBridgedTools, DEFAULT_CATEGORIES } from "./registry-bridge.js";
 
@@ -44,6 +46,8 @@ const NATIVE_TOOLS = [
   setIdentity,
   setPermissionMode,
   searchFiles,
+  listSkills,
+  loadSkill,
 ];
 
 // Registry-backed bridges. Categories can be overridden per-process via env
package/src/daemon/super-agent-tools/tools/add-project.js
CHANGED
@@ -1,36 +1,66 @@
+import fs from "node:fs";
 import path from "node:path";
 import { readConfig, addProject as addProjectInConfig } from "../../../core/config.js";
+import { initApf } from "../../../core/scaffold.js";
 import { confirmedProperty, projectMeta } from "../helpers.js";
 
+function isApcProject(absPath) {
+  return (
+    fs.existsSync(path.join(absPath, "AGENTS.md")) &&
+    fs.existsSync(path.join(absPath, ".apc", "project.json"))
+  );
+}
+
 export default {
   name: "add_project",
   schema: {
     type: "function",
     function: {
       name: "add_project",
-      description:
+      description:
+        "Register a project path with the APX daemon. If the path is not yet an APC project (missing AGENTS.md or .apc/project.json), the tool runs the init scaffold first and then registers it — one call covers both cases. Pass init=false to require the path to already be an APC project (strict mode).",
       parameters: {
         type: "object",
         properties: {
-          path: { type: "string", description: "absolute or relative filesystem path to
+          path: { type: "string", description: "absolute or relative filesystem path to add" },
+          name: { type: "string", description: "optional project name (used only when initializing a new APC project)" },
+          init: { type: "boolean", description: "auto-create AGENTS.md and .apc/project.json if missing (default true)" },
           confirmed: confirmedProperty("true only after explicit user confirmation for this exact project registration"),
         },
         required: ["path"],
       },
     },
   },
-  makeHandler: ({ projects, requirePermission }) => ({ path: projectPath, confirmed = false }) => {
+  makeHandler: ({ projects, requirePermission }) => ({ path: projectPath, name, init = true, confirmed = false }) => {
     requirePermission("add_project", { dangerous: true, confirmed });
     if (!projectPath) throw new Error("add_project: path required");
 
+    const abs = path.resolve(projectPath);
+    if (!fs.existsSync(abs)) {
+      throw new Error(`add_project: path does not exist: ${abs}`);
+    }
+
+    let initialized = false;
+    if (!isApcProject(abs)) {
+      if (!init) {
+        throw new Error(
+          `not an APC project: ${abs} (no AGENTS.md / .apc/project.json). ` +
+          `Pass init=true to scaffold it before registering.`
+        );
+      }
+      initApf(abs, { name });
+      initialized = true;
+    }
+
     const cfg = readConfig();
-    const result = addProjectInConfig(cfg,
+    const result = addProjectInConfig(cfg, abs);
     const p = projects.register(result.project.path);
     return {
       ok: true,
       added: result.added,
+      initialized,
       project: projectMeta(projects, p),
-      normalized_path:
+      normalized_path: abs,
    };
   },
 };
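The updated handler's branching (already an APC project, scaffold-then-register, or strict failure) can be read as a small decision function. A sketch with the scaffold call stubbed out; `planRegistration` is a hypothetical name and `initApf` is not invoked here:

```javascript
// Decision core of the updated add_project tool: register as-is when the
// project already looks like APC, scaffold first when init=true (the
// default), and fail fast in strict mode (init=false). The real tool then
// calls initApf() and addProjectInConfig() accordingly.
function planRegistration({ isApc, init = true }) {
  if (isApc) return { action: "register", initialized: false };
  if (!init) throw new Error("not an APC project (pass init=true to scaffold)");
  return { action: "scaffold+register", initialized: true };
}

console.log(planRegistration({ isApc: true }));   // register, no scaffold
console.log(planRegistration({ isApc: false }));  // scaffold first, then register
try {
  planRegistration({ isApc: false, init: false });
} catch (e) {
  console.log(e.message);                         // strict-mode refusal
}
```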
package/src/daemon/super-agent-tools/tools/list-skills.js
ADDED
@@ -0,0 +1,32 @@
+import { listSkills, SKILL_LOCATIONS } from "../../skills-loader.js";
+
+export default {
+  name: "list_skills",
+  schema: {
+    type: "function",
+    function: {
+      name: "list_skills",
+      description:
+        "List available skills (documentation modules) the super-agent can load on demand. Returns slug + 1-line description for each — NO body content (cheap). Call load_skill(slug) to actually fetch the doc when needed. Scans built-in skills shipped with apx, user-installed globals in ~/.apx/skills/, and project-scoped skills in <project>/.apc/skills/.",
+      parameters: {
+        type: "object",
+        properties: {
+          project_path: {
+            type: "string",
+            description: "optional project root to also scan for project-scoped skills (use the CWD when the user is working in a project)",
+          },
+        },
+      },
+    },
+  },
+  makeHandler: () => ({ project_path } = {}) => {
+    const skills = listSkills({ projectPath: project_path });
+    return {
+      ok: true,
+      count: skills.length,
+      locations: SKILL_LOCATIONS,
+      project_path: project_path || null,
+      skills: skills.map(({ slug, source, description }) => ({ slug, source, description })),
+    };
+  },
+};
package/src/daemon/super-agent-tools/tools/load-skill.js
ADDED
@@ -0,0 +1,31 @@
+import { loadSkill } from "../../skills-loader.js";
+
+export default {
+  name: "load_skill",
+  schema: {
+    type: "function",
+    function: {
+      name: "load_skill",
+      description:
+        "Load the full body of a named skill (markdown documentation, frontmatter stripped). Use after list_skills found a relevant slug. Resolves the slug via priority: project > global (~/.apx/skills/) > built-in. The body is loaded into the conversation only on the turn you call this — keeps baseline tokens at zero.",
+      parameters: {
+        type: "object",
+        properties: {
+          slug: {
+            type: "string",
+            description: "skill slug as listed by list_skills (e.g. \"apx\", \"apc-context\")",
+          },
+          project_path: {
+            type: "string",
+            description: "optional project root for resolving project-scoped skills",
+          },
+        },
+        required: ["slug"],
+      },
+    },
+  },
+  makeHandler: () => ({ slug, project_path } = {}) => {
+    if (!slug) throw new Error("load_skill: slug required");
+    return loadSkill(slug, { projectPath: project_path });
+  },
+};
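`load_skill` returns the skill body with frontmatter stripped. The minimal parser the loader ships (reproduced from the skills-loader.js diff above) can be exercised standalone to see exactly what gets stripped:

```javascript
// Minimal frontmatter split, as in skills-loader.js parseFrontmatter():
// a leading --- block is parsed into key/value pairs, quotes are removed
// from values, and everything after the closing --- is the body.
function parseFrontmatter(raw) {
  if (!raw.startsWith("---")) return { fm: {}, body: raw };
  const end = raw.indexOf("\n---", 3);
  if (end < 0) return { fm: {}, body: raw };
  const fmBlock = raw.slice(3, end).trim();
  const body = raw.slice(end + 4).replace(/^\n/, "");
  const fm = {};
  for (const line of fmBlock.split("\n")) {
    const m = line.match(/^([a-zA-Z_][a-zA-Z0-9_-]*)\s*:\s*(.*)$/);
    if (!m) continue;
    let val = m[2].trim();
    if ((val.startsWith('"') && val.endsWith('"')) || (val.startsWith("'") && val.endsWith("'"))) {
      val = val.slice(1, -1);
    }
    fm[m[1]] = val;
  }
  return { fm, body };
}

const raw = '---\nname: demo\ndescription: "a demo skill"\n---\n\n# Demo body\n';
const { fm, body } = parseFrontmatter(raw);
console.log(fm.name, "|", fm.description); // demo | a demo skill
console.log(body.trim());                  // # Demo body
```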
@@ -1,30 +1,54 @@
|
|
|
1
1
|
import { confirmedProperty } from "../helpers.js";
|
|
2
2
|
|
|
3
|
+
function decodePhoto({ photo_base64, photo_path, photo_url }) {
|
|
4
|
+
if (photo_url) return String(photo_url);
|
|
5
|
+
if (photo_path) return String(photo_path);
|
|
6
|
+
if (photo_base64) {
|
|
7
|
+
// Strip "data:image/...;base64," prefix if present
|
|
8
|
+
const clean = String(photo_base64).replace(/^data:image\/[a-z]+;base64,/, "");
|
|
9
|
+
return Buffer.from(clean, "base64");
|
|
10
|
+
}
|
|
11
|
+
return null;
|
|
12
|
+
}
|
|
13
|
+
|
|
3
14
|
export default {
|
|
4
15
|
name: "send_telegram",
|
|
5
16
|
schema: {
|
|
6
17
|
type: "function",
|
|
7
18
|
function: {
|
|
8
19
|
name: "send_telegram",
|
|
9
|
-
description:
|
|
20
|
+
description:
|
|
21
|
+
"Send a Telegram message via the daemon's Telegram plugin. Text only by default; pass photo_base64 (from browser_screenshot) / photo_path / photo_url to attach an image — the text becomes the caption. Use this AFTER a browser_screenshot when the user asks for a screenshot or visual reply.",
|
|
10
22
|
parameters: {
|
|
11
23
|
type: "object",
|
|
12
24
|
properties: {
|
|
13
|
-
channel:
|
|
14
|
-
chat_id:
|
|
15
|
-
text:
|
|
16
|
-
|
|
25
|
+
channel: { type: "string", description: "telegram channel name; omit for default" },
|
|
26
|
+
chat_id: { type: "string", description: "destination chat id; omit to use channel default" },
|
|
27
|
+
text: { type: "string", description: "message body (becomes the photo caption when a photo_* arg is passed)" },
|
|
28
|
+
photo_base64: { type: "string", description: "raw base64 PNG/JPG (or 'data:image/...;base64,...' data URI). Pass the `base64` field returned by browser_screenshot here." },
|
|
29
|
+
photo_path: { type: "string", description: "absolute filesystem path to an image file" },
|
|
30
|
+
photo_url: { type: "string", description: "public https URL of an image" },
|
|
31
|
+
confirmed: confirmedProperty("true only after explicit user confirmation for this exact outbound message"),
|
|
17
32
|
},
|
|
18
33
|
required: ["text"],
|
|
19
34
|
},
|
|
20
35
|
},
|
|
21
36
|
},
|
|
22
|
-
makeHandler: ({ plugins, requirePermission }) => async ({ channel, chat_id, text, confirmed = false }) => {
|
|
37
|
+
makeHandler: ({ plugins, requirePermission }) => async ({ channel, chat_id, text, photo_base64, photo_path, photo_url, confirmed = false }) => {
|
|
23
38
|
requirePermission("send_telegram", { dangerous: true, confirmed });
|
|
24
39
|
if (!plugins) throw new Error("plugins unavailable");
|
|
25
40
|
const telegram = plugins.get("telegram");
|
|
26
41
|
if (!telegram) throw new Error("telegram plugin not loaded");
|
|
42
|
+
|
|
43
|
+
const photo = decodePhoto({ photo_base64, photo_path, photo_url });
|
|
44
|
+
if (photo) {
|
|
45
|
+
const result = await telegram.sendPhoto({
|
|
46
|
+
channel, chat_id, photo, caption: text, author: "apx",
|
|
47
|
+
});
|
|
48
|
+
return { ok: true, kind: "photo", message_id: result.message_id };
|
|
49
|
+
}
|
|
50
|
+
|
|
27
51
|
const result = await telegram.send({ channel, chat_id, text, author: "apx" });
|
|
28
|
-
return { ok: true, message_id: result.message_id };
|
|
52
|
+
return { ok: true, kind: "text", message_id: result.message_id };
|
|
29
53
|
},
|
|
30
54
|
};
|
|
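The new handler branches on a `decodePhoto` helper that is not part of this hunk. A minimal sketch of what such a normalizer could look like follows; the function body, return shape, and precedence order are assumptions, since only the call site appears in the diff:

```javascript
// Hypothetical sketch of the decodePhoto helper called by the handler above.
// The real implementation is not shown in this diff; names and return shape
// here are assumptions. It normalizes the three mutually exclusive photo_*
// args into one value, or returns null so the caller falls back to plain text.
function decodePhoto({ photo_base64, photo_path, photo_url }) {
  if (photo_base64) {
    // Accept either raw base64 or a full "data:image/...;base64,..." data URI.
    const raw = photo_base64.replace(/^data:image\/[a-z+.-]+;base64,/i, "");
    return { kind: "buffer", buffer: Buffer.from(raw, "base64") };
  }
  if (photo_path) return { kind: "path", path: photo_path };
  if (photo_url) return { kind: "url", url: photo_url };
  return null;
}
```

Checking the base64 branch first matches the tool description, which tells the model to pass the `base64` field from browser_screenshot directly.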
@@ -13,6 +13,7 @@
 // }
 import { callEngine } from "./engines/index.js";
 import { TOOL_SCHEMAS, makeToolHandlers } from "./super-agent-tools.js";
+import { listSkills } from "./skills-loader.js";
 import {
   extractPseudoToolCalls,
   cleanTextOfPseudoToolCalls,
@@ -62,7 +63,9 @@ HARD RULES (do not deviate):
 16. IDENTITY RULE: when the user asks you to change your name, call yourself something, or update your personality/language, call set_identity and persist the change. Then confirm with your new name.
 17. ROUTINES RULE: NEVER create a routine in the default project (id=0). Routines MUST be tied to a specific registered project. Before adding a routine, call list_projects to find the correct project id or name. Then pass --project <id|name> to apx routine add. If no project fits, ask the user which project to use. Creating routines in project 0/default mixes unrelated projects' schedules and corrupts state.
 18. **NO EMPTY RESPONSES**: Never respond with only text when you have tools available and the user is asking you to DO something. Call the tool FIRST, then explain. Never say "I'll do X" without immediately calling the tool. Empty acknowledgments ("ok", "entendido", "dame un minuto", "voy", "checking", "stand by") without a tool call are invalid responses — they will be re-prompted and waste a turn.
-19. **CWD RULE**: When the channel context includes a "CWD: <path>" line, that is the user's current working directory. References to "este directorio", "este proyecto", "esta carpeta", "acá", "aquí", "this directory", "this project", "current dir/folder" all mean that exact CWD path. Use it as the path argument directly — DO NOT ask the user "what's the path?" when CWD is already given. Example: if user says "agregá este proyecto a la lista", call add_project({path: <CWD>}) immediately
+19. **CWD RULE**: When the channel context includes a "CWD: <path>" line, that is the user's current working directory. References to "este directorio", "este proyecto", "esta carpeta", "acá", "aquí", "this directory", "this project", "current dir/folder" all mean that exact CWD path. Use it as the path argument directly — DO NOT ask the user "what's the path?" when CWD is already given. Example: if user says "agregá este proyecto a la lista", call add_project({path: <CWD>}) immediately.
+20. **NO MANUAL SCAFFOLDING**: To register or scaffold a project, ALWAYS use add_project — it auto-creates AGENTS.md and .apc/project.json when missing (one call, atomic). NEVER write AGENTS.md, .apc/project.json, or any APC scaffold file by hand via run_shell / write_file / shell pipes. The schema must come from the official initApf scaffold, not improvised. If add_project errors, report the error to the user — don't try to work around it with shell hacks. Same for any other APC-managed file (.apc/agents/*, .apc/skills/*, etc.) — use the dedicated tool, never raw filesystem writes.
+21. **SKILLS — ON DEMAND**: The "# Available skills" section below lists every skill available to you (slug + description, NO body). When the user asks about specific APX/APC commands, project structure, agent runtimes, or anything where exact syntax or detailed behavior matches a skill description (in ANY language — match semantically, not by keyword), call load_skill({slug}) to fetch the full markdown body. If a CWD is in the contextNote, pass it as project_path so project-scoped skills resolve. If the user explicitly asks "what skills do you have?", you can either read the catalog below directly OR call list_skills to get a fresh enumeration. Do NOT load skills for trivial / unrelated questions — that wastes tokens. Don't guess CLI syntax when a skill can tell you; load it.`;
 
 function isShortConfirmation(text) {
   return /^(yes|y|si|si dale|dale|ok|okay|confirm|confirmed|go|proceed|do it)\b/i
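The `isShortConfirmation` context lines above are cut off at the hunk boundary. Under the assumption that the regex is simply applied with `.test(...)` on the next line, the check behaves like this (the regex is copied verbatim from the diff; only the completion is assumed):

```javascript
// Assumed completion of the truncated isShortConfirmation shown in the hunk
// context: the regex literal is verbatim from the diff, the trailing
// .test(text) call is an assumption.
function isShortConfirmation(text) {
  return /^(yes|y|si|si dale|dale|ok|okay|confirm|confirmed|go|proceed|do it)\b/i
    .test(text);
}
```

The `^` anchor and `\b` boundary mean only messages that *start* with one of these short acknowledgments count, in either English or Spanish, which is what the confirmation flow for `confirmed=true` tools relies on.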
@@ -136,12 +139,32 @@ export async function runSuperAgent({
     "When a tool schema has confirmed, set confirmed=true only after explicit user confirmation for that exact action.",
   ].join("\n");
 
+  // Build a lightweight catalog of available skills (slug + 1-line description).
+  // Skill BODIES are NOT included — only the catalog. The model decides which
+  // (if any) to load on demand via load_skill(slug). Cross-lingual matching is
+  // handled by the LLM itself (no router needed). Empty if no skills found.
+  const skillsCatalog = (() => {
+    let list = [];
+    try { list = listSkills(); } catch { /* loader failure → empty catalog */ }
+    if (!list.length) return "";
+    return [
+      "# Available skills (load on demand)",
+      "Below is the catalog of skills (slug + description). Bodies are NOT loaded yet.",
+      "If the user asks how something works, requests syntax/docs, or otherwise needs",
+      "knowledge that matches a skill description (in any language — match semantically),",
+      "call load_skill({slug}) to load the full markdown into your context.",
+      "",
+      ...list.map(s => `- **${s.slug}** [${s.source}]: ${s.description || "(no description)"}`),
+    ].join("\n");
+  })();
+
   const system = [
     sa.system || DEFAULT_SYSTEM,
     permissionNote,
     contextNote,
     "# Registered projects (just the index — call tools for details)",
     projectIndex || "(no projects registered)",
+    skillsCatalog,
   ]
     .filter(Boolean)
     .join("\n\n");
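The catalog block added in this hunk can be exercised in isolation. Here `listSkills` is stubbed with two sample entries (the stub values are invented for illustration; the IIFE logic is taken from the hunk):

```javascript
// Stub listSkills so the catalog logic from the hunk runs standalone.
// The two sample skills below are invented for illustration only.
const listSkills = () => [
  { slug: "apc-context", source: "builtin", description: "Read .apc/ project context" },
  { slug: "deploy-notes", source: "project", description: "" },
];

// Same shape as the skillsCatalog IIFE in the diff: empty string on loader
// failure or empty list, otherwise a markdown catalog of slug + description.
const skillsCatalog = (() => {
  let list = [];
  try { list = listSkills(); } catch { /* loader failure → empty catalog */ }
  if (!list.length) return "";
  return [
    "# Available skills (load on demand)",
    "",
    ...list.map(s => `- **${s.slug}** [${s.source}]: ${s.description || "(no description)"}`),
  ].join("\n");
})();

console.log(skillsCatalog);
```

Because an empty catalog is `""` and the `system` array is run through `.filter(Boolean)`, projects with no skills add nothing to the prompt.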