@laitszkin/apollo-toolkit 2.7.0 → 2.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/AGENTS.md +4 -3
- package/CHANGELOG.md +7 -0
- package/README.md +5 -3
- package/analyse-app-logs/LICENSE +1 -1
- package/archive-specs/LICENSE +1 -1
- package/commit-and-push/LICENSE +1 -1
- package/develop-new-features/LICENSE +1 -1
- package/docs-to-voice/LICENSE +1 -1
- package/enhance-existing-features/LICENSE +1 -1
- package/feature-propose/LICENSE +1 -1
- package/generate-spec/LICENSE +1 -1
- package/learn-skill-from-conversations/LICENSE +1 -2
- package/lib/cli.js +4 -4
- package/lib/installer.js +6 -8
- package/novel-to-short-video/LICENSE +1 -1
- package/open-github-issue/LICENSE +1 -1
- package/open-source-pr-workflow/LICENSE +1 -1
- package/openai-text-to-image-storyboard/LICENSE +1 -1
- package/openclaw-configuration/SKILL.md +30 -2
- package/openclaw-configuration/references/best-practices.md +15 -0
- package/openclaw-configuration/references/config-reference-map.md +22 -0
- package/package.json +2 -2
- package/review-change-set/LICENSE +1 -1
- package/review-codebases/LICENSE +1 -1
- package/scripts/install_skills.ps1 +10 -17
- package/scripts/install_skills.sh +10 -10
- package/shadow-api-model-research/SKILL.md +114 -0
- package/shadow-api-model-research/agents/openai.yaml +4 -0
- package/shadow-api-model-research/references/fingerprinting-playbook.md +69 -0
- package/shadow-api-model-research/references/request-shape-checklist.md +44 -0
- package/systematic-debug/LICENSE +1 -1
- package/text-to-short-video/LICENSE +1 -1
- package/version-release/LICENSE +1 -1
- package/video-production/LICENSE +16 -13
package/AGENTS.md
CHANGED
@@ -4,8 +4,8 @@
 - This repository is a skill catalog: each top-level skill lives in its own directory and is installable when that directory contains `SKILL.md`.
 - Typical skill layout is lightweight and consistent: `SKILL.md`, `README.md`, `LICENSE`, plus optional `agents/`, `references/`, and `scripts/`.
-- The npm package exposes an `apollo-toolkit` CLI that stages a managed copy under `~/.apollo-toolkit` and
-- `scripts/install_skills.sh` and `scripts/install_skills.ps1` remain available for local/curl installs and mirror the managed-home
+- The npm package exposes an `apollo-toolkit` CLI that stages a managed copy under `~/.apollo-toolkit` and copies each skill folder into selected target directories.
+- `scripts/install_skills.sh` and `scripts/install_skills.ps1` remain available for local/curl installs and mirror the managed-home copy behavior.

 ## Core Business Flow

@@ -20,7 +20,7 @@ This repository enables users to install and run a curated set of reusable agent
 - Users can research a topic deeply and produce evidence-based deliverables.
 - Users can research the latest completed market week and produce a PDF watchlist of tradeable instruments for the coming week.
 - Users can turn a marked weekly finance PDF into a concise evidence-based financial event report.
-- Users can install Apollo Toolkit through npm or npx and interactively choose one or more target skill directories to
+- Users can install Apollo Toolkit through npm or npx and interactively choose one or more target skill directories to populate with copied skills.
 - Users can design and implement new features through a spec-first workflow.
 - Users can generate shared feature spec, task, and checklist planning artifacts for approval-gated workflows.
 - Users can convert text or documents into audio files with subtitle timelines.

@@ -44,6 +44,7 @@ This repository enables users to install and run a curated set of reusable agent
 - Users can process GitHub pull request review comments and resolve addressed threads.
 - Users can perform repository-wide code reviews and publish confirmed findings as GitHub issues.
 - Users can schedule a bounded project runtime window, stop it automatically, and analyze module health from captured logs.
+- Users can investigate gated or shadow LLM APIs by capturing real client request shapes, replaying verified traffic patterns, and attributing the likely underlying model through black-box fingerprinting.
 - Users can build and maintain Solana programs and Rust clients using official Solana development workflows.
 - Users can add focused observability to opaque workflows through targeted logs, metrics, traces, and tests.
 - Users can build against Jupiter's official Solana swap, token, price, lending, trigger, recurring, and portfolio APIs with an evidence-based development guide.
package/CHANGELOG.md
CHANGED
@@ -4,6 +4,13 @@ All notable changes to this repository are documented in this file.

 ## [Unreleased]

+## [v2.8.0] - 2026-03-21
+
+### Changed
+- Change the npm installer and local install scripts to copy managed skill directories into selected targets instead of creating symlinks.
+- Replace legacy Apollo Toolkit symlink installs with real copied skill directories during reinstall, while still removing stale skills that no longer ship in the current version.
+- Normalize every repository `LICENSE` file to the MIT template owned by `LaiTszKin`.
+
 ## [v2.7.0] - 2026-03-20

 ### Added
package/README.md
CHANGED
@@ -1,6 +1,6 @@
 # Apollo Toolkit Skills

-A curated skill catalog for Codex, OpenClaw, and Trae with a managed installer that keeps the toolkit in `~/.apollo-toolkit` and
+A curated skill catalog for Codex, OpenClaw, and Trae with a managed installer that keeps the toolkit in `~/.apollo-toolkit` and copies each skill into the targets you choose.

 ## Included skills

@@ -37,6 +37,7 @@ A curated skill catalog for Codex, OpenClaw, and Trae with a managed installer t
 - review-change-set
 - review-codebases
 - scheduled-runtime-health-check
+- shadow-api-model-research
 - solana-development
 - systematic-debug
 - text-to-short-video

@@ -56,8 +57,9 @@ The interactive installer:
 - shows a branded `Apollo Toolkit` terminal welcome screen with a short staged reveal
 - installs a managed copy into `~/.apollo-toolkit`
 - lets you multi-select `codex`, `openclaw`, `trae`, or `all`
-
-
+- copies `~/.apollo-toolkit/<skill>` into each selected target
+- removes stale previously installed skill directories that existed in the previous installed version but no longer exist in the current package skill list
+- replaces legacy symlink-based installs created by older Apollo Toolkit installers with real copied directories

 ### Global install
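The stale-skill cleanup described in the installer bullets above is, at its core, a set difference over skill names. A minimal runnable sketch (the skill names here are illustrative, not from the package):

```javascript
// Skills present in the previously installed version but absent from the
// current package are considered stale and are removed from each target.
const previousSkillNames = ['alpha-skill', 'beta-skill', 'gamma-skill']; // hypothetical
const currentSkillNames = ['alpha-skill', 'gamma-skill'];                // hypothetical

const staleSkillNames = previousSkillNames.filter(
  (name) => !currentSkillNames.includes(name),
);

console.log(staleSkillNames); // → [ 'beta-skill' ]
```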
package/analyse-app-logs/LICENSE
CHANGED
package/archive-specs/LICENSE
CHANGED
package/commit-and-push/LICENSE
CHANGED
package/docs-to-voice/LICENSE
CHANGED
package/feature-propose/LICENSE
CHANGED
package/generate-spec/LICENSE
CHANGED
@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2026
+Copyright (c) 2026 LaiTszKin

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

@@ -19,4 +19,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
-
package/lib/cli.js
CHANGED
@@ -67,8 +67,8 @@ function buildWelcomeScreen({ version, colorEnabled, stage = 4 }) {
 '',
 'This setup will configure:',
 ` ${color('*', '1;33', colorEnabled)} A managed Apollo Toolkit home in ${color('~/.apollo-toolkit', '1', colorEnabled)}`,
-` ${color('*', '1;33', colorEnabled)}
-` ${color('*', '1;33', colorEnabled)} A clean install flow with target-aware
+` ${color('*', '1;33', colorEnabled)} Copied skill folders for your selected targets`,
+` ${color('*', '1;33', colorEnabled)} A clean install flow with target-aware replacement`,
 );
 }

@@ -178,7 +178,7 @@ function renderSelectionScreen({ output, version, cursor, selected, message, env
 clearScreen(output);
 output.write(`${buildBanner({ version, colorEnabled })}\n\n`);
-output.write('Choose where Apollo Toolkit should
+output.write('Choose where Apollo Toolkit should copy managed skills.\n');
 output.write(`${color('Use Up/Down', '1;33', colorEnabled)} (or ${color('j/k', '1;33', colorEnabled)}) to move, ${color('Space', '1;33', colorEnabled)} to toggle, ${color('Enter', '1;33', colorEnabled)} to continue.\n`);
 output.write(`Press ${color('a', '1;33', colorEnabled)} to toggle all, ${color('q', '1;33', colorEnabled)} to cancel.\n\n`);

@@ -329,7 +329,7 @@ function printSummary({ stdout, version, toolkitHome, modes, installResult, env
 stdout.write(color('Installation complete.', '1;32', colorEnabled));
 stdout.write('\n');
 stdout.write(`Apollo Toolkit home: ${toolkitHome}\n`);
-stdout.write(`
+stdout.write(`Installed skills: ${installResult.skillNames.length}\n`);
 stdout.write(`Targets: ${modes.join(', ')}\n\n`);

 for (const target of installResult.targets) {
package/lib/installer.js
CHANGED
@@ -185,18 +185,16 @@ async function ensureDirectory(dirPath) {
 await fsp.mkdir(dirPath, { recursive: true });
 }

-async function
+async function replaceWithCopy(sourcePath, targetPath) {
 await fsp.rm(targetPath, { recursive: true, force: true });
 await ensureDirectory(path.dirname(targetPath));
-
-const type = process.platform === 'win32' ? 'junction' : 'dir';
-await fsp.symlink(sourcePath, targetPath, type);
+await fsp.cp(sourcePath, targetPath, { recursive: true, force: true });
 }

 async function installLinks({ toolkitHome, modes, env = process.env, previousSkillNames = [] }) {
 const skillNames = await listSkillNames(toolkitHome);
 const targets = await getTargetRoots(modes, env);
-const
+const copiedPaths = [];
 const staleSkillNames = previousSkillNames.filter((skillName) => !skillNames.includes(skillName));

 for (const target of targets) {

@@ -207,15 +205,15 @@ async function installLinks({ toolkitHome, modes, env = process.env, previousSki
 for (const skillName of skillNames) {
 const sourcePath = path.join(toolkitHome, skillName);
 const targetPath = path.join(target.root, skillName);
-await
-
+await replaceWithCopy(sourcePath, targetPath);
+copiedPaths.push({ target: target.label, path: targetPath, skillName });
 }
 }

 return {
 skillNames,
 targets,
-
+copiedPaths,
 };
 }
package/openclaw-configuration/SKILL.md
CHANGED

@@ -9,6 +9,7 @@ description: Build, audit, and explain OpenClaw configuration from official docu
 - Required: none.
 - Conditional: `answering-questions-with-research` when a request depends on newer OpenClaw docs than the bundled references cover.
+- Conditional: `commit-and-push` when the user explicitly wants OpenClaw workspace changes committed and pushed after validation.
 - Optional: none.
 - Fallback: If the local CLI is unavailable, work from the bundled references and clearly mark any runtime behavior that was not verified on the machine.

@@ -33,6 +34,8 @@ Decide whether the user needs:
 - a new starter config
 - a targeted key update
 - skills loading or per-skill env setup
+- workspace persona or memory customization under `~/.openclaw/workspace`
+- browser, exec, or sandbox permission changes
 - secrets or provider wiring
 - validation or repair of a broken config

@@ -55,7 +58,9 @@ Assume the canonical config file is `~/.openclaw/openclaw.json` unless the envir
 - Prefer `openclaw config set` for one-path edits.
 - Prefer SecretRefs or env substitution over plaintext credentials.
-- For skill-specific setup,
+- For skill-specific setup, prefer the workspace convention `~/.openclaw/workspace/skills`, then wire it through `skills.load.extraDirs`, `skills.entries.<skillKey>`, and per-skill `env` or `apiKey`.
+- When the user is customizing the assistant persona or standing instructions, inspect and edit the matching workspace files such as `AGENTS.md`, `TOOLS.md`, `SOUL.md`, `USER.md`, and `memory/*.md` instead of stuffing everything into `openclaw.json`.
+- When enabling automation or browser workflows, verify the actual permission path for `browser`, `exec`, and sandbox behavior rather than assuming the profile already grants them.
 - Do not invent unknown root keys; OpenClaw rejects schema-invalid config.

 ### 5. Validate before finishing

@@ -87,7 +92,30 @@ Summarize the relevant branch and point back to the matching official page rathe

 ### Configure skills

-Use `skills.load.extraDirs` for additional skill folders and `skills.entries.<skillKey>` for per-skill enablement, env vars, or `apiKey`.
+Use `skills.load.extraDirs` for additional skill folders and `skills.entries.<skillKey>` for per-skill enablement, env vars, or `apiKey`. When the user asks for OpenClaw workspace-local skills, default to `~/.openclaw/workspace/skills` unless the environment proves another convention.
+
+### Customize workspace instructions and persona
+
+When the request is about how the assistant should behave inside OpenClaw, inspect the workspace instruction files first and keep each edit in the narrowest home:
+
+- `AGENTS.md` for workflow rules, completion criteria, and memory discipline
+- `TOOLS.md` for tool usage instructions
+- `SOUL.md` for persona or relationship framing
+- `USER.md` for the user's profile and durable identity details
+- `memory/*.md` for durable corrections, failures, and learned preferences
+
+If the workspace is a git repo and the user explicitly asks to persist those changes remotely, validate first and then hand off to `commit-and-push`.
+
+### Verify tool permissions
+
+When the user says "make sure OpenClaw can use this tool," confirm the exact config path and runtime status for:
+
+- `tools.*` policy entries
+- sandbox mode and workspace access
+- `browser.enabled` and any browser profile settings
+- any profile-level defaults that may still block the tool
+
+Report both the config edit and the runtime verification command; do not assume that a schema-valid config means the tool is actually usable.

 ### Wire secrets
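The skills-wiring described above implies a small, concrete config shape. A hedged sketch built as a plain object (the key paths `skills.load.extraDirs`, `skills.entries.<skillKey>.enabled`, and `.env` follow the diff text; the skill key and env var name here are illustrative, not from the package):

```javascript
// Illustrative patch shape for wiring a workspace-local skill root plus one
// per-skill entry; prefer env substitution over plaintext credentials.
const configPatch = {
  skills: {
    load: {
      extraDirs: ['~/.openclaw/workspace/skills'],
    },
    entries: {
      'example-skill': {                                  // hypothetical skill key
        enabled: true,
        env: { EXAMPLE_TOKEN: '${SECRET_EXAMPLE_TOKEN}' }, // hypothetical env var
      },
    },
  },
};

console.log(JSON.stringify(configPatch, null, 2));
```

This is a sketch of the shape only; validate any real edit against the schema before finishing, as the skill itself instructs.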
package/openclaw-configuration/references/best-practices.md
CHANGED

@@ -30,6 +30,7 @@ These rules are distilled from the official OpenClaw docs and adapted into a pra

 ## Skills rules

+- For OpenClaw workspace-local skills, prefer `~/.openclaw/workspace/skills` as the first extra skill root unless the machine already uses a different proven convention.
 - Put extra skill roots in `skills.load.extraDirs` instead of hard-coding ad hoc discovery logic elsewhere.
 - Use `skills.entries.<skillKey>.enabled` for explicit enable or disable state.
 - Use `skills.entries.<skillKey>.env` for skill-local environment variables.

@@ -37,6 +38,20 @@ These rules are distilled from the official OpenClaw docs and adapted into a pra
 - Leave `skills.load.watch` enabled while iterating on local skills unless file-watch churn becomes a confirmed problem.
 - If you set `skills.install.nodeManager`, remember the official docs still recommend Node as the runtime for the gateway itself.

+## Workspace customization rules
+
+- Do not force persona, memory, and workflow instructions into `openclaw.json` when the workspace already has dedicated files.
+- Use `AGENTS.md` for task-completion and memory-management instructions.
+- Use `TOOLS.md` for browser, Playwright, or wrapper command guidance.
+- Use `SOUL.md` for persona framing and `USER.md` for durable user profile facts.
+- When the user asks OpenClaw to remember valuable failures or corrections, store them in the workspace memory files that the current workspace already uses rather than inventing a second memory system.
+
+## Tool permission rules
+
+- Treat browser and sandbox enablement as a two-part check: config policy plus runtime verification.
+- After changing tool policy, verify the effective runtime state with the relevant OpenClaw command instead of trusting config shape alone.
+- If a tool still fails after config edits, inspect whether profile defaults or sandbox policy override the leaf setting.
+
 ## Troubleshooting rules

 - If the gateway stops booting after a config change, assume schema breakage first and validate before changing unrelated systems.
package/openclaw-configuration/references/config-reference-map.md
CHANGED

@@ -148,6 +148,28 @@ Notes confirmed by the docs:
 - `skills.entries` keys default to the skill name
 - if a skill defines `metadata.openclaw.skillKey`, use that key instead
 - watcher-driven skill changes are picked up on the next agent turn when watching is enabled
+- for workspace-scoped customization, `~/.openclaw/workspace/skills` is a practical local skill root to wire through `skills.load.extraDirs`
+
+## Workspace files often edited alongside config
+
+These are not all part of the OpenClaw JSON schema, but they are common neighboring files when users ask to customize behavior:
+
+- `~/.openclaw/workspace/AGENTS.md`
+- `~/.openclaw/workspace/TOOLS.md`
+- `~/.openclaw/workspace/SOUL.md`
+- `~/.openclaw/workspace/USER.md`
+- `~/.openclaw/workspace/memory/*.md`
+
+Use them for persona, tool instructions, durable user profile details, and memory-management rules instead of overloading `openclaw.json`.
+
+## Tool and sandbox checks worth remembering
+
+- A valid config does not prove the tool is usable at runtime.
+- When enabling browser automation or sandboxed command execution, verify the effective state after editing:
+  - tool policy
+  - sandbox mode and workspace access
+  - browser enablement and profile selection
+- Profile defaults can still block a tool even when a nearby leaf branch looks permissive, so check the effective runtime path, not only the edited key.

 ## Example snippets to adapt
package/package.json
CHANGED
@@ -1,7 +1,7 @@
 {
 "name": "@laitszkin/apollo-toolkit",
-"version": "2.
-"description": "Apollo Toolkit npm installer for managed skill
+"version": "2.8.0",
+"description": "Apollo Toolkit npm installer for managed skill copying across Codex, OpenClaw, and Trae.",
 "license": "MIT",
 "author": "LaiTszKin",
 "homepage": "https://github.com/LaiTszKin/apollo-toolkit#readme",
package/review-codebases/LICENSE
CHANGED
package/scripts/install_skills.ps1
CHANGED
@@ -12,9 +12,9 @@ Usage:
 ./scripts/install_skills.ps1 [codex|openclaw|trae|all]...

 Modes:
-  codex
-  openclaw
-  trae
+  codex     Copy skills into ~/.codex/skills
+  openclaw  Copy skills into ~/.openclaw/workspace*/skills
+  trae      Copy skills into ~/.trae/skills
   all       Install all supported targets

 Optional environment overrides:

@@ -33,7 +33,7 @@ function Show-Banner {
 @"
 +------------------------------------------+
 | Apollo Toolkit |
-| npm installer and skill
+| npm installer and skill copier |
 +------------------------------------------+
 "@
 }

@@ -180,7 +180,7 @@ function Remove-PathForce {
 }
 }

-function
+function Copy-Skill {
 param(
 [string]$Source,
 [string]$TargetRoot

@@ -192,15 +192,8 @@ function Link-Skill {
 New-Item -ItemType Directory -Path $TargetRoot -Force | Out-Null
 Remove-PathForce -Target $target

-
-
-Write-Host "[linked] $target -> $Source"
-}
-catch {
-# Fallback for environments where symlink permission is restricted.
-New-Item -Path $target -ItemType Junction -Target $Source -Force | Out-Null
-Write-Host "[linked-junction] $target -> $Source"
-}
+Copy-Item -LiteralPath $Source -Destination $target -Recurse -Force
+Write-Host "[copied] $Source -> $target"
 }

@@ -215,7 +208,7 @@ function Install-Codex {
 Write-Host "Installing to codex: $target"
 foreach ($src in $SkillPaths) {
-
+Copy-Skill -Source $src -TargetRoot $target
 }
 }

@@ -242,7 +235,7 @@ function Install-OpenClaw {
 $skillsDir = Join-Path $workspace.FullName "skills"
 Write-Host "Installing to openclaw workspace: $skillsDir"
 foreach ($src in $SkillPaths) {
-
+Copy-Skill -Source $src -TargetRoot $skillsDir
 }
 }
 }

@@ -259,7 +252,7 @@ function Install-Trae {
 Write-Host "Installing to trae: $target"
 foreach ($src in $SkillPaths) {
-
+Copy-Skill -Source $src -TargetRoot $target
 }
 }
package/scripts/install_skills.sh
CHANGED

@@ -7,9 +7,9 @@ Usage:
 ./scripts/install_skills.sh [codex|openclaw|trae|all]...

 Modes:
-  codex
-  openclaw
-  trae
+  codex     Copy skills into ~/.codex/skills
+  openclaw  Copy skills into ~/.openclaw/workspace*/skills
+  trae      Copy skills into ~/.trae/skills
   all       Install all supported targets

 Optional environment overrides:

@@ -29,7 +29,7 @@ show_banner() {
 cat <<'BANNER'
 +------------------------------------------+
 | Apollo Toolkit |
-| npm installer and skill
+| npm installer and skill copier |
 +------------------------------------------+
 BANNER
 }

@@ -74,7 +74,7 @@ collect_skills() {
 fi
 }

-
+replace_with_copy() {
 local src="$1"
 local target_root="$2"
 local name target

@@ -86,8 +86,8 @@ replace_with_symlink() {
 if [[ -e "$target" || -L "$target" ]]; then
 rm -rf "$target"
 fi
-
-echo "[
+cp -R "$src" "$target"
+echo "[copied] $src -> $target"
 }

@@ -96,7 +96,7 @@ install_codex() {
 echo "Installing to codex: $codex_skills_dir"
 for src in "${SKILL_PATHS[@]}"; do
-
+replace_with_copy "$src" "$codex_skills_dir"
 done
 }

@@ -120,7 +120,7 @@ install_openclaw() {
 skills_dir="$workspace/skills"
 echo "Installing to openclaw workspace: $skills_dir"
 for src in "${SKILL_PATHS[@]}"; do
-
+replace_with_copy "$src" "$skills_dir"
 done
 done
 }

@@ -131,7 +131,7 @@ install_trae() {
 echo "Installing to trae: $trae_skills_dir"
 for src in "${SKILL_PATHS[@]}"; do
-
+replace_with_copy "$src" "$trae_skills_dir"
 done
 }
@@ -0,0 +1,114 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: shadow-api-model-research
|
|
3
|
+
description: Investigate gated or shadow LLM APIs by capturing real client request shapes, separating request-shape gating from auth/entitlement checks, replaying verified traffic patterns, and attributing the likely underlying model with black-box fingerprinting. Use when users ask how Codex/OpenClaw/custom-provider traffic works, want a capture proxy or replay harness, need LLMMAP-style model comparison, or want a research report on which model a restricted endpoint likely wraps.
|
|
4
|
+
---
|
|
5
|
+
|
|
6
|
+
# Shadow API Model Research
|
|
7
|
+
|
|
8
|
+
## Dependencies
|
|
9
|
+
|
|
10
|
+
- Required: `answering-questions-with-research` for primary-source web verification and code-backed explanations.
|
|
11
|
+
- Conditional: `openclaw-configuration` when the capture path uses OpenClaw custom providers or workspace config edits; `deep-research-topics` when the user wants a formal report, especially PDF output.
|
|
12
|
+
- Optional: none.
|
|
13
|
+
- Fallback: If you cannot inspect either the real client code path or authorized live traffic, stop and report the missing evidence instead of guessing from headers or marketing copy.
|
|
14
|
+
|
|
15
|
+
## Standards
|
|
16
|
+
|
|
17
|
+
- Evidence: Base conclusions on actual client code, captured traffic, official docs, and controlled replay results; do not infer protocol details from memory alone.
|
|
18
|
+
- Execution: Split the job into request-shape capture, replay validation, and model-attribution analysis; treat each as a separate hypothesis gate.
|
|
19
|
+
- Quality: Distinguish request-shape compatibility, auth or entitlement requirements, system-prompt wrapping, and underlying-model behavior; never collapse them into one claim.
|
|
20
|
+
- Output: Return the tested providers, exact capture or replay setup, prompt set or scoring rubric, observed differences, and an explicit confidence statement plus caveats.
|
|
21
|
+
|
|
22
|
+
## Goal
|
|
23
|
+
|
|
24
|
+
Help another agent run lawful, evidence-based shadow-API research without drifting into guesswork about what a gated endpoint checks or which model it wraps.
|
|
25
|
+
|
|
26
|
+
## Workflow
|
|
27
|
+
|
|
28
|
+
### 1. Classify the research ask
|
|
29
|
+
|
|
30
|
+
Decide which of these the user actually needs:
|
|
31
|
+
|
|
32
|
+
- capture the true request shape from a known client
|
|
33
|
+
- configure OpenClaw or another client to hit a controlled endpoint
|
|
34
|
+
- build a replay harness from observed traffic
|
|
35
|
+
- compare the endpoint against known providers with black-box prompts
|
|
36
|
+
- package findings into a concise report
|
|
37
|
+
|
|
38
|
+
If the user is mixing all of them, still execute in that order: capture first, replay second, attribution third.
|
|
39
|
+
|
|
40
|
+
### 2. Verify the real client path before writing any script
|
|
41
|
+
|
|
42
|
+
- Inspect the local client code and active config first.
|
|
43
|
+
- For OpenClaw, load the relevant official docs or local source through `answering-questions-with-research`, and use `openclaw-configuration` if you need to rewire a custom provider for capture.
|
|
44
|
+
- When Codex or another official client is involved, verify current behavior from primary sources and the local installed code when available.
|
|
45
|
+
- Do not claim that a request shape is "Codex-compatible" or "OpenClaw-compatible" until you have either:
|
|
46
|
+
- captured it from the client, or
|
|
47
|
+
- confirmed it from the current implementation and docs.
|
|
48
|
+
|
|
49
|
+
### 3. Capture the true request shape
|
|
50
|
+
|
|
51
|
+
- Read `references/request-shape-checklist.md` before touching the network path.
|
|
52
|
+
- Prefer routing the real client to a capture proxy or controlled upstream you own.
|
|
53
|
+
- Record, at minimum:
|
|
54
|
+
- method
|
|
55
|
+
- path
|
|
56
|
+
- query parameters
|
|
57
|
+
- headers
|
|
58
|
+
- body schema
|
|
59
|
+
- streaming or SSE frame shape
|
|
60
|
+
- retries, timeouts, and backoff behavior
|
|
61
|
+
- any client-added metadata that changes between providers or models
|
|
62
|
+
- Treat aborted turns or partially applied config edits as tainted state; re-check the active config before trusting a capture.
|
|
63
|
+
|
|
64
|
+
### 4. Separate request gating from auth or entitlement
|
|
65
|
+
|
|
66
|
+
- Build explicit hypotheses for what the endpoint may be checking:
|
|
67
|
+
- plain OpenAI-compatible schema only
|
|
68
|
+
- static headers or user-agent shape
|
|
69
|
+
- transport details such as SSE formatting
|
|
70
|
+
- token claims, workspace identity, or other entitlement state
|
|
71
|
+
- Do not tell the user that replaying the request shape is sufficient unless the replay actually works.
|
|
72
|
+
- If the evidence shows the endpoint still rejects cloned traffic, report that the barrier is likely beyond the visible request shape.
|
|
73
|
+
|
|
74
|
+
### 5. Build the replay harness only from observed facts
|
|
75
|
+
|
|
76
|
+
- Read `references/fingerprinting-playbook.md` before implementing the replay phase.
|
|
77
|
+
- Use `.env` or equivalent env-backed config for base URLs, API keys, and provider labels.
|
|
78
|
+
- Mirror only the fields that were actually observed from the client.
|
|
79
|
+
- Keep capture and replay scripts separate unless there is a strong reason to combine them.
|
|
80
|
+
- Preserve the observed stream mode; do not silently downgrade SSE to non-streaming or vice versa.
|
|
81
|
+
|
|
### 6. Run black-box fingerprinting

- Compare the target endpoint against one or more control providers with known or documented models.
- Use a prompt matrix that spans:
  - coding or tool-use style
  - factual knowledge questions with externally verified answers
  - refusal and policy behavior
  - instruction-following edge cases
  - long-context or truncation behavior when relevant
- When building factual question sets, verify the answer key from primary sources or fresh web research instead of relying on memory.
- If the user wants LLMMAP-style comparison, keep the benchmark inputs fixed across providers and score each response on the same rubric.

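The fixed-input requirement can be sketched as a prompt matrix that sends identical prompts to every provider. The category names follow the list above; the example prompts and the `Callable`-per-provider shape are assumptions, not part of any specific client:

```python
from itertools import product
from typing import Callable

# Fixed benchmark inputs: identical prompts go to every provider.
# The prompts themselves are hypothetical placeholders.
PROMPT_MATRIX = {
    "coding": ["Write a function that reverses a singly linked list."],
    "factual": ["What year was the first transatlantic telegraph cable completed?"],
    "instruction": ["Answer with exactly three bullet points: name three prime numbers."],
}

def run_matrix(providers: dict[str, Callable[[str], str]]) -> list[dict]:
    """Send every prompt to every provider so responses are directly comparable."""
    results = []
    for (category, prompts), (name, call) in product(PROMPT_MATRIX.items(),
                                                     providers.items()):
        for prompt in prompts:
            results.append({"provider": name, "category": category,
                            "prompt": prompt, "response": call(prompt)})
    return results
```

Because the matrix is fixed, any per-provider difference in the results is attributable to the provider, not to prompt drift.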
### 7. Report with confidence and caveats

- Summarize:
  - what was captured
  - what replayed successfully
  - which differences were protocol-level versus model-level
  - the most likely underlying model family
  - the confidence level and why
- If system prompts or provider-side wrappers likely distort the output, say so explicitly and lower confidence accordingly.
- If the user wants a report artifact, hand off to `deep-research-topics` after the evidence has been collected.

## References

- `references/request-shape-checklist.md` for the capture and replay evidence checklist.
- `references/fingerprinting-playbook.md` for comparison design, scoring dimensions, and report structure.

## Guardrails

- Keep the work on systems the user is authorized to inspect or test.
- Do not present speculation about hidden auth checks as established fact.
- Do not over-index on one response; model attribution needs repeated prompts and multiple signal types.
@@ -0,0 +1,4 @@
interface:
  display_name: "Shadow API Model Research"
  short_description: "Capture gated client traffic and attribute likely model families"
  default_prompt: "Use $shadow-api-model-research when the task is to inspect Codex/OpenClaw/custom-provider request shapes, build a capture or replay workflow, compare a restricted endpoint against control providers, or estimate the likely underlying model with black-box fingerprinting."
@@ -0,0 +1,69 @@
# Fingerprinting Playbook

Use this playbook after you have trustworthy captured traffic or a validated replay harness.

## Comparison design

- Keep prompts, temperature-like settings, and stream mode fixed across providers.
- Prefer at least one documented control provider with a known model family.
- Run multiple prompt categories; one category is not enough for attribution.

## Recommended prompt categories

### 1. Factual knowledge

- Use questions with fresh, externally verifiable answers.
- Build the answer key from current primary sources or credible web verification.
- Score for correctness, completeness, and unsupported claims.

### 2. Coding style

- Use short implementation tasks and bug-fix prompts.
- Compare code structure, caution level, and explanation style.

### 3. Instruction following

- Use prompts with explicit formatting or ranking constraints.
- Compare compliance, stability, and unnecessary extra content.

### 4. Refusal and policy behavior

- Use borderline prompts that should trigger a recognizable refusal or safe alternative.
- Compare refusal style, redirect wording, and partial compliance behavior.

### 5. Long-context behavior

- Only run this when the target is expected to support larger contexts.
- Compare truncation, summarization drift, and consistency across later references.

## Scoring dimensions

Score each response on a fixed rubric, for example:

- factual accuracy
- completeness
- instruction compliance
- reasoning clarity
- code quality
- refusal consistency
- verbosity control
- latency or throughput when that matters

Use the same rubric for every provider and every prompt.

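The same-rubric rule can be enforced mechanically when aggregating grades. A sketch; the dimension names come from the list above, while the numeric scale and the per-response dict shape are assumptions:

```python
from statistics import mean

# Rubric dimensions from the list above; a 0-5 integer scale is assumed.
RUBRIC = ("factual_accuracy", "completeness", "instruction_compliance",
          "reasoning_clarity", "code_quality", "refusal_consistency",
          "verbosity_control")

def score_provider(graded: list[dict[str, int]]) -> dict[str, float]:
    """Average per-dimension scores across all graded responses for one provider.

    Rejects any response that was not scored on the full rubric, so every
    provider is compared on identical dimensions.
    """
    for response in graded:
        missing = set(RUBRIC) - set(response)
        if missing:
            raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return {dim: mean(r[dim] for r in graded) for dim in RUBRIC}
```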
## Confidence discipline

- High confidence needs multiple converging signals across categories.
- Medium confidence fits cases where the target tracks one family strongly but wrappers may distort style.
- Low confidence fits cases where the protocol was captured but output signals remain mixed.

## Suggested report structure

1. Research objective
2. Capture setup
3. Replay validation status
4. Prompt matrix and controls
5. Scoring rubric
6. Comparative findings
7. Most likely model family
8. Caveats, including wrappers and system prompts
@@ -0,0 +1,44 @@
# Request Shape Checklist

Use this checklist before claiming you understand how a gated endpoint is called.

## 1. Capture setup

- Confirm which client will generate the traffic.
- Confirm the exact provider or model config that client is using.
- Route the client to a capture proxy, reverse proxy, or controlled upstream you can inspect.
- Freeze other moving parts when possible: same model, same prompt, same stream mode, same tool settings.

## 2. What to record

- HTTP method
- URL path
- query string
- request headers
- body payload shape
- stream or SSE response framing
- request or response IDs
- retries, reconnects, timeout handling, rebroadcast logic
- any provider-specific or model-specific metadata fields

## 3. Environment and config evidence

- Save the effective client config that produced the capture.
- Save the exact env vars used for base URL, API key, and provider selection.
- When OpenClaw is involved, note the relevant `openclaw.json` keys and whether validation passed.

## 4. Replay readiness gate

Do not write the replay script until you can answer all of these:

- Which fields are constant across repeated requests?
- Which fields change per request?
- Which fields come from auth or session state?
- Is the target using streaming?
- Is the target expecting OpenAI-compatible JSON or a client-specific variant?

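The first two gate questions can be answered mechanically by diffing repeated captures. A sketch that classifies top-level body fields as constant or per-request; deciding which variable fields come from auth or session state still needs human judgment:

```python
def classify_fields(captures: list[dict]) -> tuple[set[str], set[str]]:
    """Split top-level body fields into constant-across-captures vs per-request."""
    keys = set().union(*(c.keys() for c in captures))
    constant, variable = set(), set()
    for key in keys:
        # repr() lets unhashable values (lists, dicts) be compared for equality.
        values = {repr(c.get(key)) for c in captures}
        (constant if len(values) == 1 else variable).add(key)
    return constant, variable
```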
## 5. Replay result interpretation

- Success with cloned shape does not prove the model identity.
- Failure with cloned shape does not prove the headers are wrong; it may indicate entitlement or hidden session checks.
- If only some requests replay, compare stream mode, auth tokens, and subtle metadata differences before drawing conclusions.
package/systematic-debug/LICENSE
CHANGED
package/version-release/LICENSE
CHANGED
package/video-production/LICENSE
CHANGED
@@ -1,18 +1,21 @@
 MIT License
 
-Copyright (c) 2026
+Copyright (c) 2026 LaiTszKin
 
-Permission is hereby granted, free of charge, to any person obtaining a copy
-associated documentation files (the "Software"), to deal
-
-
-
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
 
-The above copyright notice and this permission notice shall be included in all
-portions of the Software.
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
 
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-
-
-
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.