@laitszkin/apollo-toolkit 2.12.5 → 2.12.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +23 -0
- package/README.md +3 -0
- package/commit-and-push/SKILL.md +6 -1
- package/develop-new-features/SKILL.md +5 -0
- package/enhance-existing-features/SKILL.md +4 -0
- package/lib/cli.js +28 -1
- package/lib/installer.js +40 -8
- package/lib/updater.js +193 -0
- package/package.json +3 -2
- package/production-sim-debug/SKILL.md +15 -2
- package/scripts/install_skills.ps1 +56 -17
- package/scripts/install_skills.sh +44 -16
- /package/{codex-memory-manager → codex/codex-memory-manager}/LICENSE +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/README.md +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/SKILL.md +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/agents/openai.yaml +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/scripts/extract_recent_conversations.py +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/scripts/sync_memory_index.py +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/tests/test_extract_recent_conversations.py +0 -0
- /package/{codex-memory-manager → codex/codex-memory-manager}/tests/test_sync_memory_index.py +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/CHANGELOG.md +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/LICENSE +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/README.md +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/SKILL.md +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/agents/openai.yaml +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/scripts/extract_recent_conversations.py +0 -0
- /package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/tests/test_extract_recent_conversations.py +0 -0
package/CHANGELOG.md
CHANGED
@@ -4,6 +4,29 @@ All notable changes to this repository are documented in this file.
 
 ## [Unreleased]
 
+## [v2.12.7] - 2026-04-02
+
+### Added
+- Add `claude-code` install mode for copying skills into `~/.claude/skills`, with `CLAUDE_CODE_SKILLS_DIR` environment override support.
+
+### Changed
+- Move `codex-memory-manager` and `learn-skill-from-conversations` into `codex/` subdirectory to clarify agent-specific skill boundaries.
+- Update codex install mode to include skills from both root directory and the `codex/` subdirectory.
+
+## [v2.12.6] - 2026-04-02
+
+### Added
+- Add the global `apltk` CLI alias so the Apollo Toolkit installer can be launched with a shorter command after npm installation.
+
+### Changed
+- Update `develop-new-features` and `enhance-existing-features` so any spec-backed change affecting more than three modules must be split into independent, non-conflicting, non-dependent spec sets.
+- Expand `commit-and-push` with stricter worktree replay and cleanup rules so temporary worktree delivery verifies the authoritative target branch before removing the worktree.
+- Strengthen `production-sim-debug` so protocol-sensitive simulation claims must be checked against official docs or upstream source, and infeasible local-simulation designs must be collapsed quickly instead of left as pending implementation.
+- Update the Apollo Toolkit CLI so interactive global runs can start from `apltk`, check npm for newer published packages, and offer an in-place global update before continuing.
+
+### Fixed
+- Fix updater version comparison so prerelease builds such as `2.12.5-beta.1` no longer suppress available stable-release upgrade prompts.
+
 ## [v2.12.5] - 2026-04-01
 
 ### Changed
package/README.md
CHANGED
@@ -70,9 +70,12 @@ The interactive installer:
 
 ```bash
 npm i -g @laitszkin/apollo-toolkit
+apltk
 apollo-toolkit
 ```
 
+After a global install, `apltk` and `apollo-toolkit` both launch the same Apollo Toolkit CLI. Running `apltk` directly opens the interactive install screen and, in interactive mode, first checks the npm registry for a newer version; if one is available, the CLI asks first and then performs the global update automatically.
+
 ### Non-interactive install
 
 ```bash
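The "interactive mode" qualifier above is enforced by a small gate in `lib/updater.js` later in this diff: the registry check is skipped when `APOLLO_TOOLKIT_SKIP_UPDATE_CHECK=1` is set, or when either standard stream is not a TTY. A standalone sketch of that gate:

```javascript
// Mirrors updater.js's shouldSkipUpdateCheck: skip on explicit opt-out or non-TTY streams.
function shouldSkipUpdateCheck({ env, stdin, stdout }) {
  return env.APOLLO_TOOLKIT_SKIP_UPDATE_CHECK === '1' || !stdin.isTTY || !stdout.isTTY;
}

const tty = { isTTY: true };
const pipe = { isTTY: false };

// Interactive terminal, no opt-out: the check runs.
console.log(shouldSkipUpdateCheck({ env: {}, stdin: tty, stdout: tty }));
// Explicit opt-out: skipped even on a TTY.
console.log(shouldSkipUpdateCheck({ env: { APOLLO_TOOLKIT_SKIP_UPDATE_CHECK: '1' }, stdin: tty, stdout: tty }));
// Piped stdin (e.g. CI): skipped automatically.
console.log(shouldSkipUpdateCheck({ env: {}, stdin: pipe, stdout: tty }));
```

So scripted or CI invocations never block on the update prompt, while interactive runs get it by default.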
package/commit-and-push/SKILL.md
CHANGED
@@ -15,7 +15,7 @@ description: "Guide the agent to submit local changes with commit and push only
 ## Standards
 
 - Evidence: Inspect git state and classify the change set before deciding which quality gates apply, then compare the actual pending diff against root `CHANGELOG.md` `Unreleased` before committing.
-- Execution: Run the required quality-gate skills when applicable, and treat every conditional gate whose scenario is met as blocking before submission; hand the repository to `submission-readiness-check` for changelog/docs/plan finalization, preserve staging intent, honor any explicit user-specified target branch, and when the worktree is already clean inspect local `HEAD`, upstream state, and the most recent relevant commit before deciding the request is a no-op; then commit and push without release steps; run dependent git mutations sequentially and verify the remote branch actually contains the new local `HEAD` before reporting success.
+- Execution: Run the required quality-gate skills when applicable, and treat every conditional gate whose scenario is met as blocking before submission; hand the repository to `submission-readiness-check` for changelog/docs/plan finalization, preserve staging intent, honor any explicit user-specified target branch, and when the worktree is already clean inspect local `HEAD`, upstream state, and the most recent relevant commit before deciding the request is a no-op; when worktree-based delivery is involved, verify where the authoritative target branch lives before moving history, re-validate on that target branch after replay or merge, and remove the temporary worktree only after the target branch is safely updated; then commit and push without release steps; run dependent git mutations sequentially and verify the remote branch actually contains the new local `HEAD` before reporting success.
 - Quality: Re-run relevant validation for runtime changes, preserve unrelated local work safely when branch switching or post-push local sync is required, and do not bypass blocking readiness findings such as missing/stale `Unreleased` bullets or unsynchronized project docs.
 - Output: Produce a concise Conventional Commit, push it to the intended branch, and report any temporary stash/restore or local branch sync that was required.
 
@@ -51,6 +51,9 @@ Load only when needed:
    - Preserve unrelated uncommitted work safely before branch operations, for example with `git stash push`, and restore it after the target branch has been updated.
    - If the fix was committed on the wrong branch, move it to the requested branch with safe history-preserving operations such as `cherry-pick`, `merge --ff-only`, or a clean replay; do not force-push unless the user explicitly asks for it.
    - If the user asks to sync the local target branch after pushing, fast-forward or pull that branch locally and then restore any preserved worktree changes.
+   - If the implementation lives in a detached or temporary `git worktree`, inspect both the temporary worktree and the main worktree before deciding the replay method.
+   - When the main worktree already contains staged or partially overlapping copies of the same changes, compare file content and branch tips first; do not create an unnecessary merge commit when a direct replay onto the authoritative target branch is safer.
+   - When the worktree diff is broader than the requested issue, stop and separate the requested commit scope before replaying anything to the target branch.
 4. Run code-affecting dependency skills (when applicable)
    - Run `review-change-set` for every code-affecting change before continuing; treat unresolved review findings as blocking.
    - Run `discover-edge-cases` and `harden-app-security` for the same code-affecting scope when the reviewed risk profile or repository context says their coverage is needed; treat them as blocking review gates, not optional polish, whenever that condition is met.
@@ -71,6 +74,7 @@ Load only when needed:
    - After pushing, verify the remote branch tip matches the local `HEAD`, for example by comparing `git rev-parse HEAD` with the target branch hash from `git rev-parse @{u}` or `git ls-remote --heads <remote> <branch>`.
    - If the push result is ambiguous, out of order, or the hashes do not match, rerun the missing git step sequentially and re-check before reporting success.
    - Confirm the local branch state matches the user's requested destination when post-push synchronization was requested.
+   - When the user explicitly asks to merge work back from a temporary worktree and delete that worktree, do the final verification on the authoritative target branch first, then remove the temporary worktree and prune stale worktree records before reporting completion.
 
 ## Notes
 
@@ -80,6 +84,7 @@ Load only when needed:
 - Never downgrade `discover-edge-cases` or `harden-app-security` to optional follow-up when the change risk says they apply.
 - Never claim the repository is ready to commit while root `CHANGELOG.md` `Unreleased` is missing the current change or still describes superseded work.
 - Never fabricate a commit/push result when the worktree is already clean; either identify the exact existing commit/upstream state that satisfies the user's request or say that no matching new submission exists.
+- Never delete a temporary worktree before the target branch has been updated, tested, and verified to contain the intended final content.
 - If release/version/tag work is requested, use `version-release` instead.
 - If a new branch is required, follow `references/branch-naming.md`.
 - A pushed implementation can still leave an active spec set behind; commit completion and spec archival are separate decisions.
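The push-verification rule above (`git rev-parse HEAD` vs `git ls-remote --heads <remote> <branch>`) can be rehearsed end-to-end with a throwaway local "remote"; in this sketch a bare repository stands in for the real one:

```shell
set -eu
tmp=$(mktemp -d)

# A local bare repository stands in for the real remote.
git init --bare --quiet "$tmp/origin.git"
git init --quiet -b main "$tmp/work"
git -C "$tmp/work" -c user.email=demo@example.com -c user.name=demo \
  commit --allow-empty -m "feat: demo change" --quiet
git -C "$tmp/work" remote add origin "$tmp/origin.git"
git -C "$tmp/work" push --quiet -u origin main

# Verify the remote branch actually contains the new local HEAD
# (the same check the skill requires before reporting success).
local_head=$(git -C "$tmp/work" rev-parse HEAD)
remote_head=$(git -C "$tmp/work" ls-remote --heads origin main | cut -f1)
[ "$local_head" = "$remote_head" ] && echo "push verified: $local_head"

rm -rf "$tmp"
```

If the hashes differ, the push did not land and the sequential git steps should be rerun before claiming completion.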
package/develop-new-features/SKILL.md
CHANGED

@@ -56,6 +56,10 @@ Use a shared spec-generation workflow for non-trivial new feature work, then imp
 - narrowly scoped adjustments that touch only a few files/modules and do not require cross-team alignment or approval artifacts
 - In those cases, do not create `spec.md` / `tasks.md` / `checklist.md`; instead use the appropriate direct implementation workflow (for example `enhance-existing-features` for small brownfield adjustments or `systematic-debug` for bug fixes).
 - Specs are required when the request is truly a non-trivial new feature, product behavior change, or greenfield project that needs shared planning.
+- Treat each spec set as a narrowly scoped workstream that covers at most three modules.
+- If the requested change would require edits across more than three modules, do not force it into one oversized spec set.
+- Instead, split the work into multiple independent spec sets, each covering no more than three modules.
+- Define those spec sets so they do not conflict with each other and do not depend on another spec set being implemented first in order to be valid.
 - Follow `$generate-spec` completely for:
   - generating `docs/plans/{YYYY-MM-DD}_{change_name}/spec.md`, `tasks.md`, and `checklist.md`
   - filling BDD requirements and risk-driven test plans
@@ -117,6 +121,7 @@ Rules:
 
 - By default, write planning docs in the user's language.
 - Keep implementation traceable to approved requirement IDs and planned risks.
+- Keep each spec set limited to at most three modules; split larger changes into independent, non-conflicting, non-dependent spec sets before approval.
 - Prefer realism over rigid templates: add or remove test coverage only when the risk profile justifies it.
 - Every planned test should justify a distinct risk; remove shallow duplicates that only prove the code "still runs".
 - Treat starter template alternatives as mutually exclusive options, not as boxes that all need to be checked.
package/enhance-existing-features/SKILL.md
CHANGED

@@ -66,6 +66,9 @@ When in doubt, prefer direct implementation for genuinely low-risk localized cha
 If triggered:
 - Run `$generate-spec` and follow its workflow completely.
 - Use it to create or update `docs/plans/{YYYY-MM-DD}_{change_name}/spec.md`, `tasks.md`, and `checklist.md`.
+- Keep each spec set scoped to at most three modules.
+- If the requested change would require edits across more than three modules, split it into multiple spec sets instead of drafting one large coupled plan.
+- Design the split spec sets so they are independently valid, do not conflict with each other, and do not require another spec set to land first.
 - Ensure planned behaviors and edge cases cover external dependency states, abuse/adversarial paths, and any relevant authorization/idempotency/concurrency/data-integrity risks.
 - After implementation and testing, update the same plan set so `spec.md` reflects requirement completion status in addition to task and checklist progress.
 - If users answer clarification questions, update the planning docs and obtain explicit approval again before implementation.
@@ -130,6 +133,7 @@ Rules:
 
 - Keep the solution minimal and executable.
 - Always decide the need for specs only after exploring the existing codebase.
+- When specs are used, keep each spec set limited to at most three modules; split broader work into independent, non-conflicting, non-dependent spec sets before approval.
 - Maintain traceability between requirements, tasks, and tests when specs are present.
 - Treat checklists as living artifacts: adjust items to match real change scope.
 - Treat mutually exclusive template choices as a decision to record, not multiple boxes to finish.
package/lib/cli.js
CHANGED
@@ -1,4 +1,5 @@
 const { createInterface } = require('node:readline/promises');
+const fs = require('node:fs');
 const path = require('node:path');
 
 const {
@@ -9,6 +10,7 @@ const {
   syncToolkitHome,
   getTargetRoots,
 } = require('./installer');
+const { checkForPackageUpdate } = require('./updater');
 
 const TARGET_OPTIONS = [
   { id: 'all', label: 'All', description: 'Install every supported target below' },
@@ -116,13 +118,18 @@ function buildHelpText({ version, colorEnabled }) {
     buildBanner({ version, colorEnabled }),
     '',
     'Usage:',
+    ' apltk [install] [codex|openclaw|trae|all]...',
     ' apollo-toolkit [install] [codex|openclaw|trae|all]...',
+    ' apltk --help',
     ' apollo-toolkit --help',
     '',
     'Examples:',
+    ' apltk',
+    ' apltk codex openclaw',
     ' npx @laitszkin/apollo-toolkit',
     ' npx @laitszkin/apollo-toolkit codex openclaw',
     ' npm i -g @laitszkin/apollo-toolkit',
+    ' apltk all',
     ' apollo-toolkit all',
     '',
     'Options:',
@@ -131,6 +138,10 @@ function buildHelpText({ version, colorEnabled }) {
   ].join('\n');
 }
 
+function readPackageJson(sourceRoot) {
+  return JSON.parse(fs.readFileSync(path.join(sourceRoot, 'package.json'), 'utf8'));
+}
+
 function parseArguments(argv) {
   const args = [...argv];
   const result = {
@@ -343,7 +354,7 @@ async function run(argv, context = {}) {
   const stderr = context.stderr || process.stderr;
   const stdin = context.stdin || process.stdin;
   const env = context.env || process.env;
-
+  let packageJson = readPackageJson(sourceRoot);
 
   try {
     const parsed = parseArguments(argv);
@@ -352,6 +363,21 @@ async function run(argv, context = {}) {
       return 0;
     }
 
+    const updateResult = await checkForPackageUpdate({
+      packageName: packageJson.name,
+      currentVersion: packageJson.version,
+      env,
+      stdin,
+      stdout,
+      stderr,
+      exec: context.execCommand,
+      confirmUpdate: context.confirmUpdate,
+    });
+
+    if (updateResult.updated) {
+      packageJson = readPackageJson(sourceRoot);
+    }
+
     const toolkitHome = parsed.toolkitHome || resolveToolkitHome(env);
     const modes = parsed.modes.length > 0
       ? normalizeModes(parsed.modes)
@@ -401,5 +427,6 @@ module.exports = {
  buildHelpText,
  parseArguments,
  promptForModes,
+ readPackageJson,
  run,
 };
package/lib/installer.js
CHANGED
@@ -3,7 +3,7 @@ const fsp = require('node:fs/promises');
 const os = require('node:os');
 const path = require('node:path');
 
-const VALID_MODES = ['codex', 'openclaw', 'trae'];
+const VALID_MODES = ['codex', 'openclaw', 'trae', 'claude-code'];
 const COPY_FILES = new Set(['AGENTS.md', 'CHANGELOG.md', 'LICENSE', 'README.md', 'package.json']);
 const COPY_DIRS = new Set(['scripts']);
 
@@ -61,7 +61,7 @@ function normalizeModes(inputModes) {
   return modes;
 }
 
-async function listSkillNames(rootDir) {
+async function listSkillNames(rootDir, modes = []) {
   const entries = await fsp.readdir(rootDir, { withFileTypes: true });
   const skillNames = [];
 
@@ -75,6 +75,19 @@ async function listSkillNames(rootDir) {
     }
   }
 
+  // For codex mode, also include codex-specific skills
+  if (modes.includes('codex')) {
+    const codexDir = path.join(rootDir, 'codex');
+    if (fs.existsSync(codexDir)) {
+      const codexEntries = await fsp.readdir(codexDir, { withFileTypes: true });
+      for (const entry of codexEntries) {
+        if (entry.isDirectory() && fs.existsSync(path.join(codexDir, entry.name, 'SKILL.md'))) {
+          skillNames.push(entry.name);
+        }
+      }
+    }
+  }
+
   return skillNames.sort();
 }
 
@@ -125,10 +138,10 @@ async function stageToolkitContents({ sourceRoot, destinationRoot, version }) {
   return copiedEntries.sort();
 }
 
-async function syncToolkitHome({ sourceRoot, toolkitHome, version }) {
+async function syncToolkitHome({ sourceRoot, toolkitHome, version, modes = [] }) {
   const parentDir = path.dirname(toolkitHome);
   const tempDir = path.join(parentDir, `.apollo-toolkit.tmp-${process.pid}-${Date.now()}`);
-  const previousSkillNames = await listSkillNames(toolkitHome).catch(() => []);
+  const previousSkillNames = await listSkillNames(toolkitHome, modes).catch(() => []);
 
   await fsp.rm(tempDir, { recursive: true, force: true });
   await stageToolkitContents({ sourceRoot, destinationRoot: tempDir, version });
@@ -145,7 +158,7 @@ async function syncToolkitHome({ sourceRoot, toolkitHome, version }) {
   return {
     toolkitHome,
     previousSkillNames,
-    skillNames: await listSkillNames(toolkitHome),
+    skillNames: await listSkillNames(toolkitHome, modes),
   };
 }
 
@@ -197,6 +210,18 @@ async function getTargetRoots(modes, env = process.env) {
           root: path.join(openclawHome, workspaceName, 'skills'),
         });
       }
+      continue;
+    }
+
+    if (mode === 'claude-code') {
+      targets.push({
+        mode,
+        label: 'Claude Code',
+        root: env.CLAUDE_CODE_SKILLS_DIR
+          ? path.resolve(expandUserPath(env.CLAUDE_CODE_SKILLS_DIR, env))
+          : path.join(homeDir, '.claude', 'skills'),
+      });
+      continue;
     }
   }
 
@@ -214,8 +239,9 @@ async function replaceWithCopy(sourcePath, targetPath) {
 }
 
 async function installLinks({ toolkitHome, modes, env = process.env, previousSkillNames = [] }) {
-  const
-  const
+  const normalizedModes = normalizeModes(modes);
+  const skillNames = await listSkillNames(toolkitHome, normalizedModes);
+  const targets = await getTargetRoots(normalizedModes, env);
   const copiedPaths = [];
   const staleSkillNames = previousSkillNames.filter((skillName) => !skillNames.includes(skillName));
 
@@ -225,7 +251,13 @@ async function installLinks({ toolkitHome, modes, env = process.env, previousSki
       await fsp.rm(path.join(target.root, staleSkillName), { recursive: true, force: true });
     }
     for (const skillName of skillNames) {
-
+      // For codex skills, use the ./codex/ subdirectory as source
+      let sourcePath;
+      if (normalizedModes.includes('codex') && fs.existsSync(path.join(toolkitHome, 'codex', skillName))) {
+        sourcePath = path.join(toolkitHome, 'codex', skillName);
+      } else {
+        sourcePath = path.join(toolkitHome, skillName);
+      }
       const targetPath = path.join(target.root, skillName);
       await replaceWithCopy(sourcePath, targetPath);
      copiedPaths.push({ target: target.label, path: targetPath, skillName });
package/lib/updater.js
ADDED
@@ -0,0 +1,193 @@
+const { spawn } = require('node:child_process');
+const { createInterface } = require('node:readline/promises');
+
+function normalizeVersion(version) {
+  return String(version || '')
+    .trim()
+    .replace(/^v/i, '');
+}
+
+function parseVersion(version) {
+  const normalized = normalizeVersion(version);
+  const [core, prerelease = ''] = normalized.split('-', 2);
+  const parts = core.split('.').map((part) => Number.parseInt(part, 10) || 0);
+
+  return {
+    parts,
+    prerelease,
+  };
+}
+
+function compareVersions(left, right) {
+  const leftVersion = parseVersion(left);
+  const rightVersion = parseVersion(right);
+  const leftParts = leftVersion.parts;
+  const rightParts = rightVersion.parts;
+  const length = Math.max(leftParts.length, rightParts.length);
+
+  for (let index = 0; index < length; index += 1) {
+    const delta = (leftParts[index] || 0) - (rightParts[index] || 0);
+    if (delta !== 0) {
+      return delta;
+    }
+  }
+
+  if (leftVersion.prerelease && !rightVersion.prerelease) {
+    return -1;
+  }
+
+  if (!leftVersion.prerelease && rightVersion.prerelease) {
+    return 1;
+  }
+
+  if (leftVersion.prerelease !== rightVersion.prerelease) {
+    return leftVersion.prerelease.localeCompare(rightVersion.prerelease);
+  }
+
+  return 0;
+}
+
+function shouldSkipUpdateCheck({ env = process.env, stdin = process.stdin, stdout = process.stdout }) {
+  return env.APOLLO_TOOLKIT_SKIP_UPDATE_CHECK === '1' || !stdin.isTTY || !stdout.isTTY;
+}
+
+function execCommand(command, args, { env = process.env, stdout, stderr } = {}) {
+  return new Promise((resolve, reject) => {
+    const child = spawn(command, args, {
+      env,
+      stdio: ['ignore', 'pipe', 'pipe'],
+    });
+
+    let capturedStdout = '';
+    let capturedStderr = '';
+
+    child.stdout.on('data', (chunk) => {
+      capturedStdout += chunk.toString('utf8');
+      if (stdout) {
+        stdout.write(chunk);
+      }
+    });
+
+    child.stderr.on('data', (chunk) => {
+      capturedStderr += chunk.toString('utf8');
+      if (stderr) {
+        stderr.write(chunk);
+      }
+    });
+
+    child.on('error', reject);
+    child.on('close', (code) => {
+      if (code !== 0) {
+        reject(new Error(capturedStderr.trim() || `${command} exited with code ${code}`));
+        return;
+      }
+
+      resolve({
+        stdout: capturedStdout,
+        stderr: capturedStderr,
+      });
+    });
+  });
+}
+
+async function defaultConfirmUpdate({ stdin, stdout, currentVersion, latestVersion, packageName }) {
+  const rl = createInterface({ input: stdin, output: stdout });
+  try {
+    const answer = await rl.question(
+      `A newer ${packageName} release is available (${currentVersion} -> ${latestVersion}). Update now? [Y/n] `,
+    );
+    const normalized = answer.trim().toLowerCase();
+    return normalized === '' || normalized === 'y';
+  } finally {
+    rl.close();
+  }
+}
+
+async function getLatestPublishedVersion({
+  packageName,
+  env = process.env,
+  exec = execCommand,
+}) {
+  const result = await exec('npm', ['view', packageName, 'version', '--json'], { env });
+  const parsed = JSON.parse(result.stdout.trim());
+
+  if (Array.isArray(parsed)) {
+    return String(parsed[parsed.length - 1] || '').trim();
+  }
+
+  return String(parsed || '').trim();
+}
+
+async function checkForPackageUpdate({
+  packageName,
+  currentVersion,
+  env = process.env,
+  stdin = process.stdin,
+  stdout = process.stdout,
+  stderr = process.stderr,
+  exec = execCommand,
+  confirmUpdate = defaultConfirmUpdate,
+}) {
+  if (shouldSkipUpdateCheck({ env, stdin, stdout })) {
+    return {
+      checked: false,
+      updated: false,
+    };
+  }
+
+  try {
+    const latestVersion = await getLatestPublishedVersion({ packageName, env, exec });
+    if (!latestVersion || compareVersions(latestVersion, currentVersion) <= 0) {
+      return {
+        checked: true,
+        updated: false,
+        latestVersion,
+      };
+    }
+
+    const approved = await confirmUpdate({
+      stdin,
+      stdout,
+      currentVersion,
+      latestVersion,
+      packageName,
+    });
+
+    if (!approved) {
+      stdout.write(`Continuing with ${packageName} ${currentVersion}.\n`);
+      return {
+        checked: true,
+        updated: false,
+        latestVersion,
+      };
+    }
+
+    stdout.write(`Updating ${packageName} to ${latestVersion}...\n`);
+    await exec('npm', ['install', '-g', `${packageName}@latest`], { env, stdout, stderr });
+    stdout.write(`Update complete. Continuing with ${packageName} ${latestVersion}.\n`);
+
+    return {
+      checked: true,
+      updated: true,
+      latestVersion,
+    };
+  } catch (error) {
+    stderr.write(`Warning: unable to check or install package updates: ${error.message}\n`);
+    return {
+      checked: false,
+      updated: false,
+      error,
+    };
+  }
+}
+
+module.exports = {
+  checkForPackageUpdate,
+  compareVersions,
+  defaultConfirmUpdate,
+  execCommand,
+  getLatestPublishedVersion,
+  normalizeVersion,
+  parseVersion,
+  shouldSkipUpdateCheck,
+};
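The `### Fixed` entry for v2.12.6 is visible in `compareVersions` above: numeric parts compare first, and a prerelease tag only sorts a version below its own matching stable release. A standalone sketch restating that ordering (same logic, not imported from the package):

```javascript
// Standalone restatement of updater.js's prerelease-aware comparison.
function parseVersion(version) {
  const normalized = String(version || '').trim().replace(/^v/i, '');
  const [core, prerelease = ''] = normalized.split('-', 2);
  const parts = core.split('.').map((part) => Number.parseInt(part, 10) || 0);
  return { parts, prerelease };
}

function compareVersions(left, right) {
  const a = parseVersion(left);
  const b = parseVersion(right);
  const length = Math.max(a.parts.length, b.parts.length);
  for (let i = 0; i < length; i += 1) {
    const delta = (a.parts[i] || 0) - (b.parts[i] || 0);
    if (delta !== 0) return delta; // numeric parts decide first
  }
  if (a.prerelease && !b.prerelease) return -1; // prerelease < matching stable
  if (!a.prerelease && b.prerelease) return 1;
  return a.prerelease.localeCompare(b.prerelease);
}

// A published stable 2.12.6 now outranks a local 2.12.5-beta.1 build,
// so the upgrade prompt is no longer suppressed.
console.log(compareVersions('2.12.6', '2.12.5-beta.1') > 0);
console.log(compareVersions('2.12.5-beta.1', '2.12.5') < 0);
```

Both checks print `true`: the prerelease tag only matters as a tiebreaker when the numeric cores are equal.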
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@laitszkin/apollo-toolkit",
-  "version": "2.12.5",
+  "version": "2.12.7",
   "description": "Apollo Toolkit npm installer for managed skill copying across Codex, OpenClaw, and Trae.",
   "license": "MIT",
   "author": "LaiTszKin",
@@ -14,7 +14,8 @@
   },
   "type": "commonjs",
   "bin": {
-    "apollo-toolkit": "bin/apollo-toolkit.js"
+    "apollo-toolkit": "bin/apollo-toolkit.js",
+    "apltk": "bin/apollo-toolkit.js"
   },
   "scripts": {
     "test": "node --test"
@@ -14,8 +14,8 @@ description: Investigate production or local simulation runs for runtime-toolcha
|
|
|
14
14
|
|
|
15
15
|
## Standards
|
|
16
16
|
|
|
17
|
-
- Evidence: Base conclusions on the actual preset, runtime command, logs, SQLite event store, local stub responses,
|
|
18
|
-
- Execution: Reproduce with the exact scenario first, verify the bounded-run contract against the actual script/env implementation before launch, separate product logic failures from simulation-toolchain failures, make the smallest realistic toolchain fix, and rerun the same bounded scenario to validate.
|
|
17
|
+
- Evidence: Base conclusions on the actual preset, runtime command, logs, SQLite event store, local stub responses, the code paths that generated them, and official protocol or validator documentation whenever feasibility or instruction legality is in question.
|
|
18
|
+
- Execution: Reproduce with the exact scenario first, verify the bounded-run contract against the actual script/env implementation before launch, separate product logic failures from simulation-toolchain failures, verify protocol-sensitive claims against official docs or upstream source before changing code or specs, make the smallest realistic toolchain fix, and rerun the same bounded scenario to validate.
|
|
19
19
|
- Quality: Prefer harness or stub fixes that improve realism over one-off scenario hacks, avoid duplicating existing workflow skills, and record reusable presets when a scenario becomes part of the regular test suite.
|
|
20
20
|
- Output: Return the scenario contract, observed outcomes, root-cause chain, fixes applied, validation evidence, and any remaining realism gaps.
|
|
21
21
|
|
|
@@ -69,6 +69,7 @@ Use this skill to debug simulation workflows where the repository exposes a prod
 ### 4) Separate product failures from toolchain realism failures
 
 - When the suspected blocker touches protocol rules, instruction legality, quote semantics, or liquidation invariants, verify the claim against the relevant official docs or upstream source before assigning blame.
+- When the current spec or planned fix assumes a local-simulation capability, verify that the capability is actually supported by the validator and program ownership model before implementing it.
 - For every major blocker, explicitly classify the result as one of:
   - production bot problem
   - simulation environment problem
@@ -87,6 +88,17 @@ Use this skill to debug simulation workflows where the repository exposes a prod
 - If a local stub inflates or distorts profitability, preserve the runtime behavior and calibrate the stub.
 - If a scenario intentionally stresses one dimension, make sure the harness is not accidentally stressing unrelated dimensions.
 
+### 4.3) Collapse infeasible simulation designs quickly
+
+- If official docs or upstream source prove that the proposed local-simulation design is impossible under the current architecture, stop trying to force the implementation through.
+- Treat this as a first-class debugging outcome, not as an implementation blocker to hand-wave away.
+- Name the precise external constraint, such as:
+  - validator preload behavior only applying at genesis/startup
+  - account data mutability being restricted to the owner program
+  - protocol instruction allowlists rejecting the proposed transaction shape
+- When a live spec or plan still lists that infeasible design as in scope, update the spec artifacts immediately so they describe only the remaining feasible scope.
+- Prefer narrowing the scenario to the strongest still-valid readiness or realism checks rather than leaving impossible tasks marked as pending.
+
 ### 4.1) Map the observed failure to the real pipeline stage
 
 - Do not treat every `liquidation_event` row as evidence that the run reached verification or execution.
@@ -168,6 +180,7 @@ Use this skill to debug simulation workflows where the repository exposes a prod
 - Explain the failing stage in the liquidation pipeline and whether the key counts represent positions, attempts, quotes, or executed outcomes.
 - Summarize the narrow fix and the regression test or rerun evidence.
 - If the final scenario should be reused, state where the preset or docs were added.
+- If official docs disproved part of the planned simulation design, state which spec or plan artifacts were narrowed and why.
 
 ## Example invocation
 
package/scripts/install_skills.ps1:

@@ -9,22 +9,24 @@ $ErrorActionPreference = "Stop"
 function Show-Usage {
 @"
 Usage:
-  ./scripts/install_skills.ps1 [codex|openclaw|trae|agents|all]...
+  ./scripts/install_skills.ps1 [codex|openclaw|trae|agents|claude-code|all]...
 
 Modes:
-  codex
-  openclaw
-  trae
-  agents
-
+  codex        Copy skills into ~/.codex/skills (includes ./codex/ agent-specific skills)
+  openclaw     Copy skills into ~/.openclaw/workspace*/skills
+  trae         Copy skills into ~/.trae/skills
+  agents       Copy skills into ~/.agents/skills (for agent-skill-compatible software)
+  claude-code  Copy skills into ~/.claude/skills
+  all          Install all supported targets
 
 Optional environment overrides:
-  CODEX_SKILLS_DIR
-  OPENCLAW_HOME
-  TRAE_SKILLS_DIR
-  AGENTS_SKILLS_DIR
-
-
+  CODEX_SKILLS_DIR         Override codex skills destination path
+  OPENCLAW_HOME            Override openclaw home path
+  TRAE_SKILLS_DIR          Override trae skills destination path
+  AGENTS_SKILLS_DIR        Override agents skills destination path
+  CLAUDE_CODE_SKILLS_DIR   Override claude-code skills destination path
+  APOLLO_TOOLKIT_HOME      Override local install path used when repo root is unavailable
+  APOLLO_TOOLKIT_REPO_URL  Override git repository URL used when repo root is unavailable
 "@
 }
 
@@ -109,6 +111,8 @@ else {
 }
 
 function Get-SkillPaths {
+    param([string[]]$SelectedModes)
+
     $dirs = Get-ChildItem -Path $RepoRoot -Directory | Sort-Object Name
     $skills = @()
 
@@ -118,6 +122,19 @@ function Get-SkillPaths {
         }
     }
 
+    # For codex mode, also include codex-specific skills
+    if ($SelectedModes -contains "codex") {
+        $codexDir = Join-Path $RepoRoot "codex"
+        if (Test-Path -LiteralPath $codexDir -PathType Container) {
+            $codexDirs = Get-ChildItem -Path $codexDir -Directory | Sort-Object Name
+            foreach ($dir in $codexDirs) {
+                if (Test-Path -LiteralPath (Join-Path $dir.FullName "SKILL.md") -PathType Leaf) {
+                    $skills += $dir.FullName
+                }
+            }
+        }
+    }
+
     if ($skills.Count -eq 0) {
         throw "No skill folders found in: $RepoRoot"
     }
@@ -145,12 +162,13 @@ function Resolve-Modes {
         Show-Banner
         Write-Host ""
         Write-Host "Select install options (comma-separated):"
-        Write-Host "1) codex (~/.codex/skills)"
+        Write-Host "1) codex (~/.codex/skills, includes ./codex/ agent-specific skills)"
         Write-Host "2) openclaw (~/.openclaw/workspace*/skills)"
         Write-Host "3) trae (~/.trae/skills)"
         Write-Host "4) agents (~/.agents/skills)"
-        Write-Host "5)
-
+        Write-Host "5) claude-code (~/.claude/skills)"
+        Write-Host "6) all"
+        $inputValue = Read-Host "Enter choice(s) [1-6]"
 
         foreach ($rawChoice in ($inputValue -split ",")) {
             $choice = $rawChoice.Trim()
@@ -159,11 +177,13 @@ function Resolve-Modes {
             "2" { Add-ModeOnce -Selected $selected -Mode "openclaw" }
             "3" { Add-ModeOnce -Selected $selected -Mode "trae" }
             "4" { Add-ModeOnce -Selected $selected -Mode "agents" }
-            "5" {
+            "5" { Add-ModeOnce -Selected $selected -Mode "claude-code" }
+            "6" {
                 Add-ModeOnce -Selected $selected -Mode "codex"
                 Add-ModeOnce -Selected $selected -Mode "openclaw"
                 Add-ModeOnce -Selected $selected -Mode "trae"
                 Add-ModeOnce -Selected $selected -Mode "agents"
+                Add-ModeOnce -Selected $selected -Mode "claude-code"
             }
             default {
                 throw "Invalid choice: $choice"
@@ -178,11 +198,13 @@ function Resolve-Modes {
             "openclaw" { Add-ModeOnce -Selected $selected -Mode "openclaw" }
             "trae" { Add-ModeOnce -Selected $selected -Mode "trae" }
             "agents" { Add-ModeOnce -Selected $selected -Mode "agents" }
+            "claude-code" { Add-ModeOnce -Selected $selected -Mode "claude-code" }
             "all" {
                 Add-ModeOnce -Selected $selected -Mode "codex"
                 Add-ModeOnce -Selected $selected -Mode "openclaw"
                 Add-ModeOnce -Selected $selected -Mode "trae"
                 Add-ModeOnce -Selected $selected -Mode "agents"
+                Add-ModeOnce -Selected $selected -Mode "claude-code"
             }
             default {
                 Show-Usage
@@ -299,13 +321,29 @@ function Install-Agents {
     }
 }
 
+function Install-ClaudeCode {
+    param([string[]]$SkillPaths)
+
+    $target = if ($env:CLAUDE_CODE_SKILLS_DIR) {
+        Expand-UserPath $env:CLAUDE_CODE_SKILLS_DIR
+    }
+    else {
+        Join-Path $HOME ".claude/skills"
+    }
+
+    Write-Host "Installing to claude-code: $target"
+    foreach ($src in $SkillPaths) {
+        Copy-Skill -Source $src -TargetRoot $target
+    }
+}
+
 if ($Modes.Count -gt 0 -and ($Modes[0] -eq "-h" -or $Modes[0] -eq "--help")) {
     Show-Usage
     exit 0
 }
 
 $selectedModes = Resolve-Modes -Requested $Modes
-$skillPaths = Get-SkillPaths
+$skillPaths = Get-SkillPaths -SelectedModes $selectedModes
 
 foreach ($mode in $selectedModes) {
     switch ($mode) {
@@ -313,6 +351,7 @@ foreach ($mode in $selectedModes) {
         "openclaw" { Install-OpenClaw -SkillPaths $skillPaths }
         "trae" { Install-Trae -SkillPaths $skillPaths }
         "agents" { Install-Agents -SkillPaths $skillPaths }
+        "claude-code" { Install-ClaudeCode -SkillPaths $skillPaths }
         default { throw "Unknown mode: $mode" }
     }
 }
package/scripts/install_skills.sh:

@@ -4,21 +4,23 @@ set -euo pipefail
 usage() {
   cat <<"USAGE"
 Usage:
-  ./scripts/install_skills.sh [codex|openclaw|trae|agents|all]...
+  ./scripts/install_skills.sh [codex|openclaw|trae|agents|claude-code|all]...
 
 Modes:
-  codex
-  openclaw
-  trae
-  agents
-
+  codex        Copy skills into ~/.codex/skills (includes ./codex/ agent-specific skills)
+  openclaw     Copy skills into ~/.openclaw/workspace*/skills
+  trae         Copy skills into ~/.trae/skills
+  agents       Copy skills into ~/.agents/skills (for agent-skill-compatible software)
+  claude-code  Copy skills into ~/.claude/skills
+  all          Install all supported targets
 
 Optional environment overrides:
-  CODEX_SKILLS_DIR
-  OPENCLAW_HOME
-  TRAE_SKILLS_DIR
-  AGENTS_SKILLS_DIR
-
+  CODEX_SKILLS_DIR         Override codex skills destination path
+  OPENCLAW_HOME            Override openclaw home path
+  TRAE_SKILLS_DIR          Override trae skills destination path
+  AGENTS_SKILLS_DIR        Override agents skills destination path
+  CLAUDE_CODE_SKILLS_DIR   Override claude-code skills destination path
+  APOLLO_TOOLKIT_HOME      Override local install path used in curl/pipe mode
   APOLLO_TOOLKIT_REPO_URL  Override git repository URL used in curl/pipe mode
 USAGE
 }
@@ -87,6 +89,18 @@ collect_skills() {
     fi
   done < <(find "$REPO_ROOT" -mindepth 1 -maxdepth 1 -type d | sort)
 
+  # For codex mode, also include codex-specific skills
+  if [[ " ${SELECTED_MODES[*]} " =~ " codex " ]]; then
+    local codex_dir="$REPO_ROOT/codex"
+    if [[ -d "$codex_dir" ]]; then
+      while IFS= read -r dir; do
+        if [[ -f "$dir/SKILL.md" ]]; then
+          SKILL_PATHS+=("$dir")
+        fi
+      done < <(find "$codex_dir" -mindepth 1 -maxdepth 1 -type d | sort)
+    fi
+  fi
+
   if [[ ${#SKILL_PATHS[@]} -eq 0 ]]; then
     echo "No skill folders found in: $REPO_ROOT" >&2
     exit 1
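The hunk above leans on two bash idioms worth seeing in isolation: the space-padded `=~` comparison is a whole-word membership test on the modes array, and the `find ... -mindepth 1 -maxdepth 1` loop collects only immediate subdirectories that contain a `SKILL.md`. A standalone sketch with temp-dir stand-ins for the real repo layout:

```shell
# Standalone sketch of the collect_skills additions, using temp-dir stand-ins.
REPO_ROOT="$(mktemp -d)"
mkdir -p "$REPO_ROOT/commit-and-push" "$REPO_ROOT/codex/codex-memory-manager"
touch "$REPO_ROOT/commit-and-push/SKILL.md" \
      "$REPO_ROOT/codex/codex-memory-manager/SKILL.md"
SELECTED_MODES=(agents codex)
SKILL_PATHS=()
# Top-level pass: only immediate subdirectories with a SKILL.md count;
# the codex/ container itself has no SKILL.md, so it is skipped here.
while IFS= read -r dir; do
  if [[ -f "$dir/SKILL.md" ]]; then SKILL_PATHS+=("$dir"); fi
done < <(find "$REPO_ROOT" -mindepth 1 -maxdepth 1 -type d | sort)
# Codex pass: the padding spaces make this a whole-word test, so a mode
# named "codex-extra" would not match.
if [[ " ${SELECTED_MODES[*]} " =~ " codex " ]]; then
  while IFS= read -r dir; do
    if [[ -f "$dir/SKILL.md" ]]; then SKILL_PATHS+=("$dir"); fi
  done < <(find "$REPO_ROOT/codex" -mindepth 1 -maxdepth 1 -type d | sort)
fi
echo "collected ${#SKILL_PATHS[@]} skill folder(s)"
```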
@@ -164,6 +178,16 @@ install_agents() {
   done
 }
 
+install_claude_code() {
+  local claude_code_skills_dir
+  claude_code_skills_dir="$(expand_user_path "${CLAUDE_CODE_SKILLS_DIR:-$HOME/.claude/skills}")"
+
+  echo "Installing to claude-code: $claude_code_skills_dir"
+  for src in "${SKILL_PATHS[@]}"; do
+    replace_with_copy "$src" "$claude_code_skills_dir"
+  done
+}
+
 add_mode_once() {
   local mode="$1"
   local existing
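The target-resolution in `install_claude_code` is the standard `${VAR:-default}` pattern: the environment override wins when set, otherwise the path under `$HOME` applies. A minimal sketch of just that resolution (`expand_user_path` is the script's own helper and is not reproduced here):

```shell
# Sketch of the ${VAR:-default} resolution the installer relies on.
unset CLAUDE_CODE_SKILLS_DIR
default_target="${CLAUDE_CODE_SKILLS_DIR:-$HOME/.claude/skills}"   # default wins
CLAUDE_CODE_SKILLS_DIR="/tmp/custom-skills"
override_target="${CLAUDE_CODE_SKILLS_DIR:-$HOME/.claude/skills}"  # override wins
echo "default:  $default_target"
echo "override: $override_target"
```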
@@ -182,7 +206,7 @@ parse_mode() {
   local mode="$1"
 
   case "$mode" in
-    codex|openclaw|trae|agents)
+    codex|openclaw|trae|agents|claude-code)
       add_mode_once "$mode"
       ;;
     all)
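A hedged sketch of the widened `case` pattern list: the alternation arm now accepts the `claude-code` token alongside the original modes, while anything else still falls through to the error arm (`parse_mode_demo` is a hypothetical stand-in, not the script's function):

```shell
# Hypothetical stand-in showing the widened case alternation.
parse_mode_demo() {
  case "$1" in
    codex|openclaw|trae|agents|claude-code) echo "ok" ;;
    all)                                    echo "all" ;;
    *)                                      echo "invalid" ;;
  esac
}
result_new="$(parse_mode_demo claude-code)"
result_bad="$(parse_mode_demo claude_code)"   # underscore variant does not match
echo "$result_new / $result_bad"
```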
@@ -190,6 +214,7 @@ parse_mode() {
       add_mode_once "openclaw"
       add_mode_once "trae"
       add_mode_once "agents"
+      add_mode_once "claude-code"
       ;;
     *)
       echo "Invalid mode: $mode" >&2
@@ -222,12 +247,13 @@ choose_modes_interactive() {
   show_banner
   echo
   echo "Select install options (comma-separated):"
-  echo "1) codex (~/.codex/skills)"
+  echo "1) codex (~/.codex/skills, includes ./codex/ agent-specific skills)"
   echo "2) openclaw (~/.openclaw/workspace*/skills)"
   echo "3) trae (~/.trae/skills)"
   echo "4) agents (~/.agents/skills)"
-  echo "5)
-
+  echo "5) claude-code (~/.claude/skills)"
+  echo "6) all"
+  choice="$(read_choice_from_user 'Enter choice(s) [1-6]: ')"
 
   IFS=',' read -r -a choices <<< "$choice"
   for raw_choice in "${choices[@]}"; do
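The interactive path splits the raw answer on commas before dispatching each choice. A standalone sketch of that `IFS=','` / `read -a` split, assuming a fixed answer string in place of user input:

```shell
# Sketch of the comma-splitting used by choose_modes_interactive:
# IFS=',' with read -a turns the raw answer into one array entry per choice.
choice="2,5,6"
IFS=',' read -r -a choices <<< "$choice"
echo "${#choices[@]} choice(s): ${choices[*]}"
```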
@@ -237,7 +263,8 @@ choose_modes_interactive() {
       2) add_mode_once "openclaw" ;;
       3) add_mode_once "trae" ;;
       4) add_mode_once "agents" ;;
-      5) add_mode_once "
+      5) add_mode_once "claude-code" ;;
+      6) add_mode_once "codex"; add_mode_once "openclaw"; add_mode_once "trae"; add_mode_once "agents"; add_mode_once "claude-code" ;;
       *)
         echo "Invalid choice: $raw_choice" >&2
         exit 1
@@ -282,6 +309,7 @@ main() {
     openclaw) install_openclaw ;;
     trae) install_trae ;;
     agents) install_agents ;;
+    claude-code) install_claude_code ;;
     *)
       usage
       exit 1
Renamed without content changes:
/package/{codex-memory-manager → codex/codex-memory-manager}/scripts/extract_recent_conversations.py
/package/{codex-memory-manager → codex/codex-memory-manager}/tests/test_sync_memory_index.py
/package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/CHANGELOG.md
/package/{learn-skill-from-conversations → codex/learn-skill-from-conversations}/agents/openai.yaml