codymaster 4.6.0 → 4.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +19 -1
- package/README.md +80 -30
- package/dist/browse-server.js +251 -0
- package/dist/cli/command-registry.js +26 -0
- package/dist/cli/commands/agent.js +120 -0
- package/dist/cli/commands/dashboard.js +93 -0
- package/dist/cli/commands/design-studio.js +111 -0
- package/dist/cli/commands/distro.js +25 -0
- package/dist/cli/commands/engineering.js +488 -0
- package/dist/cli/commands/project.js +324 -0
- package/dist/cli/commands/skill-chain.js +269 -0
- package/dist/cli/commands/system.js +89 -0
- package/dist/cli/commands/task.js +254 -0
- package/dist/cli/update-check.js +83 -0
- package/dist/cm-config.js +110 -0
- package/dist/cm-suggest.js +77 -0
- package/dist/distro-validate.js +54 -0
- package/dist/guardian-core.js +74 -0
- package/dist/index.js +36 -2759
- package/dist/mcp-context-server.js +60 -1
- package/dist/mcp-skills-tools.js +81 -0
- package/dist/retro-summary.js +70 -0
- package/dist/second-opinion-providers.js +79 -0
- package/dist/sprint-pipeline.js +228 -0
- package/dist/storage-backend.js +5 -60
- package/dist/utils/cli-utils.js +76 -0
- package/dist/utils/skill-utils.js +32 -0
- package/install.sh +274 -50
- package/package.json +16 -5
- package/scripts/build-skills.mjs +51 -0
- package/scripts/gate-0-repo-hygiene.js +75 -0
- package/scripts/postinstall.js +55 -0
- package/scripts/security-scan.js +1 -1
- package/scripts/validate-skills.mjs +42 -0
- package/scripts/viking-demo.ts +105 -0
- package/skills/CLAUDE.md +2 -2
- package/skills/cm-ads-tracker/SKILL.md +3 -6
- package/skills/cm-browse/SKILL.md +28 -0
- package/skills/cm-conductor-worktrees/SKILL.md +24 -0
- package/skills/cm-content-factory/SKILL.md +1 -1
- package/skills/cm-content-factory/landing/docs/content/changelog.md +36 -0
- package/skills/cm-content-factory/landing/docs/content/deployment.md +46 -0
- package/skills/cm-content-factory/landing/docs/content/execution-flow.md +67 -0
- package/skills/cm-content-factory/landing/docs/content/openspace.md +27 -0
- package/skills/cm-content-factory/landing/docs/content/openviking.md +33 -0
- package/skills/cm-content-factory/landing/docs/content/use-cases.md +26 -0
- package/skills/cm-content-factory/landing/docs/content/v5-intro.md +28 -0
- package/skills/cm-content-factory/landing/docs/index.html +240 -0
- package/skills/cm-content-factory/landing/index.html +99 -99
- package/skills/cm-content-factory/landing/script.js +42 -0
- package/skills/cm-content-factory/landing/translations.js +400 -400
- package/skills/cm-design-studio/SKILL.md +30 -0
- package/skills/cm-ecosystem-roadmap/SKILL.md +11 -0
- package/skills/cm-engineering-meta/SKILL.md +69 -0
- package/skills/cm-growth-hacking/SKILL.md +1 -12
- package/skills/cm-guardian-runtime/SKILL.md +22 -0
- package/skills/cm-mcp-engineering/SKILL.md +18 -0
- package/skills/cm-notebooklm/SKILL.md +1 -17
- package/skills/cm-post-deploy-canary/SKILL.md +18 -0
- package/skills/cm-qa-visual-cli/SKILL.md +18 -0
- package/skills/cm-retro-cli/SKILL.md +19 -0
- package/skills/cm-second-opinion-cli/SKILL.md +19 -0
- package/skills/cm-secret-shield/SKILL.md +2 -2
- package/skills/cm-sprint-bus/SKILL.md +29 -0
- package/skills/cm-tdd/SKILL.md +61 -74
- package/skills/profiles/README.md +21 -0
- package/skills/profiles/core.txt +23 -0
- package/skills/profiles/design.txt +6 -0
- package/skills/profiles/full.txt +58 -0
- package/skills/profiles/growth.txt +10 -0
- package/skills/profiles/knowledge.txt +7 -0
- package/scripts/test-gemini.js +0 -13
- package/skills/cm-frappe-agent/SKILL.md +0 -134
- package/skills/cm-frappe-agent/agents/doctype-architect.md +0 -596
- package/skills/cm-frappe-agent/agents/erpnext-customizer.md +0 -643
- package/skills/cm-frappe-agent/agents/frappe-backend.md +0 -814
- package/skills/cm-frappe-agent/agents/frappe-custom-frontend.md +0 -557
- package/skills/cm-frappe-agent/agents/frappe-debugger.md +0 -625
- package/skills/cm-frappe-agent/agents/frappe-fixer.md +0 -275
- package/skills/cm-frappe-agent/agents/frappe-frontend.md +0 -660
- package/skills/cm-frappe-agent/agents/frappe-installer.md +0 -158
- package/skills/cm-frappe-agent/agents/frappe-performance.md +0 -307
- package/skills/cm-frappe-agent/agents/frappe-planner.md +0 -419
- package/skills/cm-frappe-agent/agents/frappe-remote-ops.md +0 -153
- package/skills/cm-frappe-agent/agents/github-workflow.md +0 -286
- package/skills/cm-frappe-agent/commands/frappe-app.md +0 -351
- package/skills/cm-frappe-agent/commands/frappe-backend.md +0 -162
- package/skills/cm-frappe-agent/commands/frappe-bench.md +0 -254
- package/skills/cm-frappe-agent/commands/frappe-debug.md +0 -263
- package/skills/cm-frappe-agent/commands/frappe-doctype-create.md +0 -272
- package/skills/cm-frappe-agent/commands/frappe-doctype-field.md +0 -310
- package/skills/cm-frappe-agent/commands/frappe-erpnext.md +0 -210
- package/skills/cm-frappe-agent/commands/frappe-fix.md +0 -59
- package/skills/cm-frappe-agent/commands/frappe-frontend.md +0 -210
- package/skills/cm-frappe-agent/commands/frappe-fullstack.md +0 -243
- package/skills/cm-frappe-agent/commands/frappe-github.md +0 -57
- package/skills/cm-frappe-agent/commands/frappe-install.md +0 -52
- package/skills/cm-frappe-agent/commands/frappe-plan.md +0 -442
- package/skills/cm-frappe-agent/commands/frappe-remote.md +0 -58
- package/skills/cm-frappe-agent/commands/frappe-test.md +0 -356
- package/skills/cm-frappe-agent/docs/README.md +0 -51
- package/skills/cm-frappe-agent/docs/agents-catalog.md +0 -113
- package/skills/cm-frappe-agent/docs/architecture.md +0 -149
- package/skills/cm-frappe-agent/docs/commands-catalog.md +0 -82
- package/skills/cm-frappe-agent/docs/resources-catalog.md +0 -66
- package/skills/cm-frappe-agent/docs/sitemap-urls.txt +0 -52
- package/skills/cm-frappe-agent/docs/sitemap.md +0 -81
- package/skills/cm-frappe-agent/docs/sop/user-guide.md +0 -178
- package/skills/cm-frappe-agent/docs/sop/vibe-coding-guide.md +0 -122
- package/skills/cm-frappe-agent/resources/7-layer-architecture.md +0 -985
- package/skills/cm-frappe-agent/resources/bench_commands.md +0 -73
- package/skills/cm-frappe-agent/resources/code-patterns-guide.md +0 -948
- package/skills/cm-frappe-agent/resources/common_pitfalls.md +0 -266
- package/skills/cm-frappe-agent/resources/doctype-registry.md +0 -158
- package/skills/cm-frappe-agent/resources/installation-guide.md +0 -289
- package/skills/cm-frappe-agent/resources/rest-api-patterns.md +0 -182
- package/skills/cm-frappe-agent/resources/scaffold_checklist.md +0 -82
- package/skills/cm-frappe-agent/resources/upgrade_patterns.md +0 -113
- package/skills/cm-frappe-agent/resources/web-form-patterns.md +0 -252
- package/skills/cm-frappe-agent/skills/bench-commands/SKILL.md +0 -621
- package/skills/cm-frappe-agent/skills/client-scripts/SKILL.md +0 -642
- package/skills/cm-frappe-agent/skills/doctype-patterns/SKILL.md +0 -576
- package/skills/cm-frappe-agent/skills/frappe-api/SKILL.md +0 -740
- package/skills/cm-frappe-agent/skills/remote-operations/SKILL.md +0 -47
- package/skills/cm-frappe-agent/skills/server-scripts/SKILL.md +0 -608
- package/skills/cm-frappe-agent/skills/web-forms/SKILL.md +0 -46
- package/skills/frappe-app-builder.zip +0 -0
package/package.json CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "codymaster",
-  "version": "4.6.0",
+  "version": "4.8.0",
   "description": "68+ Skills. Ship 10x faster. AI-powered coding skill kit for Claude, Cursor, Gemini & more.",
   "main": "dist/index.js",
   "repository": {
@@ -19,15 +19,23 @@
     "build": "tsc",
     "start": "ts-node src/index.ts",
     "test:gate": "vitest run --reporter=verbose",
+    "test:gate:kit": "npm run build && npm run validate:skills && npm run check:skills && vitest run --reporter=verbose",
     "gate:secrets": "node scripts/gate-0-secrets.js",
+    "gate:hygiene": "node scripts/gate-0-repo-hygiene.js",
     "gate:fix": "node scripts/security-fixer.js --fix",
     "gate:check": "node scripts/security-fixer.js",
     "gate:syntax": "node scripts/gate-1-syntax.js",
     "gate:dist": "node scripts/gate-5-dist-verify.js",
     "gate:smoke": "node scripts/gate-6-smoke-test.js",
-    "deploy": "npm run gate:secrets && npm run gate:syntax && npm run test:gate && npm run gate:dist && npm run gate:smoke",
-    "deploy:dry": "npm run gate:secrets && npm run gate:syntax && npm run test:gate && npm run gate:dist && echo '✅ All gates passed. Ready to deploy.'",
-    "postinstall": "node scripts/postinstall.js"
+    "deploy": "npm run gate:secrets && npm run gate:hygiene && npm run gate:syntax && npm run test:gate:kit && npm run gate:dist && npm run gate:smoke",
+    "deploy:dry": "npm run gate:secrets && npm run gate:hygiene && npm run gate:syntax && npm run test:gate:kit && npm run gate:dist && echo '✅ All gates passed. Ready to deploy.'",
+    "postinstall": "node scripts/postinstall.js",
+    "validate:skills": "node scripts/validate-skills.mjs",
+    "build:skills": "node scripts/build-skills.mjs",
+    "check:skills": "node scripts/build-skills.mjs --check",
+    "docs:dev": "vitepress dev docs",
+    "docs:build": "vitepress build docs",
+    "docs:preview": "vitepress preview docs"
   },
   "keywords": [
     "ai",
@@ -65,7 +73,8 @@
     "chokidar": "^5.0.0",
     "commander": "^14.0.3",
     "express": "^5.2.1",
-    "prompts": "^2.4.2"
+    "prompts": "^2.4.2",
+    "yaml": "^2.8.3"
   },
   "devDependencies": {
     "@types/better-sqlite3": "^7.6.13",
@@ -74,8 +83,10 @@
     "@types/prompts": "^2.4.9",
     "acorn": "^8.16.0",
     "jsdom": "^29.0.1",
+    "playwright": "^1.50.0",
     "ts-node": "^10.9.2",
     "typescript": "^5.9.3",
+    "vitepress": "^1.6.4",
     "vitest": "^4.1.0"
   },
   "overrides": {
```
package/scripts/build-skills.mjs ADDED

```diff
@@ -0,0 +1,51 @@
+#!/usr/bin/env node
+/**
+ * Generate SKILL.md from SKILL.md.tmpl + meta.json when present.
+ * Usage: node scripts/build-skills.mjs [--check]
+ */
+import fs from 'fs';
+import path from 'path';
+import { fileURLToPath } from 'url';
+
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const skillsRoot = path.join(__dirname, '..', 'skills');
+const check = process.argv.includes('--check');
+
+function render(tmpl, vars) {
+  return tmpl.replace(/\{\{(\w+)\}\}/g, (_, k) => (vars[k] != null ? String(vars[k]) : `{{${k}}}`));
+}
+
+let tmplCount = 0;
+if (!fs.existsSync(skillsRoot)) process.exit(0);
+
+for (const dir of fs.readdirSync(skillsRoot, { withFileTypes: true })) {
+  if (!dir.isDirectory()) continue;
+  const folder = path.join(skillsRoot, dir.name);
+  const tmplPath = path.join(folder, 'SKILL.md.tmpl');
+  const metaPath = path.join(folder, 'meta.json');
+  const outPath = path.join(folder, 'SKILL.md');
+  if (!fs.existsSync(tmplPath)) continue;
+
+  tmplCount++;
+  const tmpl = fs.readFileSync(tmplPath, 'utf8');
+  let meta = {};
+  if (fs.existsSync(metaPath)) {
+    meta = JSON.parse(fs.readFileSync(metaPath, 'utf8'));
+  }
+  const out = render(tmpl, meta);
+  if (check) {
+    const cur = fs.existsSync(outPath) ? fs.readFileSync(outPath, 'utf8') : '';
+    if (cur !== out) {
+      console.error(`check failed: ${outPath} out of date (run npm run build:skills)`);
+      process.exit(2);
+    }
+  } else {
+    fs.writeFileSync(outPath, out, 'utf8');
+  }
+}
+
+if (tmplCount === 0) {
+  console.log('build-skills: no SKILL.md.tmpl under skills/ (OK)');
+} else {
+  console.log(check ? `build-skills: --check OK (${tmplCount})` : `build-skills: wrote ${tmplCount} skill(s)`);
+}
```
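The `render` helper in `build-skills.mjs` above is plain `{{key}}` substitution: known keys are replaced from `meta.json`, unknown placeholders are left intact. A standalone sketch of the same rule:

```javascript
// Same substitution rule as build-skills.mjs: replace {{key}} with the
// matching value from vars, leaving unknown placeholders untouched.
function render(tmpl, vars) {
  return tmpl.replace(/\{\{(\w+)\}\}/g, (_, k) => (vars[k] != null ? String(vars[k]) : `{{${k}}}`));
}

console.log(render('# {{title}} v{{version}} ({{unknown}})', { title: 'cm-tdd', version: '1.2.0' }));
// → "# cm-tdd v1.2.0 ({{unknown}})"
```

Leaving unknown placeholders in place makes a stale template visible in the generated SKILL.md instead of silently dropping the field.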
package/scripts/gate-0-repo-hygiene.js ADDED

```diff
@@ -0,0 +1,75 @@
+#!/usr/bin/env node
+/**
+ * Gate 0b: Repo Hygiene
+ * Ensures tracked files and git remote URLs are safe before push/deploy.
+ */
+const { execFileSync } = require('child_process');
+
+const FORBIDDEN_TRACKED_PATTERNS = [
+  /^\.DS_Store$/,
+  /^\.env(\..+)?$/,
+  /^\.dev\.vars(\..+)?$/,
+  /^.*\.(pem|key|p12|pfx)$/i,
+  /^.*\.(log|tmp|bak|swp)$/i,
+];
+
+function run(command, args) {
+  return execFileSync(command, args, { encoding: 'utf-8' }).trim();
+}
+
+function hasEmbeddedCredentials(remoteUrl) {
+  return /^https?:\/\/[^/\s]+@/i.test(remoteUrl);
+}
+
+function getTrackedFiles() {
+  const out = run('git', ['ls-files']);
+  if (!out) return [];
+  return out.split('\n').filter(Boolean);
+}
+
+function checkTrackedFiles() {
+  const tracked = getTrackedFiles();
+  return tracked.filter((file) =>
+    !file.endsWith('.example') &&
+    FORBIDDEN_TRACKED_PATTERNS.some((pattern) => pattern.test(file)),
+  );
+}
+
+function checkOriginRemote() {
+  try {
+    return run('git', ['remote', 'get-url', 'origin']);
+  } catch (_error) {
+    return '';
+  }
+}
+
+function main() {
+  const failures = [];
+
+  const originUrl = checkOriginRemote();
+  if (originUrl && hasEmbeddedCredentials(originUrl)) {
+    failures.push(
+      [
+        'origin remote URL has embedded credentials.',
+        'Fix: git remote set-url origin https://github.com/<owner>/<repo>.git',
+      ].join(' '),
+    );
+  }
+
+  const badTrackedFiles = checkTrackedFiles();
+  if (badTrackedFiles.length > 0) {
+    failures.push(
+      `forbidden files are tracked by git: ${badTrackedFiles.join(', ')}`,
+    );
+  }
+
+  if (failures.length > 0) {
+    console.error('❌ Repo hygiene check failed:');
+    failures.forEach((failure) => console.error(` - ${failure}`));
+    process.exit(1);
+  }
+
+  console.log('✅ Repo hygiene check passed');
+}
+
+main();
```
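The credential check in `gate-0-repo-hygiene.js` only flags http(s) remotes that carry userinfo (`user:token@host`) before the hostname; SSH-style remotes are not matched. A minimal sketch of the same regex:

```javascript
// Same check as gate-0-repo-hygiene.js: an http(s) remote URL carrying
// userinfo before the host is treated as leaking credentials.
function hasEmbeddedCredentials(remoteUrl) {
  return /^https?:\/\/[^/\s]+@/i.test(remoteUrl);
}

console.log(hasEmbeddedCredentials('https://alice:ghp_token@github.com/acme/repo.git')); // true
console.log(hasEmbeddedCredentials('https://github.com/acme/repo.git'));                 // false
console.log(hasEmbeddedCredentials('git@github.com:acme/repo.git'));                     // false (SSH form)
```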
package/scripts/postinstall.js CHANGED

```diff
@@ -11,6 +11,8 @@ const NC = '\x1b[0m';
 
 const fs = require('fs');
 const path = require('path');
+const { execSync, execFileSync } = require('child_process');
+
 let skillCount = 60;
 try {
   const skillsDir = path.join(__dirname, '..', 'skills');
@@ -285,7 +287,60 @@ const printMenu = () => {
   console.log('');
 };
 
+const installOpenViking = () => {
+  console.log('');
+  console.log(`${G}${BOLD}OpenViking — Installing Core Feature${NC}`);
+  try {
+    console.log(` ${W}Running: pip install openviking${NC}`);
+    execSync('pip install openviking', { stdio: 'inherit' });
+    console.log(` ${G}✅ OpenViking installed.${NC}`);
+  } catch (e) {
+    try {
+      console.log(` ${W}Running: pip3 install openviking${NC}`);
+      execSync('pip3 install openviking', { stdio: 'inherit' });
+      console.log(` ${G}✅ OpenViking installed.${NC}`);
+    } catch (err) {
+      console.log(` ${O}⚠️ Could not install OpenViking automatically. Please run 'pip install openviking' manually.${NC}`);
+    }
+  }
+  console.log('');
+};
+
+const isInstalledAsNpmDependency = () => {
+  const root = path.resolve(__dirname, '..').replace(/\\/g, '/');
+  return /[/\\]node_modules[/\\]codymaster$/i.test(root);
+};
+
+const npmCmd = () => (process.platform === 'win32' ? 'npm.cmd' : 'npm');
+
+const activateCli = () => {
+  const pkgRoot = path.join(__dirname, '..');
+
+  if (isInstalledAsNpmDependency()) {
+    console.log(` ${C}CodyMaster CLI (per-project install — official path):${NC}`);
+    console.log(` ${W}npx cm${NC} or ${W}npx codymaster${NC}`);
+    console.log(
+      ` ${DIM}Optional — bare ${W}cm${DIM} in any terminal: ${W}npm install -g codymaster${NC}`,
+    );
+    return;
+  }
+
+  if (fs.existsSync(path.join(pkgRoot, 'package.json'))) {
+    try {
+      const pkg = require(path.join(pkgRoot, 'package.json'));
+      if (pkg.name === 'codymaster') {
+        console.log(` ${W}Linking ${C}cm${W} for local repo development (npm link)...${NC}`);
+        execFileSync(npmCmd(), ['link'], { stdio: 'inherit', cwd: pkgRoot });
+      }
+    } catch (e) {
+      // Ignore if npm link fails
+    }
+  }
+};
+
 const main = () => {
+  installOpenViking();
+  activateCli();
   printMenu();
 };
 
```
package/scripts/security-scan.js CHANGED

```diff
@@ -6,7 +6,7 @@ const DANGEROUS_PATTERNS = [
   { name: 'Anon Key Variable', regex: /ANON_KEY\s*[=:]\s*['\"][a-zA-Z0-9._\/-]{20,}/g },
   { name: 'Private Key Block', regex: /-----BEGIN\s+(RSA|EC|DSA|OPENSSH)?\s*PRIVATE KEY-----/g },
   { name: 'JWT Token', regex: /eyJ[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}\.[a-zA-Z0-9_-]{10,}/g },
-  { name: 'Generic API Key', regex: /(?:api[_-]?key|api[_-]?secret|access[_-]?token)\s*[=:]\s*['\"][a-zA-Z0-9\/+=]{20,}['\"
+  { name: 'Generic API Key', regex: /(?:api[_-]?key|api[_-]?secret|access[_-]?token)\s*[=:]\s*['\"][a-zA-Z0-9\/+=]{20,}['\"]/gi },
   { name: 'AWS Key', regex: /AKIA[0-9A-Z]{16}/g },
   { name: 'Slack Token', regex: /xox[baprs]-[0-9a-zA-Z-]{10,}/g },
   { name: 'GitHub Token', regex: /gh[ps]_[a-zA-Z0-9]{36,}/g },
```
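The security-scan fix above terminates the previously broken 'Generic API Key' pattern (the closing quote class was missing) and switches `/g` to `/gi`, so uppercase assignments like `API_KEY = "..."` now match. A quick sketch of the corrected pattern in isolation:

```javascript
// The corrected 'Generic API Key' pattern from security-scan.js. The /i flag
// lets it catch uppercase variable names; the value must be 20+ base64-ish chars.
const genericApiKey = /(?:api[_-]?key|api[_-]?secret|access[_-]?token)\s*[=:]\s*['"][a-zA-Z0-9\/+=]{20,}['"]/gi;

console.log(genericApiKey.test('API_KEY = "abcdefghijklmnopqrstuv"')); // true
```

Note that `/g` regexes are stateful across `.test()` calls (`lastIndex` advances), so a scanner should reset `lastIndex` or create a fresh regex per file.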
package/scripts/validate-skills.mjs ADDED

```diff
@@ -0,0 +1,42 @@
+#!/usr/bin/env node
+/**
+ * Ensures each skill folder under skills/ has SKILL.md with a title line.
+ */
+import fs from 'fs';
+import path from 'path';
+import { fileURLToPath } from 'url';
+
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const root = path.join(__dirname, '..', 'skills');
+
+/** Directories under skills/ that are not standalone skills (assets, packs). */
+const SKIP_DIRS = new Set(['profiles', 'extensions', 'scripts']);
+
+let errors = 0;
+if (!fs.existsSync(root)) {
+  console.error('No skills/ directory');
+  process.exit(1);
+}
+
+for (const name of fs.readdirSync(root, { withFileTypes: true })) {
+  if (!name.isDirectory()) continue;
+  if (name.name.startsWith('_') || name.name.startsWith('.')) continue;
+  if (SKIP_DIRS.has(name.name)) continue;
+  const md = path.join(root, name.name, 'SKILL.md');
+  if (!fs.existsSync(md)) {
+    console.error(`Missing SKILL.md: ${name.name}`);
+    errors++;
+    continue;
+  }
+  const text = fs.readFileSync(md, 'utf8');
+  if (!/^#\s+\S/m.test(text)) {
+    console.error(`SKILL.md missing H1 title: ${name.name}`);
+    errors++;
+  }
+}
+
+if (errors) {
+  console.error(`validate-skills: ${errors} error(s)`);
+  process.exit(1);
+}
+console.log('validate-skills: OK');
```
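The title check in `validate-skills.mjs` is a single multiline regex: any line starting with `#`, whitespace, and a non-space character counts as an H1. A standalone sketch:

```javascript
// Same H1 check as validate-skills.mjs: /m makes ^ match at each line start,
// so frontmatter before the heading is fine.
const hasH1 = (text) => /^#\s+\S/m.test(text);

console.log(hasH1('---\nname: cm-tdd\n---\n\n# CM TDD\n')); // true
console.log(hasH1('No heading here, just prose.'));         // false
```

One caveat of this line-based check: a `# comment` inside a fenced code block would also satisfy it, which is an acceptable false negative for a lint this simple.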
package/scripts/viking-demo.ts ADDED

```diff
@@ -0,0 +1,105 @@
+import express from 'express';
+import { VikingBackend } from '../src/backends/viking-backend';
+import { DEFAULT_VIKING_CONFIG } from '../src/backends/viking-http-client';
+import chalk from 'chalk';
+
+/**
+ * Viking Demo — Hamster-Powered Semantic Search
+ *
+ * This script demonstrates the CodyMaster v4.6 integration with OpenViking.
+ * It starts a mock OpenViking REST API and then uses CodyMaster's VikingBackend
+ * to perform a semantic search query.
+ */
+
+async function runDemo() {
+  const PORT = 1933;
+  const app = express();
+  app.use(express.json());
+
+  console.log(chalk.bold.cyan('\n🐹 CodyMaster v4.6 — OpenViking Integration Demo\n'));
+
+  // ─── Phase 1: Mock OpenViking Server ───────────────────────────────────────
+
+  console.log(chalk.dim(' [1/3] Starting Mock OpenViking Server...'));
+
+  // Mock /health endpoint
+  app.get('/health', (req, res) => res.json({ status: 'ok', version: '1.2.0' }));
+
+  // Mock /search endpoint (the core of Viking)
+  app.post('/search', (req, res) => {
+    const { query, limit, workspace } = req.body;
+    console.log(chalk.yellow(` ☁️ Viking Server received search: "${query}" (limit: ${limit}, ws: ${workspace})`));
+
+    const mockResults = [
+      {
+        uri: 'ov://demo-vibe/learnings/lear-882.json',
+        score: 0.98,
+        content: JSON.stringify({
+          what_failed: 'Async fetch timeout on slow networks',
+          why_failed: 'Default timeout too short for 3G/4G high-latency spikes',
+          how_to_prevent: 'Increase timeout to 30s + implement exponential backoff retry'
+        })
+      },
+      {
+        uri: 'ov://demo-vibe/learnings/lear-451.json',
+        score: 0.72,
+        content: JSON.stringify({
+          what_failed: 'Database connection leak in long-running loops',
+          why_failed: 'await missing in cleanup block',
+          how_to_prevent: 'Use try-finally with await client.close()'
+        })
+      }
+    ];
+
+    res.json({ items: mockResults });
+  });
+
+  const server = app.listen(PORT);
+
+  // ─── Phase 2: CodyMaster Viking Backend ────────────────────────────────────
+
+  console.log(chalk.dim(' [2/3] Initializing CodyMaster VikingBackend...'));
+
+  const backend = new VikingBackend({
+    host: 'localhost',
+    port: PORT,
+    workspace: 'demo-vibe',
+    timeout: 5000
+  });
+
+  // ─── Phase 3: Perform Semantic Search ──────────────────────────────────────
+
+  console.log(chalk.bold(' [3/3] Performing Semantic Search Query (Async Pipeline)...\n'));
+
+  const query = 'network latency issues';
+  console.log(chalk.white(` Query: "${query}"`));
+
+  // Use the native ASYNC extra searchAll instead of the SYNC wrapper
+  // This bypasses the blockUntil spin-loop for a clean demo.
+  const results = await backend.searchAll(query, 5);
+
+  if (results.length > 0) {
+    console.log(chalk.green(`\n ✅ FOUND ${results.length} RELEVANT MEMORIES VIA VIKING:\n`));
+
+    results.forEach((r, idx) => {
+      const content = JSON.parse(r.content || '{}');
+      console.log(chalk.bold(` ${idx + 1}. ${chalk.cyan(content.what_failed)}`));
+      console.log(chalk.dim(`    Why: ${content.why_failed}`));
+      console.log(chalk.dim(`    Fix: ${chalk.white(content.how_to_prevent)}`));
+      console.log(chalk.italic.magenta(`    Vector Similarity: ${Math.round((r.score || 0) * 100)}%\n`));
+    });
+  } else {
+    console.log(chalk.red(' ❌ No results found.'));
+  }
+
+  // ─── Cleanup ───────────────────────────────────────────────────────────────
+
+  console.log(chalk.dim(' Demo complete. Shutting down...'));
+  server.close();
+  process.exit(0);
+}
+
+runDemo().catch(err => {
+  console.error(chalk.red('\n 🛑 Demo Failed:'), err);
+  process.exit(1);
+});
```
package/skills/CLAUDE.md CHANGED

```diff
@@ -91,14 +91,14 @@ See: [references/file.md](references/file.md)
 
 ### Required Fields
 - `name`: Unique identifier (lowercase, hyphens for spaces, max 64 chars)
-- `description`: What the skill does and when to use it (max 1024 chars). Use numbered use cases: `(1) ..., (2) ..., (3) ...`
+- `description`: What the skill does and when to use it (max 1024 chars). Prefer **one or two sentences (~80–120 words / under ~800 chars)** so Google Antigravity, Windsurf, and other hosts do not burn customization token budget on discovery. Put long trigger lists in the markdown body (e.g. a short "Triggers" line under the title), not in YAML. Use numbered use cases when helpful: `(1) ..., (2) ..., (3) ...`
 
 ### Recommended Fields (for marketplace)
 - `license`: License name (we use MIT)
 - `metadata.author`: Author/organization name (we use `wondelai`)
 - `metadata.version`: Semantic version
 
-The YAML frontmatter `description` field is critical for skill discovery
+The YAML frontmatter `description` field is critical for skill discovery: include the **minimum** keywords needed to route tasks; avoid duplicating every synonym in YAML. For **low token budget** installs, use `bash install.sh --gemini --profile core` and see `skills/profiles/README.md`. Single quotes in YAML values must be escaped by doubling them (`''`).
 
 ## Adding New Skills
 
```
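The single-quote rule mentioned in the CLAUDE.md guidance above (a literal `'` inside a single-quoted YAML scalar is written as `''`) can be sketched without a full YAML parser. `unquoteYamlSingle` is a hypothetical helper for illustration only, not part of the package:

```javascript
// YAML single-quoted scalars escape a literal apostrophe by doubling it.
// This sketch strips the outer quotes and collapses '' back to '.
function unquoteYamlSingle(scalar) {
  return scalar.slice(1, -1).replace(/''/g, "'");
}

console.log(unquoteYamlSingle("'Use ''cm tdd'' to run the TDD loop'"));
// → Use 'cm tdd' to run the TDD loop
```

In practice the `yaml` dependency added in this release handles this; the sketch just shows why an unescaped `'` inside a single-quoted `description` breaks frontmatter parsing.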
package/skills/cm-ads-tracker/SKILL.md CHANGED

```diff
@@ -1,15 +1,12 @@
 ---
 name: cm-ads-tracker
-description:
-  Expert CRO conversion tracking strategist. From a single chat message, generates a COMPLETE tracking setup: Facebook/Meta Pixel + CAPI, TikTok Pixel + Events API, Google Ads Enhanced Conversions, GTM container architecture, first-touch/last-touch attribution, and cross-channel deduplication.
-
-  AUTO-DETECTS industry and maps correct standard events per platform specs. Outputs a full implementation document developers can use immediately — GTM tags, triggers, variables, dataLayer schema, UTM conventions, CAPI specs — all with the user's exact tracking IDs.
-
-  ALWAYS trigger for: pixel, tracking code, GTM, tag manager, Facebook pixel, Meta pixel, CAPI, Conversions API, TikTok pixel, Events API, Google Ads conversion, Enhanced Conversions, UTM, attribution, first-touch, last-touch, "setup tracking", "install tracking", "install pixel", "measure conversions", "tracking ads", "measure ROAS", "optimize conversions", conversion event, lead tracking, purchase tracking, ROAS measurement. Use even with partial information.
+description: "End-to-end ad conversion tracking: Meta Pixel+CAPI, TikTok Events API, Google Ads Enhanced Conversions, GTM, attribution. Auto-detects industry, maps standard events, outputs a developer-ready implementation doc. Use for pixels, GTM, CAPI, ROAS, or 'set up tracking' requests."
 ---
 
 # CM Ads Tracker v2
 
+**Triggers (non-exhaustive):** pixel, GTM, Meta/Facebook CAPI, TikTok Events API, Google Ads conversions, UTM, first/last-touch attribution, install tracking, lead/purchase events, ROAS.
+
 You are the world's best conversion tracking architect. Your mission: from **a single chat message**, produce a complete, platform-specific, attribution-aware tracking setup that any developer or marketer can implement immediately.
 
 You know by heart every standard event spec for Meta, TikTok, and Google Ads. You think in dataLayer-first architecture, where GTM is the intelligent orchestration layer between the website and all ad platforms.
```
package/skills/cm-browse/SKILL.md ADDED

````diff
@@ -0,0 +1,28 @@
+# cm-browse — local Playwright daemon
+
+## When to use
+
+- Visual QA, screenshots, post-deploy smoke through a **real browser** (not only Stitch/Pencil).
+- Before claiming “UI works”, drive `cm browse` + `cm qa-visual`.
+
+## CLI
+
+```bash
+export CM_BROWSE_TOKEN="$(openssl rand -hex 24)"
+cm browse start --port 17395 --token "$CM_BROWSE_TOKEN"
+```
+
+## HTTP API (Bearer token)
+
+- `POST /session/start` body `{ "headless": true }`
+- `POST /navigate` `{ "url": "https://…" }`
+- `POST /refs/refresh` — assigns `data-cm-ref` to interactive nodes; use `@e1` style refs.
+- `POST /click` `{ "ref": "e1" }`
+- `POST /fill` `{ "ref": "e2", "value": "text" }`
+- `GET /screenshot` — PNG
+- `GET /console` / `GET /network` — ring buffers
+
+## Integrations
+
+- `cm qa-visual --url …` calls the daemon locally.
+- Pair with `cm-canary` / `cm-safe-deploy` for ship verification.
````
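The cm-browse HTTP API above can be driven from plain `fetch`. The following client is a sketch only: the routes, port, and Bearer-token auth come from the skill doc, but the response shapes and the `smokeCheck` helper name are assumptions, not part of the package:

```javascript
// Hypothetical cm-browse client sketch, assuming a daemon started with
// `cm browse start --port 17395` and CM_BROWSE_TOKEN exported as above.
const BASE = 'http://127.0.0.1:17395';
const TOKEN = process.env.CM_BROWSE_TOKEN;

async function call(method, route, body) {
  const res = await fetch(`${BASE}${route}`, {
    method,
    headers: { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'application/json' },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!res.ok) throw new Error(`${route}: HTTP ${res.status}`);
  return res;
}

async function smokeCheck(url) {
  await call('POST', '/session/start', { headless: true });
  await call('POST', '/navigate', { url });
  await call('POST', '/refs/refresh');            // tags interactive nodes with data-cm-ref
  const shot = await call('GET', '/screenshot');  // PNG bytes per the doc
  return Buffer.from(await shot.arrayBuffer());
}
```

A `cm qa-visual`-style flow would then compare the returned PNG against a baseline before declaring the UI healthy.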
package/skills/cm-conductor-worktrees/SKILL.md ADDED

````diff
@@ -0,0 +1,24 @@
+# cm-conductor-worktrees — parallel worktrees
+
+## CLI
+
+```bash
+cm conductor add --at ../my-feature-wt --branch feat/my-feature --base main
+cm conductor list
+```
+
+## Practice
+
+- One **branch + worktree** per parallel agent/session.
+- Reconcile with `git merge` / PR; avoid two agents editing the same files without coordination.
+
+## ELI16 (3+ sessions)
+
+When running **three or more** parallel sessions, re-ground each session with:
+
+- Current branch name + worktree path.
+- Last artifact from `cm sprint status` or `.cm/context-bus.json`.
+
+## Future
+
+Dashboard UI for active sprints is **not** in CLI yet — use `cm dashboard` / Hamster UI where available.
````
package/skills/cm-content-factory/SKILL.md CHANGED

```diff
@@ -1,6 +1,6 @@
 ---
 name: cm-content-factory
-description: "
+description: "Self-learning SEO content pipeline: dashboard, multi-agent queue, token budgets, research → write → audit → publish. StoryBrand/Cialdini/JTBD-style frameworks; config-driven. Use for content factory, batch articles, or scaled publishing."
 ---
 
 # CM Content Factory v2.0 — AI Content Machine Platform
```
@@ -0,0 +1,36 @@
# Changelog & What's New in v5

Welcome to the **"Neural Spine"** era! Version 5 is a major milestone that transitions CodyMaster from a specialized "Content Factory" tool into a full-fledged **Senior AI-Native Engineering Workspace**.

We achieved this paradigm shift by deeply studying and integrating the architectural breakthroughs of two external frameworks: **OpenViking** and **OpenSpace**.

---

## 🚀 Key Architectural Shifts

### 1. Replaced "Dumb" RAG with OpenViking (Semantic Memory)
**The Problem in v4**: AI agents suffered from "code amnesia." Standard Retrieval-Augmented Generation (RAG) relied on chunking text indiscriminately. Agents hallucinated imports and forgot how system components linked together.
**The v5 Upgrade**:
By adopting OpenViking concepts, CodyMaster v5 introduces an **AST (Abstract Syntax Tree)** and **vector-based memory engine**.
- It creates an **L0 Skeleton Index** of your entire system's structure instantly.
- It provides an **L1 Symbol Index** to grasp function signatures without bogging down the LLM context window with implementation details.
- Your AI agent now acts like a senior dev who inherently "knows" your monolith's entire structure before typing a single line.
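
As a loose shell analogy of the two index levels (file and function names below are invented for illustration; the real engine works on ASTs and vectors, not `find` and `grep`):

```shell
# Build a toy project (names invented for illustration).
proj=$(mktemp -d)
mkdir -p "$proj/src"
cat > "$proj/src/auth.js" <<'EOF'
export function login(user, pass) { /* ...implementation... */ }
export function logout(session) { /* ...implementation... */ }
EOF

# L0-style skeleton: structure only, no file contents.
(cd "$proj" && find . -type f | sort)

# L1-style symbol view: signatures only, bodies stripped.
grep -h '^export function' "$proj/src/auth.js" | sed 's/ *{.*//'
```

The point is the compression: the agent sees *that* `login(user, pass)` exists and where, without paying context-window tokens for its body.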

### 2. Upgraded to OpenSpace (The Autonomous Executor)
**The Problem in v4**: Agents were essentially advanced typewriters. They wrote code, but you had to run the terminal commands yourself, test the code, read the errors, and paste them back to the AI.
**The v5 Upgrade**:
We introduced **OpenSpace**, a secure sandbox and execution container.
- Agents can now natively execute Bash commands (`npm test`, `git pull`) within a safe holding environment.
- The framework auto-captures standard-error (stderr) logs and creates a self-healing loop. If the AI writes a bug, the test suite catches it, and the AI fixes it autonomously *before* presenting the final code to you.
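
The capture-and-retry mechanic can be sketched as a plain loop. Here `run_checks` and the one-line "fix" are stand-ins for the real test suite and agent, not the actual OpenSpace implementation:

```shell
# Sketch of the self-healing loop; work in a scratch directory.
cd "$(mktemp -d)"
flag=$(mktemp -u)
run_checks() { [ -f "$flag" ]; }   # stand-in for e.g. `npm test`: fails until "fixed"

attempt=0
until run_checks 2>stderr.log; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then echo "giving up"; break; fi
  # In v5, stderr.log would be fed back to the sub-agent here; we simulate its fix:
  touch "$flag"
done
echo "checks green after $attempt fix attempt(s)"
```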

---

## 🌟 New Features at a Glance

* **Zero-Regression Deployments**: Through TDD-first gates in OpenSpace, no commit is permitted unless its tests pass.
* **Context Bus**: An infrastructure pipeline that lets parallel sub-agents (e.g., a "Backend Database" agent and a "Frontend React" agent) exchange variables and context natively without burning vital token budget.
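
A toy illustration of the idea, assuming a file-backed bus at `.cm/context-bus.json` (that path appears elsewhere in these docs; the payload schema here is invented):

```shell
# Work in a scratch directory; the bus payload schema is illustrative only.
cd "$(mktemp -d)"
mkdir -p .cm

# Backend agent publishes an artifact for its peers:
printf '{"backend":{"migration":"0042_add_jwt_claims"}}\n' > .cm/context-bus.json

# Frontend agent later reads just the field it needs (no LLM tokens spent):
grep -o '"migration":"[^"]*"' .cm/context-bus.json
```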
* **Vision & UI Auto-Healing**: Equipped with frontend Playwright capture mechanisms, agents can take screenshots of their rendered output, run vision models to spot CSS misalignments, and fix visual bugs natively.
* **Self-Evolving Skills**: The Skill Chain Engine analyzes repeated commands in your workflow and dynamically builds *new* automation skills tailored to your unique codebase.

> [!TIP]
> **To experience it directly:** Assign a task that requires multi-file context tracking, such as *"Migrate our authentication endpoints across the monolith to use the new JWT standard,"* and watch OpenViking automatically pull in just the files needed.
@@ -0,0 +1,46 @@
# Deployment Guide

Setting up CodyMaster v5 for your engineering team is straightforward. Follow this guide to initialize the Neural Spine architecture.

## 1. Prerequisites

Ensure your environment meets the following requirements:
- Node.js v18.0.0 or higher
- Git
- SQLite (pre-packaged; backs the local OpenViking memory store)

## 2. Installation

Install the framework globally via npm to make the `cm` CLI available anywhere on your machine:

```bash
npm install -g @codymaster/cli@next
```

## 3. Initializing a Workspace

Navigate to your existing Next.js, React, or Python repository and initialize the OpenSpace environment:

```bash
cd my-enterprise-project
cm init
```

This will automatically:
1. Generate the `.agent/` folder structure to house specialized skills.
2. Spin up the **OpenViking** indexer, which immediately begins mapping your project's abstract syntax trees (ASTs) and caching semantic vectors.

## 4. Bootstrapping Agents

You can dispatch your first agent task using the full memory and execution layer:

```bash
cm do "Refactor the user dashboard to utilize Tailwind CSS dark mode variants, ensuring all current unit tests still pass."
```

- Watch the terminal to see OpenViking extract relevant context without filling the token window with unnecessary `.json` configs.
- Watch OpenSpace spin up `npm run test` immediately after code generation completes, self-correcting any errors before offering a Git commit.

## Continuous CI/CD (Founders Edition)

For teams on the Founders Edition, CodyMaster integrates directly into your GitHub Actions or GitLab CI pipelines. The agent intercepts failed PRs, pushes self-healing commits, and verifies visual integrity autonomously.
@@ -0,0 +1,67 @@
# Architecture & Execution Flow

Understanding how the **Neural Spine** processes a command helps you write better instructions and harness the autonomous power of CodyMaster v5.

When you dispatch a command to the system, it doesn't just blindly pass your text to an LLM. It routes the instruction through a multi-stage **agent lifecycle**.

## The High-Level Flow

Here is a visual breakdown of how the OpenViking and OpenSpace integrations handle an incoming task:

```mermaid
graph TD
    User(["User Prompt: Refactor Authentication"]) ==> Router

    subgraph "Phase 1: Knowledge Gathering"
        Router[Task Router] --> OViking{OpenViking Engine}
        OViking --> L0[L0: Skeleton Directory Map]
        OViking --> L1[L1: Symbol Headers]
        OViking --> L2[L2: Semantic Vectors]
        L0 --> Compiler[Context Builder]
        L1 --> Compiler
        L2 --> Compiler
    end

    Compiler ==> SubAgent

    subgraph "Phase 2: Execution (OpenSpace)"
        SubAgent[AI Sub-Agent] --> Coding[Writes Code / Logic]
        Coding --> Sandbox[OpenSpace Container]
        Sandbox --> Bash[Executes Terminal/Tests]
        Bash -- "Fails ❌" --> Feedback[Stderr Log Reader]
        Feedback --> SubAgent
    end

    Bash -- "Passes ✅" --> Review[Frontend Integrity Gate]
    Review --> Ship((Complete: Ready to Git Push))

    style User fill:#3b82f6,stroke:#fff,stroke-width:2px,color:#fff
    style Ship fill:#10b981,stroke:#fff,stroke-width:2px,color:#fff
    style OViking fill:#8b5cf6,stroke:#fff,stroke-width:2px,color:#fff
    style Sandbox fill:#f59e0b,stroke:#fff,stroke-width:2px,color:#111
```

---

## Step-by-Step Walkthrough

### 1. Task Routing & Context Building
The moment you hit enter, your command is sent to the **Task Router**. Before connecting to an external AI model (such as Claude 3.5 or GPT-4o), the router queries **OpenViking**.
- OpenViking executes a rapid vector search over the local SQLite cache to find all functionally related files.
- It bundles the `L0` (project structure), `L1` (function interfaces), and `L2` (implementation logic) layers into a highly compressed, precisely targeted knowledge package.

### 2. Autonomous Execution
The enriched context is sent to the **AI Sub-Agent**, which formulates the new code. That code is immediately handed to **OpenSpace**.
- OpenSpace spins up an isolated sandbox.
- It runs syntax linters (`eslint`, `mypy`) and your existing unit tests (`jest`, `pytest`) immediately.

### 3. The Self-Healing Loop
If the execution environment (OpenSpace) returns an error or a failed test:
- The exact failure trace (`stderr`) is siphoned off and sent *back* to the Sub-Agent.
- The AI autonomously rewrites the logic to resolve the bug and tries again. **You are never interrupted to solve syntax errors.**

### 4. Integrity and Shipping
Once the unit tests pass cleanly, a final validation stage evaluates frontend integrity (e.g. no missing padding, proper CSS compilation). Once confirmed, the result is packaged into a pristine, working commit for you to review or deploy.

> [!NOTE]
> All phases of the execution flow are handled by the background Neural Spine mechanisms. You simply engage with a "Senior Developer" that drives the problem to completion.
|