@ceyhunbilir/synaphex 1.0.1
This diff shows the contents of a package version as published to its public registry, and reflects the changes between released versions. It is provided for informational purposes only.
- package/README.md +73 -0
- package/dist/commands/init.d.ts +2 -0
- package/dist/commands/init.d.ts.map +1 -0
- package/dist/commands/init.js +18 -0
- package/dist/commands/init.js.map +1 -0
- package/dist/commands/list-outputs.d.ts +2 -0
- package/dist/commands/list-outputs.d.ts.map +1 -0
- package/dist/commands/list-outputs.js +27 -0
- package/dist/commands/list-outputs.js.map +1 -0
- package/dist/commands/load-memory.d.ts +2 -0
- package/dist/commands/load-memory.d.ts.map +1 -0
- package/dist/commands/load-memory.js +20 -0
- package/dist/commands/load-memory.js.map +1 -0
- package/dist/commands/save-output.d.ts +2 -0
- package/dist/commands/save-output.d.ts.map +1 -0
- package/dist/commands/save-output.js +37 -0
- package/dist/commands/save-output.js.map +1 -0
- package/dist/index.d.ts +3 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +34 -0
- package/dist/index.js.map +1 -0
- package/dist/lib/memory.d.ts +3 -0
- package/dist/lib/memory.d.ts.map +1 -0
- package/dist/lib/memory.js +39 -0
- package/dist/lib/memory.js.map +1 -0
- package/dist/lib/output.d.ts +4 -0
- package/dist/lib/output.d.ts.map +1 -0
- package/dist/lib/output.js +77 -0
- package/dist/lib/output.js.map +1 -0
- package/dist/lib/paths.d.ts +11 -0
- package/dist/lib/paths.d.ts.map +1 -0
- package/dist/lib/paths.js +21 -0
- package/dist/lib/paths.js.map +1 -0
- package/dist/lib/project.d.ts +2 -0
- package/dist/lib/project.d.ts.map +1 -0
- package/dist/lib/project.js +78 -0
- package/dist/lib/project.js.map +1 -0
- package/dist/types.d.ts +31 -0
- package/dist/types.d.ts.map +1 -0
- package/dist/types.js +3 -0
- package/dist/types.js.map +1 -0
- package/package.json +50 -0
- package/scripts/postinstall.js +43 -0
- package/skills/fix.md +188 -0
- package/skills/init.md +50 -0
- package/skills/query.md +193 -0
package/README.md
ADDED
@@ -0,0 +1,73 @@
+# Synaphex — Multi-Agent AI Pipeline for Code
+
+A **Claude Code skill** (plugin) that brings a 5-stage AI agent pipeline directly into your IDE:
+
+```
+Task
+  ↓
+[Examiner] → Context Analysis
+  ↓
+[Researcher] → Best Practices (optional in /fix mode)
+  ↓
+[Coder] → Production-Ready Implementation
+  ↓
+[Questioner] → Critical Review & Improvements
+  ↓
+[Reviewer] → Final Quality Gate
+  ↓
+Output (Markdown + JSON)
+```
+
+Works across **Claude.ai**, **VS Code**, **JetBrains**, and any IDE running Claude Code.
+
+## Installation
+
+```bash
+npm install -g @ceyhunbilir/synaphex
+```
+
+This installs three Claude Code skills:
+- `/synaphex/init <project>` — Create a new project
+- `/synaphex/query <task>` — Full 5-stage pipeline
+- `/synaphex/fix <task>` — Quick fix (optional Researcher)
+
+## Quick Start
+
+### 1. Initialize a project
+
+```
+/synaphex/init my-project
+```
+
+Creates `~/.claude/projects/my-project/` with config, memory, and outputs.
+
+### 2. Run a full pipeline
+
+```
+/synaphex/query implement retry logic for HTTP requests
+```
+
+### 3. Run a quick fix
+
+```
+/synaphex/fix add null check to the API handler
+```
+
+Asks if you want to skip the research phase.
+
+## Project Structure
+
+```
+~/.claude/projects/<project>/
+├── config.md    ← LLM provider & model
+├── memory/      ← project context (preserved)
+└── outputs/     ← pipeline results
+```
+
+## Configuration
+
+Edit `~/.claude/projects/<project>/config.md` to change the LLM provider.
+
+## License
+
+MIT
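The pipeline output saved by this package is both Markdown and JSON. A minimal sketch of the JSON payload shape, assuming only the fields that `dist/lib/output.js` below actually destructures (`command`, `task`, `skip_researcher`, `timestamp`, `stages`) — the concrete values here are illustrative, not taken from a real run:

```javascript
// Hypothetical pipeline-output payload. Field names follow the destructuring
// in dist/lib/output.js; the stage texts are placeholders.
const examplePayload = {
  command: 'query',
  task: 'implement retry logic for HTTP requests',
  skip_researcher: false,
  timestamp: new Date().toISOString(),
  stages: {
    examiner: 'Context analysis...',
    researcher: 'Best practices...',
    coder: 'Implementation...',
    questioner: 'Critical review...',
    reviewer: 'Final quality gate...',
  },
};

// `synaphex save-output <project>` reads this kind of JSON on stdin.
const json = JSON.stringify(examplePayload, null, 2);
```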
package/dist/commands/init.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"init.d.ts","sourceRoot":"","sources":["../../src/commands/init.ts"],"names":[],"mappings":"AAEA,wBAAsB,IAAI,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAY7D"}
package/dist/commands/init.js
ADDED
@@ -0,0 +1,18 @@
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.init = init;
+const project_1 = require("../lib/project");
+async function init(projectName) {
+    if (!projectName || !projectName.trim()) {
+        console.error('Error: project name is required');
+        process.exit(1);
+    }
+    try {
+        await (0, project_1.initializeProject)(projectName.trim());
+    }
+    catch (error) {
+        console.error('Error initializing project:', error instanceof Error ? error.message : error);
+        process.exit(1);
+    }
+}
+//# sourceMappingURL=init.js.map
package/dist/commands/init.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"init.js","sourceRoot":"","sources":["../../src/commands/init.ts"],"names":[],"mappings":";;AAEA,oBAYC;AAdD,4CAAmD;AAE5C,KAAK,UAAU,IAAI,CAAC,WAAmB;IAC5C,IAAI,CAAC,WAAW,IAAI,CAAC,WAAW,CAAC,IAAI,EAAE,EAAE,CAAC;QACxC,OAAO,CAAC,KAAK,CAAC,iCAAiC,CAAC,CAAC;QACjD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,CAAC;QACH,MAAM,IAAA,2BAAiB,EAAC,WAAW,CAAC,IAAI,EAAE,CAAC,CAAC;IAC9C,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAC,6BAA6B,EAAE,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC;QAC7F,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;AACH,CAAC"}
package/dist/commands/list-outputs.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"list-outputs.d.ts","sourceRoot":"","sources":["../../src/commands/list-outputs.ts"],"names":[],"mappings":"AAEA,wBAAsB,iBAAiB,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAwB1E"}
package/dist/commands/list-outputs.js
ADDED
@@ -0,0 +1,27 @@
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.handleListOutputs = handleListOutputs;
+const output_1 = require("../lib/output");
+async function handleListOutputs(projectName) {
+    if (!projectName || !projectName.trim()) {
+        console.error('Error: project name is required');
+        process.exit(1);
+    }
+    try {
+        const outputs = await (0, output_1.listOutputs)(projectName.trim());
+        if (outputs.length === 0) {
+            console.log('No pipeline outputs found.');
+            process.exit(0);
+        }
+        console.log(`Pipeline outputs for project "${projectName}":\n`);
+        outputs.forEach((output) => {
+            console.log(`  - ${output}`);
+        });
+        process.exit(0);
+    }
+    catch (error) {
+        console.error('Error listing outputs:', error instanceof Error ? error.message : error);
+        process.exit(1);
+    }
+}
+//# sourceMappingURL=list-outputs.js.map
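`listOutputs` (in `dist/lib/output.js`, further down) returns file names already ordered newest-first, because run files are named with a sortable timestamp prefix, so a plain lexicographic sort plus `reverse()` suffices. A standalone sketch of that ordering, with illustrative file names:

```javascript
// Run files are named run_<YYYYMMDDTHHMMSS>_<hex>.json, so string sort order
// equals chronological order; reverse() puts the newest run first.
const files = [
  'run_20240101T090000_aabbccdd.json',
  'run_20240315T120000_11223344.json',
  'run_20231230T080000_99887766.md', // non-JSON entries are filtered out
];

const newestFirst = files
  .filter((f) => f.endsWith('.json'))
  .sort()
  .reverse();
```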
package/dist/commands/list-outputs.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"list-outputs.js","sourceRoot":"","sources":["../../src/commands/list-outputs.ts"],"names":[],"mappings":";;AAEA,8CAwBC;AA1BD,0CAA4C;AAErC,KAAK,UAAU,iBAAiB,CAAC,WAAmB;IACzD,IAAI,CAAC,WAAW,IAAI,CAAC,WAAW,CAAC,IAAI,EAAE,EAAE,CAAC;QACxC,OAAO,CAAC,KAAK,CAAC,iCAAiC,CAAC,CAAC;QACjD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,CAAC;QACH,MAAM,OAAO,GAAG,MAAM,IAAA,oBAAW,EAAC,WAAW,CAAC,IAAI,EAAE,CAAC,CAAC;QAEtD,IAAI,OAAO,CAAC,MAAM,KAAK,CAAC,EAAE,CAAC;YACzB,OAAO,CAAC,GAAG,CAAC,4BAA4B,CAAC,CAAC;YAC1C,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;QAClB,CAAC;QAED,OAAO,CAAC,GAAG,CAAC,iCAAiC,WAAW,MAAM,CAAC,CAAC;QAChE,OAAO,CAAC,OAAO,CAAC,CAAC,MAAM,EAAE,EAAE;YACzB,OAAO,CAAC,GAAG,CAAC,OAAO,MAAM,EAAE,CAAC,CAAC;QAC/B,CAAC,CAAC,CAAC;QAEH,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAC,wBAAwB,EAAE,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC;QACxF,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;AACH,CAAC"}
package/dist/commands/load-memory.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"load-memory.d.ts","sourceRoot":"","sources":["../../src/commands/load-memory.ts"],"names":[],"mappings":"AAEA,wBAAsB,gBAAgB,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAczE"}
package/dist/commands/load-memory.js
ADDED
@@ -0,0 +1,20 @@
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.handleLoadMemory = handleLoadMemory;
+const memory_1 = require("../lib/memory");
+async function handleLoadMemory(projectName) {
+    if (!projectName || !projectName.trim()) {
+        console.error('Error: project name is required');
+        process.exit(1);
+    }
+    try {
+        const memory = await (0, memory_1.loadMemory)(projectName.trim());
+        process.stdout.write(memory);
+        process.exit(0);
+    }
+    catch (error) {
+        console.error('Error loading memory:', error instanceof Error ? error.message : error);
+        process.exit(1);
+    }
+}
+//# sourceMappingURL=load-memory.js.map
package/dist/commands/load-memory.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"load-memory.js","sourceRoot":"","sources":["../../src/commands/load-memory.ts"],"names":[],"mappings":";;AAEA,4CAcC;AAhBD,0CAA2C;AAEpC,KAAK,UAAU,gBAAgB,CAAC,WAAmB;IACxD,IAAI,CAAC,WAAW,IAAI,CAAC,WAAW,CAAC,IAAI,EAAE,EAAE,CAAC;QACxC,OAAO,CAAC,KAAK,CAAC,iCAAiC,CAAC,CAAC;QACjD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,CAAC;QACH,MAAM,MAAM,GAAG,MAAM,IAAA,mBAAU,EAAC,WAAW,CAAC,IAAI,EAAE,CAAC,CAAC;QACpD,OAAO,CAAC,MAAM,CAAC,KAAK,CAAC,MAAM,CAAC,CAAC;QAC7B,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAC,uBAAuB,EAAE,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC;QACvF,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;AACH,CAAC"}
package/dist/commands/save-output.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"save-output.d.ts","sourceRoot":"","sources":["../../src/commands/save-output.ts"],"names":[],"mappings":"AAGA,wBAAsB,gBAAgB,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CAiCzE"}
package/dist/commands/save-output.js
ADDED
@@ -0,0 +1,37 @@
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.handleSaveOutput = handleSaveOutput;
+const output_1 = require("../lib/output");
+async function handleSaveOutput(projectName) {
+    if (!projectName || !projectName.trim()) {
+        console.error('Error: project name is required');
+        process.exit(1);
+    }
+    try {
+        // Read JSON from stdin
+        let jsonStr = '';
+        process.stdin.on('data', (chunk) => {
+            jsonStr += chunk.toString();
+        });
+        process.stdin.on('end', async () => {
+            try {
+                const output = JSON.parse(jsonStr);
+                await (0, output_1.saveOutput)(projectName.trim(), output);
+                process.exit(0);
+            }
+            catch (error) {
+                console.error('Error parsing JSON:', error instanceof Error ? error.message : error);
+                process.exit(1);
+            }
+        });
+        process.stdin.on('error', (error) => {
+            console.error('Error reading stdin:', error);
+            process.exit(1);
+        });
+    }
+    catch (error) {
+        console.error('Error:', error instanceof Error ? error.message : error);
+        process.exit(1);
+    }
+}
+//# sourceMappingURL=save-output.js.map
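The handler above accumulates stdin chunks into a string and parses once the stream ends, exiting non-zero on malformed JSON. The parse-or-fail step can be isolated as a pure function for illustration (`tryParseOutput` is a hypothetical helper, not part of the package):

```javascript
// Mirrors the inner try/catch of handleSaveOutput: a successful parse yields
// the output object, a failure yields the error message instead of exiting.
function tryParseOutput(jsonStr) {
  try {
    return { ok: true, output: JSON.parse(jsonStr) };
  } catch (error) {
    return { ok: false, error: error instanceof Error ? error.message : String(error) };
  }
}
```

In practice the command is driven by a pipe, e.g. `cat run.json | synaphex save-output my-project`, since the description states it reads JSON from stdin.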
package/dist/commands/save-output.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"save-output.js","sourceRoot":"","sources":["../../src/commands/save-output.ts"],"names":[],"mappings":";;AAGA,4CAiCC;AApCD,0CAA2C;AAGpC,KAAK,UAAU,gBAAgB,CAAC,WAAmB;IACxD,IAAI,CAAC,WAAW,IAAI,CAAC,WAAW,CAAC,IAAI,EAAE,EAAE,CAAC;QACxC,OAAO,CAAC,KAAK,CAAC,iCAAiC,CAAC,CAAC;QACjD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;IAED,IAAI,CAAC;QACH,uBAAuB;QACvB,IAAI,OAAO,GAAG,EAAE,CAAC;QAEjB,OAAO,CAAC,KAAK,CAAC,EAAE,CAAC,MAAM,EAAE,CAAC,KAAK,EAAE,EAAE;YACjC,OAAO,IAAI,KAAK,CAAC,QAAQ,EAAE,CAAC;QAC9B,CAAC,CAAC,CAAC;QAEH,OAAO,CAAC,KAAK,CAAC,EAAE,CAAC,KAAK,EAAE,KAAK,IAAI,EAAE;YACjC,IAAI,CAAC;gBACH,MAAM,MAAM,GAAmB,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC;gBACnD,MAAM,IAAA,mBAAU,EAAC,WAAW,CAAC,IAAI,EAAE,EAAE,MAAM,CAAC,CAAC;gBAC7C,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;YAClB,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,OAAO,CAAC,KAAK,CAAC,qBAAqB,EAAE,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC;gBACrF,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;YAClB,CAAC;QACH,CAAC,CAAC,CAAC;QAEH,OAAO,CAAC,KAAK,CAAC,EAAE,CAAC,OAAO,EAAE,CAAC,KAAK,EAAE,EAAE;YAClC,OAAO,CAAC,KAAK,CAAC,sBAAsB,EAAE,KAAK,CAAC,CAAC;YAC7C,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;QAClB,CAAC,CAAC,CAAC;IACL,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,OAAO,CAAC,KAAK,CAAC,QAAQ,EAAE,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC;QACxE,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAC;IAClB,CAAC;AACH,CAAC"}
package/dist/index.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":""}
package/dist/index.js
ADDED
@@ -0,0 +1,34 @@
+#!/usr/bin/env node
+"use strict";
+Object.defineProperty(exports, "__esModule", { value: true });
+const commander_1 = require("commander");
+const init_1 = require("./commands/init");
+const save_output_1 = require("./commands/save-output");
+const load_memory_1 = require("./commands/load-memory");
+const list_outputs_1 = require("./commands/list-outputs");
+const program = new commander_1.Command();
+program
+    .name('synaphex')
+    .description('Multi-agent AI pipeline for code analysis, generation, and improvement')
+    .version('1.0.0');
+program
+    .command('init-project <name>')
+    .description('Initialize a new Synaphex project at ~/.claude/projects/<name>')
+    .action(init_1.init);
+program
+    .command('save-output <project>')
+    .description('Save pipeline output (reads JSON from stdin)')
+    .action(save_output_1.handleSaveOutput);
+program
+    .command('load-memory <project>')
+    .description('Load and output project memory files')
+    .action(load_memory_1.handleLoadMemory);
+program
+    .command('list-outputs <project>')
+    .description('List all pipeline outputs for a project')
+    .action(list_outputs_1.handleListOutputs);
+program.parse(process.argv);
+if (!process.argv.slice(2).length) {
+    program.outputHelp();
+}
+//# sourceMappingURL=index.js.map
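The trailing `if` in the entry point falls back to help output when no subcommand was given: `process.argv.slice(2)` drops the node binary and script path, so an empty remainder means the user typed bare `synaphex`. That check in isolation (`shouldShowHelp` is an illustrative name, not an export of the package):

```javascript
// True when argv carries no arguments beyond the interpreter and script path.
function shouldShowHelp(argv) {
  return argv.slice(2).length === 0;
}
```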
package/dist/index.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"index.js","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":";;;AAEA,yCAAoC;AACpC,0CAAuC;AACvC,wDAA0D;AAC1D,wDAA0D;AAC1D,0DAA4D;AAE5D,MAAM,OAAO,GAAG,IAAI,mBAAO,EAAE,CAAC;AAE9B,OAAO;KACJ,IAAI,CAAC,UAAU,CAAC;KAChB,WAAW,CAAC,wEAAwE,CAAC;KACrF,OAAO,CAAC,OAAO,CAAC,CAAC;AAEpB,OAAO;KACJ,OAAO,CAAC,qBAAqB,CAAC;KAC9B,WAAW,CAAC,gEAAgE,CAAC;KAC7E,MAAM,CAAC,WAAI,CAAC,CAAC;AAEhB,OAAO;KACJ,OAAO,CAAC,uBAAuB,CAAC;KAChC,WAAW,CAAC,8CAA8C,CAAC;KAC3D,MAAM,CAAC,8BAAgB,CAAC,CAAC;AAE5B,OAAO;KACJ,OAAO,CAAC,uBAAuB,CAAC;KAChC,WAAW,CAAC,sCAAsC,CAAC;KACnD,MAAM,CAAC,8BAAgB,CAAC,CAAC;AAE5B,OAAO;KACJ,OAAO,CAAC,wBAAwB,CAAC;KACjC,WAAW,CAAC,yCAAyC,CAAC;KACtD,MAAM,CAAC,gCAAiB,CAAC,CAAC;AAE7B,OAAO,CAAC,KAAK,CAAC,OAAO,CAAC,IAAI,CAAC,CAAC;AAE5B,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,KAAK,CAAC,CAAC,CAAC,CAAC,MAAM,EAAE,CAAC;IAClC,OAAO,CAAC,UAAU,EAAE,CAAC;AACvB,CAAC"}
package/dist/lib/memory.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"memory.d.ts","sourceRoot":"","sources":["../../src/lib/memory.ts"],"names":[],"mappings":"AAIA,wBAAsB,UAAU,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,MAAM,CAAC,CA0BrE;AAED,wBAAsB,cAAc,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,MAAM,CAAC,CAQzE"}
package/dist/lib/memory.js
ADDED
@@ -0,0 +1,39 @@
+"use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.loadMemory = loadMemory;
+exports.getMemoryIndex = getMemoryIndex;
+const fs_extra_1 = __importDefault(require("fs-extra"));
+const path_1 = __importDefault(require("path"));
+const paths_1 = require("./paths");
+async function loadMemory(projectName) {
+    const memoryDir = paths_1.Paths.memory(projectName);
+    if (!await fs_extra_1.default.pathExists(memoryDir)) {
+        return '# No project memory found\n';
+    }
+    // Read all .md files in memory directory
+    const files = await fs_extra_1.default.readdir(memoryDir);
+    const mdFiles = files
+        .filter((f) => f.endsWith('.md') && f !== 'MEMORY.md')
+        .sort();
+    if (mdFiles.length === 0) {
+        return '# No memory entries found\n';
+    }
+    let combinedContent = '# Project Memory\n\n';
+    for (const file of mdFiles) {
+        const filePath = path_1.default.join(memoryDir, file);
+        const content = await fs_extra_1.default.readFile(filePath, 'utf-8');
+        combinedContent += `## ${file}\n\n${content}\n\n---\n\n`;
+    }
+    return combinedContent;
+}
+async function getMemoryIndex(projectName) {
+    const indexPath = paths_1.Paths.memoryIndex(projectName);
+    if (!await fs_extra_1.default.pathExists(indexPath)) {
+        return '# Memory Index Not Found\n';
+    }
+    return fs_extra_1.default.readFile(indexPath, 'utf-8');
+}
+//# sourceMappingURL=memory.js.map
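`loadMemory` concatenates every Markdown file in `memory/` except the `MEMORY.md` index, in sorted order. The selection step on its own, with illustrative file names:

```javascript
// Same filter and ordering as loadMemory: keep .md files, drop the index,
// sort lexicographically for a stable concatenation order.
const files = ['overview.md', 'MEMORY.md', 'notes.txt', 'api.md'];
const mdFiles = files
  .filter((f) => f.endsWith('.md') && f !== 'MEMORY.md')
  .sort();
```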
package/dist/lib/memory.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"memory.js","sourceRoot":"","sources":["../../src/lib/memory.ts"],"names":[],"mappings":";;;;;AAIA,gCA0BC;AAED,wCAQC;AAxCD,wDAA0B;AAC1B,gDAAwB;AACxB,mCAAgC;AAEzB,KAAK,UAAU,UAAU,CAAC,WAAmB;IAClD,MAAM,SAAS,GAAG,aAAK,CAAC,MAAM,CAAC,WAAW,CAAC,CAAC;IAE5C,IAAI,CAAC,MAAM,kBAAE,CAAC,UAAU,CAAC,SAAS,CAAC,EAAE,CAAC;QACpC,OAAO,6BAA6B,CAAC;IACvC,CAAC;IAED,yCAAyC;IACzC,MAAM,KAAK,GAAG,MAAM,kBAAE,CAAC,OAAO,CAAC,SAAS,CAAC,CAAC;IAC1C,MAAM,OAAO,GAAG,KAAK;SAClB,MAAM,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,QAAQ,CAAC,KAAK,CAAC,IAAI,CAAC,KAAK,WAAW,CAAC;SACrD,IAAI,EAAE,CAAC;IAEV,IAAI,OAAO,CAAC,MAAM,KAAK,CAAC,EAAE,CAAC;QACzB,OAAO,6BAA6B,CAAC;IACvC,CAAC;IAED,IAAI,eAAe,GAAG,sBAAsB,CAAC;IAE7C,KAAK,MAAM,IAAI,IAAI,OAAO,EAAE,CAAC;QAC3B,MAAM,QAAQ,GAAG,cAAI,CAAC,IAAI,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC;QAC5C,MAAM,OAAO,GAAG,MAAM,kBAAE,CAAC,QAAQ,CAAC,QAAQ,EAAE,OAAO,CAAC,CAAC;QACrD,eAAe,IAAI,MAAM,IAAI,OAAO,OAAO,aAAa,CAAC;IAC3D,CAAC;IAED,OAAO,eAAe,CAAC;AACzB,CAAC;AAEM,KAAK,UAAU,cAAc,CAAC,WAAmB;IACtD,MAAM,SAAS,GAAG,aAAK,CAAC,WAAW,CAAC,WAAW,CAAC,CAAC;IAEjD,IAAI,CAAC,MAAM,kBAAE,CAAC,UAAU,CAAC,SAAS,CAAC,EAAE,CAAC;QACpC,OAAO,4BAA4B,CAAC;IACtC,CAAC;IAED,OAAO,kBAAE,CAAC,QAAQ,CAAC,SAAS,EAAE,OAAO,CAAC,CAAC;AACzC,CAAC"}
package/dist/lib/output.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"output.d.ts","sourceRoot":"","sources":["../../src/lib/output.ts"],"names":[],"mappings":"AAIA,OAAO,EAAE,cAAc,EAAE,MAAM,UAAU,CAAC;AAmD1C,wBAAsB,UAAU,CAAC,WAAW,EAAE,MAAM,EAAE,MAAM,EAAE,cAAc,GAAG,OAAO,CAAC,MAAM,CAAC,CAmB7F;AAED,wBAAsB,WAAW,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,MAAM,EAAE,CAAC,CAYxE"}
package/dist/lib/output.js
ADDED
@@ -0,0 +1,77 @@
+"use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.saveOutput = saveOutput;
+exports.listOutputs = listOutputs;
+const fs_extra_1 = __importDefault(require("fs-extra"));
+const path_1 = __importDefault(require("path"));
+const crypto_1 = require("crypto");
+const paths_1 = require("./paths");
+function generateRunId() {
+    const now = new Date();
+    const timestamp = now
+        .toISOString()
+        .replace(/[-:]/g, '')
+        .replace(/\.\d{3}/, '');
+    const randomPart = (0, crypto_1.randomBytes)(4).toString('hex');
+    return `${timestamp.slice(0, 15)}_${randomPart}`;
+}
+function formatMarkdownOutput(output) {
+    const { command, task, skip_researcher, timestamp, stages } = output;
+    let markdown = `# Synaphex ${command.toUpperCase()} Pipeline Output
+
+**Run ID**: ${output.run_id}
+**Timestamp**: ${timestamp}
+**Task**: ${task}
+**Skip Researcher**: ${skip_researcher ? 'Yes' : 'No'}
+
+---
+
+`;
+    if (stages.examiner) {
+        markdown += `## Stage 1: Examiner\n\n${stages.examiner}\n\n---\n\n`;
+    }
+    if (stages.researcher && !skip_researcher) {
+        markdown += `## Stage 2: Researcher\n\n${stages.researcher}\n\n---\n\n`;
+    }
+    if (stages.coder) {
+        markdown += `## Stage ${skip_researcher ? '2' : '3'}: Coder\n\n${stages.coder}\n\n---\n\n`;
+    }
+    if (stages.questioner) {
+        markdown += `## Stage ${skip_researcher ? '3' : '4'}: Questioner\n\n${stages.questioner}\n\n---\n\n`;
+    }
+    if (stages.reviewer) {
+        markdown += `## Stage ${skip_researcher ? '4' : '5'}: Reviewer\n\n${stages.reviewer}\n\n---\n\n`;
+    }
+    markdown += `Generated by Synaphex on ${new Date().toLocaleString()}`;
+    return markdown;
+}
+async function saveOutput(projectName, output) {
+    const outputsDir = paths_1.Paths.outputs(projectName);
+    await fs_extra_1.default.ensureDir(outputsDir);
+    const runId = generateRunId();
+    const outputWithId = { ...output, run_id: runId };
+    // Save JSON
+    const jsonPath = path_1.default.join(outputsDir, `run_${runId}.json`);
+    await fs_extra_1.default.writeJSON(jsonPath, outputWithId, { spaces: 2 });
+    // Save Markdown
+    const mdContent = formatMarkdownOutput(outputWithId);
+    const mdPath = path_1.default.join(outputsDir, `run_${runId}.md`);
+    await fs_extra_1.default.writeFile(mdPath, mdContent, 'utf-8');
+    console.log(`✓ Output saved: ${jsonPath}`);
+    return runId;
+}
+async function listOutputs(projectName) {
+    const outputsDir = paths_1.Paths.outputs(projectName);
+    if (!await fs_extra_1.default.pathExists(outputsDir)) {
+        return [];
+    }
+    const files = await fs_extra_1.default.readdir(outputsDir);
+    return files
+        .filter((f) => f.endsWith('.json'))
+        .sort()
+        .reverse();
+}
+//# sourceMappingURL=output.js.map
package/dist/lib/output.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"output.js","sourceRoot":"","sources":["../../src/lib/output.ts"],"names":[],"mappings":";;;;;AAuDA,gCAmBC;AAED,kCAYC;AAxFD,wDAA0B;AAC1B,gDAAwB;AACxB,mCAAqC;AACrC,mCAAgC;AAGhC,SAAS,aAAa;IACpB,MAAM,GAAG,GAAG,IAAI,IAAI,EAAE,CAAC;IACvB,MAAM,SAAS,GAAG,GAAG;SAClB,WAAW,EAAE;SACb,OAAO,CAAC,OAAO,EAAE,EAAE,CAAC;SACpB,OAAO,CAAC,SAAS,EAAE,EAAE,CAAC,CAAC;IAC1B,MAAM,UAAU,GAAG,IAAA,oBAAW,EAAC,CAAC,CAAC,CAAC,QAAQ,CAAC,KAAK,CAAC,CAAC;IAClD,OAAO,GAAG,SAAS,CAAC,KAAK,CAAC,CAAC,EAAE,EAAE,CAAC,IAAI,UAAU,EAAE,CAAC;AACnD,CAAC;AAED,SAAS,oBAAoB,CAAC,MAAsB;IAClD,MAAM,EAAE,OAAO,EAAE,IAAI,EAAE,eAAe,EAAE,SAAS,EAAE,MAAM,EAAE,GAAG,MAAM,CAAC;IAErE,IAAI,QAAQ,GAAG,cAAc,OAAO,CAAC,WAAW,EAAE;;cAEtC,MAAM,CAAC,MAAM;iBACV,SAAS;YACd,IAAI;uBACO,eAAe,CAAC,CAAC,CAAC,KAAK,CAAC,CAAC,CAAC,IAAI;;;;CAIpD,CAAC;IAEA,IAAI,MAAM,CAAC,QAAQ,EAAE,CAAC;QACpB,QAAQ,IAAI,2BAA2B,MAAM,CAAC,QAAQ,aAAa,CAAC;IACtE,CAAC;IAED,IAAI,MAAM,CAAC,UAAU,IAAI,CAAC,eAAe,EAAE,CAAC;QAC1C,QAAQ,IAAI,6BAA6B,MAAM,CAAC,UAAU,aAAa,CAAC;IAC1E,CAAC;IAED,IAAI,MAAM,CAAC,KAAK,EAAE,CAAC;QACjB,QAAQ,IAAI,YAAY,eAAe,CAAC,CAAC,CAAC,GAAG,CAAC,CAAC,CAAC,GAAG,cAAc,MAAM,CAAC,KAAK,aAAa,CAAC;IAC7F,CAAC;IAED,IAAI,MAAM,CAAC,UAAU,EAAE,CAAC;QACtB,QAAQ,IAAI,YAAY,eAAe,CAAC,CAAC,CAAC,GAAG,CAAC,CAAC,CAAC,GAAG,mBAAmB,MAAM,CAAC,UAAU,aAAa,CAAC;IACvG,CAAC;IAED,IAAI,MAAM,CAAC,QAAQ,EAAE,CAAC;QACpB,QAAQ,IAAI,YAAY,eAAe,CAAC,CAAC,CAAC,GAAG,CAAC,CAAC,CAAC,GAAG,iBAAiB,MAAM,CAAC,QAAQ,aAAa,CAAC;IACnG,CAAC;IAED,QAAQ,IAAI,4BAA4B,IAAI,IAAI,EAAE,CAAC,cAAc,EAAE,EAAE,CAAC;IAEtE,OAAO,QAAQ,CAAC;AAClB,CAAC;AAEM,KAAK,UAAU,UAAU,CAAC,WAAmB,EAAE,MAAsB;IAC1E,MAAM,UAAU,GAAG,aAAK,CAAC,OAAO,CAAC,WAAW,CAAC,CAAC;IAC9C,MAAM,kBAAE,CAAC,SAAS,CAAC,UAAU,CAAC,CAAC;IAE/B,MAAM,KAAK,GAAG,aAAa,EAAE,CAAC;IAC9B,MAAM,YAAY,GAAG,EAAE,GAAG,MAAM,EAAE,MAAM,EAAE,KAAK,EAAE,CAAC;IAElD,YAAY;IACZ,MAAM,QAAQ,GAAG,cAAI,CAAC,IAAI,CAAC,UAAU,EAAE,OAAO,KAAK,OAAO,CAAC,CAAC;IAC5D,MAAM,kBAAE,CAAC,SAAS,CAAC,QAAQ,EAAE,YAAY,EAAE,EAAE,MAAM,EAAE,CAAC,EAAE,CAAC,CAAC;IAE1D,gBAAgB;IAChB,MAAM,SAAS,GAAG,oBAAoB,CAAC,YAAY,CAAC,CAAC;IACrD,MAAM,MAAM,GAAG,cAAI,CAAC,IAAI,CAAC,UAAU,EAAE,OAAO,KAAK,KAAK,CAAC,CAAC;IACxD,MAAM,kBAAE,CAAC,SAAS,CAAC,MAAM,EAAE,SAAS,EAAE,OAAO,CAAC,CAAC;IAE/C,OAAO,CAAC,GAAG,CAAC,mBAAmB,QAAQ,EAAE,CAAC,CAAC;IAE3C,OAAO,KAAK,CAAC;AACf,CAAC;AAEM,KAAK,UAAU,WAAW,CAAC,WAAmB;IACnD,MAAM,UAAU,GAAG,aAAK,CAAC,OAAO,CAAC,WAAW,CAAC,CAAC;IAE9C,IAAI,CAAC,MAAM,kBAAE,CAAC,UAAU,CAAC,UAAU,CAAC,EAAE,CAAC;QACrC,OAAO,EAAE,CAAC;IACZ,CAAC;IAED,MAAM,KAAK,GAAG,MAAM,kBAAE,CAAC,OAAO,CAAC,UAAU,CAAC,CAAC;IAC3C,OAAO,KAAK;SACT,MAAM,CAAC,CAAC,CAAC,EAAE,EAAE,CAAC,CAAC,CAAC,QAAQ,CAAC,OAAO,CAAC,CAAC;SAClC,IAAI,EAAE;SACN,OAAO,EAAE,CAAC;AACf,CAAC"}
package/dist/lib/paths.d.ts
ADDED
@@ -0,0 +1,11 @@
+export declare const Paths: {
+    claudeHome: string;
+    commandsDir: string;
+    projectsDir: string;
+    project: (name: string) => string;
+    memory: (name: string) => string;
+    memoryIndex: (name: string) => string;
+    config: (name: string) => string;
+    outputs: (name: string) => string;
+};
+//# sourceMappingURL=paths.d.ts.map
package/dist/lib/paths.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"paths.d.ts","sourceRoot":"","sources":["../../src/lib/paths.ts"],"names":[],"mappings":"AAMA,eAAO,MAAM,KAAK;;;;oBAKA,MAAM,KAAG,MAAM;mBAChB,MAAM,KAAG,MAAM;wBACV,MAAM,KAAG,MAAM;mBACpB,MAAM,KAAG,MAAM;oBACd,MAAM,KAAG,MAAM;CAChC,CAAC"}
package/dist/lib/paths.js
ADDED
@@ -0,0 +1,21 @@
+"use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.Paths = void 0;
+const os_1 = __importDefault(require("os"));
+const path_1 = __importDefault(require("path"));
+const CLAUDE_HOME = path_1.default.join(os_1.default.homedir(), '.claude');
+const PROJECTS_DIR = path_1.default.join(CLAUDE_HOME, 'projects');
+exports.Paths = {
+    claudeHome: CLAUDE_HOME,
+    commandsDir: path_1.default.join(CLAUDE_HOME, 'commands', 'synaphex'),
+    projectsDir: PROJECTS_DIR,
+    project: (name) => path_1.default.join(PROJECTS_DIR, name),
+    memory: (name) => path_1.default.join(PROJECTS_DIR, name, 'memory'),
+    memoryIndex: (name) => path_1.default.join(PROJECTS_DIR, name, 'memory', 'MEMORY.md'),
+    config: (name) => path_1.default.join(PROJECTS_DIR, name, 'config.md'),
+    outputs: (name) => path_1.default.join(PROJECTS_DIR, name, 'outputs'),
+};
+//# sourceMappingURL=paths.js.map
package/dist/lib/paths.js.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"paths.js","sourceRoot":"","sources":["../../src/lib/paths.ts"],"names":[],"mappings":";;;;;;AAAA,4CAAoB;AACpB,gDAAwB;AAExB,MAAM,WAAW,GAAG,cAAI,CAAC,IAAI,CAAC,YAAE,CAAC,OAAO,EAAE,EAAE,SAAS,CAAC,CAAC;AACvD,MAAM,YAAY,GAAG,cAAI,CAAC,IAAI,CAAC,WAAW,EAAE,UAAU,CAAC,CAAC;AAE3C,QAAA,KAAK,GAAG;IACnB,UAAU,EAAE,WAAW;IACvB,WAAW,EAAE,cAAI,CAAC,IAAI,CAAC,WAAW,EAAE,UAAU,EAAE,UAAU,CAAC;IAC3D,WAAW,EAAE,YAAY;IAEzB,OAAO,EAAE,CAAC,IAAY,EAAU,EAAE,CAAC,cAAI,CAAC,IAAI,CAAC,YAAY,EAAE,IAAI,CAAC;IAChE,MAAM,EAAE,CAAC,IAAY,EAAU,EAAE,CAAC,cAAI,CAAC,IAAI,CAAC,YAAY,EAAE,IAAI,EAAE,QAAQ,CAAC;IACzE,WAAW,EAAE,CAAC,IAAY,EAAU,EAAE,CAAC,cAAI,CAAC,IAAI,CAAC,YAAY,EAAE,IAAI,EAAE,QAAQ,EAAE,WAAW,CAAC;IAC3F,MAAM,EAAE,CAAC,IAAY,EAAU,EAAE,CAAC,cAAI,CAAC,IAAI,CAAC,YAAY,EAAE,IAAI,EAAE,WAAW,CAAC;IAC5E,OAAO,EAAE,CAAC,IAAY,EAAU,EAAE,CAAC,cAAI,CAAC,IAAI,CAAC,YAAY,EAAE,IAAI,EAAE,SAAS,CAAC;CAC5E,CAAC"}
package/dist/lib/project.d.ts.map
ADDED
@@ -0,0 +1 @@
+{"version":3,"file":"project.d.ts","sourceRoot":"","sources":["../../src/lib/project.ts"],"names":[],"mappings":"AAIA,wBAAsB,iBAAiB,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAAC,IAAI,CAAC,CA2E1E"}
package/dist/lib/project.js
ADDED
@@ -0,0 +1,78 @@
+"use strict";
+var __importDefault = (this && this.__importDefault) || function (mod) {
+    return (mod && mod.__esModule) ? mod : { "default": mod };
+};
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.initializeProject = initializeProject;
+const fs_extra_1 = __importDefault(require("fs-extra"));
+const path_1 = __importDefault(require("path"));
+const paths_1 = require("./paths");
+async function initializeProject(projectName) {
+    const projectPath = paths_1.Paths.project(projectName);
+    const memoryPath = paths_1.Paths.memory(projectName);
+    const outputsPath = paths_1.Paths.outputs(projectName);
+    const configPath = paths_1.Paths.config(projectName);
+    const memoryIndexPath = paths_1.Paths.memoryIndex(projectName);
+    // Create directories
+    await fs_extra_1.default.ensureDir(memoryPath);
+    await fs_extra_1.default.ensureDir(outputsPath);
+    // Create config.md with YAML front-matter
+    const now = new Date().toISOString();
+    const configContent = `---
+project: ${projectName}
+provider: claude
+model: claude-opus-4-5
+created_at: ${now}
+---
+
+# Project Configuration
+
+Edit this file to change the LLM provider and model for this project.
+
+## Supported Providers
+
+- **claude** — Anthropic Claude (requires ANTHROPIC_API_KEY)
+- **openai** — OpenAI-compatible including Copilot (requires OPENAI_API_KEY)
+- **gemini** — Google Gemini (requires GEMINI_API_KEY)
+
+## Current Settings
+
+Provider: claude
+Model: claude-opus-4-5
+Created: ${now}
+`;
+    await fs_extra_1.default.writeFile(configPath, configContent, 'utf-8');
+    // Create memory/MEMORY.md index
+    const memoryIndexContent = `# Project Memory Index: ${projectName}
+
+Add memory entries below as the project evolves.
+
+## Entries
+
+- [Overview](overview.md) — project overview and context
+`;
+    await fs_extra_1.default.writeFile(memoryIndexPath, memoryIndexContent, 'utf-8');
+    // Create memory/overview.md
+    const overviewContent = `---
+name: Project Overview
+description: Initial project overview
+type: project
+---
+
+# ${projectName}
+
+Project initialized at ${now}.
+
+## Quick Links
+
+- **Config**: ../config.md
+- **Outputs**: ../outputs/ (pipeline run results)
+`;
+    const overviewPath = path_1.default.join(memoryPath, 'overview.md');
+    await fs_extra_1.default.writeFile(overviewPath, overviewContent, 'utf-8');
+    console.log(`✓ Project initialized: ${projectPath}`);
+    console.log(`  - Config: ${configPath}`);
+    console.log(`  - Memory: ${memoryPath}`);
+    console.log(`  - Outputs: ${outputsPath}`);
+}
+//# sourceMappingURL=project.js.map
@@ -0,0 +1 @@
|
|
|
1
|
+
{"version":3,"file":"project.js","sourceRoot":"","sources":["../../src/lib/project.ts"],"names":[],"mappings":";;;;;AAIA,8CA2EC;AA/ED,wDAA0B;AAC1B,gDAAwB;AACxB,mCAAgC;AAEzB,KAAK,UAAU,iBAAiB,CAAC,WAAmB;IACzD,MAAM,WAAW,GAAG,aAAK,CAAC,OAAO,CAAC,WAAW,CAAC,CAAC;IAC/C,MAAM,UAAU,GAAG,aAAK,CAAC,MAAM,CAAC,WAAW,CAAC,CAAC;IAC7C,MAAM,WAAW,GAAG,aAAK,CAAC,OAAO,CAAC,WAAW,CAAC,CAAC;IAC/C,MAAM,UAAU,GAAG,aAAK,CAAC,MAAM,CAAC,WAAW,CAAC,CAAC;IAC7C,MAAM,eAAe,GAAG,aAAK,CAAC,WAAW,CAAC,WAAW,CAAC,CAAC;IAEvD,qBAAqB;IACrB,MAAM,kBAAE,CAAC,SAAS,CAAC,UAAU,CAAC,CAAC;IAC/B,MAAM,kBAAE,CAAC,SAAS,CAAC,WAAW,CAAC,CAAC;IAEhC,0CAA0C;IAC1C,MAAM,GAAG,GAAG,IAAI,IAAI,EAAE,CAAC,WAAW,EAAE,CAAC;IACrC,MAAM,aAAa,GAAG;WACb,WAAW;;;cAGR,GAAG;;;;;;;;;;;;;;;;;WAiBN,GAAG;CACb,CAAC;IAEA,MAAM,kBAAE,CAAC,SAAS,CAAC,UAAU,EAAE,aAAa,EAAE,OAAO,CAAC,CAAC;IAEvD,gCAAgC;IAChC,MAAM,kBAAkB,GAAG,2BAA2B,WAAW;;;;;;;CAOlE,CAAC;IAEA,MAAM,kBAAE,CAAC,SAAS,CAAC,eAAe,EAAE,kBAAkB,EAAE,OAAO,CAAC,CAAC;IAEjE,4BAA4B;IAC5B,MAAM,eAAe,GAAG;;;;;;IAMtB,WAAW;;yBAEU,GAAG;;;;;;CAM3B,CAAC;IAEA,MAAM,YAAY,GAAG,cAAI,CAAC,IAAI,CAAC,UAAU,EAAE,aAAa,CAAC,CAAC;IAC1D,MAAM,kBAAE,CAAC,SAAS,CAAC,YAAY,EAAE,eAAe,EAAE,OAAO,CAAC,CAAC;IAE3D,OAAO,CAAC,GAAG,CAAC,0BAA0B,WAAW,EAAE,CAAC,CAAC;IACrD,OAAO,CAAC,GAAG,CAAC,eAAe,UAAU,EAAE,CAAC,CAAC;IACzC,OAAO,CAAC,GAAG,CAAC,eAAe,UAAU,EAAE,CAAC,CAAC;IACzC,OAAO,CAAC,GAAG,CAAC,gBAAgB,WAAW,EAAE,CAAC,CAAC;AAC7C,CAAC"}
|
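The `initializeProject` code above writes the project's settings into `config.md` as YAML front-matter. As a minimal sketch of how a consumer could read those settings back (a hypothetical reader, not part of the package; the sample content mirrors the template above), the front-matter can be parsed without a YAML library:

```javascript
// Hypothetical example: read provider/model back out of a generated
// config.md by parsing its YAML front-matter with a minimal parser.
const configContent = [
  '---',
  'project: demo',
  'provider: claude',
  'model: claude-opus-4-5',
  'created_at: 2024-01-01T00:00:00.000Z',
  '---',
  '',
  '# Project Configuration',
].join('\n');

function parseFrontMatter(text) {
  // Capture everything between the opening and closing '---' fences.
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const config = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    // Split on the first colon only, so timestamp values stay intact.
    if (idx > -1) config[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return config;
}

const parsed = parseFrontMatter(configContent);
console.log(parsed.provider, parsed.model); // prints "claude claude-opus-4-5"
```

A real implementation would read the file with `fs-extra` first; the parsing step itself is the same.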
package/dist/types.d.ts
ADDED
|
@@ -0,0 +1,31 @@
|
|
|
1
|
+
export interface ProjectConfig {
|
|
2
|
+
project: string;
|
|
3
|
+
provider: 'claude' | 'openai' | 'gemini';
|
|
4
|
+
model: string;
|
|
5
|
+
created_at: string;
|
|
6
|
+
}
|
|
7
|
+
export interface PipelineStages {
|
|
8
|
+
examiner?: string;
|
|
9
|
+
researcher?: string;
|
|
10
|
+
coder?: string;
|
|
11
|
+
questioner?: string;
|
|
12
|
+
reviewer?: string;
|
|
13
|
+
}
|
|
14
|
+
export interface PipelineOutput {
|
|
15
|
+
run_id: string;
|
|
16
|
+
command: 'query' | 'fix';
|
|
17
|
+
task: string;
|
|
18
|
+
skip_researcher: boolean;
|
|
19
|
+
timestamp: string;
|
|
20
|
+
stages: PipelineStages;
|
|
21
|
+
}
|
|
22
|
+
export interface MemoryEntry {
|
|
23
|
+
title: string;
|
|
24
|
+
file: string;
|
|
25
|
+
summary: string;
|
|
26
|
+
}
|
|
27
|
+
export interface MemoryIndex {
|
|
28
|
+
project: string;
|
|
29
|
+
entries: MemoryEntry[];
|
|
30
|
+
}
|
|
31
|
+
//# sourceMappingURL=types.d.ts.map
|
|
@@ -0,0 +1 @@
|
|
|
1
|
+
{"version":3,"file":"types.d.ts","sourceRoot":"","sources":["../src/types.ts"],"names":[],"mappings":"AAAA,MAAM,WAAW,aAAa;IAC5B,OAAO,EAAE,MAAM,CAAC;IAChB,QAAQ,EAAE,QAAQ,GAAG,QAAQ,GAAG,QAAQ,CAAC;IACzC,KAAK,EAAE,MAAM,CAAC;IACd,UAAU,EAAE,MAAM,CAAC;CACpB;AAED,MAAM,WAAW,cAAc;IAC7B,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,UAAU,CAAC,EAAE,MAAM,CAAC;IACpB,KAAK,CAAC,EAAE,MAAM,CAAC;IACf,UAAU,CAAC,EAAE,MAAM,CAAC;IACpB,QAAQ,CAAC,EAAE,MAAM,CAAC;CACnB;AAED,MAAM,WAAW,cAAc;IAC7B,MAAM,EAAE,MAAM,CAAC;IACf,OAAO,EAAE,OAAO,GAAG,KAAK,CAAC;IACzB,IAAI,EAAE,MAAM,CAAC;IACb,eAAe,EAAE,OAAO,CAAC;IACzB,SAAS,EAAE,MAAM,CAAC;IAClB,MAAM,EAAE,cAAc,CAAC;CACxB;AAED,MAAM,WAAW,WAAW;IAC1B,KAAK,EAAE,MAAM,CAAC;IACd,IAAI,EAAE,MAAM,CAAC;IACb,OAAO,EAAE,MAAM,CAAC;CACjB;AAED,MAAM,WAAW,WAAW;IAC1B,OAAO,EAAE,MAAM,CAAC;IAChB,OAAO,EAAE,WAAW,EAAE,CAAC;CACxB"}
|
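As a sketch of how the interfaces declared in `types.d.ts` fit together, here is a hypothetical `PipelineOutput`-shaped payload built in plain JavaScript (the package ships compiled JS) and serialized the way the `save-output` command expects JSON — all field values here are placeholders:

```javascript
// Hypothetical example: an object matching the PipelineOutput shape
// declared above, serialized for use as the JSON argument to save-output.
const output = {
  run_id: `run-${Date.now()}`,
  command: 'query',                 // union type: 'query' | 'fix'
  task: 'Add input validation to the login form',
  skip_researcher: false,
  timestamp: new Date().toISOString(),
  stages: {                         // every stage field is optional
    examiner: '...context report...',
    coder: '...implementation...',
    reviewer: '...final verdict...',
  },
};

const payload = JSON.stringify(output);
console.log(JSON.parse(payload).command); // prints "query"
```

Because `PipelineStages` marks every stage optional, a fast fix run that skipped the Researcher simply omits that key rather than storing an empty string.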
package/dist/types.js
ADDED
|
@@ -0,0 +1 @@
|
|
|
1
|
+
{"version":3,"file":"types.js","sourceRoot":"","sources":["../src/types.ts"],"names":[],"mappings":""}
|
package/package.json
ADDED
|
@@ -0,0 +1,50 @@
|
|
|
1
|
+
{
|
|
2
|
+
"name": "@ceyhunbilir/synaphex",
|
|
3
|
+
"version": "1.0.1",
|
|
4
|
+
"description": "Multi-agent AI pipeline as a Claude Code skill — Examiner → Researcher → Coder → Questioner → Reviewer",
|
|
5
|
+
"main": "dist/index.js",
|
|
6
|
+
"bin": {
|
|
7
|
+
"synaphex": "dist/index.js"
|
|
8
|
+
},
|
|
9
|
+
"scripts": {
|
|
10
|
+
"build": "tsc",
|
|
11
|
+
"postinstall": "node scripts/postinstall.js",
|
|
12
|
+
"prepublishOnly": "npm run build",
|
|
13
|
+
"prepare": "husky install 2>/dev/null || true",
|
|
14
|
+
"lint:commit": "commitlint --from HEAD~1 --to HEAD"
|
|
15
|
+
},
|
|
16
|
+
"files": [
|
|
17
|
+
"dist/",
|
|
18
|
+
"skills/",
|
|
19
|
+
"scripts/postinstall.js"
|
|
20
|
+
],
|
|
21
|
+
"keywords": [
|
|
22
|
+
"claude-code",
|
|
23
|
+
"ai-pipeline",
|
|
24
|
+
"multi-agent",
|
|
25
|
+
"skill",
|
|
26
|
+
"claude"
|
|
27
|
+
],
|
|
28
|
+
"author": "ceyhunbilir",
|
|
29
|
+
"license": "MIT",
|
|
30
|
+
"repository": {
|
|
31
|
+
"type": "git",
|
|
32
|
+
"url": "https://github.com/ceyhunbilir/synaphex.git"
|
|
33
|
+
},
|
|
34
|
+
"homepage": "https://github.com/ceyhunbilir/synaphex",
|
|
35
|
+
"dependencies": {
|
|
36
|
+
"commander": "^12.0.0",
|
|
37
|
+
"fs-extra": "^11.2.0"
|
|
38
|
+
},
|
|
39
|
+
"devDependencies": {
|
|
40
|
+
"@commitlint/cli": "^18.0.0",
|
|
41
|
+
"@commitlint/config-conventional": "^18.0.0",
|
|
42
|
+
"@types/fs-extra": "^11.0.4",
|
|
43
|
+
"@types/node": "^20.0.0",
|
|
44
|
+
"husky": "^8.0.0",
|
|
45
|
+
"typescript": "^5.4.0"
|
|
46
|
+
},
|
|
47
|
+
"engines": {
|
|
48
|
+
"node": ">=18.0.0"
|
|
49
|
+
}
|
|
50
|
+
}
|
|
@@ -0,0 +1,43 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
'use strict';
|
|
3
|
+
|
|
4
|
+
const fs = require('fs');
|
|
5
|
+
const path = require('path');
|
|
6
|
+
const os = require('os');
|
|
7
|
+
|
|
8
|
+
const SKILLS_SRC = path.join(__dirname, '..', 'skills');
|
|
9
|
+
const COMMANDS_DEST = path.join(os.homedir(), '.claude', 'commands', 'synaphex');
|
|
10
|
+
|
|
11
|
+
try {
|
|
12
|
+
// Create destination directory
|
|
13
|
+
fs.mkdirSync(COMMANDS_DEST, { recursive: true });
|
|
14
|
+
|
|
15
|
+
// Find all .md files in skills directory
|
|
16
|
+
const files = fs.readdirSync(SKILLS_SRC).filter((f) => f.endsWith('.md'));
|
|
17
|
+
|
|
18
|
+
if (files.length === 0) {
|
|
19
|
+
console.warn('[synaphex] Warning: No skill files found in skills/ directory');
|
|
20
|
+
process.exit(0);
|
|
21
|
+
}
|
|
22
|
+
|
|
23
|
+
// Copy each skill file
|
|
24
|
+
for (const file of files) {
|
|
25
|
+
const src = path.join(SKILLS_SRC, file);
|
|
26
|
+
const dest = path.join(COMMANDS_DEST, file);
|
|
27
|
+
fs.copyFileSync(src, dest);
|
|
28
|
+
console.log(`[synaphex] ✓ Installed skill: ~/.claude/commands/synaphex/${file}`);
|
|
29
|
+
}
|
|
30
|
+
|
|
31
|
+
console.log('\n[synaphex] Skills installed successfully!');
|
|
32
|
+
console.log('[synaphex] Available commands:');
|
|
33
|
+
console.log(' /synaphex/init <project>');
|
|
34
|
+
console.log(' /synaphex/query <task>');
|
|
35
|
+
console.log(' /synaphex/fix <task>');
|
|
36
|
+
console.log('\n[synaphex] Documentation: https://github.com/ceyhunbilir/synaphex');
|
|
37
|
+
} catch (err) {
|
|
38
|
+
// Non-fatal error — a postinstall failure should not block npm install,
// so warn and exit 0 instead of throwing
|
|
39
|
+
console.warn('[synaphex] ⚠️ Warning: Could not auto-install skills:', err.message);
|
|
40
|
+
console.warn('[synaphex] You can manually copy them:');
|
|
41
|
+
console.warn(` cp ${SKILLS_SRC}/*.md ~/.claude/commands/synaphex/`);
|
|
42
|
+
process.exit(0);
|
|
43
|
+
}
|
package/skills/fix.md
ADDED
|
@@ -0,0 +1,188 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: "Run Synaphex fix pipeline with optional researcher skip: Examiner → [Researcher?] → Coder → Questioner → Reviewer"
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
# Synaphex Fix Pipeline
|
|
6
|
+
|
|
7
|
+
**Fix Request:** $ARGUMENTS
|
|
8
|
+
|
|
9
|
+
You are running the Synaphex fix pipeline. This is tailored for targeted bug fixes or code improvements where you may not need extensive research.
|
|
10
|
+
|
|
11
|
+
---
|
|
12
|
+
|
|
13
|
+
## Step 1: Ask About Research Phase
|
|
14
|
+
|
|
15
|
+
Before starting, I need your input on whether to include the Researcher stage:
|
|
16
|
+
|
|
17
|
+
> **Would you like to include the Researcher phase?**
|
|
18
|
+
>
|
|
19
|
+
> - **Yes** — Full pipeline (Examiner → Researcher → Coder → Questioner → Reviewer)
|
|
20
|
+
> - More thorough, researches best practices and solutions
|
|
21
|
+
> - Takes longer
|
|
22
|
+
> - Good for complex fixes or unclear requirements
|
|
23
|
+
>
|
|
24
|
+
> - **No** — Fast pipeline (Examiner → Coder → Questioner → Reviewer)
|
|
25
|
+
> - Skips research, goes straight to implementation
|
|
26
|
+
> - Faster, assumes you know what to do
|
|
27
|
+
> - Good for straightforward fixes or when you're confident in the approach
|
|
28
|
+
|
|
29
|
+
Please respond with **Yes** or **No**.
|
|
30
|
+
|
|
31
|
+
---
|
|
32
|
+
|
|
33
|
+
## Pipeline Execution (Based on Your Choice)
|
|
34
|
+
|
|
35
|
+
Based on your response above, I'll now run the appropriate pipeline.
|
|
36
|
+
|
|
37
|
+
### Stage 1: EXAMINER — Context Analysis
|
|
38
|
+
|
|
39
|
+
**Role:** Project context analyst. Understand what needs to be fixed.
|
|
40
|
+
|
|
41
|
+
**Your Instructions:**
|
|
42
|
+
|
|
43
|
+
1. **Analyze the fix request.** What exactly needs to be fixed or improved?
|
|
44
|
+
|
|
45
|
+
2. **Examine the relevant code.** Read and understand:
|
|
46
|
+
- The code that needs fixing
|
|
47
|
+
- Related code that might be affected
|
|
48
|
+
- Existing patterns and conventions
|
|
49
|
+
|
|
50
|
+
3. **Load project memory** (if available):
|
|
51
|
+
```bash
|
|
52
|
+
synaphex load-memory <project-name>
|
|
53
|
+
```
|
|
54
|
+
|
|
55
|
+
4. **Identify the problem** and its context.
|
|
56
|
+
|
|
57
|
+
**Produce:** Context report with sections:
|
|
58
|
+
- **Problem Statement** — What's broken or needs improvement
|
|
59
|
+
- **Root Cause Analysis** — Why is it happening?
|
|
60
|
+
- **Affected Code** — Files and functions involved
|
|
61
|
+
- **Current Patterns** — How similar issues are handled
|
|
62
|
+
- **Impact Assessment** — What breaks if not fixed?
|
|
63
|
+
- **Fix Strategy Outline** — Initial approach
|
|
64
|
+
|
|
65
|
+
---
|
|
66
|
+
|
|
67
|
+
### Stage 2 (OPTIONAL): RESEARCHER — Best Practices
|
|
68
|
+
|
|
69
|
+
**⚠️ SKIP THIS STAGE IF USER CHOSE "NO"**
|
|
70
|
+
|
|
71
|
+
**Role:** Research analyst. Find the best approach to fix this.
|
|
72
|
+
|
|
73
|
+
**Your Instructions:**
|
|
74
|
+
|
|
75
|
+
1. **Research the problem domain.**
|
|
76
|
+
- What are standard solutions for this type of issue?
|
|
77
|
+
- Are there libraries or patterns that help?
|
|
78
|
+
|
|
79
|
+
2. **Evaluate fix approaches.**
|
|
80
|
+
- What are the options?
|
|
81
|
+
- What are trade-offs?
|
|
82
|
+
- What's most maintainable?
|
|
83
|
+
|
|
84
|
+
3. **Document best practices** for this type of fix.
|
|
85
|
+
|
|
86
|
+
**Produce:** Research report with sections:
|
|
87
|
+
- **Recommended Fix** — Detailed approach
|
|
88
|
+
- **Why This Approach** — Rationale
|
|
89
|
+
- **Best Practices** — How to do this right
|
|
90
|
+
- **Potential Issues** — What to watch out for
|
|
91
|
+
- **Reference Examples** — Similar patterns in codebase
|
|
92
|
+
|
|
93
|
+
---
|
|
94
|
+
|
|
95
|
+
### Stage 3: CODER — Implementation
|
|
96
|
+
|
|
97
|
+
**Role:** Senior developer. Write the fix.
|
|
98
|
+
|
|
99
|
+
**Your Instructions:**
|
|
100
|
+
|
|
101
|
+
1. **Implement the fix.** Based on Examiner (and Researcher if included):
|
|
102
|
+
- Write clean, tested code
|
|
103
|
+
- Follow existing patterns
|
|
104
|
+
- Include error handling
|
|
105
|
+
- Add comments where needed
|
|
106
|
+
|
|
107
|
+
2. **Keep it focused.** This is a fix, not a refactor.
|
|
108
|
+
- Don't change unrelated code
|
|
109
|
+
- Don't add features beyond the fix
|
|
110
|
+
- Minimal necessary changes
|
|
111
|
+
|
|
112
|
+
3. **Provide complete code** for all files that need changes.
|
|
113
|
+
|
|
114
|
+
**Produce:** Implementation with sections:
|
|
115
|
+
- **Code Changes** — Exact changes needed
|
|
116
|
+
- **Why This Fixes It** — Explanation of the fix
|
|
117
|
+
- **Side Effects** — Any other code that might be affected
|
|
118
|
+
- **Testing** — How to verify the fix works
|
|
119
|
+
|
|
120
|
+
---
|
|
121
|
+
|
|
122
|
+
### Stage 4: QUESTIONER — Critical Review
|
|
123
|
+
|
|
124
|
+
**Role:** Critical reviewer. Challenge the fix.
|
|
125
|
+
|
|
126
|
+
**Your Instructions:**
|
|
127
|
+
|
|
128
|
+
1. **Question the implementation.**
|
|
129
|
+
- Does it fully fix the issue?
|
|
130
|
+
- Are there edge cases?
|
|
131
|
+
- Could it break something else?
|
|
132
|
+
- Is error handling sufficient?
|
|
133
|
+
|
|
134
|
+
2. **Identify improvements** and issues.
|
|
135
|
+
|
|
136
|
+
3. **Suggest concrete fixes** with code examples.
|
|
137
|
+
|
|
138
|
+
**Produce:** Critique with sections:
|
|
139
|
+
- **Does It Fix The Problem?** — Yes/no assessment
|
|
140
|
+
- **Edge Cases** — Scenarios not covered
|
|
141
|
+
- **Potential Issues** — Bugs or problems the fix could introduce
|
|
142
|
+
- **Improvements** — Better approaches
|
|
143
|
+
- **Revised Code** — Specific fixes for issues found
|
|
144
|
+
|
|
145
|
+
---
|
|
146
|
+
|
|
147
|
+
### Stage 5: REVIEWER — Final Decision
|
|
148
|
+
|
|
149
|
+
**Role:** Senior architect. Final quality gate.
|
|
150
|
+
|
|
151
|
+
**Your Instructions:**
|
|
152
|
+
|
|
153
|
+
1. **Review all prior outputs.**
|
|
154
|
+
- Is the fix correct?
|
|
155
|
+
- Is it complete?
|
|
156
|
+
- Is all Questioner feedback incorporated?
|
|
157
|
+
|
|
158
|
+
2. **Make final decision:** APPROVED or NEEDS REVISION
|
|
159
|
+
|
|
160
|
+
3. **Deliver the final implementation.**
|
|
161
|
+
|
|
162
|
+
**Produce:** Final assessment with sections:
|
|
163
|
+
- **Final Verdict** — APPROVED or NEEDS REVISION
|
|
164
|
+
- **Complete Implementation** — Final fixed code
|
|
165
|
+
- **Quality Verification** — Checklist of concerns addressed
|
|
166
|
+
- **Impact Summary** — What changed and why
|
|
167
|
+
- **Next Steps** (if any)
|
|
168
|
+
|
|
169
|
+
---
|
|
170
|
+
|
|
171
|
+
## Save Pipeline Output
|
|
172
|
+
|
|
173
|
+
After all stages complete:
|
|
174
|
+
|
|
175
|
+
```bash
|
|
176
|
+
synaphex save-output <project-name> '{"command":"fix","task":"$ARGUMENTS","skip_researcher":false,"timestamp":"$(date -u +%Y-%m-%dT%H:%M:%SZ)","stages":{"examiner":"...","coder":"...","questioner":"...","reviewer":"..."}}'
|
|
177
|
+
```
|
|
178
|
+
|
|
179
|
+
(Set `skip_researcher` to `true` if the user chose to skip the research phase.)
|
|
180
|
+
|
|
181
|
+
---
|
|
182
|
+
|
|
183
|
+
## Important Notes
|
|
184
|
+
|
|
185
|
+
- **Focused fixes:** This is for targeted fixes, not full rewrites
|
|
186
|
+
- **Hard failure guards:** Stop immediately if any stage fails
|
|
187
|
+
- **Quick feedback loops:** Fix pipeline is faster than query because research is optional
|
|
188
|
+
- **Researcher inclusion:** Your choice — skip it when you're confident in the approach, include it for thorough vetting
|
package/skills/init.md
ADDED
|
@@ -0,0 +1,50 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: "Initialize a new Synaphex project at ~/.claude/projects/<project> with config, memory, and outputs directories"
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
# Synaphex Init
|
|
6
|
+
|
|
7
|
+
I'll initialize a new Synaphex project for you. The project name is: **$ARGUMENTS**
|
|
8
|
+
|
|
9
|
+
## Step 1: Create Project Directory
|
|
10
|
+
|
|
11
|
+
Let me create the project structure:
|
|
12
|
+
|
|
13
|
+
```bash
|
|
14
|
+
synaphex init-project $ARGUMENTS
|
|
15
|
+
```
|
|
16
|
+
|
|
17
|
+
This creates `~/.claude/projects/$ARGUMENTS/` with:
|
|
18
|
+
- **config.md** — LLM provider and model settings
|
|
19
|
+
- **memory/** — project memory files for context across sessions
|
|
20
|
+
- **outputs/** — pipeline run results
|
|
21
|
+
|
|
22
|
+
## Step 2: Configure LLM Provider
|
|
23
|
+
|
|
24
|
+
Your project is initialized with **Claude (Anthropic)** as the default provider.
|
|
25
|
+
|
|
26
|
+
The available providers are:
|
|
27
|
+
|
|
28
|
+
1. **claude** — Anthropic Claude (requires `ANTHROPIC_API_KEY`)
|
|
29
|
+
- Recommended: `claude-opus-4-5`
|
|
30
|
+
|
|
31
|
+
2. **openai** — OpenAI API compatible (includes Copilot, GPT-4)
|
|
32
|
+
- Requires: `OPENAI_API_KEY`
|
|
33
|
+
- Recommended: `gpt-4-turbo`
|
|
34
|
+
|
|
35
|
+
3. **gemini** — Google Gemini
|
|
36
|
+
- Requires: `GEMINI_API_KEY`
|
|
37
|
+
- Recommended: `gemini-pro`
|
|
38
|
+
|
|
39
|
+
Would you like to keep Claude as your provider, or would you prefer to use OpenAI or Gemini?
|
|
40
|
+
|
|
41
|
+
If you want to change it, I can update the config file for you.
|
|
42
|
+
|
|
43
|
+
## Step 3: Ready to Go
|
|
44
|
+
|
|
45
|
+
Your project **$ARGUMENTS** is ready! You can now use:
|
|
46
|
+
|
|
47
|
+
- **`/synaphex/query <task>`** — Run the full 5-stage pipeline
|
|
48
|
+
- **`/synaphex/fix <task>`** — Run the pipeline with optional Researcher skip
|
|
49
|
+
|
|
50
|
+
Each pipeline run will be saved to `~/.claude/projects/$ARGUMENTS/outputs/` automatically.
|
package/skills/query.md
ADDED
|
@@ -0,0 +1,193 @@
|
|
|
1
|
+
---
|
|
2
|
+
description: "Run the full Synaphex 5-stage pipeline: Examiner → Researcher → Coder → Questioner → Reviewer"
|
|
3
|
+
---
|
|
4
|
+
|
|
5
|
+
# Synaphex Query Pipeline
|
|
6
|
+
|
|
7
|
+
**Task:** $ARGUMENTS
|
|
8
|
+
|
|
9
|
+
You are running the complete Synaphex multi-agent pipeline. This will take you through 5 stages of analysis and refinement. Think deeply at each stage and produce thorough, high-quality outputs.
|
|
10
|
+
|
|
11
|
+
---
|
|
12
|
+
|
|
13
|
+
## Stage 1: EXAMINER — Context Analysis
|
|
14
|
+
|
|
15
|
+
**Role:** You are a project context analyst. Your job is to understand the current state and what needs to be done.
|
|
16
|
+
|
|
17
|
+
**Your Instructions:**
|
|
18
|
+
|
|
19
|
+
1. **Analyze the task.** What exactly is being asked? What are the success criteria?
|
|
20
|
+
|
|
21
|
+
2. **Examine the current codebase.** Read relevant files to understand:
|
|
22
|
+
- Existing architecture and patterns
|
|
23
|
+
- Technology stack in use
|
|
24
|
+
- File structure and organization
|
|
25
|
+
- Existing implementations that are related
|
|
26
|
+
|
|
27
|
+
3. **Identify constraints and context.**
|
|
28
|
+
- What patterns are already established?
|
|
29
|
+
- What libraries/frameworks are in use?
|
|
30
|
+
- What conventions should be followed?
|
|
31
|
+
|
|
32
|
+
4. **Load project memory** (if available). Run:
|
|
33
|
+
```bash
|
|
34
|
+
synaphex load-memory <project-name>
|
|
35
|
+
```
|
|
36
|
+
Review the output to understand project history and context.
|
|
37
|
+
|
|
38
|
+
**Produce:** A comprehensive context report with sections:
|
|
39
|
+
- **Task Analysis** — What needs to be done and why
|
|
40
|
+
- **Current State** — Existing code, architecture, patterns
|
|
41
|
+
- **Technology Stack** — Languages, frameworks, libraries in use
|
|
42
|
+
- **Constraints & Conventions** — What to follow
|
|
43
|
+
- **Identified Gaps** — What's missing or needs improvement
|
|
44
|
+
- **Recommended Approach** — Initial strategy
|
|
45
|
+
|
|
46
|
+
---
|
|
47
|
+
|
|
48
|
+
## Stage 2: RESEARCHER — Best Practices & Solutions
|
|
49
|
+
|
|
50
|
+
**Role:** You are a research analyst. Based on the context from Examiner, research the best approach.
|
|
51
|
+
|
|
52
|
+
**Your Instructions:**
|
|
53
|
+
|
|
54
|
+
1. **Research the problem domain.** What are the accepted best practices for this type of task?
|
|
55
|
+
|
|
56
|
+
2. **Evaluate approaches.** Consider:
|
|
57
|
+
- Multiple solutions and their trade-offs
|
|
58
|
+
- Performance implications
|
|
59
|
+
- Security considerations
|
|
60
|
+
- Maintainability and readability
|
|
61
|
+
- Testing strategies
|
|
62
|
+
|
|
63
|
+
3. **Check for existing solutions.** Are there libraries, patterns, or examples that directly apply?
|
|
64
|
+
|
|
65
|
+
4. **Document findings.** Be specific about why certain approaches are better.
|
|
66
|
+
|
|
67
|
+
**Produce:** A structured research report with sections:
|
|
68
|
+
- **Recommended Approach** — Detailed explanation and rationale
|
|
69
|
+
- **Alternative Approaches** — What else could work (and why you rejected them)
|
|
70
|
+
- **Best Practices** — Security, performance, testing, patterns
|
|
71
|
+
- **Relevant Libraries/Tools** — What to use and why
|
|
72
|
+
- **Common Pitfalls** — What to avoid
|
|
73
|
+
- **Reference Implementation** — Code examples or patterns to follow
|
|
74
|
+
|
|
75
|
+
---
|
|
76
|
+
|
|
77
|
+
## Stage 3: CODER — Implementation
|
|
78
|
+
|
|
79
|
+
**Role:** You are a senior developer. Implement the solution based on all prior analysis.
|
|
80
|
+
|
|
81
|
+
**Your Instructions:**
|
|
82
|
+
|
|
83
|
+
1. **Write production-ready code.** This means:
|
|
84
|
+
- Follow the patterns identified by the Examiner
|
|
85
|
+
- Use the approaches recommended by the Researcher
|
|
86
|
+
- Include proper error handling
|
|
87
|
+
- Write clear, maintainable code
|
|
88
|
+
- Add comments for non-obvious logic
|
|
89
|
+
- Include type hints/types where applicable
|
|
90
|
+
|
|
91
|
+
2. **Comprehensive implementation.** Provide:
|
|
92
|
+
- All necessary file changes or new files
|
|
93
|
+
- Complete, runnable code (not pseudocode)
|
|
94
|
+
- Integration points with existing code
|
|
95
|
+
- Usage examples or tests
|
|
96
|
+
|
|
97
|
+
3. **Quality standards:**
|
|
98
|
+
- Consistent with the existing codebase style
|
|
99
|
+
- No unnecessary complexity
|
|
100
|
+
- Proper structure and organization
|
|
101
|
+
- Security-conscious (validate inputs, prevent common attacks)
|
|
102
|
+
|
|
103
|
+
**Produce:** Complete implementation with sections:
|
|
104
|
+
- **Code Changes** — All files to create or modify
|
|
105
|
+
- **Key Implementation Details** — How it works, important decisions
|
|
106
|
+
- **Integration Points** — Where this connects to existing code
|
|
107
|
+
- **Usage Examples** — How to use the new code
|
|
108
|
+
- **Testing Suggestions** — How to verify it works
|
|
109
|
+
|
|
110
|
+
---
|
|
111
|
+
|
|
112
|
+
## Stage 4: QUESTIONER — Critical Review
|
|
113
|
+
|
|
114
|
+
**Role:** You are a critical code reviewer (Socratic method). Challenge every assumption.
|
|
115
|
+
|
|
116
|
+
**Your Instructions:**
|
|
117
|
+
|
|
118
|
+
1. **Question the implementation.**
|
|
119
|
+
- Does it handle all edge cases?
|
|
120
|
+
- Are there security vulnerabilities?
|
|
121
|
+
- Could it be more performant?
|
|
122
|
+
- Is the error handling sufficient?
|
|
123
|
+
|
|
124
|
+
2. **Identify improvements:**
|
|
125
|
+
- Critical issues (must fix)
|
|
126
|
+
- Optimization opportunities
|
|
127
|
+
- Better patterns or libraries
|
|
128
|
+
- Edge cases not handled
|
|
129
|
+
|
|
130
|
+
3. **Suggest concrete fixes.** For each issue, provide:
|
|
131
|
+
- What's wrong
|
|
132
|
+
- Why it matters
|
|
133
|
+
- Specific fix with code examples
|
|
134
|
+
|
|
135
|
+
**Produce:** A structured critique with sections:
|
|
136
|
+
- **Strengths** — What the implementation does well
|
|
137
|
+
- **Critical Issues** — Things that must be fixed
|
|
138
|
+
- **Suggested Improvements** — Better approaches, edge cases
|
|
139
|
+
- **Performance Considerations** — Any bottlenecks?
|
|
140
|
+
- **Security Review** — Any vulnerabilities?
|
|
141
|
+
- **Revised Code Snippets** — Specific fixes for the most important issues
|
|
142
|
+
|
|
143
|
+
---
|
|
144
|
+
|
|
145
|
+
## Stage 5: REVIEWER — Final Quality Gate
|
|
146
|
+
|
|
147
|
+
**Role:** You are a senior architect doing final review. Consolidate everything and deliver the final solution.
|
|
148
|
+
|
|
149
|
+
**Your Instructions:**
|
|
150
|
+
|
|
151
|
+
1. **Review all prior outputs holistically.**
|
|
152
|
+
- Does the final code address the original task?
|
|
153
|
+
- Has all Questioner feedback been incorporated?
|
|
154
|
+
- Is the solution complete and coherent?
|
|
155
|
+
|
|
156
|
+
2. **Make a final decision:** APPROVED or NEEDS REVISION
|
|
157
|
+
|
|
158
|
+
3. **If APPROVED:**
|
|
159
|
+
- Provide the complete, final implementation incorporating all improvements
|
|
160
|
+
- Explain what changed from Coder → final version
|
|
161
|
+
- Summarize the quality gates passed
|
|
162
|
+
|
|
163
|
+
4. **If NEEDS REVISION:**
|
|
164
|
+
- Identify what's blocking approval
|
|
165
|
+
- Provide specific next steps
|
|
166
|
+
|
|
167
|
+
**Produce:** Final verdict with sections:
|
|
168
|
+
- **Final Verdict** — APPROVED or NEEDS REVISION (and why)
|
|
169
|
+
- **Complete Implementation** — The final, polished code
|
|
170
|
+
- **Changes Made** — Summary of improvements from Questioner feedback
|
|
171
|
+
- **Quality Summary** — Verification checklist
|
|
172
|
+
- **Next Steps** (if applicable) — What remains to be done
|
|
173
|
+
|
|
174
|
+
---
|
|
175
|
+
|
|
176
|
+
## Save Pipeline Output
|
|
177
|
+
|
|
178
|
+
After all 5 stages complete successfully, save the results:
|
|
179
|
+
|
|
180
|
+
```bash
|
|
181
|
+
synaphex save-output <project-name> '{"command":"query","task":"$ARGUMENTS","skip_researcher":false,"timestamp":"$(date -u +%Y-%m-%dT%H:%M:%SZ)","stages":{"examiner":"...","researcher":"...","coder":"...","questioner":"...","reviewer":"..."}}'
|
|
182
|
+
```
|
|
183
|
+
|
|
184
|
+
(Note: In practice, this would be a complete JSON object with all stage outputs. If the output is large, write it to a file and pass the file's contents to the save command as a single argument.)
|
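The file-based approach for large outputs can be sketched like this (a hedged example: `my-project` and the payload are placeholders, and the actual `synaphex` call is shown commented out):

```shell
# Hypothetical sketch: for large pipeline outputs, write the JSON payload
# to a temp file first and pass its contents as a single argument, which
# sidesteps shell-quoting problems with embedded quotes and newlines.
payload_file=$(mktemp)
printf '%s' '{"command":"query","task":"demo","skip_researcher":false,"stages":{}}' > "$payload_file"
payload=$(cat "$payload_file")
# synaphex save-output my-project "$payload"   # the actual save call
echo "payload: $payload"
rm -f "$payload_file"
```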
|
185
|
+
|
|
186
|
+
---
|
|
187
|
+
|
|
188
|
+
## Important Notes
|
|
189
|
+
|
|
190
|
+
- **Hard failure guards:** If any prior stage fails or produces no output, immediately stop the pipeline and report the failure. Do not continue.
|
|
191
|
+
- **Token budget awareness:** Coder stage gets the most tokens (produces the most detailed code). Reviewer stage is concise.
|
|
192
|
+
- **Output format:** Each stage should have clear section headers and complete thoughts.
|
|
193
|
+
- **Quality over speed:** Take time to do this right. Better to produce a thorough output than a rushed one.
|