@thelogicatelier/sylva 1.0.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +110 -0
- package/dist/cli.d.ts +6 -0
- package/dist/cli.js +177 -0
- package/dist/constants.d.ts +18 -0
- package/dist/constants.js +122 -0
- package/dist/index.d.ts +0 -0
- package/dist/index.js +2 -0
- package/dist/modelConfig.d.ts +14 -0
- package/dist/modelConfig.js +76 -0
- package/dist/modules.d.ts +53 -0
- package/dist/modules.js +69 -0
- package/dist/prompts.d.ts +119 -0
- package/dist/prompts.js +70 -0
- package/dist/utils.d.ts +41 -0
- package/dist/utils.js +173 -0
- package/package.json +66 -0
package/README.md
ADDED
@@ -0,0 +1,110 @@
# Sylva 🌲

Sylva is an AI-powered tool that automatically generates comprehensive `AGENTS.md` and `CODEBASE_CONVENTIONS.md` documentation by reading your entire repository recursively using **Ax-LLM**.

It distills the folder structure, code conventions, state management patterns, and architectural styles embedded in your code into a format that future AI coding assistants (or human engineers) can follow precisely.

📖 **[Full Documentation](https://achatt89.github.io/sylva/)**

### Features

- Support for OpenAI, Google Gemini, Anthropic, and Azure LLMs.
- Direct `--github-repository` fetching via `git clone`.
- Direct `--local-repository` analysis.
- Multi-LLM model configuration with automatic mini-model fallback for codebase analysis iterations.
- Strict type safety achieved with Ax-LLM's internal framework tools.

### Setup

Install dependencies:
```bash
npm install
```

Configure your environment variables:
```bash
cp .env-example .env
# Edit .env and supply your API keys (GEMINI_API_KEY, ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
```

Build the project:
```bash
npm run build
```

### Execution

The project is published to the npm registry and can be run from anywhere.

**Option 1: Run without installing**
```bash
# Analyze the current working directory
npx @thelogicatelier/sylva

# Analyze a specific GitHub repository
npx @thelogicatelier/sylva --github-repository https://github.com/expressjs/express
```

**Option 2: Install globally**
```bash
npm install -g @thelogicatelier/sylva

# Once installed globally, you can run it from any directory
sylva
sylva --help
```

Alternatively, if running from source:
```bash
npm start -- --github-repository https://github.com/expressjs/express -m openai/gpt-4o
```

**List supported models:**
```bash
npx @thelogicatelier/sylva --list-models
```

### Advanced Usage: Tuning for Your Project

The iteration count (`-i`) and model choice (`-m`) dramatically affect output accuracy. A practical starting guide:

| Project Type | Example | Command | Rationale |
|---|---|---|---|
| **Small library** (<20 files) | `pallets/click` | `npx @thelogicatelier/sylva -m openai/gpt-4o -i 1` | Entire codebase fits in one context pass |
| **Medium app** (20-100 files) | `expressjs/express` | `npx @thelogicatelier/sylva -m openai/gpt-4o -i 5` | Needs ~5 passes to traverse nested modules |
| **Large monorepo** (multi-stack) | React + FastAPI + APIs | `npx @thelogicatelier/sylva -m openai/gpt-5.2 -i 25` | Deep traversal of cross-stack dependencies |
| **Enterprise codebase** (500+ files) | Microservices | `npx @thelogicatelier/sylva -m anthropic/claude-sonnet-4.6 -i 35` | Maximum depth for service-oriented architectures |

**Real-world example:** analyzing a React/Tailwind frontend + Python/FastAPI backend + Wix API monorepo:
```bash
npx @thelogicatelier/sylva --local-repository . -m openai/gpt-5.2 -i 25
```

For detailed guidance, see the [Choosing the Right Model](https://achatt89.github.io/sylva/models/choosing.html) and [Iteration Depth Guide](https://achatt89.github.io/sylva/models/iterations.html) docs.

### Environment Overrides

- `AUTOSKILL_MODEL`: Set this to `gemini`, `anthropic`, or `openai` to change the default provider globally without passing `-m` on every run.
- `GITHUB_REPO_URL`: Pin a repository for the CLI to analyze when no explicit flags are provided.

### Test Structure

Unit tests live in `tests/`, which mirrors the `src/` folder structure, separating logical test groupings.

Execute the tests by running:
```bash
npm test
```

## Contributing

We strictly follow a feature-branch workflow. Please ensure your contributions maintain high code quality and test coverage.

1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) - *Our Husky pre-commit hooks will automatically lint and test your code.*
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## Issues, Bugs, and Feature Requests

Please report all issues and feature requests through our [GitHub Issues](https://github.com/achatt89/sylva/issues) tracker. When reporting a bug, include a minimal reproducible example along with the console output (if applicable).

---

Built by [The Logic Atelier](https://thelogicatelier.com)
package/dist/cli.d.ts
ADDED
package/dist/cli.js
ADDED
@@ -0,0 +1,177 @@
#!/usr/bin/env node
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    var desc = Object.getOwnPropertyDescriptor(m, k);
    if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
      desc = { enumerable: true, get: function() { return m[k]; } };
    }
    Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
    Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
    o["default"] = v;
});
var __importStar = (this && this.__importStar) || (function () {
    var ownKeys = function(o) {
        ownKeys = Object.getOwnPropertyNames || function (o) {
            var ar = [];
            for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
            return ar;
        };
        return ownKeys(o);
    };
    return function (mod) {
        if (mod && mod.__esModule) return mod;
        var result = {};
        if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
        __setModuleDefault(result, mod);
        return result;
    };
})();
Object.defineProperty(exports, "__esModule", { value: true });
exports.resolveRepositoryTarget = resolveRepositoryTarget;
const commander_1 = require("commander");
const dotenv = __importStar(require("dotenv"));
const path = __importStar(require("path"));
const fs = __importStar(require("fs"));
const readline = __importStar(require("readline/promises"));
const modelConfig_1 = require("./modelConfig");
const utils_1 = require("./utils");
const modules_1 = require("./modules");
function initEnvironment() {
    const envPath = path.resolve(process.cwd(), ".env");
    if (fs.existsSync(envPath)) {
        dotenv.config({ path: envPath, override: true });
    }
}
async function resolveRepositoryTarget(args) {
    let githubRepo = args.githubRepository;
    let localRepo = args.localRepository;
    if (!githubRepo && !localRepo && args.repo) {
        if (args.repo.startsWith("http") || args.repo.startsWith("git@")) {
            githubRepo = args.repo;
        }
        else {
            localRepo = args.repo;
        }
    }
    if (!githubRepo && !localRepo) {
        const githubEnv = process.env.GITHUB_REPO_URL;
        if (githubEnv) {
            githubRepo = githubEnv;
        }
        else {
            const rl = readline.createInterface({
                input: process.stdin,
                output: process.stdout,
            });
            const answer = await rl.question("Enter absolute path to local repository (or press Enter for current directory): ");
            rl.close();
            localRepo = answer.trim() ? answer.trim() : process.cwd();
        }
    }
    if (githubRepo) {
        const repoUrl = githubRepo.trim();
        let repoName = repoUrl.replace(/\/$/, "").split("/").pop() || "unknown-repo";
        if (repoName.endsWith(".git"))
            repoName = repoName.slice(0, -4);
        return { repoUrl, localPath: null, repoName };
    }
    else {
        const localPath = path.resolve(localRepo);
        if (!fs.existsSync(localPath))
            throw new Error(`Local repository path does not exist: ${localPath}`);
        const repoName = path.basename(localPath);
        return { repoUrl: null, localPath, repoName };
    }
}
async function runPipeline(repoDir, repoName, llmPrimary, llmMini, maxIterations) {
    console.log(`\n======================================================`);
    console.log(`🕵️♂️ SYLVA / AGENTS.md Generator Pipeline`);
    console.log(`======================================================\n`);
    const sourceTree = (0, utils_1.loadSourceTree)(repoDir);
    const numFiles = Object.keys(sourceTree).length;
    if (numFiles === 0) {
        console.warn("\n⚠️ Warning: The loaded source tree is empty! Check IGNORED_DIRS or ensure the repository contains supported source files.");
    }
    else {
        console.log(`✅ Extracted representation for ${numFiles} top-level file(s)/directory(ies).`);
    }
    const extractor = new modules_1.CodebaseConventionExtractor(maxIterations);
    const extractResult = await extractor.extract(sourceTree);
    // Use the PRIMARY model for RLM analysis (needs strong reasoning to avoid hallucination)
    console.log(`=> Running the Codebase Analyzer RLM workflow...`);
    const rlmResult = await extractResult.analyzer.forward(llmPrimary, {
        sourceContext: extractResult.contextString,
    });
    // Use the PRIMARY model for compiling conventions (needs to accurately synthesize)
    const conventionsMarkdown = await extractor.compileMarkdown(llmPrimary, rlmResult);
    // Use MINI model for section extraction (cheaper, deterministic task)
    const creator = new modules_1.AgentsMdCreator();
    const sections = await creator.extractAndCompileSections(llmMini, conventionsMarkdown, repoName);
    const finalAgentsMd = (0, utils_1.compileAgentsMd)(sections, repoName);
    (0, utils_1.saveAgentsToDisk)(repoName, finalAgentsMd);
    console.log("\n======================================================");
    console.log("🎉 AGENTS.md Generation Complete!");
    console.log("======================================================\n");
}
async function main() {
    initEnvironment();
    const program = new commander_1.Command();
    program
        .name("sylva")
        .description("Auto-generate AGENTS.md for your repository using Ax-LLM")
        .version("1.0.0")
        .argument("[repo]", "Absolute path to a local repository to analyze (default)")
        .option("--github-repository <url>", "Public GitHub repository URL to analyze")
        .option("--local-repository <path>", "Absolute path to a local repository to analyze")
        .option("-m, --model <model>", "The LLM model to use (PROVIDER/MODEL)")
        .option("--list-models", "List all supported models and exit")
        .option("-i, --max-iterations <number>", "Max RLM iterations", "35")
        .action(async (repo, options) => {
        if (options.listModels) {
            console.log((0, modelConfig_1.listSupportedModels)());
            process.exit(0);
        }
        const parsedArgs = {
            repo: repo,
            githubRepository: options.githubRepository,
            localRepository: options.localRepository,
            model: options.model,
            maxIterations: parseInt(options.maxIterations, 10),
        };
        try {
            const { repoUrl, localPath, repoName } = await resolveRepositoryTarget(parsedArgs);
            const modelConfig = (0, modelConfig_1.resolveModelConfig)(parsedArgs.model);
            console.log(`Using provider: ${modelConfig.provider} | Primary: ${modelConfig.model} | Mini: ${modelConfig.model_mini}`);
            const llmPrimary = (0, modelConfig_1.getLanguageModelService)(modelConfig, false);
            const llmMini = (0, modelConfig_1.getLanguageModelService)(modelConfig, true);
            let targetDir = localPath;
            if (repoUrl) {
                targetDir = fs.mkdtempSync(path.join(process.cwd(), `sylva-tmp-${repoName}-`));
                (0, utils_1.cloneRepo)(repoUrl, targetDir);
            }
            if (!targetDir)
                throw new Error("Could not resolve target directory");
            await runPipeline(targetDir, repoName, llmPrimary, llmMini, parsedArgs.maxIterations);
            if (repoUrl && targetDir) {
                console.log(`Cleaning up temporary repository dir: ${targetDir}`);
                fs.rmSync(targetDir, { recursive: true, force: true });
            }
            process.exit(0);
        }
        catch (err) {
            console.error(`\n❌ Error occurred: ${err.message}`);
            process.exit(1);
        }
    });
    await program.parseAsync(process.argv);
}
if (require.main === module) {
    main();
}
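The URL-vs-path rule inside `resolveRepositoryTarget` is the crux of target resolution: any argument beginning with `http` or `git@` is treated as a remote repository, everything else as a local path, and the repository name is derived from the last URL segment. A dependency-free sketch of just that rule (`classifyRepoArg` is an illustrative helper name, not part of the package):

```javascript
// Illustrative reduction of the URL-vs-path rule used by resolveRepositoryTarget.
// A repo argument starting with "http" or "git@" is treated as a remote URL;
// anything else is treated as a local filesystem path.
function classifyRepoArg(repo) {
  if (repo.startsWith("http") || repo.startsWith("git@")) {
    // Derive the repo name the same way cli.js does: strip a trailing slash,
    // take the last path segment, and drop a ".git" suffix.
    let repoName = repo.replace(/\/$/, "").split("/").pop() || "unknown-repo";
    if (repoName.endsWith(".git")) repoName = repoName.slice(0, -4);
    return { kind: "github", repoName };
  }
  return { kind: "local", repoName: repo.split("/").pop() };
}

console.log(classifyRepoArg("https://github.com/expressjs/express.git"));
// → { kind: 'github', repoName: 'express' }
console.log(classifyRepoArg("./my-app"));
// → { kind: 'local', repoName: 'my-app' }
```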
package/dist/constants.d.ts
ADDED
@@ -0,0 +1,18 @@
export interface ModelMetadata {
    provider: string;
    tier: "primary" | "mini";
}
export declare const PROVIDER_GEMINI = "gemini";
export declare const PROVIDER_ANTHROPIC = "anthropic";
export declare const PROVIDER_OPENAI = "openai";
export declare const MODEL_CATALOG: Record<string, ModelMetadata>;
export declare const DEFAULT_MODELS: Record<string, string>;
export declare const DEFAULT_MINI_MODELS: Record<string, string>;
export declare const API_KEY_ENV_VARS: Record<string, string[]>;
export declare const ALLOWED_EXTENSIONS: Set<string>;
export declare const IGNORED_DIRS: Set<string>;
/**
 * Dependency manifest files that should be hoisted to the top of the serialized
 * source tree so the AI reads them FIRST, preventing framework hallucination.
 */
export declare const DEPENDENCY_MANIFESTS: Set<string>;
package/dist/constants.js
ADDED
@@ -0,0 +1,122 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.DEPENDENCY_MANIFESTS = exports.IGNORED_DIRS = exports.ALLOWED_EXTENSIONS = exports.API_KEY_ENV_VARS = exports.DEFAULT_MINI_MODELS = exports.DEFAULT_MODELS = exports.MODEL_CATALOG = exports.PROVIDER_OPENAI = exports.PROVIDER_ANTHROPIC = exports.PROVIDER_GEMINI = void 0;
exports.PROVIDER_GEMINI = "gemini";
exports.PROVIDER_ANTHROPIC = "anthropic";
exports.PROVIDER_OPENAI = "openai";
exports.MODEL_CATALOG = {
    // Google Gemini (2026)
    "gemini/gemini-3.1-pro": { provider: exports.PROVIDER_GEMINI, tier: "primary" },
    "gemini/gemini-3.1-flash": { provider: exports.PROVIDER_GEMINI, tier: "mini" },
    "gemini/gemini-3-deep-think": { provider: exports.PROVIDER_GEMINI, tier: "primary" },
    // Anthropic Claude (2026)
    "anthropic/claude-opus-4.6": { provider: exports.PROVIDER_ANTHROPIC, tier: "primary" },
    "anthropic/claude-sonnet-4.6": { provider: exports.PROVIDER_ANTHROPIC, tier: "primary" },
    "anthropic/claude-sonnet-5": { provider: exports.PROVIDER_ANTHROPIC, tier: "primary" },
    "anthropic/claude-haiku-4.5": { provider: exports.PROVIDER_ANTHROPIC, tier: "mini" },
    "anthropic/claude-haiku-3-20250519": { provider: exports.PROVIDER_ANTHROPIC, tier: "mini" },
    // OpenAI (2026)
    "openai/gpt-5.3": { provider: exports.PROVIDER_OPENAI, tier: "primary" },
    "openai/gpt-5.3-codex": { provider: exports.PROVIDER_OPENAI, tier: "primary" },
    "openai/gpt-5.2": { provider: exports.PROVIDER_OPENAI, tier: "primary" },
    "openai/gpt-4o": { provider: exports.PROVIDER_OPENAI, tier: "primary" },
    "openai/gpt-4o-mini": { provider: exports.PROVIDER_OPENAI, tier: "mini" },
};
exports.DEFAULT_MODELS = {
    [exports.PROVIDER_GEMINI]: "gemini/gemini-3.1-pro",
    [exports.PROVIDER_ANTHROPIC]: "anthropic/claude-sonnet-4.6",
    [exports.PROVIDER_OPENAI]: "openai/gpt-4o",
};
exports.DEFAULT_MINI_MODELS = {
    [exports.PROVIDER_GEMINI]: "gemini/gemini-3.1-flash",
    [exports.PROVIDER_ANTHROPIC]: "anthropic/claude-haiku-3-20250519",
    [exports.PROVIDER_OPENAI]: "openai/gpt-4o-mini",
};
exports.API_KEY_ENV_VARS = {
    [exports.PROVIDER_GEMINI]: ["GEMINI_API_KEY", "GOOGLE_API_KEY"],
    [exports.PROVIDER_ANTHROPIC]: ["ANTHROPIC_API_KEY"],
    [exports.PROVIDER_OPENAI]: ["OPENAI_API_KEY"],
};
exports.ALLOWED_EXTENSIONS = new Set([
    ".py",
    ".js",
    ".ts",
    ".jsx",
    ".tsx",
    ".vue",
    ".java",
    ".md",
    ".json",
    ".yml",
    ".yaml",
    ".txt",
    ".html",
    ".css",
    ".scss",
    ".less",
    ".c",
    ".cpp",
    ".h",
    ".hpp",
    ".cs",
    ".go",
    ".rb",
    ".php",
    ".rs",
    ".sh",
    ".swift",
    ".kt",
    ".sql",
    ".xml",
    ".toml",
    ".ini",
    ".dart",
    ".scala",
    ".r",
    ".m",
    ".pl",
]);
exports.IGNORED_DIRS = new Set([
    "node_modules",
    "__pycache__",
    "venv",
    "env",
    "dist",
    "build",
    "target",
    "vendor",
    "bin",
    "obj",
    "out",
    "coverage",
    "logs",
    "tmp",
    "temp",
    "packages",
    "pkg",
    ".git",
]);
/**
 * Dependency manifest files that should be hoisted to the top of the serialized
 * source tree so the AI reads them FIRST, preventing framework hallucination.
 */
exports.DEPENDENCY_MANIFESTS = new Set([
    "requirements.txt",
    "pyproject.toml",
    "Pipfile",
    "setup.py",
    "setup.cfg",
    "go.mod",
    "Cargo.toml",
    "pom.xml",
    "build.gradle",
    "build.gradle.kts",
    "Gemfile",
    "composer.json",
    "Package.swift",
    "Makefile",
    "Dockerfile",
    "docker-compose.yml",
    "docker-compose.yaml",
    "package.json",
]);
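`ALLOWED_EXTENSIONS` and `IGNORED_DIRS` drive file selection during the repository walk (the walker itself lives in `dist/utils.js`, whose body is not shown in this diff). A minimal sketch of how such filtering could be applied, assuming a path-based check; the sets are abbreviated and `shouldInclude` is a hypothetical helper, not the package's actual walker:

```javascript
// Hypothetical sketch of how ALLOWED_EXTENSIONS and IGNORED_DIRS could be
// applied during a repository walk; the real traversal lives in dist/utils.js.
const ALLOWED_EXTENSIONS = new Set([".py", ".js", ".ts", ".md", ".json"]); // subset for brevity
const IGNORED_DIRS = new Set(["node_modules", "dist", ".git"]);

function shouldInclude(relPath) {
  const parts = relPath.split("/");
  // Skip any file living under an ignored directory...
  if (parts.slice(0, -1).some((dir) => IGNORED_DIRS.has(dir))) return false;
  // ...and only keep files whose extension is in the allow-list.
  const file = parts[parts.length - 1];
  const dot = file.lastIndexOf(".");
  return dot >= 0 && ALLOWED_EXTENSIONS.has(file.slice(dot));
}

console.log(shouldInclude("src/index.ts"));        // → true
console.log(shouldInclude("node_modules/x/y.js")); // → false
console.log(shouldInclude("logo.png"));            // → false
```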
package/dist/index.d.ts
ADDED
File without changes
package/dist/index.js
ADDED
package/dist/modelConfig.d.ts
ADDED
@@ -0,0 +1,14 @@
export interface ModelMetadata {
    provider: string;
    tier: "primary" | "mini";
}
export interface ModelConfig {
    provider: string;
    model: string;
    model_mini: string;
    api_key: string;
}
export declare function resolveApiKey(provider: string): string;
export declare function resolveModelConfig(modelArg?: string): ModelConfig;
export declare function getLanguageModelService(config: ModelConfig, isMini?: boolean): any;
export declare function listSupportedModels(): string;
package/dist/modelConfig.js
ADDED
@@ -0,0 +1,76 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.resolveApiKey = resolveApiKey;
exports.resolveModelConfig = resolveModelConfig;
exports.getLanguageModelService = getLanguageModelService;
exports.listSupportedModels = listSupportedModels;
const ax_1 = require("@ax-llm/ax");
const constants_1 = require("./constants");
function resolveApiKey(provider) {
    for (const envVar of constants_1.API_KEY_ENV_VARS[provider]) {
        if (process.env[envVar])
            return process.env[envVar];
    }
    throw new Error(`Environment variable(s) ${constants_1.API_KEY_ENV_VARS[provider].join(" or ")} not set for provider ${provider}. Exiting.`);
}
function resolveModelConfig(modelArg) {
    let arg = modelArg || process.env.AUTOSKILL_MODEL || constants_1.PROVIDER_GEMINI;
    arg = arg.trim();
    let modelName = arg;
    if (constants_1.DEFAULT_MODELS[arg]) {
        modelName = constants_1.DEFAULT_MODELS[arg];
    }
    else if (!constants_1.MODEL_CATALOG[arg]) {
        throw new Error(`Unknown model '${arg}'. Supported models:\n ${Object.keys(constants_1.MODEL_CATALOG).sort().join(", ")}`);
    }
    const provider = constants_1.MODEL_CATALOG[modelName].provider;
    const apiKey = resolveApiKey(provider);
    const miniName = constants_1.DEFAULT_MINI_MODELS[provider];
    return {
        provider,
        model: modelName,
        model_mini: miniName,
        api_key: apiKey,
    };
}
function getLanguageModelService(config, isMini = true) {
    const targetModel = isMini ? config.model_mini : config.model;
    const rawModelName = targetModel.split("/")[1]; // Strip provider prefix
    if (config.provider === constants_1.PROVIDER_GEMINI) {
        return new ax_1.AxAIGoogleGemini({
            apiKey: config.api_key,
            config: { model: rawModelName },
        });
    }
    else if (config.provider === constants_1.PROVIDER_ANTHROPIC) {
        return new ax_1.AxAIAnthropic({
            apiKey: config.api_key,
            config: { model: rawModelName },
        });
    }
    else {
        return new ax_1.AxAIOpenAI({
            apiKey: config.api_key,
            config: { model: rawModelName },
        });
    }
}
function listSupportedModels() {
    let output = "\nSupported models:\n";
    let currentProvider = "";
    for (const [name, meta] of Object.entries(constants_1.MODEL_CATALOG)) {
        if (meta.provider !== currentProvider) {
            currentProvider = meta.provider;
            output += ` ${currentProvider.toUpperCase()}\n`;
        }
        let defaultTag = "";
        if (name === constants_1.DEFAULT_MODELS[meta.provider]) {
            defaultTag = " (default)";
        }
        else if (name === constants_1.DEFAULT_MINI_MODELS[meta.provider]) {
            defaultTag = " (default mini)";
        }
        output += ` ${name}${defaultTag}\n`;
    }
    return output;
}
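`resolveModelConfig` resolves its input with the precedence explicit `-m` flag > `AUTOSKILL_MODEL` > the Gemini default, and expands a bare provider name into that provider's default model. A simplified, dependency-free sketch of that precedence (catalog subset inlined; `pickModel` is an illustrative name, not the exported function):

```javascript
// Simplified sketch of resolveModelConfig's precedence and lookup; a subset of
// the real MODEL_CATALOG is inlined so the example runs standalone.
const DEFAULT_MODELS = { gemini: "gemini/gemini-3.1-pro", openai: "openai/gpt-4o" };
const MODEL_CATALOG = {
  "gemini/gemini-3.1-pro": { provider: "gemini" },
  "openai/gpt-4o": { provider: "openai" },
  "openai/gpt-4o-mini": { provider: "openai" },
};

function pickModel(flag, env) {
  // Precedence: explicit -m flag, then AUTOSKILL_MODEL, then the gemini default.
  let arg = (flag || env || "gemini").trim();
  // A bare provider name expands to that provider's default model.
  const modelName = DEFAULT_MODELS[arg] || arg;
  if (!MODEL_CATALOG[modelName]) throw new Error(`Unknown model '${arg}'`);
  return modelName;
}

console.log(pickModel(undefined, "openai"));            // → "openai/gpt-4o"
console.log(pickModel("openai/gpt-4o-mini", "gemini")); // → "openai/gpt-4o-mini"
console.log(pickModel(undefined, undefined));           // → "gemini/gemini-3.1-pro"
```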
package/dist/modules.d.ts
ADDED
@@ -0,0 +1,53 @@
import { AgentsMdSections, TreeType } from "./utils";
export declare class CodebaseConventionExtractor {
    private maxIterations;
    constructor(maxIterations?: number);
    extract(sourceTree: {
        [key: string]: TreeType;
    }): Promise<{
        analyzer: import("@ax-llm/ax").AxAgent<{
            readonly sourceContext: string;
        }, {
            readonly projectOverview: string;
        } & {
            readonly agentPersona: string;
        } & {
            readonly techStack: string;
        } & {
            readonly directoryStructure: string;
        } & {
            readonly executionCommands: string;
        } & {
            readonly codeStyleAndFormatting: string;
        } & {
            readonly architectureAndDesignPatterns: string;
        } & {
            readonly antiPatternsAndRestrictions: string;
        } & {
            readonly dependencyManagement: string;
        } & {
            readonly stateManagementGuidelines: string;
        } & {
            readonly databaseAndDataHandling: string;
        } & {
            readonly errorHandlingAndLogging: string;
        } & {
            readonly testingStrategy: string;
        } & {
            readonly securityAndCompliance: string;
        } & {
            readonly gitAndVersionControl: string;
        } & {
            readonly documentationStandards: string;
        } & {
            readonly agentWorkflow: string;
        } & {
            readonly fewShotExamples: string;
        }>;
        contextString: string;
    }>;
    compileMarkdown(llm: any, extractResult: any): Promise<string>;
}
export declare class AgentsMdCreator {
    extractAndCompileSections(llm: any, conventionsMarkdown: string, repositoryName: string): Promise<AgentsMdSections>;
}
package/dist/modules.js
ADDED
@@ -0,0 +1,69 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AgentsMdCreator = exports.CodebaseConventionExtractor = void 0;
const ax_1 = require("@ax-llm/ax");
const prompts_1 = require("./prompts");
// We must serialize the source tree into a text/markdown format to feed into AxAgent context.
function serializeSourceTree(tree, indent = "") {
    let output = "";
    for (const [key, value] of Object.entries(tree)) {
        if (typeof value === "string") {
            output += `${indent}- File: ${key}\n`;
            output += `${indent} Content:\n\`\`\`\n${value}\n\`\`\`\n\n`;
        }
        else {
            output += `${indent}- Directory: ${key}/\n`;
            output += serializeSourceTree(value, indent + " ");
        }
    }
    return output;
}
class CodebaseConventionExtractor {
    maxIterations;
    constructor(maxIterations = 35) {
        this.maxIterations = maxIterations;
    }
    async extract(sourceTree) {
        console.log("=> Preparing and serializing Source Tree for RLM analysis...");
        const contextString = serializeSourceTree(sourceTree);
        console.log(`=> Running AxAgent (RLM) for Codebase Analysis on ${Object.keys(sourceTree).length} root modules...`);
        // Use the f() builder for reliable type definition instead of long strings
        const agentSig = prompts_1.CODEBASE_ANALYSIS_SIGNATURE;
        // Pass config properties properly conforming to AxAgentConfig
        const analyzer = (0, ax_1.agent)(agentSig, {
            agentIdentity: prompts_1.CODEBASE_ANALYZER_IDENTITY,
            contextFields: ["sourceContext"],
            runtime: new ax_1.AxJSRuntime(),
            maxLlmCalls: this.maxIterations,
        });
        // We run the agent forwarding the generic LLM.
        // We will receive the initialized `llm` instance from the CLI.
        return {
            analyzer,
            contextString,
        };
    }
    async compileMarkdown(llm, extractResult) {
        console.log("=> Compiling Codebase Analysis into Cohesive Markdown...");
        // Equivalent to dspy.ChainOfThought(CompileConventionsMarkdown)
        const compileSig = prompts_1.COMPILE_CONVENTIONS_SIGNATURE;
        const compiler = (0, ax_1.ax)(compileSig);
        const finalResult = await compiler.forward(llm, extractResult);
        return finalResult.markdownDocument;
    }
}
exports.CodebaseConventionExtractor = CodebaseConventionExtractor;
class AgentsMdCreator {
    async extractAndCompileSections(llm, conventionsMarkdown, repositoryName) {
        console.log(`=> Extracting individual AGENTS.md sections for repository: ${repositoryName}...`);
        // Equivalent to ExtractAgentsSections via ChainOfThought
        const sectionSig = prompts_1.EXTRACT_AGENTS_SECTIONS_SIGNATURE;
        const sectionExtractor = (0, ax_1.ax)(sectionSig);
        const sections = await sectionExtractor.forward(llm, {
            conventionsMarkdown,
            repositoryName,
        });
        return sections;
    }
}
exports.AgentsMdCreator = AgentsMdCreator;
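`serializeSourceTree` flattens the nested source tree (string values are file contents, nested objects are directories) into a markdown-ish listing for the agent context. Copied standalone, matching the published source, so it can be run on a tiny tree:

```javascript
// serializeSourceTree as published in modules.js: string values are file
// contents, nested objects are directories.
function serializeSourceTree(tree, indent = "") {
  let output = "";
  for (const [key, value] of Object.entries(tree)) {
    if (typeof value === "string") {
      output += `${indent}- File: ${key}\n`;
      output += `${indent} Content:\n\`\`\`\n${value}\n\`\`\`\n\n`;
    }
    else {
      output += `${indent}- Directory: ${key}/\n`;
      output += serializeSourceTree(value, indent + " ");
    }
  }
  return output;
}

const demoTree = { src: { "index.js": "console.log('hi');" }, "README.md": "# Demo" };
console.log(serializeSourceTree(demoTree));
```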
|
|
package/dist/prompts.d.ts
ADDED
@@ -0,0 +1,119 @@
export declare const CODEBASE_ANALYSIS_SIGNATURE: import("@ax-llm/ax").AxSignature<{
    readonly sourceContext: string;
}, {
    readonly projectOverview: string;
} & {
    readonly agentPersona: string;
} & {
    readonly techStack: string;
} & {
    readonly directoryStructure: string;
} & {
    readonly executionCommands: string;
} & {
    readonly codeStyleAndFormatting: string;
} & {
    readonly architectureAndDesignPatterns: string;
} & {
    readonly antiPatternsAndRestrictions: string;
} & {
    readonly dependencyManagement: string;
} & {
    readonly stateManagementGuidelines: string;
} & {
    readonly databaseAndDataHandling: string;
} & {
    readonly errorHandlingAndLogging: string;
} & {
    readonly testingStrategy: string;
} & {
    readonly securityAndCompliance: string;
} & {
    readonly gitAndVersionControl: string;
} & {
    readonly documentationStandards: string;
} & {
    readonly agentWorkflow: string;
} & {
    readonly fewShotExamples: string;
}>;
export declare const CODEBASE_ANALYZER_IDENTITY: {
    name: string;
    description: string;
};
export declare const COMPILE_CONVENTIONS_SIGNATURE: import("@ax-llm/ax").AxSignature<{
    readonly projectOverview: string;
} & {
    readonly agentPersona: string;
} & {
    readonly techStack: string;
} & {
    readonly directoryStructure: string;
} & {
    readonly executionCommands: string;
} & {
    readonly codeStyleAndFormatting: string;
} & {
    readonly architectureAndDesignPatterns: string;
} & {
    readonly antiPatternsAndRestrictions: string;
} & {
    readonly dependencyManagement: string;
} & {
    readonly stateManagementGuidelines: string;
} & {
    readonly databaseAndDataHandling: string;
} & {
    readonly errorHandlingAndLogging: string;
} & {
    readonly testingStrategy: string;
} & {
    readonly securityAndCompliance: string;
} & {
    readonly gitAndVersionControl: string;
} & {
    readonly documentationStandards: string;
} & {
    readonly agentWorkflow: string;
} & {
    readonly fewShotExamples: string;
}, {
    readonly markdownDocument: string;
}>;
export declare const EXTRACT_AGENTS_SECTIONS_SIGNATURE: import("@ax-llm/ax").AxSignature<{
    readonly conventionsMarkdown: string;
} & {
    readonly repositoryName: string;
}, {
    readonly projectOverview: string;
} & {
    readonly techStack: string;
} & {
    readonly architecture: string;
} & {
    readonly codeStyle: string;
} & {
    readonly antiPatternsAndRestrictions: string;
} & {
    readonly databaseAndState: string;
} & {
    readonly errorHandlingAndLogging: string;
} & {
    readonly testingCommands: string;
} & {
    readonly testingGuidelines: string;
} & {
    readonly securityAndCompliance: string;
} & {
    readonly dependenciesAndEnvironment: string;
} & {
    readonly prAndGitRules: string;
} & {
    readonly documentationStandards: string;
} & {
    readonly commonPatterns: string;
} & {
    readonly agentWorkflow: string;
} & {
    readonly fewShotExamples: string;
}>;
package/dist/prompts.js
ADDED
@@ -0,0 +1,70 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.EXTRACT_AGENTS_SECTIONS_SIGNATURE = exports.COMPILE_CONVENTIONS_SIGNATURE = exports.CODEBASE_ANALYZER_IDENTITY = exports.CODEBASE_ANALYSIS_SIGNATURE = void 0;
const ax_1 = require("@ax-llm/ax");
exports.CODEBASE_ANALYSIS_SIGNATURE = (0, ax_1.f)()
    .input("sourceContext", ax_1.f.string())
    .output("projectOverview", ax_1.f.string("Project Overview & Context: Exhaustively describe all primary sub-services, their purpose, and what languages or frameworks power them."))
    .output("agentPersona", ax_1.f.string("Agent Persona / Role"))
    .output("techStack", ax_1.f.string("Tech Stack & Versions: List EVERY distinct language, library, database, and external API dependency used. WARNING: Do NOT guess frameworks based on assumptions or the presence of a package.json. You must explicitly scan the actual code files (e.g. imports in .py, .ts, .js) and dependency manifests (e.g. requirements.txt, go.mod) to determine the EXACT tech stack."))
    .output("directoryStructure", ax_1.f.string("Directory Structure (The Map): Deeply map out all root folders, separating different sub-projects (frontend vs backend) and their specific structures."))
    .output("executionCommands", ax_1.f.string("Execution Commands: Exact terminal commands to run or build."))
    .output("codeStyleAndFormatting", ax_1.f.string("Code Style & Formatting: Language-specific formatting and strictly enforced linting rules."))
    .output("architectureAndDesignPatterns", ax_1.f.string("Architecture & Design Patterns: Detailed cross-service logical flow, API boundaries, and system design logic."))
    .output("antiPatternsAndRestrictions", ax_1.f.string("Anti-Patterns & Restrictions"))
    .output("dependencyManagement", ax_1.f.string("Dependency Management"))
    .output("stateManagementGuidelines", ax_1.f.string("State Management Guidelines"))
    .output("databaseAndDataHandling", ax_1.f.string("Database & Data Handling"))
    .output("errorHandlingAndLogging", ax_1.f.string("Error Handling & Logging"))
    .output("testingStrategy", ax_1.f.string("Testing Strategy"))
    .output("securityAndCompliance", ax_1.f.string("Security & Compliance"))
    .output("gitAndVersionControl", ax_1.f.string("Git & Version Control"))
    .output("documentationStandards", ax_1.f.string("Documentation Standards"))
    .output("agentWorkflow", ax_1.f.string("Agent Workflow / SOP"))
    .output("fewShotExamples", ax_1.f.string("Few-Shot Examples: Specific code snippets showing 'how to do X correctly'."))
    .build();
exports.CODEBASE_ANALYZER_IDENTITY = {
    name: "CodebaseAnalyzer",
    description: "A hyper-detailed technical architect generating strict developer manifests. You must analyze the structural backbone, data flow, and day-to-day coding conventions of the application using recursive analysis of the source code. NEVER hallucinate frameworks; always verify by scanning actual source imports and dependency files.",
};
exports.COMPILE_CONVENTIONS_SIGNATURE = (0, ax_1.f)()
    .input("projectOverview", ax_1.f.string("Project Overview & Context."))
    .input("agentPersona", ax_1.f.string("Agent Persona / Role."))
    .input("techStack", ax_1.f.string("Tech Stack & Versions."))
    .input("directoryStructure", ax_1.f.string("Directory Structure (The Map)."))
    .input("executionCommands", ax_1.f.string("Execution Commands."))
    .input("codeStyleAndFormatting", ax_1.f.string("Code Style & Formatting."))
    .input("architectureAndDesignPatterns", ax_1.f.string("Architecture & Design Patterns."))
    .input("antiPatternsAndRestrictions", ax_1.f.string("Anti-Patterns & Restrictions."))
    .input("dependencyManagement", ax_1.f.string("Dependency Management."))
    .input("stateManagementGuidelines", ax_1.f.string("State Management Guidelines."))
    .input("databaseAndDataHandling", ax_1.f.string("Database & Data Handling."))
    .input("errorHandlingAndLogging", ax_1.f.string("Error Handling & Logging."))
    .input("testingStrategy", ax_1.f.string("Testing Strategy."))
    .input("securityAndCompliance", ax_1.f.string("Security & Compliance."))
    .input("gitAndVersionControl", ax_1.f.string("Git & Version Control."))
    .input("documentationStandards", ax_1.f.string("Documentation Standards."))
    .input("agentWorkflow", ax_1.f.string("Agent Workflow / SOP."))
    .input("fewShotExamples", ax_1.f.string("Few-Shot Examples."))
    .output("markdownDocument", ax_1.f.string("Comprehensive CODEBASE_CONVENTIONS.md document formatted with clear headings, bullet points, and specific code/file snippets as evidence."))
    .build();
exports.EXTRACT_AGENTS_SECTIONS_SIGNATURE = (0, ax_1.f)()
    .input("conventionsMarkdown", ax_1.f.string("The extracted architectural, data flow, and granular coding conventions"))
    .input("repositoryName", ax_1.f.string("The name of the repository or project"))
    .output("projectOverview", ax_1.f.string("Comprehensive description of the project: what it does, its tech stack, its primary languages, and its overall purpose and functionality."))
    .output("techStack", ax_1.f.string("Explicit and exhaustive list of supported languages, frameworks, UI libraries, backend runtimes, and tools used in the repository. Annotate what each technology is used for (e.g., 'X Framework for UI', 'Y Language for REST Services')."))
    .output("architecture", ax_1.f.string("Deep mapping of where things live: directory layout, key modules, entry points, and their responsibilities. You MUST generate an ASCII diagram showing the architecture, module relationships, and sub-services. Break down complex monorepos explicitly (e.g., separating frontend code vs backend code)."))
    .output("codeStyle", ax_1.f.string("Granular coding standards observed: language version, formatting, naming conventions, import ordering, type-hinting rules, preferred patterns vs anti-patterns. Explicitly mention how different stacks in a monorepo communicate (e.g., REST, GraphQL, etc.) and how proprietary or 3rd-party external APIs are wrapped or invoked. Provide concrete examples from the codebase. All code blocks must be properly opened AND closed with triple backticks."))
    .output("antiPatternsAndRestrictions", ax_1.f.string("Specific anti-patterns and 'NEVER do this' rules the AI must strictly avoid."))
    .output("databaseAndState", ax_1.f.string("Guidelines on how data and state should flow through the application, including databases, external API data syncing, or state managers."))
    .output("errorHandlingAndLogging", ax_1.f.string("Conventions for handling exceptions and formatting logs, highlighting any specific utilities to use."))
    .output("testingCommands", ax_1.f.string("Exact CLI commands to build, lint, test, and run the project. Include per-file test commands if available. Format as a bullet list of runnable commands. All code blocks must be properly opened AND closed with triple backticks."))
    .output("testingGuidelines", ax_1.f.string("How tests should be written in this project: framework used, file placement conventions, naming patterns, mocking strategies, and coverage expectations. All code blocks must be properly opened AND closed with triple backticks."))
    .output("securityAndCompliance", ax_1.f.string("Strict security guardrails, such as rules against exposing secrets or logging PII."))
    .output("dependenciesAndEnvironment", ax_1.f.string("How to install dependencies, required environment variables, external service setup, and supported runtime versions."))
    .output("prAndGitRules", ax_1.f.string("Commit message format, branch naming conventions, required checks before merging, and any PR review policies observed in the codebase."))
    .output("documentationStandards", ax_1.f.string("Standards for writing docstrings, comments, and updating system/user documentation."))
    .output("commonPatterns", ax_1.f.string("Recurring design patterns, error handling idioms, logging conventions, and strict 'ALWAYS do X / NEVER do Y' rules observed across the codebase. All code blocks must be properly opened AND closed with triple backticks."))
    .output("agentWorkflow", ax_1.f.string("Standard Operating Procedure (SOP) for how the AI should approach generic or specific tasks in this codebase."))
    .output("fewShotExamples", ax_1.f.string("Concrete 'Good' vs 'Bad' code snippets to perfectly align the agent via demonstration. Provide detailed examples of standard implementation paths. All code blocks must be properly opened AND closed with triple backticks."))
    .build();
package/dist/utils.d.ts
ADDED
@@ -0,0 +1,41 @@
export type TreeType = string | {
    [key: string]: TreeType;
};
/**
 * Recursively load the source tree into a nested dictionary,
 * skipping ignored directories and unsupported extensions to save LLM context.
 */
export declare function loadSourceTree(rootDir: string): {
    [key: string]: TreeType;
};
/**
 * Clones a Git repository to the specified destination handling child_process execution.
 */
export declare function cloneRepo(repoUrl: string, destDir: string): void;
/**
 * Saves compiled AGENTS.md back to disk on a standardized path.
 */
export declare function saveAgentsToDisk(repoName: string, agentsContent: string, baseDir?: string): void;
export interface AgentsMdSections {
    projectOverview?: string;
    agentPersona?: string;
    techStack?: string;
    architecture?: string;
    codeStyle?: string;
    antiPatternsAndRestrictions?: string;
    databaseAndState?: string;
    errorHandlingAndLogging?: string;
    testingCommands?: string;
    testingGuidelines?: string;
    securityAndCompliance?: string;
    dependenciesAndEnvironment?: string;
    prAndGitRules?: string;
    documentationStandards?: string;
    commonPatterns?: string;
    agentWorkflow?: string;
    fewShotExamples?: string;
}
/**
 * Joins evaluated section variables together into the uniform AGENTS.md map.
 */
export declare function compileAgentsMd(sections: AgentsMdSections, repoName: string): string;
package/dist/utils.js
ADDED
@@ -0,0 +1,173 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    var desc = Object.getOwnPropertyDescriptor(m, k);
    if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
        desc = { enumerable: true, get: function() { return m[k]; } };
    }
    Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
    if (k2 === undefined) k2 = k;
    o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
    Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
    o["default"] = v;
});
var __importStar = (this && this.__importStar) || (function () {
    var ownKeys = function(o) {
        ownKeys = Object.getOwnPropertyNames || function (o) {
            var ar = [];
            for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
            return ar;
        };
        return ownKeys(o);
    };
    return function (mod) {
        if (mod && mod.__esModule) return mod;
        var result = {};
        if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
        __setModuleDefault(result, mod);
        return result;
    };
})();
Object.defineProperty(exports, "__esModule", { value: true });
exports.loadSourceTree = loadSourceTree;
exports.cloneRepo = cloneRepo;
exports.saveAgentsToDisk = saveAgentsToDisk;
exports.compileAgentsMd = compileAgentsMd;
const fs = __importStar(require("fs"));
const path = __importStar(require("path"));
const child_process_1 = require("child_process");
const constants_1 = require("./constants");
/**
 * Recursively load the source tree into a nested dictionary,
 * skipping ignored directories and unsupported extensions to save LLM context.
 */
function loadSourceTree(rootDir) {
    const tree = {};
    if (!fs.existsSync(rootDir))
        return tree;
    const entries = fs.readdirSync(rootDir);
    for (const entry of entries) {
        if (constants_1.IGNORED_DIRS.has(entry) || entry === ".git" || entry === ".DS_Store") {
            continue;
        }
        const fullPath = path.join(rootDir, entry);
        let stat;
        try {
            stat = fs.statSync(fullPath);
        }
        catch {
            console.log(`Failed to parse file: ${fullPath}, skipping.`);
            continue;
        }
        if (stat.isDirectory()) {
            if (!entry.startsWith(".")) {
                const subTree = loadSourceTree(fullPath);
                if (Object.keys(subTree).length > 0) {
                    tree[entry] = subTree;
                }
            }
        }
        else {
            const ext = path.extname(entry).toLowerCase();
            // Allow specific files even without standard extension
            if (!constants_1.ALLOWED_EXTENSIONS.has(ext) && entry !== "Dockerfile" && entry !== "Makefile")
                continue;
            try {
                const content = fs.readFileSync(fullPath, "utf-8");
                if (content.length < 500000) {
                    tree[entry] = content;
                }
                else {
                    console.warn(`File ${fullPath} skipped due to being too large (${content.length} chars)`);
                }
            }
            catch (error) {
                console.warn(`File ${fullPath} skipped due to read/encoding issues: ${error.message}`);
            }
        }
    }
    // Hoist dependency manifests to the top of the tree so the AI reads them first
    const hoisted = {};
    const rest = {};
    for (const [key, value] of Object.entries(tree)) {
        if (constants_1.DEPENDENCY_MANIFESTS.has(key)) {
            hoisted[key] = value;
        }
        else {
            rest[key] = value;
        }
    }
    return { ...hoisted, ...rest };
}
/**
 * Clones a Git repository to the specified destination handling child_process execution.
 */
function cloneRepo(repoUrl, destDir) {
    console.log(`Cloning ${repoUrl} into ${destDir}...`);
    try {
        (0, child_process_1.execSync)(`git clone --depth 1 ${repoUrl} ${destDir}`, { stdio: "pipe" });
    }
    catch (error) {
        console.error(`Failed to clone repository: ${error.message}`);
        if (error.stderr)
            console.error(error.stderr.toString());
        throw new Error("Git clone failed", { cause: error });
    }
}
/**
 * Saves compiled AGENTS.md back to disk on a standardized path.
 */
function saveAgentsToDisk(repoName, agentsContent, baseDir = "projects") {
    let cleanContent = agentsContent.trim();
    // Strip surrounding markdown code blocks if the LLM wrapped it completely
    cleanContent = cleanContent.replace(/^```(?:markdown)?\s*\n/, "");
    cleanContent = cleanContent.replace(/```\s*$/, "");
    cleanContent = cleanContent.trim();
    const folderName = repoName.toLowerCase().replace(/\s+/g, "-");
    const targetDir = path.join(baseDir, folderName);
    fs.mkdirSync(targetDir, { recursive: true });
    const filePath = path.join(targetDir, "AGENTS.md");
    try {
        fs.writeFileSync(filePath, cleanContent, "utf-8");
        console.log(`✅ Successfully saved AGENTS.md to: ${filePath}`);
    }
    catch (error) {
        console.error(`Failed to save AGENTS.md to ${filePath}: ${error.message}`);
    }
}
const AGENTS_SECTION_HEADINGS = [
    ["projectOverview", "Project Overview"],
    ["agentPersona", "Agent Persona"],
    ["techStack", "Tech Stack"],
    ["architecture", "Architecture"],
    ["codeStyle", "Code Style"],
    ["antiPatternsAndRestrictions", "Anti-Patterns & Restrictions"],
    ["databaseAndState", "Database & State Management"],
    ["errorHandlingAndLogging", "Error Handling & Logging"],
    ["testingCommands", "Testing Commands"],
    ["testingGuidelines", "Testing Guidelines"],
    ["securityAndCompliance", "Security & Compliance"],
    ["dependenciesAndEnvironment", "Dependencies & Environment"],
    ["prAndGitRules", "PR & Git Rules"],
    ["documentationStandards", "Documentation Standards"],
    ["commonPatterns", "Common Patterns"],
    ["agentWorkflow", "Agent Workflow / SOP"],
    ["fewShotExamples", "Few-Shot Examples"],
];
/**
 * Joins evaluated section variables together into the uniform AGENTS.md map.
 */
function compileAgentsMd(sections, repoName) {
    const parts = [`# AGENTS.md — ${repoName}\n`];
    for (const [key, heading] of AGENTS_SECTION_HEADINGS) {
        const content = sections[key];
        if (content && content.trim() !== "") {
            parts.push(`## ${heading}\n\n${content.trim()}\n`);
        }
    }
    return parts.join("\n");
}
package/package.json
ADDED
@@ -0,0 +1,66 @@
{
  "name": "@thelogicatelier/sylva",
  "version": "1.0.4",
  "description": "Auto-generate AGENTS.md for your repository using Ax-LLM. Analyze the structural backbone, data flow, and day-to-day coding conventions natively.",
  "main": "dist/index.js",
  "bin": {
    "sylva": "dist/cli.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "start": "ts-node src/cli.ts",
    "build": "tsc",
    "test": "vitest run",
    "test:watch": "vitest",
    "lint": "eslint src/ tests/src/",
    "lint:fix": "eslint src/ tests/src/ --fix",
    "format": "prettier --write \"src/**/*.ts\" \"tests/src/**/*.ts\"",
    "prepare": "husky",
    "docs:build": "npx honkit@5.1.4 build docs docs-site",
    "docs:serve": "npx honkit@5.1.4 serve docs",
    "prepublishOnly": "npm run build"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/achatt89/sylva.git"
  },
  "keywords": [
    "agents",
    "llm",
    "documentation",
    "cli",
    "automation",
    "ai",
    "typescript",
    "ax-llm"
  ],
  "author": "Abhijit Chatterjee",
  "license": "MIT",
  "type": "commonjs",
  "bugs": {
    "url": "https://github.com/achatt89/sylva/issues"
  },
  "publishConfig": {
    "access": "public"
  },
  "homepage": "https://achatt89.github.io/sylva/",
  "dependencies": {
    "@ax-llm/ax": "^19.0.2",
    "commander": "^14.0.3",
    "dotenv": "^17.3.1",
    "ts-node": "^10.9.2",
    "typescript": "^5.9.3",
    "vitest": "^4.0.18"
  },
  "devDependencies": {
    "@eslint/js": "^10.0.1",
    "@types/node": "^25.3.0",
    "eslint": "^10.0.2",
    "eslint-config-prettier": "^10.1.8",
    "husky": "^9.1.7",
    "prettier": "^3.8.1",
    "typescript-eslint": "^8.56.1"
  }
}