@iamharshil/cortex 5.0.0-beta.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +259 -0
- package/bin/index.js +2 -0
- package/dist/bin/index.js +783 -0
- package/dist/index.js +92 -0
- package/package.json +76 -0
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 Harshil
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,259 @@
+# Cortex
+
+**The ultimate local AI coding agent - context-aware, memory-powered, MCP-enabled**
+
+[npm version](https://www.npmjs.com/package/@iamharshil/cortex)
+[MIT license](LICENSE)
+[Node.js ≥ 20](https://nodejs.org/)
+
+---
+
+## What is Cortex?
+
+Cortex is an autonomous AI coding agent that runs entirely locally on your machine. It combines the best features from Claude Code, OpenCode, PI, and GitHub Copilot while prioritizing **token optimization** and **robustness**.
+
+```
+┌──────────────────────────────────────────────────┐
+│ $ cortex run "build a todo app"                 │
+│                                                  │
+│ 🔍 Analyzing project...                          │
+│ 📝 Creating plan...                              │
+│ ✨ cortex/src/index.ts                           │
+│ ✨ cortex/src/agent/engine.ts                    │
+│                                                  │
+│ Your code stays local. Always.                   │
+└──────────────────────────────────────────────────┘
+```
+
+### Key Features
+
+- 🔒 **Privacy-first** — All inference runs locally, no data leaves your machine
+- 🧠 **Smart Context** — Token-optimized context management with auto-compaction
+- 💾 **Persistent Memory** — Project (CORTEX.md) and user memory across sessions
+- 🔌 **MCP Support** — Connect to Model Context Protocol servers
+- 🎯 **Plan Mode** — Supervised autonomy with plan previews
+- 🔧 **7 Primitives** — Minimal tools: Read, Write, Edit, Bash, Glob, Grep, TodoWrite
+
+---
+
+## Getting Started
+
+### Prerequisites
+
+| Requirement | Description |
+| -------------------------------------------------------------------- | ------------------ |
+| [Node.js](https://nodejs.org/) ≥ 20 | JavaScript runtime |
+| [LM Studio](https://lmstudio.ai) **or** [Ollama](https://ollama.com) | Local model server |
+
+### Install
+
+```bash
+npm install -g @iamharshil/cortex
+```
+
+### Initialize
+
+```bash
+cortex init
+```
+
+This creates:
+
+- `.cortex/config.json` — Provider configuration
+- `.cortex/mcp.json` — MCP server configuration
+- `CORTEX.md` — Project memory file
+
+---
+
+## Usage
+
+### Run a Task
+
+```bash
+# Simple task
+cortex run "create a hello world function"
+
+# With specific provider
+cortex run "refactor auth" --provider ollama --model llama3.2
+
+# Plan mode - shows plan before executing
+cortex run "migrate to typescript" --plan-mode
+```
+
+### Interactive Chat
+
+```bash
+cortex chat
+```
+
+### Check Status
+
+```bash
+cortex status
+cortex models --provider ollama
+```
+
+### Configuration
+
+```bash
+cortex setup --provider ollama --model llama3.2
+```
+
+---
+
+## Configuration
+
+Cortex stores configuration in:
+
+| Platform | Path |
+| -------- | ------------------- |
+| macOS | `~/.cortex/` |
+| Linux | `~/.cortex/` |
+| Windows | `%APPDATA%\cortex\` |
+
+### config.json
+
+```json
+{
+  "provider": "ollama",
+  "model": "llama3.2",
+  "url": "http://localhost:11434"
+}
+```
+
+### MCP Configuration
+
+```json
+{
+  "servers": {
+    "filesystem": {
+      "command": "npx",
+      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./"]
+    }
+  }
+}
+```
+
+---
+
+## Memory System
+
+### Project Memory (CORTEX.md)
+
+Create a `CORTEX.md` file in your project root:
+
+```markdown
+# Cortex Project Memory
+
+## Project Overview
+
+- React-based todo app
+- TypeScript, Vite
+
+## Coding Conventions
+
+- Functional components
+- CSS modules for styling
+
+## Testing
+
+- Vitest for unit tests
+```
+
+### User Memory (~/.cortex/memory.md)
+
+Global preferences and context loaded for all projects.
+
+---
+
+## Token Optimization
+
+Cortex is designed for **minimum token usage**:
+
+| Component | Target |
+| ------------------ | ----------------- |
+| System Prompt | ~3K tokens |
+| Tool Definitions | ~5K tokens (lazy) |
+| Project Memory | ~5K tokens |
+| Available for Work | **180K+ tokens** |
+
+### Auto-Compaction
+
+When context reaches 95%, Cortex automatically:
+
+1. Summarizes conversation history
+2. Preserves key information (file paths, conclusions)
+3. Clears old tool outputs
+
+### Manual Control
+
+```bash
+# Check token usage
+/context
+
+# Manual compaction
+/compact preserve file paths and current task
+```
+
+---
+
+## Providers
+
+### Local (Default)
+
+| Provider | Default Port | URL |
+| --------- | ------------ | -------------------------- |
+| Ollama | 11434 | `http://localhost:11434` |
+| LM Studio | 1234 | `http://localhost:1234/v1` |
+
+### Cloud (Future)
+
+- OpenRouter (Coming soon)
+- Anthropic (Coming soon)
+- OpenAI (Coming soon)
+- Gemini (Coming soon)
+
+---
+
+## Development
+
+```bash
+# Clone and setup
+git clone https://github.com/iamharshil/cortex.git
+cd cortex
+npm install
+
+# Development
+npm run dev
+
+# Build
+npm run build
+
+# Test
+npm test
+npm run lint
+npm run typecheck
+```
+
+---
+
+## Architecture
+
+Cortex follows the **"Less Scaffolding, More Model"** philosophy (inspired by PI):
+
+1. **Minimal Primitives** — Only 7 core tools, trust the model to orchestrate
+2. **Token Efficiency** — Lazy loading, smart truncation, auto-compaction
+3. **Local First** — Privacy, no cloud dependencies
+4. **MCP Extensible** — Connect to any external service
+
+---
+
+## License
+
+[MIT](LICENSE) © [Harshil](https://github.com/iamharshil)
+
+---
+
+<div align="center">
+  <sub>Built for developers who care about privacy and efficiency.</sub>
+</div>
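The token budgets in the README above rest on a rough four-characters-per-token estimate, which is also the `getTokenizer()` heuristic used by the bundled provider classes further down in this diff. A minimal sketch of that arithmetic (the helper names `estimateTokens` and `fitsInBudget` are illustrative, not exports of the package):

```javascript
// Rough token estimate used for context budgeting: ~4 characters per token.
// Mirrors the getTokenizer() heuristic in the bundled dist code; the helper
// names here are illustrative, not part of the package's public API.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Budget check against the ~180K-token working window described above.
function fitsInBudget(text, maxTokens = 180000) {
  return estimateTokens(text) <= maxTokens;
}

console.log(estimateTokens("hello world")); // 11 chars -> 3 tokens
```

Because it is a pure character count, the estimate over- or under-shoots for code-heavy or multilingual text, but it is cheap enough to run on every message.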
package/dist/bin/index.js
ADDED
@@ -0,0 +1,783 @@
+#!/usr/bin/env node
+#!/usr/bin/env node
+
+"use strict";
+var __create = Object.create;
+var __defProp = Object.defineProperty;
+var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
+var __getOwnPropNames = Object.getOwnPropertyNames;
+var __getProtoOf = Object.getPrototypeOf;
+var __hasOwnProp = Object.prototype.hasOwnProperty;
+var __esm = (fn, res) => function __init() {
+  return fn && (res = (0, fn[__getOwnPropNames(fn)[0]])(fn = 0)), res;
+};
+var __copyProps = (to, from, except, desc) => {
+  if (from && typeof from === "object" || typeof from === "function") {
+    for (let key of __getOwnPropNames(from))
+      if (!__hasOwnProp.call(to, key) && key !== except)
+        __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
+  }
+  return to;
+};
+var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps(
+  // If the importer is in node compatibility mode or this is not an ESM
+  // file that has been converted to a CommonJS file using a Babel-
+  // compatible transform (i.e. "__esModule" has not been set), then set
+  // "default" to the CommonJS "module.exports" for node compatibility.
+  isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", { value: mod, enumerable: true }) : target,
+  mod
+));
+
+// dist/index.js
+var dist_exports = {};
+function y(i) {
+  switch (i.type) {
+    case "ollama":
+      return new S(i);
+    case "lmstudio":
+      return new k(i);
+    case "openrouter":
+      return new $(i);
+    default:
+      throw new Error(`Unknown provider type: ${i.type}`);
+  }
+}
+async function I(i, e, t) {
+  let o = `call_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
+  try {
+    switch (i) {
+      case "Read": {
+        let s = e.file_path, r = e.offset || 1, n = e.limit || 1e3, c = (await (0, import_promises4.readFile)(s, "utf-8")).split(`
+`), l = c.slice(r - 1, r - 1 + n);
+        return { tool_call_id: o, output: l.join(`
+`) + (c.length > r - 1 + n ? `
+
+... ${c.length - (r - 1 + n)} more lines` : "") };
+      }
+      case "Write": {
+        let s = e.file_path, r = e.content, n = (0, import_path4.dirname)(s);
+        return await V(n, { recursive: true }), await (0, import_promises4.writeFile)(s, r, "utf-8"), { tool_call_id: o, output: `File written: ${s}` };
+      }
+      case "Edit": {
+        let s = e.file_path, r = e.old_string, n = e.new_string, a = await (0, import_promises4.readFile)(s, "utf-8");
+        if (!a.includes(r)) return { tool_call_id: o, output: "Error: Could not find the specified text to replace. The old_string must match exactly.", is_error: true };
+        let c = a.replace(r, n);
+        return await (0, import_promises4.writeFile)(s, c, "utf-8"), { tool_call_id: o, output: `File edited: ${s}` };
+      }
+      case "Bash": {
+        let s = e.command, r = e.description || "", { stdout: n, stderr: a, exitCode: c } = await (0, import_execa.execa)(s, { shell: true, cwd: t, timeout: 12e4 }), l = a ? `${n}
+${a}` : n;
+        if (c !== 0 && !n) return { tool_call_id: o, output: `Command failed with exit code ${c}: ${s}
+${a}`, is_error: true };
+        let u = l.length > 15e3 ? l.slice(0, 15e3) + `
+
+... output truncated (${l.length} chars)` : l;
+        return { tool_call_id: o, output: u || `Command executed successfully (exit code: ${c})` };
+      }
+      case "Glob": {
+        let s = e.pattern, r = e.path || ".", n = (0, import_path4.resolve)(t, r), a = await (0, import_fast_glob.default)(s, { cwd: n, absolute: true, onlyFiles: true });
+        return { tool_call_id: o, output: a.length > 0 ? a.join(`
+`) : "No files found" };
+      }
+      case "Grep": {
+        let s = e.pattern, r = e.path || ".", n = e.include, a = (0, import_path4.resolve)(t, r), c = await (0, import_fast_glob.default)(n || "**/*", { cwd: a, onlyFiles: true }), l = [];
+        for (let w of c.slice(0, 100)) try {
+          let _ = (await (0, import_promises4.readFile)(w, "utf-8")).split(`
+`);
+          for (let f = 0; f < _.length; f++) new RegExp(s, "i").test(_[f]) && l.push(`${w}:${f + 1}: ${_[f]}`);
+        } catch {
+        }
+        let u = l.length > 100 ? l.slice(0, 100).concat([`... ${l.length - 100} more matches`]) : l;
+        return { tool_call_id: o, output: u.length > 0 ? u.join(`
+`) : "No matches found" };
+      }
+      case "TodoWrite": {
+        let s = e.todos;
+        return { tool_call_id: o, output: `Todo list updated with ${s.length} items:
+${s.map((r, n) => `${n + 1}. [${r.status}] ${r.content}`).join(`
+`)}` };
+      }
+      default:
+        return { tool_call_id: o, output: `Unknown tool: ${i}`, is_error: true };
+    }
+  } catch (s) {
+    return { tool_call_id: o, output: `Error executing ${i}: ${s instanceof Error ? s.message : String(s)}`, is_error: true };
+  }
+}
+async function V(i, e) {
+  let { mkdirSync: t } = await import("fs");
+  t(i, e);
+}
+async function E() {
+  let i = [(0, import_path5.join)(process.cwd(), ".cortex", "config.json"), (0, import_path5.join)((0, import_os4.homedir)(), ".cortex", "config.json")];
+  for (let e of i) try {
+    let t = await (0, import_promises5.readFile)(e, "utf-8");
+    return JSON.parse(t);
+  } catch {
+  }
+  return {};
+}
+async function re() {
+  console.log(import_chalk.default.blue("Initializing Cortex project..."));
+  let i = (0, import_path5.join)(process.cwd(), ".cortex"), e = (0, import_path5.join)(i, "config.json"), t = (0, import_path5.join)(i, "mcp.json"), o = (0, import_path5.join)(process.cwd(), "CORTEX.md");
+  try {
+    await (0, import_promises5.mkdir)(i, { recursive: true });
+  } catch {
+  }
+  try {
+    await (0, import_promises5.access)(e), console.log(import_chalk.default.yellow("Cortex already initialized in this project"));
+    return;
+  } catch {
+  }
+  await (0, import_promises5.writeFile)(e, JSON.stringify({ provider: "ollama", model: "llama3.2" }, null, 2)), console.log(import_chalk.default.green("\u2705 Created .cortex/config.json"));
+  try {
+    await (0, import_promises5.access)(o);
+  } catch {
+    await (0, import_promises5.writeFile)(o, `# Cortex Project Memory
+
+## Project Overview
+<!-- Fill in your project description -->
+
+## Coding Conventions
+<!-- Document your coding style preferences -->
+
+## Testing Preferences
+<!-- Document how tests should be written -->
+
+## Architecture Notes
+<!-- Document important architectural decisions -->
+`), console.log(import_chalk.default.green("\u2705 Created CORTEX.md"));
+  }
+  await (0, import_promises5.writeFile)(t, JSON.stringify({ servers: {} }, null, 2)), console.log(import_chalk.default.green("\u2705 Created .cortex/mcp.json")), console.log(import_chalk.default.green(`
+\u{1F389} Cortex initialized! Run "cortex" to start coding.`));
+}
+async function ie(i, e) {
+  let t = await E(), o = e.provider || t.provider || "ollama", s = e.model || t.model, r = e.url || t.url, n = e.planMode || t.planMode, a = { provider: o, model: s, providerUrl: r, planMode: n }, c = new T(a, process.cwd());
+  try {
+    await c.initialize(), await c.run(i);
+    let l = c.getContextInfo();
+    console.log(import_chalk.default.dim(`
+\u{1F4CA} Session: ${l.messages} messages, ${l.tokens} tokens`));
+  } catch (l) {
+    console.error(import_chalk.default.red(`
+\u274C Error: ${l instanceof Error ? l.message : String(l)}`)), process.exit(1);
+  } finally {
+    await c.cleanup();
+  }
+}
+async function ae() {
+  console.log(import_chalk.default.blue(`\u{1F50D} Checking provider status...
+`));
+  let i = await E(), e = i.provider || "ollama";
+  try {
+    let t = y({ type: e, baseUrl: i.url });
+    console.log(import_chalk.default.green(`\u2705 Provider: ${t.name}`));
+    let o = await t.listModels();
+    if (o.length > 0) {
+      console.log(import_chalk.default.cyan(`
+\u{1F4E6} Available models:`));
+      for (let s of o.slice(0, 10)) console.log(`  - ${s}`);
+      o.length > 10 && console.log(import_chalk.default.dim(`  ... and ${o.length - 10} more`));
+    } else console.log(import_chalk.default.yellow("\u26A0\uFE0F No models found. Make sure your provider is running."));
+  } catch (t) {
+    console.error(import_chalk.default.red(`\u274C Provider error: ${t instanceof Error ? t.message : String(t)}`));
+  }
+}
+async function ce(i) {
+  console.log(import_chalk.default.blue(`\u2699\uFE0F Setting up Cortex...
+`));
+  let e = (0, import_path5.join)(process.cwd(), ".cortex", "config.json"), t = {};
+  try {
+    let s = await (0, import_promises5.readFile)(e, "utf-8");
+    t = JSON.parse(s);
+  } catch {
+  }
+  let o = { provider: i.provider || t.provider || "ollama", model: i.model || t.model || "llama3.2", url: t.url || "http://localhost:11434" };
+  await (0, import_promises5.writeFile)(e, JSON.stringify(o, null, 2)), console.log(import_chalk.default.green("\u2705 Configuration saved!")), console.log(import_chalk.default.dim(`  Provider: ${o.provider}`)), console.log(import_chalk.default.dim(`  Model: ${o.model}`));
+}
+async function le(i) {
+  let e = await E(), t = i.provider || e.provider || "ollama";
+  try {
+    let s = await y({ type: t, baseUrl: e.url }).listModels();
+    if (s.length === 0) {
+      console.log(import_chalk.default.yellow("No models available"));
+      return;
+    }
+    console.log(import_chalk.default.cyan(`Available models on ${t}:`));
+    for (let r of s) console.log(`  ${r}`);
+  } catch (o) {
+    console.error(import_chalk.default.red(`Error: ${o instanceof Error ? o.message : String(o)}`));
+  }
+}
+function v(i = 0) {
+  console.log(), console.log(import_chalk.default.dim(i === 0 ? "\u{1F44B} Goodbye!" : "\u274C Cancelled.")), process.exit(i);
+}
+var import_commander, import_chalk, import_chalk2, import_promises, import_path, import_os, import_path2, import_os2, import_promises2, import_promises3, import_path3, import_os3, import_execa, import_promises4, import_path4, import_fast_glob, import_execa2, import_events, import_promises5, import_path5, import_os4, S, k, $, G, x, W, C, U, P, T, p, me;
+var init_dist = __esm({
+  "dist/index.js"() {
+    "use strict";
+    import_commander = require("commander");
+    import_chalk = __toESM(require("chalk"), 1);
+    import_chalk2 = __toESM(require("chalk"), 1);
+    import_promises = require("fs/promises");
+    import_path = require("path");
+    import_os = require("os");
+    import_path2 = require("path");
+    import_os2 = require("os");
+    import_promises2 = require("fs/promises");
+    import_promises3 = require("fs/promises");
+    import_path3 = require("path");
+    import_os3 = require("os");
+    import_execa = require("execa");
+    import_promises4 = require("fs/promises");
+    import_path4 = require("path");
+    import_fast_glob = __toESM(require("fast-glob"), 1);
+    import_execa2 = require("execa");
+    import_events = require("events");
+    import_promises5 = require("fs/promises");
+    import_path5 = require("path");
+    import_os4 = require("os");
+    S = class {
+      name = "ollama";
+      baseUrl;
+      defaultModel;
+      timeout;
+      constructor(e) {
+        this.baseUrl = e.baseUrl || "http://localhost:11434", this.defaultModel = e.defaultModel || "llama3.2", this.timeout = e.timeout || 12e4;
+      }
+      async chat(e, t, o) {
+        let r = { model: o || this.defaultModel, messages: this.formatMessages(e), stream: false };
+        t && t.length > 0 && (r.tools = t.map((n) => ({ type: "function", function: { name: n.name, description: n.description, parameters: n.input_schema } })));
+        try {
+          let n = await fetch(`${this.baseUrl}/api/chat`, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(r), signal: AbortSignal.timeout(this.timeout) });
+          if (!n.ok) {
+            let c = await n.text();
+            throw new Error(`Ollama API error: ${n.status} - ${c}`);
+          }
+          let a = await n.json();
+          return { content: a.message?.content || "", tool_calls: a.message?.tool_calls?.map((c) => ({ id: c.id, type: "function", function: { name: c.function.name, arguments: c.function.arguments } })), stop_reason: a.done ? "stop" : void 0 };
+        } catch (n) {
+          throw n instanceof Error && n.name === "TimeoutError" ? new Error(`Ollama request timed out after ${this.timeout}ms`) : n;
+        }
+      }
+      async listModels() {
+        try {
+          let e = await fetch(`${this.baseUrl}/api/tags`);
+          return e.ok ? (await e.json()).models?.map((o) => o.name) || [] : [];
+        } catch {
+          return [];
+        }
+      }
+      getTokenizer() {
+        return (e) => Math.ceil(e.length / 4);
+      }
+      formatMessages(e) {
+        return e.map((t) => ({ role: t.role === "tool" ? "user" : t.role, content: t.content }));
+      }
+    };
+    k = class {
+      name = "lmstudio";
+      baseUrl;
+      defaultModel;
+      timeout;
+      constructor(e) {
+        this.baseUrl = e.baseUrl || "http://localhost:1234/v1", this.defaultModel = e.defaultModel || "llama-3.1-8b", this.timeout = e.timeout || 12e4;
+      }
+      async chat(e, t, o) {
+        let r = { model: o || this.defaultModel, messages: this.formatMessages(e), stream: false };
+        t && t.length > 0 && (r.tools = t.map((n) => ({ type: "function", function: { name: n.name, description: n.description, parameters: n.input_schema } })));
+        try {
+          let n = await fetch(`${this.baseUrl}/chat/completions`, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(r), signal: AbortSignal.timeout(this.timeout) });
+          if (!n.ok) {
+            let l = await n.text();
+            throw new Error(`LM Studio API error: ${n.status} - ${l}`);
+          }
+          let c = (await n.json()).choices?.[0];
+          return { content: c?.message?.content || "", tool_calls: c?.message?.tool_calls?.map((l) => ({ id: l.id, type: "function", function: { name: l.function.name, arguments: l.function.arguments } })) };
+        } catch (n) {
+          throw n instanceof Error && n.name === "TimeoutError" ? new Error(`LM Studio request timed out after ${this.timeout}ms`) : n;
+        }
+      }
+      async listModels() {
+        try {
+          let e = await fetch(`${this.baseUrl.replace("/v1", "")}/api/v0/models`);
+          return e.ok ? (await e.json()).data?.map((o) => o.id) || [] : [];
+        } catch {
+          return [];
+        }
+      }
+      getTokenizer() {
+        return (e) => Math.ceil(e.length / 4);
+      }
+      formatMessages(e) {
+        return e.map((t) => ({ role: t.role === "tool" ? "user" : t.role, content: t.content }));
+      }
+    };
+    $ = class {
+      name = "openrouter";
+      apiKey;
+      baseUrl;
+      defaultModel;
+      timeout;
+      constructor(e) {
+        this.apiKey = e.apiKey, this.baseUrl = e.baseUrl || "https://openrouter.ai/api/v1", this.defaultModel = e.defaultModel || "anthropic/claude-3.5-sonnet", this.timeout = e.timeout || 12e4;
+      }
+      async chat(e, t, o) {
+        let r = { model: o || this.defaultModel, messages: this.formatMessages(e) };
+        t && t.length > 0 && (r.tools = t.map((n) => ({ type: "function", function: { name: n.name, description: n.description, parameters: n.input_schema } })));
+        try {
+          let n = await fetch(`${this.baseUrl}/chat/completions`, { method: "POST", headers: { "Content-Type": "application/json", Authorization: `Bearer ${this.apiKey}`, "HTTP-Referer": "https://cortex.dev", "X-Title": "Cortex" }, body: JSON.stringify(r), signal: AbortSignal.timeout(this.timeout) });
+          if (!n.ok) {
+            let l = await n.text();
+            throw new Error(`OpenRouter API error: ${n.status} - ${l}`);
+          }
+          let c = (await n.json()).choices?.[0];
+          return { content: c?.message?.content || "", tool_calls: c?.message?.tool_calls?.map((l) => ({ id: l.id, type: "function", function: { name: l.function.name, arguments: l.function.arguments } })), stop_reason: c?.finish_reason };
+        } catch (n) {
+          throw n instanceof Error && n.name === "TimeoutError" ? new Error(`OpenRouter request timed out after ${this.timeout}ms`) : n;
+        }
+      }
+      async listModels() {
+        try {
+          let e = await fetch(`${this.baseUrl}/models`, { headers: { Authorization: `Bearer ${this.apiKey}` } });
+          return e.ok ? (await e.json()).data?.map((o) => o.id) || [] : [];
+        } catch {
+          return [];
+        }
+      }
+      getTokenizer() {
+        return (e) => Math.ceil(e.length / 4);
+      }
+      formatMessages(e) {
+        return e.map((t) => t.tool_calls ? { role: "assistant", content: t.content || "", tool_calls: t.tool_calls.map((o) => ({ id: o.id, type: o.type, function: { name: o.function.name, arguments: o.function.arguments } })) } : t.tool_call_id ? { role: "tool", content: t.content, tool_call_id: t.tool_call_id } : { role: t.role === "tool" ? "user" : t.role, content: t.content });
+      }
+    };
+    G = { maxTokens: 18e4, systemPromptTokens: 3e3, toolDefTokens: 5e3, warningThreshold: 0.7, autoCompactThreshold: 0.95 };
+    x = class {
+      messages = [];
+      config;
+      tokenCount = 0;
+      memoryFiles = [];
+      constructor(e = {}) {
+        this.config = { ...G, ...e };
+      }
+      getSystemPrompt() {
+        return `You are Cortex, an autonomous AI coding agent. You have access to tools to read, write, and edit files, execute shell commands, and search through the codebase.
+
+## Your Capabilities
+- Read, write, and edit files in the codebase
+- Execute shell commands
+- Search and explore the codebase
+- Plan and execute complex tasks
+- Ask clarifying questions when needed
+
+## Guidelines
+- Always verify changes before committing
+- Ask before making destructive changes
+- Use precise edits rather than overwriting entire files
+- Explain your plan before executing significant changes
+- Use the TodoWrite tool to track multi-step tasks
+
+## Working Directory
+You are working in the current directory. Use absolute paths for file operations.
+
+## Tools
+You have access to the following tools:
+${W.map((e) => `- ${e.name}: ${e.description}`).join(`
+`)}
+`;
+      }
+      addMessage(e) {
+        this.messages.push(e), this.updateTokenCount();
+      }
+      getMessages() {
+        return [...this.messages];
+      }
+      getMessagesForAPI() {
+        return [{ role: "system", content: this.getSystemPrompt() }, ...this.messages];
+      }
+      async compact(e) {
+        let t = this.summarizeConversation(e), o = this.messages.filter((r) => r.role === "user").pop(), s = this.messages.filter((r) => r.role === "assistant").pop();
+        this.messages = [], o && this.messages.push({ role: "user", content: `[Previous conversation summary]: ${t}
+
+[Continuing from previous session]
+
+${o.content}` }), s && this.messages.push({ role: "assistant", content: s.content }), this.updateTokenCount();
+      }
+      summarizeConversation(e) {
+        let t = e ? 500 : 300, o = this.messages.filter((a) => a.tool_calls).map((a) => a.tool_calls?.map((c) => c.function.name).join(", ")).join("; "), s = this.messages.filter((a) => a.role === "tool" && a.content?.includes("File edited")).length, r = this.messages.filter((a) => a.role === "tool" && a.content?.includes("File written")).length, n = `Session summary: ${s} files edited, ${r} files created`;
+        return o && (n += `. Tools used: ${o}`), n.slice(0, t);
+      }
+      getTokenUsage() {
+        let e = this.config.systemPromptTokens, t = this.config.toolDefTokens, o = this.estimateTokens(this.messagesToString()), s = this.memoryFiles.length * 500, r = e + t + o + s, n = this.config.maxTokens - r;
+        return { system: e, tools: t, messages: o, memory: s, available: n, total: this.config.maxTokens };
+      }
+      shouldCompact() {
+        let e = this.getTokenUsage();
+        return (this.config.maxTokens - e.available) / this.config.maxTokens > this.config.autoCompactThreshold;
+      }
+      shouldWarn() {
+        let e = this.getTokenUsage();
+        return (this.config.maxTokens - e.available) / this.config.maxTokens > this.config.warningThreshold;
+      }
+      async loadProjectMemory(e) {
+        let t = (0, import_path2.join)(e, "CORTEX.md");
+        try {
+          await (0, import_promises2.access)(t), this.memoryFiles.push(t);
+        } catch {
+        }
+      }
+      async loadUserMemory() {
+        let e = (0, import_os2.homedir)(), t = (0, import_path2.join)(e, ".cortex", "memory.md");
+        try {
+          await (0, import_promises2.access)(t), this.memoryFiles.push(t);
+        } catch {
+        }
+      }
+      clear() {
+        this.messages = [], this.tokenCount = 0;
+      }
+      updateTokenCount() {
+        let e = this.messagesToString();
+        this.tokenCount = this.estimateTokens(e);
+      }
+      messagesToString() {
+        return this.messages.map((e) => {
+          let t = `${e.role}: ${e.content}`;
+          return e.tool_calls && (t += `
+` + JSON.stringify(e.tool_calls)), t;
+        }).join(`
+`);
+      }
+      estimateTokens(e) {
+        return Math.ceil(e.length / 4);
+      }
+    };
+    W = [{ name: "Read", description: "Read the contents of a file from the filesystem." }, { name: "Write", description: "Create a new file or overwrite an existing file." }, { name: "Edit", description: "Make a precise edit to an existing file." }, { name: "Bash", description: "Execute a shell command in the terminal." }, { name: "Glob", description: "Find files matching a pattern." }, { name: "Grep", description: "Search for text within files." }, { name: "TodoWrite", description: "Create and manage a todo list." }];
|
|
457
|
+
C = class {
|
|
458
|
+
projectMemory = "";
|
|
459
|
+
userMemory = "";
|
|
460
|
+
sessions = [];
|
|
461
|
+
configDir;
|
|
462
|
+
projectDir = "";
|
|
463
|
+
constructor() {
|
|
464
|
+
this.configDir = (0, import_path3.join)((0, import_os3.homedir)(), ".cortex");
|
|
465
|
+
}
|
|
466
|
+
async initialize(e) {
|
|
467
|
+
this.projectDir = e, await this.ensureConfigDir(), await this.loadProjectMemory(e), await this.loadUserMemory(), await this.loadSessionHistory();
|
|
468
|
+
}
|
|
469
|
+
async ensureConfigDir() {
|
|
470
|
+
try {
|
|
471
|
+
await (0, import_promises3.mkdir)(this.configDir, { recursive: true });
|
|
472
|
+
} catch {
|
|
473
|
+
}
|
|
474
|
+
}
|
|
475
|
+
async loadProjectMemory(e) {
|
|
476
|
+
let t = (0, import_path3.join)(e, "CORTEX.md");
|
|
477
|
+
try {
|
|
478
|
+
this.projectMemory = await (0, import_promises3.readFile)(t, "utf-8");
|
|
479
|
+
} catch {
|
|
480
|
+
this.projectMemory = "";
|
|
481
|
+
}
|
|
482
|
+
}
|
|
483
|
+
async loadUserMemory() {
|
|
484
|
+
let e = (0, import_path3.join)(this.configDir, "memory.md");
|
|
485
|
+
try {
|
|
486
|
+
this.userMemory = await (0, import_promises3.readFile)(e, "utf-8");
|
|
487
|
+
} catch {
|
|
488
|
+
this.userMemory = "";
|
|
489
|
+
}
|
|
490
|
+
}
|
|
491
|
+
async loadSessionHistory() {
|
|
492
|
+
let e = (0, import_path3.join)(this.configDir, "sessions.json");
|
|
493
|
+
try {
|
|
494
|
+
let t = await (0, import_promises3.readFile)(e, "utf-8");
|
|
495
|
+
this.sessions = JSON.parse(t);
|
|
496
|
+
} catch {
|
|
497
|
+
this.sessions = [];
|
|
498
|
+
}
|
|
499
|
+
}
|
|
500
|
+
async saveSession(e) {
|
|
501
|
+
this.sessions.push(e), this.sessions.length > 50 && (this.sessions = this.sessions.slice(-50));
|
|
502
|
+
let t = (0, import_path3.join)(this.configDir, "sessions.json");
|
|
503
|
+
await (0, import_promises3.writeFile)(t, JSON.stringify(this.sessions, null, 2));
|
|
504
|
+
}
|
|
505
|
+
async updateProjectMemory(e) {
|
|
506
|
+
if (!this.projectDir) return;
|
|
507
|
+
let t = (0, import_path3.join)(this.projectDir, "CORTEX.md");
|
|
508
|
+
await (0, import_promises3.writeFile)(t, e, "utf-8"), this.projectMemory = e;
|
|
509
|
+
}
|
|
510
|
+
async updateUserMemory(e) {
|
|
511
|
+
let t = (0, import_path3.join)(this.configDir, "memory.md");
|
|
512
|
+
await (0, import_promises3.writeFile)(t, e, "utf-8"), this.userMemory = e;
|
|
513
|
+
}
|
|
514
|
+
getProjectMemory() {
|
|
515
|
+
return this.projectMemory;
|
|
516
|
+
}
|
|
517
|
+
getUserMemory() {
|
|
518
|
+
return this.userMemory;
|
|
519
|
+
}
|
|
520
|
+
getMemoryForContext() {
|
|
521
|
+
let e = [];
|
|
522
|
+
if (this.projectMemory && e.push(`## Project Memory (${this.projectDir})
|
|
523
|
+
${this.projectMemory}`), this.userMemory && e.push(`## User Memory
|
|
524
|
+
${this.userMemory}`), this.sessions.length > 0) {
|
|
525
|
+
let t = this.sessions.slice(-5);
|
|
526
|
+
e.push(`## Recent Sessions
|
|
527
|
+
${t.map((o) => `- ${new Date(o.endTime).toLocaleDateString()}: ${o.summary}`).join(`
|
|
528
|
+
`)}`);
|
|
529
|
+
}
|
|
530
|
+
return e.join(`
|
|
531
|
+
|
|
532
|
+
`);
|
|
533
|
+
}
|
|
534
|
+
getSessions() {
|
|
535
|
+
return this.sessions;
|
|
536
|
+
}
|
|
537
|
+
async createProjectMemoryFile() {
|
|
538
|
+
if (!this.projectDir) return;
|
|
539
|
+
let e = (0, import_path3.join)(this.projectDir, "CORTEX.md");
|
|
540
|
+
try {
|
|
541
|
+
await (0, import_promises3.access)(e);
|
|
542
|
+
} catch {
|
|
543
|
+
let t = `# Cortex Project Memory
|
|
544
|
+
|
|
545
|
+
## Project Overview
|
|
546
|
+
<!-- Fill in your project description -->
|
|
547
|
+
|
|
548
|
+
## Coding Conventions
|
|
549
|
+
<!-- Document your coding style preferences -->
|
|
550
|
+
|
|
551
|
+
## Testing Preferences
|
|
552
|
+
<!-- Document how tests should be written -->
|
|
553
|
+
|
|
554
|
+
## Architecture Notes
|
|
555
|
+
<!-- Document important architectural decisions -->
|
|
556
|
+
`;
|
|
557
|
+
await (0, import_promises3.writeFile)(e, t), this.projectMemory = t;
|
|
558
|
+
}
|
|
559
|
+
}
|
|
560
|
+
};
|
|
561
|
+
U = [{ name: "Read", description: "Read the contents of a file from the filesystem. Use this when you need to see what's in a file. Provide the absolute_path to the file. Use offset and limit to read specific portions of large files.", input_schema: { type: "object", properties: { file_path: { type: "string", description: "Absolute path to the file to read" }, offset: { type: "number", description: "Line number to start reading from (1-indexed)", default: 1 }, limit: { type: "number", description: "Maximum number of lines to read", default: 1e3 } }, required: ["file_path"] } }, { name: "Write", description: "Create a new file or overwrite an existing file with new content. Use this to create new files or make significant changes that warrant full replacement rather than targeted edits.", input_schema: { type: "object", properties: { file_path: { type: "string", description: "Absolute path to the file to write" }, content: { type: "string", description: "Complete file content to write" } }, required: ["file_path", "content"] } }, { name: "Edit", description: "Make a precise edit to an existing file using a search and replace pattern. Use this for targeted modifications. The old_string must match exactly including whitespace and indentation.", input_schema: { type: "object", properties: { file_path: { type: "string", description: "Absolute path to the file to edit" }, old_string: { type: "string", description: "The exact text to find and replace (must match exactly including whitespace)" }, new_string: { type: "string", description: "The replacement text" } }, required: ["file_path", "old_string", "new_string"] } }, { name: "Bash", description: "Execute a shell command in the terminal. Use this to run commands, scripts, git operations, or any other system operations. The working_directory defaults to the current directory.", input_schema: { type: "object", properties: { command: { type: "string", description: "The command to execute" }, description: { type: "string", description: "What this command does (for context)", default: "" } }, required: ["command"] } }, { name: "Glob", description: "Find files matching a pattern using glob syntax. Useful for discovering files in a project without reading their contents.", input_schema: { type: "object", properties: { pattern: { type: "string", description: 'Glob pattern (e.g., "**/*.ts", "src/**/*.js")' }, path: { type: "string", description: "Directory to search in (defaults to current directory)", default: "." } }, required: ["pattern"] } }, { name: "Grep", description: "Search for text within files. Returns matching lines with file paths.", input_schema: { type: "object", properties: { pattern: { type: "string", description: "Regular expression pattern to search for" }, path: { type: "string", description: "Directory or file to search in", default: "." }, include: { type: "string", description: 'File pattern to include (e.g., "*.ts", "*.js")' } }, required: ["pattern"] } }, { name: "TodoWrite", description: "Create and manage a todo list for tracking multi-step tasks. Use this to organize complex tasks into manageable steps.", input_schema: { type: "object", properties: { todos: { type: "array", description: "Array of todo items with content, status, and priority", items: { type: "object", properties: { content: { type: "string", description: "Description of the task" }, status: { type: "string", enum: ["pending", "in_progress", "completed", "cancelled"], default: "pending" }, priority: { type: "string", enum: ["high", "medium", "low"], default: "medium" } }, required: ["content", "status", "priority"] } } }, required: ["todos"] } }];
|
|
562
|
+
P = class extends import_events.EventEmitter {
|
|
563
|
+
servers = /* @__PURE__ */ new Map();
|
|
564
|
+
processes = /* @__PURE__ */ new Map();
|
|
565
|
+
tools = /* @__PURE__ */ new Map();
|
|
566
|
+
requestId = 0;
|
|
567
|
+
pendingRequests = /* @__PURE__ */ new Map();
|
|
568
|
+
initialized = false;
|
|
569
|
+
async addServer(e) {
|
|
570
|
+
this.servers.set(e.name, e), e.type === "stdio" ? await this.connectStdioServer(e) : e.type === "http" && e.url && await this.connectHttpServer(e);
|
|
571
|
+
}
|
|
572
|
+
async removeServer(e) {
|
|
573
|
+
let t = this.servers.get(e);
|
|
574
|
+
if (t) {
|
|
575
|
+
if (t.type === "stdio") {
|
|
576
|
+
let o = this.processes.get(e);
|
|
577
|
+
o && (o.kill(), this.processes.delete(e));
|
|
578
|
+
}
|
|
579
|
+
this.servers.delete(e), this.tools.delete(e);
|
|
580
|
+
}
|
|
581
|
+
}
|
|
582
|
+
async listTools(e) {
|
|
583
|
+
let t = {};
|
|
584
|
+
if (e) {
|
|
585
|
+
let o = this.tools.get(e);
|
|
586
|
+
if (o) for (let s of o) t[`${e}/${s.name}`] = s;
|
|
587
|
+
} else for (let [o, s] of this.tools) for (let r of s) t[`${o}/${r.name}`] = r;
|
|
588
|
+
return t;
|
|
589
|
+
}
|
|
590
|
+
async callTool(e, t, o) {
|
|
591
|
+
let s = this.servers.get(e);
|
|
592
|
+
if (!s) throw new Error(`Unknown MCP server: ${e}`);
|
|
593
|
+
return s.type === "http" && s.url ? this.callHttpTool(s.url, t, o) : this.callStdioTool(e, t, o);
|
|
594
|
+
}
|
|
595
|
+
async connectStdioServer(e) {
|
|
596
|
+
let t = ++this.requestId;
|
|
597
|
+
try {
|
|
598
|
+
let o = (0, import_execa2.execa)(e.command, e.args || [], { env: { ...process.env, ...e.env }, stdio: ["pipe", "pipe", "pipe"] });
|
|
599
|
+
this.processes.set(e.name, o), o.stdout?.on("data", (s) => {
|
|
600
|
+
this.handleStdioMessage(e.name, s.toString());
|
|
601
|
+
}), o.stderr?.on("data", (s) => {
|
|
602
|
+
console.error(`[MCP ${e.name}]`, s.toString());
|
|
603
|
+
}), await this.sendStdioRequest(e.name, "initialize", { protocolVersion: "2024-11-05", capabilities: {}, clientInfo: { name: "cortex", version: "5.0.0" } }), this.tools.set(e.name, []), this.initialized = true;
|
|
604
|
+
} catch (o) {
|
|
605
|
+
console.error(`Failed to connect to MCP server ${e.name}:`, o);
|
|
606
|
+
}
|
|
607
|
+
}
|
|
608
|
+
async connectHttpServer(e) {
|
|
609
|
+
if (e.url) try {
|
|
610
|
+
let t = await fetch(`${e.url}/initialize`, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ protocolVersion: "2024-11-05", capabilities: {}, clientInfo: { name: "cortex", version: "5.0.0" } }) });
|
|
611
|
+
if (t.ok) {
|
|
612
|
+
let o = await t.json();
|
|
613
|
+
this.tools.set(e.name, o.tools || []);
|
|
614
|
+
}
|
|
615
|
+
} catch (t) {
|
|
616
|
+
console.error(`Failed to connect to MCP server ${e.name}:`, t);
|
|
617
|
+
}
|
|
618
|
+
}
|
|
619
|
+
async sendStdioRequest(e, t, o) {
|
|
620
|
+
let s = ++this.requestId, r = this.processes.get(e);
|
|
621
|
+
if (!r || !r.stdin) throw new Error(`MCP server ${e} not connected`);
|
|
622
|
+
return new Promise((n, a) => {
|
|
623
|
+
this.pendingRequests.set(s, { resolve: n, reject: a }), r.stdin?.write(JSON.stringify({ jsonrpc: "2.0", id: s, method: t, params: o }) + `
|
|
624
|
+
`), setTimeout(() => {
|
|
625
|
+
this.pendingRequests.has(s) && (this.pendingRequests.delete(s), a(new Error("MCP request timeout")));
|
|
626
|
+
}, 3e4);
|
|
627
|
+
});
|
|
628
|
+
}
|
|
629
|
+
handleStdioMessage(e, t) {
|
|
630
|
+
try {
|
|
631
|
+
let o = t.split(`
|
|
632
|
+
`).filter(Boolean);
|
|
633
|
+
for (let s of o) {
|
|
634
|
+
let r = JSON.parse(s);
|
|
635
|
+
if (r.id && this.pendingRequests.has(r.id)) {
|
|
636
|
+
let n = this.pendingRequests.get(r.id);
|
|
637
|
+
this.pendingRequests.delete(r.id), r.error ? n.reject(new Error(r.error.message || "MCP error")) : n.resolve(r.result);
|
|
638
|
+
}
|
|
639
|
+
if (r.method === "tools/list") {
|
|
640
|
+
let n = r.params?.tools || [];
|
|
641
|
+
this.tools.set(e, n), this.emit("toolsUpdated", { server: e, tools: n });
|
|
642
|
+
}
|
|
643
|
+
r.method === "notifications/tools_changed" && this.refreshTools(e);
|
|
644
|
+
}
|
|
645
|
+
} catch {
|
|
646
|
+
}
|
|
647
|
+
}
|
|
648
|
+
async callStdioTool(e, t, o) {
|
|
649
|
+
let s = await this.sendStdioRequest(e, "tools/call", { name: t, arguments: o });
|
|
650
|
+
return JSON.stringify(s);
|
|
651
|
+
}
|
|
652
|
+
async callHttpTool(e, t, o) {
|
|
653
|
+
let s = await fetch(`${e}/tools/call`, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ name: t, arguments: o }) });
|
|
654
|
+
if (!s.ok) throw new Error(`MCP tool call failed: ${s.status}`);
|
|
655
|
+
let r = await s.json();
|
|
656
|
+
return JSON.stringify(r.content?.[0]?.text || "");
|
|
657
|
+
}
|
|
658
|
+
async refreshTools(e) {
|
|
659
|
+
let t = this.servers.get(e);
|
|
660
|
+
if (t) if (t.type === "http" && t.url) await this.connectHttpServer(t);
|
|
661
|
+
else try {
|
|
662
|
+
let s = (await this.sendStdioRequest(e, "tools/list", {}))?.tools || [];
|
|
663
|
+
this.tools.set(e, s);
|
|
664
|
+
} catch {
|
|
665
|
+
}
|
|
666
|
+
}
|
|
667
|
+
getToolsForContext() {
|
|
668
|
+
let e = [];
|
|
669
|
+
for (let [t, o] of this.tools) for (let s of o) e.push({ name: `${t}/${s.name}`, description: s.description, input_schema: s.inputSchema });
|
|
670
|
+
return e;
|
|
671
|
+
}
|
|
672
|
+
disconnect() {
|
|
673
|
+
for (let [e, t] of this.processes) t.kill();
|
|
674
|
+
this.processes.clear(), this.servers.clear(), this.tools.clear(), this.initialized = false;
|
|
675
|
+
}
|
|
676
|
+
};
|
|
677
|
+
T = class {
|
|
678
|
+
provider;
|
|
679
|
+
context;
|
|
680
|
+
memory;
|
|
681
|
+
mcp;
|
|
682
|
+
config;
|
|
683
|
+
tools = [...U];
|
|
684
|
+
cwd;
|
|
685
|
+
planMode = false;
|
|
686
|
+
currentPlan = [];
|
|
687
|
+
constructor(e, t) {
|
|
688
|
+
this.config = e, this.cwd = t;
|
|
689
|
+
let o = { type: e.provider, baseUrl: e.providerUrl, apiKey: e.apiKey, defaultModel: e.model };
|
|
690
|
+
this.provider = y(o), this.context = new x(), this.memory = new C(), this.mcp = new P(), this.planMode = e.planMode || false;
|
|
691
|
+
}
|
|
692
|
+
async initialize() {
|
|
693
|
+
console.log(import_chalk2.default.blue("\u{1F9E0} Initializing Cortex...")), await this.memory.initialize(this.cwd), await this.context.loadProjectMemory(this.cwd), await this.context.loadUserMemory(), await this.loadMCPConfig(), console.log(import_chalk2.default.green("\u2705 Cortex ready!"));
|
|
694
|
+
}
|
|
695
|
+
async loadMCPConfig() {
|
|
696
|
+
let e = [(0, import_path.join)(this.cwd, ".cortex", "mcp.json"), (0, import_path.join)((0, import_os.homedir)(), ".cortex", "mcp.json")];
|
|
697
|
+
for (let t of e) try {
|
|
698
|
+
let o = await (0, import_promises.readFile)(t, "utf-8"), s = JSON.parse(o);
|
|
699
|
+
if (s.servers) for (let [r, n] of Object.entries(s.servers)) await this.mcp.addServer({ name: r, command: n.command, args: n.args, env: n.env, type: n.type || "stdio", url: n.url });
|
|
700
|
+
} catch {
|
|
701
|
+
}
|
|
702
|
+
if (this.mcp.listTools()) {
|
|
703
|
+
let t = this.mcp.getToolsForContext();
|
|
704
|
+
this.tools = [...U, ...t];
|
|
705
|
+
}
|
|
706
|
+
}
|
|
707
|
+
async run(e) {
|
|
708
|
+
console.log(import_chalk2.default.cyan(`
|
|
709
|
+
\u{1F916} Thinking...
|
|
710
|
+
`)), this.planMode ? await this.runWithPlanMode(e) : await this.runDirect(e);
|
|
711
|
+
}
|
|
712
|
+
async runDirect(e) {
|
|
713
|
+
let t = 0, o = this.config.maxIterations || 50, s = false;
|
|
714
|
+
for (this.context.addMessage({ role: "user", content: e }); !s && t < o; ) {
|
|
715
|
+
t++, this.context.shouldWarn() && console.log(import_chalk2.default.yellow(`\u26A0\uFE0F Context usage high (${Math.round((1 - this.context.getTokenUsage().available / this.context.getTokenUsage().total) * 100)}%)`));
|
|
716
|
+
try {
|
|
717
|
+
let r = await this.provider.chat(this.context.getMessagesForAPI(), this.tools, this.config.model);
|
|
718
|
+
if (console.log(import_chalk2.default.white(r.content)), r.tool_calls && r.tool_calls.length > 0) {
|
|
719
|
+
let n = { role: "assistant", content: r.content, tool_calls: r.tool_calls };
|
|
720
|
+
this.context.addMessage(n);
|
|
721
|
+
for (let a of r.tool_calls) {
|
|
722
|
+
let c = a.function.name, l = JSON.parse(a.function.arguments);
|
|
723
|
+
console.log(import_chalk2.default.dim(`
|
|
724
|
+
\u{1F527} ${c}: ${JSON.stringify(l)}`));
|
|
725
|
+
let u = await I(c, l, this.cwd), w = { role: "tool", content: u.output, tool_call_id: a.id, name: c };
|
|
726
|
+
this.context.addMessage(w), u.is_error && console.log(import_chalk2.default.red(`\u274C ${u.output}`));
|
|
727
|
+
}
|
|
728
|
+
} else s = true;
|
|
729
|
+
(r.stop_reason === "stop" || r.stop_reason === "end_turn") && (s = true);
|
|
730
|
+
} catch (r) {
|
|
731
|
+
console.error(import_chalk2.default.red(`Error: ${r instanceof Error ? r.message : String(r)}`));
|
|
732
|
+
break;
|
|
733
|
+
}
|
|
734
|
+
this.context.shouldCompact() && (console.log(import_chalk2.default.yellow("\u{1F4E6} Compacting context...")), await this.context.compact());
|
|
735
|
+
}
|
|
736
|
+
t >= o && console.log(import_chalk2.default.yellow(`\u26A0\uFE0F Reached max iterations (${o})`));
|
|
737
|
+
}
|
|
738
|
+
async runWithPlanMode(e) {
|
|
739
|
+
console.log(import_chalk2.default.blue("\u{1F4DD} Running in Plan Mode...")), this.context.addMessage({ role: "user", content: `[PLANNING MODE] ${e}
|
|
740
|
+
|
|
741
|
+
Please analyze the task and create a detailed plan before executing anything. List each step with the files that will be modified.` });
|
|
742
|
+
let t = await this.provider.chat(this.context.getMessagesForAPI(), void 0, this.config.model);
|
|
743
|
+
console.log(import_chalk2.default.cyan(`
|
|
744
|
+
\u{1F4CB} Proposed Plan:`)), console.log(import_chalk2.default.white(t.content)), console.log(import_chalk2.default.green(`
|
|
745
|
+
\u2705 Proceeding with execution...
|
|
746
|
+
`)), this.context.clear(), this.context.addMessage({ role: "user", content: e }), await this.runDirect(e);
|
|
747
|
+
}
|
|
748
|
+
getContextInfo() {
|
|
749
|
+
let e = this.context.getTokenUsage();
|
|
750
|
+
return { tokens: `${e.total - e.available} / ${e.total}`, messages: this.context.getMessages().length };
|
|
751
|
+
}
|
|
752
|
+
async cleanup() {
|
|
753
|
+
this.mcp.disconnect();
|
|
754
|
+
}
|
|
755
|
+
};
|
|
756
|
+
p = new import_commander.Command();
|
|
757
|
+
p.name("cortex").description("The ultimate local AI coding agent - context-aware, memory-powered, MCP-enabled").version("5.0.0-beta.1").option("-p, --provider <name>", "Provider to use: ollama, lmstudio, openrouter").option("-m, --model <name>", "Model to use").option("-u, --url <url>", "Provider URL").option("--plan-mode", "Show plan before executing changes");
|
|
758
|
+
p.command("init").description("Initialize Cortex in the current project").action(re);
|
|
759
|
+
p.command("run [prompt...]").description("Run a coding task").option("-p, --provider <name>", "Provider to use").option("-m, --model <name>", "Model to use").option("-u, --url <url>", "Provider URL").option("--plan-mode", "Show plan before executing").action(async (i, e) => {
|
|
760
|
+
(!i || i.length === 0) && (console.log(import_chalk.default.yellow('Please provide a prompt. Usage: cortex run "your task"')), process.exit(1)), await ie(i.join(" "), e);
|
|
761
|
+
});
|
|
762
|
+
p.command("status").description("Check provider and model status").action(ae);
|
|
763
|
+
p.command("setup").description("Configure Cortex settings").option("-p, --provider <name>", "Provider name").option("-m, --model <name>", "Default model").option("-f, --force", "Overwrite existing config").action(ce);
|
|
764
|
+
p.command("models").description("List available models").option("-p, --provider <name>", "Provider to query").action(le);
|
|
765
|
+
p.command("chat").description("Start interactive chat session").option("-p, --provider <name>", "Provider to use").option("-m, --model <name>", "Model to use").action(async (i) => {
|
|
766
|
+
console.log(import_chalk.default.yellow("Interactive chat coming soon!"));
|
|
767
|
+
});
|
|
768
|
+
process.on("SIGINT", () => v(0));
|
|
769
|
+
process.on("SIGTERM", () => v(0));
|
|
770
|
+
process.on("uncaughtException", (i) => {
|
|
771
|
+
i.message?.includes("ExitPromptError") || i.message?.includes("User force closed") || i.message?.includes("prompt") ? v(0) : (console.error(import_chalk.default.red("Error:"), i.message), process.exit(1));
|
|
772
|
+
});
|
|
773
|
+
process.on("unhandledRejection", (i) => {
|
|
774
|
+
let e = String(i);
|
|
775
|
+
(e.includes("ExitPromptError") || e.includes("User force closed") || e.includes("prompt")) && v(0);
|
|
776
|
+
});
|
|
777
|
+
me = process.argv.slice(2);
|
|
778
|
+
me.length === 0 ? p.help() : p.parse();
|
|
779
|
+
}
|
|
780
|
+
});
|
|
781
|
+
|
|
782
|
+
// bin/index.js
|
|
783
|
+
Promise.resolve().then(() => init_dist());
|
package/dist/index.js
ADDED
|
@@ -0,0 +1,92 @@
|
|
|
1
|
+
#!/usr/bin/env node
|
|
2
|
+
import{Command as oe}from"commander";import m from"chalk";import d from"chalk";import{readFile as ee}from"fs/promises";import{join as z}from"path";import{homedir as te}from"os";var S=class{name="ollama";baseUrl;defaultModel;timeout;constructor(e){this.baseUrl=e.baseUrl||"http://localhost:11434",this.defaultModel=e.defaultModel||"llama3.2",this.timeout=e.timeout||12e4}async chat(e,t,o){let r={model:o||this.defaultModel,messages:this.formatMessages(e),stream:!1};t&&t.length>0&&(r.tools=t.map(n=>({type:"function",function:{name:n.name,description:n.description,parameters:n.input_schema}})));try{let n=await fetch(`${this.baseUrl}/api/chat`,{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(r),signal:AbortSignal.timeout(this.timeout)});if(!n.ok){let c=await n.text();throw new Error(`Ollama API error: ${n.status} - ${c}`)}let a=await n.json();return{content:a.message?.content||"",tool_calls:a.message?.tool_calls?.map(c=>({id:c.id,type:"function",function:{name:c.function.name,arguments:c.function.arguments}})),stop_reason:a.done?"stop":void 0}}catch(n){throw n instanceof Error&&n.name==="TimeoutError"?new Error(`Ollama request timed out after ${this.timeout}ms`):n}}async listModels(){try{let e=await fetch(`${this.baseUrl}/api/tags`);return e.ok?(await e.json()).models?.map(o=>o.name)||[]:[]}catch{return[]}}getTokenizer(){return e=>Math.ceil(e.length/4)}formatMessages(e){return e.map(t=>({role:t.role==="tool"?"user":t.role,content:t.content}))}},k=class{name="lmstudio";baseUrl;defaultModel;timeout;constructor(e){this.baseUrl=e.baseUrl||"http://localhost:1234/v1",this.defaultModel=e.defaultModel||"llama-3.1-8b",this.timeout=e.timeout||12e4}async chat(e,t,o){let r={model:o||this.defaultModel,messages:this.formatMessages(e),stream:!1};t&&t.length>0&&(r.tools=t.map(n=>({type:"function",function:{name:n.name,description:n.description,parameters:n.input_schema}})));try{let n=await fetch(`${this.baseUrl}/chat/completions`,{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(r),signal:AbortSignal.timeout(this.timeout)});if(!n.ok){let l=await n.text();throw new Error(`LM Studio API error: ${n.status} - ${l}`)}let c=(await n.json()).choices?.[0];return{content:c?.message?.content||"",tool_calls:c?.message?.tool_calls?.map(l=>({id:l.id,type:"function",function:{name:l.function.name,arguments:l.function.arguments}}))}}catch(n){throw n instanceof Error&&n.name==="TimeoutError"?new Error(`LM Studio request timed out after ${this.timeout}ms`):n}}async listModels(){try{let e=await fetch(`${this.baseUrl.replace("/v1","")}/api/v0/models`);return e.ok?(await e.json()).data?.map(o=>o.id)||[]:[]}catch{return[]}}getTokenizer(){return e=>Math.ceil(e.length/4)}formatMessages(e){return e.map(t=>({role:t.role==="tool"?"user":t.role,content:t.content}))}},$=class{name="openrouter";apiKey;baseUrl;defaultModel;timeout;constructor(e){this.apiKey=e.apiKey,this.baseUrl=e.baseUrl||"https://openrouter.ai/api/v1",this.defaultModel=e.defaultModel||"anthropic/claude-3.5-sonnet",this.timeout=e.timeout||12e4}async chat(e,t,o){let r={model:o||this.defaultModel,messages:this.formatMessages(e)};t&&t.length>0&&(r.tools=t.map(n=>({type:"function",function:{name:n.name,description:n.description,parameters:n.input_schema}})));try{let n=await fetch(`${this.baseUrl}/chat/completions`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${this.apiKey}`,"HTTP-Referer":"https://cortex.dev","X-Title":"Cortex"},body:JSON.stringify(r),signal:AbortSignal.timeout(this.timeout)});if(!n.ok){let l=await n.text();throw new Error(`OpenRouter API error: ${n.status} - ${l}`)}let c=(await n.json()).choices?.[0];return{content:c?.message?.content||"",tool_calls:c?.message?.tool_calls?.map(l=>({id:l.id,type:"function",function:{name:l.function.name,arguments:l.function.arguments}})),stop_reason:c?.finish_reason}}catch(n){throw n instanceof Error&&n.name==="TimeoutError"?new Error(`OpenRouter request timed out after ${this.timeout}ms`):n}}async listModels(){try{let e=await fetch(`${this.baseUrl}/models`,{headers:{Authorization:`Bearer ${this.apiKey}`}});return e.ok?(await e.json()).data?.map(o=>o.id)||[]:[]}catch{return[]}}getTokenizer(){return e=>Math.ceil(e.length/4)}formatMessages(e){return e.map(t=>t.tool_calls?{role:"assistant",content:t.content||"",tool_calls:t.tool_calls.map(o=>({id:o.id,type:o.type,function:{name:o.function.name,arguments:o.function.arguments}}))}:t.tool_call_id?{role:"tool",content:t.content,tool_call_id:t.tool_call_id}:{role:t.role==="tool"?"user":t.role,content:t.content})}};function y(i){switch(i.type){case"ollama":return new S(i);case"lmstudio":return new k(i);case"openrouter":return new $(i);default:throw new Error(`Unknown provider type: ${i.type}`)}}import{join as O}from"path";import{homedir as L}from"os";import{access as A}from"fs/promises";var G={maxTokens:18e4,systemPromptTokens:3e3,toolDefTokens:5e3,warningThreshold:.7,autoCompactThreshold:.95},x=class{messages=[];config;tokenCount=0;memoryFiles=[];constructor(e={}){this.config={...G,...e}}getSystemPrompt(){return`You are Cortex, an autonomous AI coding agent. You have access to tools to read, write, and edit files, execute shell commands, and search through the codebase.
|
|
3
|
+
|
|
4
|
+
## Your Capabilities
|
|
5
|
+
- Read, write, and edit files in the codebase
|
|
6
|
+
- Execute shell commands
|
|
7
|
+
- Search and explore the codebase
|
|
8
|
+
- Plan and execute complex tasks
|
|
9
|
+
- Ask clarifying questions when needed
|
|
10
|
+
|
|
11
|
+
## Guidelines
|
|
12
|
+
- Always verify changes before committing
|
|
13
|
+
- Ask before making destructive changes
|
|
14
|
+
- Use precise edits rather than overwriting entire files
|
|
15
|
+
- Explain your plan before executing significant changes
|
|
16
|
+
- Use the TodoWrite tool to track multi-step tasks
|
|
17
|
+
|
|
18
|
+
## Working Directory
|
|
19
|
+
You are working in the current directory. Use absolute paths for file operations.
|
|
20
|
+
|
|
21
|
+
## Tools
|
|
22
|
+
You have access to the following tools:
|
|
23
|
+
${W.map(e=>`- ${e.name}: ${e.description}`).join(`
|
|
24
|
+
`)}
|
|
25
|
+
`}addMessage(e){this.messages.push(e),this.updateTokenCount()}getMessages(){return[...this.messages]}getMessagesForAPI(){return[{role:"system",content:this.getSystemPrompt()},...this.messages]}async compact(e){let t=this.summarizeConversation(e),o=this.messages.filter(r=>r.role==="user").pop(),s=this.messages.filter(r=>r.role==="assistant").pop();this.messages=[],o&&this.messages.push({role:"user",content:`[Previous conversation summary]: ${t}
|
|
26
|
+
|
|
27
|
+
[Continuing from previous session]
|
|
28
|
+
|
|
29
|
+
${o.content}`}),s&&this.messages.push({role:"assistant",content:s.content}),this.updateTokenCount()}summarizeConversation(e){let t=e?500:300,o=this.messages.filter(a=>a.tool_calls).map(a=>a.tool_calls?.map(c=>c.function.name).join(", ")).join("; "),s=this.messages.filter(a=>a.role==="tool"&&a.content?.includes("File edited")).length,r=this.messages.filter(a=>a.role==="tool"&&a.content?.includes("File written")).length,n=`Session summary: ${s} files edited, ${r} files created`;return o&&(n+=`. Tools used: ${o}`),n.slice(0,t)}getTokenUsage(){let e=this.config.systemPromptTokens,t=this.config.toolDefTokens,o=this.estimateTokens(this.messagesToString()),s=this.memoryFiles.length*500,r=e+t+o+s,n=this.config.maxTokens-r;return{system:e,tools:t,messages:o,memory:s,available:n,total:this.config.maxTokens}}shouldCompact(){let e=this.getTokenUsage();return(this.config.maxTokens-e.available)/this.config.maxTokens>this.config.autoCompactThreshold}shouldWarn(){let e=this.getTokenUsage();return(this.config.maxTokens-e.available)/this.config.maxTokens>this.config.warningThreshold}async loadProjectMemory(e){let t=O(e,"CORTEX.md");try{await A(t),this.memoryFiles.push(t)}catch{}}async loadUserMemory(){let e=L(),t=O(e,".cortex","memory.md");try{await A(t),this.memoryFiles.push(t)}catch{}}clear(){this.messages=[],this.tokenCount=0}updateTokenCount(){let e=this.messagesToString();this.tokenCount=this.estimateTokens(e)}messagesToString(){return this.messages.map(e=>{let t=`${e.role}: ${e.content}`;return e.tool_calls&&(t+=`
|
|
30
|
+
`+JSON.stringify(e.tool_calls)),t}).join(`
|
|
31
|
+
`)}estimateTokens(e){return Math.ceil(e.length/4)}},W=[{name:"Read",description:"Read the contents of a file from the filesystem."},{name:"Write",description:"Create a new file or overwrite an existing file."},{name:"Edit",description:"Make a precise edit to an existing file."},{name:"Bash",description:"Execute a shell command in the terminal."},{name:"Glob",description:"Find files matching a pattern."},{name:"Grep",description:"Search for text within files."},{name:"TodoWrite",description:"Create and manage a todo list."}];import{readFile as j,writeFile as M,mkdir as B,access as K}from"fs/promises";import{join as g}from"path";import{homedir as H}from"os";var C=class{projectMemory="";userMemory="";sessions=[];configDir;projectDir="";constructor(){this.configDir=g(H(),".cortex")}async initialize(e){this.projectDir=e,await this.ensureConfigDir(),await this.loadProjectMemory(e),await this.loadUserMemory(),await this.loadSessionHistory()}async ensureConfigDir(){try{await B(this.configDir,{recursive:!0})}catch{}}async loadProjectMemory(e){let t=g(e,"CORTEX.md");try{this.projectMemory=await j(t,"utf-8")}catch{this.projectMemory=""}}async loadUserMemory(){let e=g(this.configDir,"memory.md");try{this.userMemory=await j(e,"utf-8")}catch{this.userMemory=""}}async loadSessionHistory(){let e=g(this.configDir,"sessions.json");try{let t=await j(e,"utf-8");this.sessions=JSON.parse(t)}catch{this.sessions=[]}}async saveSession(e){this.sessions.push(e),this.sessions.length>50&&(this.sessions=this.sessions.slice(-50));let t=g(this.configDir,"sessions.json");await M(t,JSON.stringify(this.sessions,null,2))}async updateProjectMemory(e){if(!this.projectDir)return;let t=g(this.projectDir,"CORTEX.md");await M(t,e,"utf-8"),this.projectMemory=e}async updateUserMemory(e){let t=g(this.configDir,"memory.md");await M(t,e,"utf-8"),this.userMemory=e}getProjectMemory(){return this.projectMemory}getUserMemory(){return this.userMemory}getMemoryForContext(){let e=[];if(this.projectMemory&&e.push(`## Project Memory (${this.projectDir})
${this.projectMemory}`),this.userMemory&&e.push(`## User Memory
${this.userMemory}`),this.sessions.length>0){let t=this.sessions.slice(-5);e.push(`## Recent Sessions
${t.map(o=>`- ${new Date(o.endTime).toLocaleDateString()}: ${o.summary}`).join(`
`)}`)}return e.join(`

`)}getSessions(){return this.sessions}async createProjectMemoryFile(){if(!this.projectDir)return;let e=g(this.projectDir,"CORTEX.md");try{await K(e)}catch{let t=`# Cortex Project Memory

## Project Overview
<!-- Fill in your project description -->

## Coding Conventions
<!-- Document your coding style preferences -->

## Testing Preferences
<!-- Document how tests should be written -->

## Architecture Notes
<!-- Document important architectural decisions -->
`;await M(e,t),this.projectMemory=t}}};import{execa as X}from"execa";import{readFile as R,writeFile as D}from"fs/promises";import{resolve as F,dirname as Y}from"path";import q from"fast-glob";var U=[{name:"Read",description:"Read the contents of a file from the filesystem. Use this when you need to see what's in a file. Provide the absolute_path to the file. Use offset and limit to read specific portions of large files.",input_schema:{type:"object",properties:{file_path:{type:"string",description:"Absolute path to the file to read"},offset:{type:"number",description:"Line number to start reading from (1-indexed)",default:1},limit:{type:"number",description:"Maximum number of lines to read",default:1e3}},required:["file_path"]}},{name:"Write",description:"Create a new file or overwrite an existing file with new content. Use this to create new files or make significant changes that warrant full replacement rather than targeted edits.",input_schema:{type:"object",properties:{file_path:{type:"string",description:"Absolute path to the file to write"},content:{type:"string",description:"Complete file content to write"}},required:["file_path","content"]}},{name:"Edit",description:"Make a precise edit to an existing file using a search and replace pattern. Use this for targeted modifications. The old_string must match exactly including whitespace and indentation.",input_schema:{type:"object",properties:{file_path:{type:"string",description:"Absolute path to the file to edit"},old_string:{type:"string",description:"The exact text to find and replace (must match exactly including whitespace)"},new_string:{type:"string",description:"The replacement text"}},required:["file_path","old_string","new_string"]}},{name:"Bash",description:"Execute a shell command in the terminal. Use this to run commands, scripts, git operations, or any other system operations. 
The working_directory defaults to the current directory.",input_schema:{type:"object",properties:{command:{type:"string",description:"The command to execute"},description:{type:"string",description:"What this command does (for context)",default:""}},required:["command"]}},{name:"Glob",description:"Find files matching a pattern using glob syntax. Useful for discovering files in a project without reading their contents.",input_schema:{type:"object",properties:{pattern:{type:"string",description:'Glob pattern (e.g., "**/*.ts", "src/**/*.js")'},path:{type:"string",description:"Directory to search in (defaults to current directory)",default:"."}},required:["pattern"]}},{name:"Grep",description:"Search for text within files. Returns matching lines with file paths.",input_schema:{type:"object",properties:{pattern:{type:"string",description:"Regular expression pattern to search for"},path:{type:"string",description:"Directory or file to search in",default:"."},include:{type:"string",description:'File pattern to include (e.g., "*.ts", "*.js")'}},required:["pattern"]}},{name:"TodoWrite",description:"Create and manage a todo list for tracking multi-step tasks. Use this to organize complex tasks into manageable steps.",input_schema:{type:"object",properties:{todos:{type:"array",description:"Array of todo items with content, status, and priority",items:{type:"object",properties:{content:{type:"string",description:"Description of the task"},status:{type:"string",enum:["pending","in_progress","completed","cancelled"],default:"pending"},priority:{type:"string",enum:["high","medium","low"],default:"medium"}},required:["content","status","priority"]}}},required:["todos"]}}];async function I(i,e,t){let o=`call_${Date.now()}_${Math.random().toString(36).substr(2,9)}`;try{switch(i){case"Read":{let s=e.file_path,r=e.offset||1,n=e.limit||1e3,c=(await R(s,"utf-8")).split(`
`),l=c.slice(r-1,r-1+n);return{tool_call_id:o,output:l.join(`
`)+(c.length>r-1+n?`

... ${c.length-(r-1+n)} more lines`:"")}}case"Write":{let s=e.file_path,r=e.content,n=Y(s);return await V(n,{recursive:!0}),await D(s,r,"utf-8"),{tool_call_id:o,output:`File written: ${s}`}}case"Edit":{let s=e.file_path,r=e.old_string,n=e.new_string,a=await R(s,"utf-8");if(!a.includes(r))return{tool_call_id:o,output:"Error: Could not find the specified text to replace. The old_string must match exactly.",is_error:!0};let c=a.replace(r,n);return await D(s,c,"utf-8"),{tool_call_id:o,output:`File edited: ${s}`}}case"Bash":{let s=e.command,r=e.description||"",{stdout:n,stderr:a,exitCode:c}=await X(s,{shell:!0,cwd:t,timeout:12e4}),l=a?`${n}
${a}`:n;if(c!==0&&!n)return{tool_call_id:o,output:`Command failed with exit code ${c}: ${s}
${a}`,is_error:!0};let u=l.length>15e3?l.slice(0,15e3)+`

... output truncated (${l.length} chars)`:l;return{tool_call_id:o,output:u||`Command executed successfully (exit code: ${c})`}}case"Glob":{let s=e.pattern,r=e.path||".",n=F(t,r),a=await q(s,{cwd:n,absolute:!0,onlyFiles:!0});return{tool_call_id:o,output:a.length>0?a.join(`
`):"No files found"}}case"Grep":{let s=e.pattern,r=e.path||".",n=e.include,a=F(t,r),c=await q(n||"**/*",{cwd:a,onlyFiles:!0}),l=[];for(let w of c.slice(0,100))try{let _=(await R(w,"utf-8")).split(`
`);for(let f=0;f<_.length;f++)new RegExp(s,"i").test(_[f])&&l.push(`${w}:${f+1}: ${_[f]}`)}catch{}let u=l.length>100?l.slice(0,100).concat([`... ${l.length-100} more matches`]):l;return{tool_call_id:o,output:u.length>0?u.join(`
`):"No matches found"}}case"TodoWrite":{let s=e.todos;return{tool_call_id:o,output:`Todo list updated with ${s.length} items:
${s.map((r,n)=>`${n+1}. [${r.status}] ${r.content}`).join(`
`)}`}}default:return{tool_call_id:o,output:`Unknown tool: ${i}`,is_error:!0}}}catch(s){return{tool_call_id:o,output:`Error executing ${i}: ${s instanceof Error?s.message:String(s)}`,is_error:!0}}}async function V(i,e){let{mkdirSync:t}=await import("fs");t(i,e)}import{execa as Q}from"execa";import{EventEmitter as Z}from"events";var P=class extends Z{servers=new Map;processes=new Map;tools=new Map;requestId=0;pendingRequests=new Map;initialized=!1;async addServer(e){this.servers.set(e.name,e),e.type==="stdio"?await this.connectStdioServer(e):e.type==="http"&&e.url&&await this.connectHttpServer(e)}async removeServer(e){let t=this.servers.get(e);if(t){if(t.type==="stdio"){let o=this.processes.get(e);o&&(o.kill(),this.processes.delete(e))}this.servers.delete(e),this.tools.delete(e)}}async listTools(e){let t={};if(e){let o=this.tools.get(e);if(o)for(let s of o)t[`${e}/${s.name}`]=s}else for(let[o,s]of this.tools)for(let r of s)t[`${o}/${r.name}`]=r;return t}async callTool(e,t,o){let s=this.servers.get(e);if(!s)throw new Error(`Unknown MCP server: ${e}`);return s.type==="http"&&s.url?this.callHttpTool(s.url,t,o):this.callStdioTool(e,t,o)}async connectStdioServer(e){let t=++this.requestId;try{let o=Q(e.command,e.args||[],{env:{...process.env,...e.env},stdio:["pipe","pipe","pipe"]});this.processes.set(e.name,o),o.stdout?.on("data",s=>{this.handleStdioMessage(e.name,s.toString())}),o.stderr?.on("data",s=>{console.error(`[MCP ${e.name}]`,s.toString())}),await this.sendStdioRequest(e.name,"initialize",{protocolVersion:"2024-11-05",capabilities:{},clientInfo:{name:"cortex",version:"5.0.0"}}),this.tools.set(e.name,[]),this.initialized=!0}catch(o){console.error(`Failed to connect to MCP server ${e.name}:`,o)}}async connectHttpServer(e){if(e.url)try{let t=await fetch(`${e.url}/initialize`,{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({protocolVersion:"2024-11-05",capabilities:{},clientInfo:{name:"cortex",version:"5.0.0"}})});if(t.ok){let o=await 
t.json();this.tools.set(e.name,o.tools||[])}}catch(t){console.error(`Failed to connect to MCP server ${e.name}:`,t)}}async sendStdioRequest(e,t,o){let s=++this.requestId,r=this.processes.get(e);if(!r||!r.stdin)throw new Error(`MCP server ${e} not connected`);return new Promise((n,a)=>{this.pendingRequests.set(s,{resolve:n,reject:a}),r.stdin?.write(JSON.stringify({jsonrpc:"2.0",id:s,method:t,params:o})+`
`),setTimeout(()=>{this.pendingRequests.has(s)&&(this.pendingRequests.delete(s),a(new Error("MCP request timeout")))},3e4)})}handleStdioMessage(e,t){try{let o=t.split(`
`).filter(Boolean);for(let s of o){let r=JSON.parse(s);if(r.id&&this.pendingRequests.has(r.id)){let n=this.pendingRequests.get(r.id);this.pendingRequests.delete(r.id),r.error?n.reject(new Error(r.error.message||"MCP error")):n.resolve(r.result)}if(r.method==="tools/list"){let n=r.params?.tools||[];this.tools.set(e,n),this.emit("toolsUpdated",{server:e,tools:n})}r.method==="notifications/tools_changed"&&this.refreshTools(e)}}catch{}}async callStdioTool(e,t,o){let s=await this.sendStdioRequest(e,"tools/call",{name:t,arguments:o});return JSON.stringify(s)}async callHttpTool(e,t,o){let s=await fetch(`${e}/tools/call`,{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({name:t,arguments:o})});if(!s.ok)throw new Error(`MCP tool call failed: ${s.status}`);let r=await s.json();return JSON.stringify(r.content?.[0]?.text||"")}async refreshTools(e){let t=this.servers.get(e);if(t)if(t.type==="http"&&t.url)await this.connectHttpServer(t);else try{let s=(await this.sendStdioRequest(e,"tools/list",{}))?.tools||[];this.tools.set(e,s)}catch{}}getToolsForContext(){let e=[];for(let[t,o]of this.tools)for(let s of o)e.push({name:`${t}/${s.name}`,description:s.description,input_schema:s.inputSchema});return e}disconnect(){for(let[e,t]of this.processes)t.kill();this.processes.clear(),this.servers.clear(),this.tools.clear(),this.initialized=!1}};var T=class{provider;context;memory;mcp;config;tools=[...U];cwd;planMode=!1;currentPlan=[];constructor(e,t){this.config=e,this.cwd=t;let o={type:e.provider,baseUrl:e.providerUrl,apiKey:e.apiKey,defaultModel:e.model};this.provider=y(o),this.context=new x,this.memory=new C,this.mcp=new P,this.planMode=e.planMode||!1}async initialize(){console.log(d.blue("\u{1F9E0} Initializing Cortex...")),await this.memory.initialize(this.cwd),await this.context.loadProjectMemory(this.cwd),await this.context.loadUserMemory(),await this.loadMCPConfig(),console.log(d.green("\u2705 Cortex ready!"))}async loadMCPConfig(){let 
e=[z(this.cwd,".cortex","mcp.json"),z(te(),".cortex","mcp.json")];for(let t of e)try{let o=await ee(t,"utf-8"),s=JSON.parse(o);if(s.servers)for(let[r,n]of Object.entries(s.servers))await this.mcp.addServer({name:r,command:n.command,args:n.args,env:n.env,type:n.type||"stdio",url:n.url})}catch{}if(this.mcp.listTools()){let t=this.mcp.getToolsForContext();this.tools=[...U,...t]}}async run(e){console.log(d.cyan(`
\u{1F916} Thinking...
`)),this.planMode?await this.runWithPlanMode(e):await this.runDirect(e)}async runDirect(e){let t=0,o=this.config.maxIterations||50,s=!1;for(this.context.addMessage({role:"user",content:e});!s&&t<o;){t++,this.context.shouldWarn()&&console.log(d.yellow(`\u26A0\uFE0F Context usage high (${Math.round((1-this.context.getTokenUsage().available/this.context.getTokenUsage().total)*100)}%)`));try{let r=await this.provider.chat(this.context.getMessagesForAPI(),this.tools,this.config.model);if(console.log(d.white(r.content)),r.tool_calls&&r.tool_calls.length>0){let n={role:"assistant",content:r.content,tool_calls:r.tool_calls};this.context.addMessage(n);for(let a of r.tool_calls){let c=a.function.name,l=JSON.parse(a.function.arguments);console.log(d.dim(`
\u{1F527} ${c}: ${JSON.stringify(l)}`));let u=await I(c,l,this.cwd),w={role:"tool",content:u.output,tool_call_id:a.id,name:c};this.context.addMessage(w),u.is_error&&console.log(d.red(`\u274C ${u.output}`))}}else s=!0;(r.stop_reason==="stop"||r.stop_reason==="end_turn")&&(s=!0)}catch(r){console.error(d.red(`Error: ${r instanceof Error?r.message:String(r)}`));break}this.context.shouldCompact()&&(console.log(d.yellow("\u{1F4E6} Compacting context...")),await this.context.compact())}t>=o&&console.log(d.yellow(`\u26A0\uFE0F Reached max iterations (${o})`))}async runWithPlanMode(e){console.log(d.blue("\u{1F4DD} Running in Plan Mode...")),this.context.addMessage({role:"user",content:`[PLANNING MODE] ${e}

Please analyze the task and create a detailed plan before executing anything. List each step with the files that will be modified.`});let t=await this.provider.chat(this.context.getMessagesForAPI(),void 0,this.config.model);console.log(d.cyan(`
\u{1F4CB} Proposed Plan:`)),console.log(d.white(t.content)),console.log(d.green(`
\u2705 Proceeding with execution...
`)),this.context.clear(),this.context.addMessage({role:"user",content:e}),await this.runDirect(e)}getContextInfo(){let e=this.context.getTokenUsage();return{tokens:`${e.total-e.available} / ${e.total}`,messages:this.context.getMessages().length}}async cleanup(){this.mcp.disconnect()}};import{readFile as J,writeFile as b,access as N,mkdir as se}from"fs/promises";import{join as h}from"path";import{homedir as ne}from"os";var p=new oe;p.name("cortex").description("The ultimate local AI coding agent - context-aware, memory-powered, MCP-enabled").version("5.0.0-beta.1").option("-p, --provider <name>","Provider to use: ollama, lmstudio, openrouter").option("-m, --model <name>","Model to use").option("-u, --url <url>","Provider URL").option("--plan-mode","Show plan before executing changes");async function E(){let i=[h(process.cwd(),".cortex","config.json"),h(ne(),".cortex","config.json")];for(let e of i)try{let t=await J(e,"utf-8");return JSON.parse(t)}catch{}return{}}async function re(){console.log(m.blue("Initializing Cortex project..."));let i=h(process.cwd(),".cortex"),e=h(i,"config.json"),t=h(i,"mcp.json"),o=h(process.cwd(),"CORTEX.md");try{await se(i,{recursive:!0})}catch{}try{await N(e),console.log(m.yellow("Cortex already initialized in this project"));return}catch{}await b(e,JSON.stringify({provider:"ollama",model:"llama3.2"},null,2)),console.log(m.green("\u2705 Created .cortex/config.json"));try{await N(o)}catch{await b(o,`# Cortex Project Memory

## Project Overview
<!-- Fill in your project description -->

## Coding Conventions
<!-- Document your coding style preferences -->

## Testing Preferences
<!-- Document how tests should be written -->

## Architecture Notes
<!-- Document important architectural decisions -->
`),console.log(m.green("\u2705 Created CORTEX.md"))}await b(t,JSON.stringify({servers:{}},null,2)),console.log(m.green("\u2705 Created .cortex/mcp.json")),console.log(m.green(`
\u{1F389} Cortex initialized! Run "cortex" to start coding.`))}async function ie(i,e){let t=await E(),o=e.provider||t.provider||"ollama",s=e.model||t.model,r=e.url||t.url,n=e.planMode||t.planMode,a={provider:o,model:s,providerUrl:r,planMode:n},c=new T(a,process.cwd());try{await c.initialize(),await c.run(i);let l=c.getContextInfo();console.log(m.dim(`
\u{1F4CA} Session: ${l.messages} messages, ${l.tokens} tokens`))}catch(l){console.error(m.red(`
\u274C Error: ${l instanceof Error?l.message:String(l)}`)),process.exit(1)}finally{await c.cleanup()}}async function ae(){console.log(m.blue(`\u{1F50D} Checking provider status...
`));let i=await E(),e=i.provider||"ollama";try{let t=y({type:e,baseUrl:i.url});console.log(m.green(`\u2705 Provider: ${t.name}`));let o=await t.listModels();if(o.length>0){console.log(m.cyan(`
\u{1F4E6} Available models:`));for(let s of o.slice(0,10))console.log(` - ${s}`);o.length>10&&console.log(m.dim(` ... and ${o.length-10} more`))}else console.log(m.yellow("\u26A0\uFE0F No models found. Make sure your provider is running."))}catch(t){console.error(m.red(`\u274C Provider error: ${t instanceof Error?t.message:String(t)}`))}}async function ce(i){console.log(m.blue(`\u2699\uFE0F Setting up Cortex...
`));let e=h(process.cwd(),".cortex","config.json"),t={};try{let s=await J(e,"utf-8");t=JSON.parse(s)}catch{}let o={provider:i.provider||t.provider||"ollama",model:i.model||t.model||"llama3.2",url:t.url||"http://localhost:11434"};await b(e,JSON.stringify(o,null,2)),console.log(m.green("\u2705 Configuration saved!")),console.log(m.dim(` Provider: ${o.provider}`)),console.log(m.dim(` Model: ${o.model}`))}async function le(i){let e=await E(),t=i.provider||e.provider||"ollama";try{let s=await y({type:t,baseUrl:e.url}).listModels();if(s.length===0){console.log(m.yellow("No models available"));return}console.log(m.cyan(`Available models on ${t}:`));for(let r of s)console.log(` ${r}`)}catch(o){console.error(m.red(`Error: ${o instanceof Error?o.message:String(o)}`))}}p.command("init").description("Initialize Cortex in the current project").action(re);p.command("run [prompt...]").description("Run a coding task").option("-p, --provider <name>","Provider to use").option("-m, --model <name>","Model to use").option("-u, --url <url>","Provider URL").option("--plan-mode","Show plan before executing").action(async(i,e)=>{(!i||i.length===0)&&(console.log(m.yellow('Please provide a prompt. 
Usage: cortex run "your task"')),process.exit(1)),await ie(i.join(" "),e)});p.command("status").description("Check provider and model status").action(ae);p.command("setup").description("Configure Cortex settings").option("-p, --provider <name>","Provider name").option("-m, --model <name>","Default model").option("-f, --force","Overwrite existing config").action(ce);p.command("models").description("List available models").option("-p, --provider <name>","Provider to query").action(le);p.command("chat").description("Start interactive chat session").option("-p, --provider <name>","Provider to use").option("-m, --model <name>","Model to use").action(async i=>{console.log(m.yellow("Interactive chat coming soon!"))});function v(i=0){console.log(),console.log(m.dim(i===0?"\u{1F44B} Goodbye!":"\u274C Cancelled.")),process.exit(i)}process.on("SIGINT",()=>v(0));process.on("SIGTERM",()=>v(0));process.on("uncaughtException",i=>{i.message?.includes("ExitPromptError")||i.message?.includes("User force closed")||i.message?.includes("prompt")?v(0):(console.error(m.red("Error:"),i.message),process.exit(1))});process.on("unhandledRejection",i=>{let e=String(i);(e.includes("ExitPromptError")||e.includes("User force closed")||e.includes("prompt"))&&v(0)});var me=process.argv.slice(2);me.length===0?p.help():p.parse();
package/package.json
ADDED
@@ -0,0 +1,76 @@
{
  "name": "@iamharshil/cortex",
  "version": "5.0.0-beta.1",
  "description": "The ultimate local AI coding agent - context-aware, memory-powered, MCP-enabled",
  "keywords": [
    "cli",
    "ai",
    "coding-agent",
    "llm",
    "local-ai",
    "developer-tools",
    "mcp",
    "autonomous-agent"
  ],
  "author": "Harshil <hello@iamharshil.com>",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/iamharshil/cortex.git"
  },
  "bugs": {
    "url": "https://github.com/iamharshil/cortex/issues"
  },
  "homepage": "https://github.com/iamharshil/cortex#readme",
  "type": "module",
  "main": "./bin/index.js",
  "types": "./dist/index.d.ts",
  "bin": {
    "cortex": "./bin/index.js"
  },
  "files": [
    "dist",
    "README.md",
    "LICENSE"
  ],
  "scripts": {
    "build": "node scripts/build.js",
    "prepublishOnly": "npm run build",
    "dev": "tsx src/index.ts",
    "start": "node bin/index.js",
    "test": "vitest run",
    "test:watch": "vitest",
    "lint": "eslint src --ext .ts",
    "format": "prettier --write \"src/**/*.ts\"",
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "chalk": "^5.3.0",
    "commander": "^12.1.0",
    "conf": "^12.0.0",
    "execa": "^9.5.2",
    "fast-glob": "^3.3.0",
    "inquirer": "^10.0.0",
    "ora": "^8.0.0"
  },
  "devDependencies": {
    "@types/inquirer": "^9.0.7",
    "@types/node": "^20.17.0",
    "@typescript-eslint/eslint-plugin": "^8.18.0",
    "@typescript-eslint/parser": "^8.18.0",
    "esbuild": "^0.24.0",
    "eslint": "^9.16.0",
    "prettier": "^3.4.0",
    "tsx": "^4.19.0",
    "typescript": "^5.7.0",
    "vitest": "^2.1.0"
  },
  "engines": {
    "node": ">=20.0.0"
  },
  "os": [
    "darwin",
    "linux",
    "win32"
  ]
}