codemaxxing 0.2.1 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -121,11 +121,56 @@ codemaxxing --provider openrouter
  ### 🔥 Streaming Tokens
  Real-time token display. See the model think, not just the final answer.
 
- ### ⚠️ Tool Approval
- Dangerous operations (file writes, shell commands) require your approval. Press `y` to allow, `n` to deny, `a` to always allow for the session.
+ ### ⚠️ Tool Approval + Diff Preview
+ Dangerous operations require your approval. File writes show a **unified diff** of what will change before you say yes. Press `y` to allow, `n` to deny, `a` to always allow.
+
+ ### 🏗️ Architect Mode
+ Dual-model planning. A "planner" model reasons through the approach, then your editor model executes the changes.
+ - `/architect` — toggle on/off
+ - `/architect claude-3-5-sonnet` — set the planner model
+ - Great for pairing expensive reasoning models with fast editors
+
+ ### 🧠 Skills System (21 Built-In)
+ Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills:
+
+ **Frontend:** react-expert, nextjs-app, tailwind-ui, svelte-kit
+ **Mobile:** react-native, swift-ios, flutter
+ **Backend:** python-pro, node-backend, go-backend, rust-systems
+ **Data:** sql-master, supabase
+ **Practices:** typescript-strict, api-designer, test-engineer, doc-writer, security-audit, devops-toolkit, git-workflow
+ **Game Dev:** unity-csharp
+
+ ```
+ /skills              # Browse & install from registry
+ /skills install X    # Quick install
+ /skills on/off X     # Toggle per session
+ ```
+
+ Project-level config: add `.codemaxxing/skills.json` to scope skills per project.
+
+ ### 📋 CODEMAXXING.md — Project Rules
+ Drop a `CODEMAXXING.md` in your project root for project-specific instructions. Auto-loaded every session. Also supports `.cursorrules` for Cursor migrants.
+
+ ### 🔧 Auto-Lint
+ Automatically runs your linter after every file edit and feeds errors back to the model for auto-fix. Detects eslint, biome, ruff, clippy, golangci-lint, and more.
+ - `/lint on` / `/lint off` — toggle (ON by default)
 
  ### 📂 Smart Context (Repo Map)
- Automatically scans your codebase and builds a map of functions, classes, and types. The model knows what exists where without reading every file.
+ Scans your codebase and builds a map of functions, classes, and types. The model knows what exists where without reading every file.
+
+ ### 📦 Context Compression
+ When conversation history exceeds 80k tokens, older messages are automatically summarized to free up context. Configurable via `contextCompressionThreshold`.
+
+ ### 💰 Cost Tracking
+ Per-session token usage and estimated cost in the status bar. Pricing for 20+ common models. Saved to session history.
+
+ ### 🖥️ Headless/CI Mode
+ Run codemaxxing in scripts and pipelines without the TUI:
+ ```bash
+ codemaxxing exec "add error handling to api.ts"
+ codemaxxing exec --auto-approve "fix all lint errors"
+ echo "add tests" | codemaxxing exec
+ ```
 
  ### 🔀 Git Integration
  Opt-in git commands built in:
@@ -138,18 +183,23 @@ Opt-in git commands built in:
  ### 💾 Session Persistence
  Conversations auto-save to SQLite. Pick up where you left off:
  - `/sessions` — list past sessions
+ - `/session delete` — remove a session
  - `/resume` — interactive session picker
 
  ### 🔄 Multi-Provider
  Switch models mid-session without restarting:
  - `/model gpt-4o` — switch to a different model
  - `/models` — list available models from your provider
+ - Native Anthropic API support (not just OpenAI-compatible)
+
+ ### 🎨 14 Themes
+ `/theme` to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rosé-pine, synthwave, blood-moon, mono, solarized, hacker, acid
 
  ### 🔐 Authentication
- One command to connect any LLM provider. OpenRouter OAuth (browser login for 200+ models), Anthropic subscription linking, Codex/Qwen CLI import, GitHub Copilot device flow, or manual API keys. Use `codemaxxing login` or `/login` in-session.
+ One command to connect any LLM provider. OpenRouter OAuth, Anthropic subscription linking, Codex/Qwen CLI import, GitHub Copilot device flow, or manual API keys.
 
  ### 📋 Smart Paste
- Paste large code blocks without breaking the UI. Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges (like Claude Code).
+ Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges.
 
  ### ⌨️ Slash Commands
  Type `/` for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to select.
@@ -159,11 +209,17 @@ Type `/` for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to s
  | Command | Description |
  |---------|-------------|
  | `/help` | Show all commands |
+ | `/connect` | Retry LLM connection |
  | `/login` | Interactive auth setup |
+ | `/architect` | Toggle architect mode / set model |
+ | `/skills` | Browse, install, manage skills |
+ | `/lint on/off` | Toggle auto-linting |
  | `/model <name>` | Switch model mid-session |
  | `/models` | List available models |
+ | `/theme` | Switch color theme |
  | `/map` | Show repository map |
  | `/sessions` | List past sessions |
+ | `/session delete` | Delete a session |
  | `/resume` | Resume a past session |
  | `/reset` | Clear conversation |
  | `/context` | Show message count + tokens |
@@ -174,7 +230,17 @@ Type `/` for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to s
  | `/git on/off` | Toggle auto-commits |
  | `/quit` | Exit |
 
- ## CLI Flags
+ ## CLI
+
+ ```bash
+ codemaxxing                          # Start TUI
+ codemaxxing login                    # Auth setup
+ codemaxxing auth list                # Show saved credentials
+ codemaxxing exec "prompt"            # Headless mode (no TUI)
+ codemaxxing exec --auto-approve "x"  # Skip approval prompts
+ codemaxxing exec --json "x"          # JSON output for scripts
+ echo "fix tests" | codemaxxing exec  # Pipe from stdin
+ ```
 
  ```
  -m, --model <model>    Model name to use
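The new `exec --json` mode is aimed at scripts, so a consumer typically parses its stdout. A minimal sketch of doing that in a CI step; the field names (`response`, `model`, `tools_used`, `has_changes`, `error`) come from `dist/exec.js` further down in this diff, while the sample payload itself is illustrative, not real output:

```javascript
// Sketch: handle the JSON printed by `codemaxxing exec --json`.
// Sample payload is illustrative; field names match dist/exec.js.
const sample = JSON.stringify({
  response: "Added error handling to api.ts",
  model: "gpt-4o",
  tools_used: 2,
  has_changes: true,
});

const result = JSON.parse(sample);
if (result.error) {
  // On failure, exec.js prints { error } and exits 1
  console.error(`codemaxxing failed: ${result.error}`);
} else if (result.has_changes) {
  console.log(`${result.model} made edits with ${result.tools_used} tool calls`);
}
```

Note that `exec.js` also signals "no changes made" through its exit code, so shell callers can branch without parsing JSON at all.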
package/dist/agent.d.ts CHANGED
@@ -1,3 +1,4 @@
+ import { type ConnectedServer } from "./utils/mcp.js";
  import type { ProviderConfig } from "./config.js";
  export interface AgentOptions {
      provider: ProviderConfig;
@@ -11,6 +12,9 @@ export interface AgentOptions {
      onToolApproval?: (name: string, args: Record<string, unknown>, diff?: string) => Promise<"yes" | "no" | "always">;
      onGitCommit?: (message: string) => void;
      onContextCompressed?: (oldTokens: number, newTokens: number) => void;
+     onArchitectPlan?: (plan: string) => void;
+     onLintResult?: (file: string, errors: string) => void;
+     onMCPStatus?: (server: string, status: string) => void;
      contextCompressionThreshold?: number;
  }
  export declare class CodingAgent {
@@ -35,6 +39,11 @@ export declare class CodingAgent {
      private systemPrompt;
      private compressionThreshold;
      private sessionDisabledSkills;
+     private projectRulesSource;
+     private architectModel;
+     private autoLintEnabled;
+     private detectedLinter;
+     private mcpServers;
      constructor(options: AgentOptions);
      /**
       * Initialize the agent — call this after constructor to build async context
@@ -53,6 +62,10 @@ export declare class CodingAgent {
       * Rebuild the repo map (useful after file changes)
       */
      refreshRepoMap(): Promise<string>;
+     /**
+      * Send a message, routing through architect model if enabled
+      */
+     send(userMessage: string): Promise<string>;
      /**
       * Stream a response from the model.
       * Assembles tool call chunks, emits tokens in real-time,
@@ -97,5 +110,26 @@ export declare class CodingAgent {
      getSessionDisabledSkills(): Set<string>;
      getActiveSkillCount(): number;
      getCwd(): string;
+     getProjectRulesSource(): string | null;
+     setArchitectModel(model: string | null): void;
+     getArchitectModel(): string | null;
+     setAutoLint(enabled: boolean): void;
+     isAutoLintEnabled(): boolean;
+     getDetectedLinter(): {
+         command: string;
+         name: string;
+     } | null;
+     setDetectedLinter(linter: {
+         command: string;
+         name: string;
+     } | null): void;
+     /**
+      * Run the architect model to generate a plan, then feed to editor model
+      */
+     private architectChat;
+     getMCPServerCount(): number;
+     getMCPServers(): ConnectedServer[];
+     disconnectMCP(): Promise<void>;
+     reconnectMCP(): Promise<void>;
      reset(): void;
  }
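The three new optional callbacks in `AgentOptions` (`onArchitectPlan`, `onLintResult`, `onMCPStatus`) are how a host UI observes the architect, auto-lint, and MCP features. A minimal sketch of wiring them; the handler bodies are illustrative, only the signatures come from `agent.d.ts`:

```javascript
// Sketch: plain-console handlers matching the new AgentOptions callbacks.
// A host would spread these into the options passed to `new CodingAgent(...)`.
const callbacks = {
  onArchitectPlan: (plan) => console.log(`[architect]\n${plan}`),
  onLintResult: (file, errors) => console.warn(`[lint] ${file}\n${errors}`),
  onMCPStatus: (server, status) => console.log(`[mcp] ${server}: ${status}`),
};

// Example invocation, as the agent would during init():
callbacks.onMCPStatus("github", "connected");
```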
package/dist/agent.js CHANGED
@@ -1,10 +1,12 @@
  import OpenAI from "openai";
  import Anthropic from "@anthropic-ai/sdk";
  import { FILE_TOOLS, executeTool, generateDiff, getExistingContent } from "./tools/files.js";
- import { buildProjectContext, getSystemPrompt } from "./utils/context.js";
+ import { detectLinter, runLinter } from "./utils/lint.js";
+ import { buildProjectContext, getSystemPrompt, loadProjectRules } from "./utils/context.js";
  import { isGitRepo, autoCommit } from "./utils/git.js";
  import { buildSkillPrompts, getActiveSkillCount } from "./utils/skills.js";
  import { createSession, saveMessage, updateTokenEstimate, updateSessionCost, loadMessages } from "./utils/sessions.js";
+ import { loadMCPConfig, connectToServers, disconnectAll, getAllMCPTools, parseMCPToolName, callMCPTool } from "./utils/mcp.js";
  // Tools that can modify your project — require approval
  const DANGEROUS_TOOLS = new Set(["write_file", "run_command"]);
  // Cost per 1M tokens (input/output) for common models
@@ -73,6 +75,11 @@ export class CodingAgent {
      systemPrompt = "";
      compressionThreshold;
      sessionDisabledSkills = new Set();
+     projectRulesSource = null;
+     architectModel = null;
+     autoLintEnabled = true;
+     detectedLinter = null;
+     mcpServers = [];
      constructor(options) {
          this.options = options;
          this.providerType = options.provider.type || "openai";
@@ -102,7 +109,21 @@ export class CodingAgent {
      async init() {
          const context = await buildProjectContext(this.cwd);
          const skillPrompts = buildSkillPrompts(this.cwd, this.sessionDisabledSkills);
-         this.systemPrompt = await getSystemPrompt(context, skillPrompts);
+         const rules = loadProjectRules(this.cwd);
+         if (rules)
+             this.projectRulesSource = rules.source;
+         this.systemPrompt = await getSystemPrompt(context, skillPrompts, rules?.content ?? "");
+         // Detect project linter
+         this.detectedLinter = detectLinter(this.cwd);
+         // Connect to MCP servers
+         const mcpConfig = loadMCPConfig(this.cwd);
+         if (Object.keys(mcpConfig.mcpServers).length > 0) {
+             this.mcpServers = await connectToServers(mcpConfig, this.options.onMCPStatus);
+             if (this.mcpServers.length > 0) {
+                 const mcpTools = getAllMCPTools(this.mcpServers);
+                 this.tools = [...FILE_TOOLS, ...mcpTools];
+             }
+         }
          this.messages = [
              { role: "system", content: this.systemPrompt },
          ];
@@ -138,6 +159,15 @@ export class CodingAgent {
          this.repoMap = await buildRepoMap(this.cwd);
          return this.repoMap;
      }
+     /**
+      * Send a message, routing through architect model if enabled
+      */
+     async send(userMessage) {
+         if (this.architectModel) {
+             return this.architectChat(userMessage);
+         }
+         return this.chat(userMessage);
+     }
      /**
       * Stream a response from the model.
       * Assembles tool call chunks, emits tokens in real-time,
@@ -286,7 +316,15 @@ export class CodingAgent {
                  }
              }
          }
-         const result = await executeTool(toolCall.name, args, this.cwd);
+         // Route to MCP or built-in tool
+         const mcpParsed = parseMCPToolName(toolCall.name);
+         let result;
+         if (mcpParsed) {
+             result = await callMCPTool(mcpParsed.serverName, mcpParsed.toolName, args);
+         }
+         else {
+             result = await executeTool(toolCall.name, args, this.cwd);
+         }
          this.options.onToolResult?.(toolCall.name, result);
          // Auto-commit after successful write_file (only if enabled)
          if (this.gitEnabled && this.autoCommitEnabled && toolCall.name === "write_file" && result.startsWith("✅")) {
@@ -296,6 +334,22 @@ export class CodingAgent {
                  this.options.onGitCommit?.(`write ${path}`);
              }
          }
+         // Auto-lint after successful write_file
+         if (this.autoLintEnabled && this.detectedLinter && toolCall.name === "write_file" && result.startsWith("✅")) {
+             const filePath = String(args.path ?? "");
+             const lintErrors = runLinter(this.detectedLinter, filePath, this.cwd);
+             if (lintErrors) {
+                 this.options.onLintResult?.(filePath, lintErrors);
+                 const lintMsg = {
+                     role: "tool",
+                     tool_call_id: toolCall.id,
+                     content: result + `\n\nLint errors detected in ${filePath}:\n${lintErrors}\nPlease fix these issues.`,
+                 };
+                 this.messages.push(lintMsg);
+                 saveMessage(this.sessionId, lintMsg);
+                 continue; // skip the normal tool message push
+             }
+         }
          const toolMsg = {
              role: "tool",
              tool_call_id: toolCall.id,
@@ -467,7 +521,15 @@ export class CodingAgent {
                  }
              }
          }
-         const result = await executeTool(toolCall.name, args, this.cwd);
+         // Route to MCP or built-in tool
+         const mcpParsed = parseMCPToolName(toolCall.name);
+         let result;
+         if (mcpParsed) {
+             result = await callMCPTool(mcpParsed.serverName, mcpParsed.toolName, args);
+         }
+         else {
+             result = await executeTool(toolCall.name, args, this.cwd);
+         }
          this.options.onToolResult?.(toolCall.name, result);
          // Auto-commit after successful write_file
          if (this.gitEnabled && this.autoCommitEnabled && toolCall.name === "write_file" && result.startsWith("✅")) {
@@ -477,6 +539,22 @@ export class CodingAgent {
                  this.options.onGitCommit?.(`write ${path}`);
              }
          }
+         // Auto-lint after successful write_file
+         if (this.autoLintEnabled && this.detectedLinter && toolCall.name === "write_file" && result.startsWith("✅")) {
+             const filePath = String(args.path ?? "");
+             const lintErrors = runLinter(this.detectedLinter, filePath, this.cwd);
+             if (lintErrors) {
+                 this.options.onLintResult?.(filePath, lintErrors);
+                 const lintMsg = {
+                     role: "tool",
+                     tool_call_id: toolCall.id,
+                     content: result + `\n\nLint errors detected in ${filePath}:\n${lintErrors}\nPlease fix these issues.`,
+                 };
+                 this.messages.push(lintMsg);
+                 saveMessage(this.sessionId, lintMsg);
+                 continue;
+             }
+         }
          const toolMsg = {
              role: "tool",
              tool_call_id: toolCall.id,
@@ -632,6 +710,83 @@ export class CodingAgent {
      getCwd() {
          return this.cwd;
      }
+     getProjectRulesSource() {
+         return this.projectRulesSource;
+     }
+     setArchitectModel(model) {
+         this.architectModel = model;
+     }
+     getArchitectModel() {
+         return this.architectModel;
+     }
+     setAutoLint(enabled) {
+         this.autoLintEnabled = enabled;
+     }
+     isAutoLintEnabled() {
+         return this.autoLintEnabled;
+     }
+     getDetectedLinter() {
+         return this.detectedLinter;
+     }
+     setDetectedLinter(linter) {
+         this.detectedLinter = linter;
+     }
+     /**
+      * Run the architect model to generate a plan, then feed to editor model
+      */
+     async architectChat(userMessage) {
+         const architectSystemPrompt = "You are a senior software architect. Analyze the request and create a detailed implementation plan. List exactly which files to modify, what changes to make, and in what order. Do NOT write code — just plan.";
+         let plan = "";
+         if (this.providerType === "anthropic" && this.anthropicClient) {
+             const response = await this.anthropicClient.messages.create({
+                 model: this.architectModel,
+                 max_tokens: this.maxTokens,
+                 system: architectSystemPrompt,
+                 messages: [{ role: "user", content: userMessage }],
+             });
+             plan = response.content
+                 .filter((b) => b.type === "text")
+                 .map((b) => b.text)
+                 .join("");
+         }
+         else {
+             const response = await this.client.chat.completions.create({
+                 model: this.architectModel,
+                 max_tokens: this.maxTokens,
+                 messages: [
+                     { role: "system", content: architectSystemPrompt },
+                     { role: "user", content: userMessage },
+                 ],
+             });
+             plan = response.choices[0]?.message?.content ?? "(no plan generated)";
+         }
+         this.options.onArchitectPlan?.(plan);
+         // Feed plan + original request to the editor model
+         const editorPrompt = `## Architect Plan\n${plan}\n\n## Original Request\n${userMessage}\n\nExecute the plan above. Follow it step by step.`;
+         return this.chat(editorPrompt);
+     }
+     getMCPServerCount() {
+         return this.mcpServers.length;
+     }
+     getMCPServers() {
+         return this.mcpServers;
+     }
+     async disconnectMCP() {
+         await disconnectAll();
+         this.mcpServers = [];
+         this.tools = FILE_TOOLS;
+     }
+     async reconnectMCP() {
+         await this.disconnectMCP();
+         const mcpConfig = loadMCPConfig(this.cwd);
+         if (Object.keys(mcpConfig.mcpServers).length > 0) {
+             this.mcpServers = await connectToServers(mcpConfig, this.options.onMCPStatus);
+             if (this.mcpServers.length > 0) {
+                 const mcpTools = getAllMCPTools(this.mcpServers);
+                 this.tools = [...FILE_TOOLS, ...mcpTools];
+             }
+         }
+     }
      reset() {
          const systemMsg = this.messages[0];
          this.messages = [systemMsg];
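The architect handoff in `architectChat` above ultimately reduces to one prompt template: the planner's output is prepended to the original request before the combined text is sent through `chat()`. Reproduced standalone here, with the template copied verbatim and a sample plan/request for illustration:

```javascript
// The editor-model prompt built by architectChat (template copied verbatim).
function buildEditorPrompt(plan, userMessage) {
  return `## Architect Plan\n${plan}\n\n## Original Request\n${userMessage}\n\nExecute the plan above. Follow it step by step.`;
}

const prompt = buildEditorPrompt(
  "1. Edit api.ts to wrap fetch calls in try/catch",
  "add error handling"
);
```

Because the plan rides inside a single user message, the editor model needs no special handling; architect mode is transparent to the rest of the chat loop.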
package/dist/cli.d.ts CHANGED
@@ -1,6 +1,6 @@
  #!/usr/bin/env node
  /**
   * Codemaxxing CLI entry point
-  * Routes subcommands (login, auth) to auth-cli, everything else to the TUI
+  * Routes subcommands (login, auth, exec) to handlers, everything else to the TUI
   */
  export {};
package/dist/cli.js CHANGED
@@ -1,7 +1,7 @@
  #!/usr/bin/env node
  /**
   * Codemaxxing CLI entry point
-  * Routes subcommands (login, auth) to auth-cli, everything else to the TUI
+  * Routes subcommands (login, auth, exec) to handlers, everything else to the TUI
   */
  import { spawn } from "node:child_process";
  import { fileURLToPath } from "node:url";
@@ -21,6 +21,11 @@ if (subcmd === "login" || subcmd === "auth") {
      });
      child.on("exit", (code) => process.exit(code ?? 0));
  }
+ else if (subcmd === "exec") {
+     // Headless/CI mode — no TUI
+     const { runExec } = await import("./exec.js");
+     await runExec(process.argv.slice(3));
+ }
  else {
      // TUI mode — import directly (not spawn) to preserve raw stdin
      await import("./index.js");
package/dist/config.d.ts CHANGED
@@ -15,6 +15,8 @@ export interface CodemaxxingConfig {
          contextFiles: number;
          maxTokens: number;
          contextCompressionThreshold?: number;
+         architectModel?: string;
+         autoLint?: boolean;
      };
  }
  export interface CLIArgs {
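For reference, the two new optional keys slot under `defaults` in `~/.codemaxxing/settings.json` (the config path named in config.js). A minimal sketch; the surrounding values are illustrative assumptions, with `contextCompressionThreshold` set to mirror the 80k-token behavior described in the README:

```json
{
  "defaults": {
    "maxTokens": 4096,
    "contextCompressionThreshold": 80000,
    "architectModel": "claude-3-5-sonnet",
    "autoLint": true
  }
}
```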
package/dist/config.js CHANGED
@@ -55,6 +55,7 @@ codemaxxing — your code. your model. no excuses.
 
  Usage:
    codemaxxing [options]
+   codemaxxing exec "prompt" [exec-options]
 
  Options:
    -m, --model <model>     Model name to use
@@ -63,11 +64,19 @@ Options:
    -u, --base-url <url>    Base URL for the provider API
    -h, --help              Show this help
 
+ Exec options (headless/CI mode):
+   --auto-approve          Skip tool approval prompts
+   --json                  Output JSON instead of streaming text
+   -m, --model <model>     Model to use
+   -p, --provider <name>   Provider profile
+
  Examples:
    codemaxxing                          # Auto-detect local LLM
    codemaxxing -m gpt-4o -u https://api.openai.com/v1 -k sk-...
    codemaxxing -p openrouter            # Use saved provider profile
    codemaxxing -m qwen3.5-35b           # Override model only
+   codemaxxing exec "fix the failing tests"     # Headless mode
+   echo "explain this code" | codemaxxing exec  # Pipe input
 
  Config: ~/.codemaxxing/settings.json
  `);
package/dist/exec.d.ts ADDED
@@ -0,0 +1,7 @@
+ /**
+  * Headless/CI execution mode — runs agent without TUI
+  * Usage: codemaxxing exec "your prompt here"
+  * Flags: --auto-approve, --json, --model <model>, --provider <name>
+  * Supports stdin pipe: echo "fix the tests" | codemaxxing exec
+  */
+ export declare function runExec(argv: string[]): Promise<void>;
package/dist/exec.js ADDED
@@ -0,0 +1,164 @@
+ /**
+  * Headless/CI execution mode — runs agent without TUI
+  * Usage: codemaxxing exec "your prompt here"
+  * Flags: --auto-approve, --json, --model <model>, --provider <name>
+  * Supports stdin pipe: echo "fix the tests" | codemaxxing exec
+  */
+ import { CodingAgent } from "./agent.js";
+ import { loadConfig, applyOverrides, detectLocalProvider } from "./config.js";
+ import { disconnectAll } from "./utils/mcp.js";
+ function parseExecArgs(argv) {
+     const args = {
+         prompt: "",
+         autoApprove: false,
+         json: false,
+     };
+     const positional = [];
+     for (let i = 0; i < argv.length; i++) {
+         const arg = argv[i];
+         const next = argv[i + 1];
+         if (arg === "--auto-approve") {
+             args.autoApprove = true;
+         }
+         else if (arg === "--json") {
+             args.json = true;
+         }
+         else if ((arg === "--model" || arg === "-m") && next) {
+             args.model = next;
+             i++;
+         }
+         else if ((arg === "--provider" || arg === "-p") && next) {
+             args.provider = next;
+             i++;
+         }
+         else if (!arg.startsWith("-")) {
+             positional.push(arg);
+         }
+     }
+     args.prompt = positional.join(" ");
+     return args;
+ }
+ async function readStdin() {
+     // Check if stdin has data (piped input)
+     if (process.stdin.isTTY)
+         return "";
+     return new Promise((resolve) => {
+         let data = "";
+         process.stdin.setEncoding("utf-8");
+         process.stdin.on("data", (chunk) => { data += chunk; });
+         process.stdin.on("end", () => resolve(data.trim()));
+         // Timeout after 1s if no data arrives
+         setTimeout(() => resolve(data.trim()), 1000);
+     });
+ }
+ export async function runExec(argv) {
+     const args = parseExecArgs(argv);
+     // Read from stdin if no prompt provided
+     if (!args.prompt) {
+         args.prompt = await readStdin();
+     }
+     if (!args.prompt) {
+         process.stderr.write("Error: No prompt provided.\n");
+         process.stderr.write("Usage: codemaxxing exec \"your prompt here\"\n");
+         process.stderr.write("       echo \"fix tests\" | codemaxxing exec\n");
+         process.stderr.write("\nFlags:\n");
+         process.stderr.write("  --auto-approve   Skip approval prompts\n");
+         process.stderr.write("  --json           JSON output\n");
+         process.stderr.write("  -m, --model      Model to use\n");
+         process.stderr.write("  -p, --provider   Provider profile\n");
+         process.exit(1);
+     }
+     // Resolve provider config
+     const rawConfig = loadConfig();
+     const cliArgs = {
+         model: args.model,
+         provider: args.provider,
+     };
+     const config = applyOverrides(rawConfig, cliArgs);
+     let provider = config.provider;
+     // Auto-detect local provider if needed
+     if (provider.model === "auto" || (provider.baseUrl === "http://localhost:1234/v1" && !args.provider)) {
+         const detected = await detectLocalProvider();
+         if (detected) {
+             if (args.model)
+                 detected.model = args.model;
+             provider = detected;
+         }
+         else if (!args.provider) {
+             process.stderr.write("Error: No LLM provider found. Start a local server or use --provider.\n");
+             process.exit(1);
+         }
+     }
+     process.stderr.write(`Provider: ${provider.baseUrl}\n`);
+     process.stderr.write(`Model: ${provider.model}\n`);
+     process.stderr.write(`Prompt: ${args.prompt.slice(0, 100)}${args.prompt.length > 100 ? "..." : ""}\n`);
+     process.stderr.write("---\n");
+     const cwd = process.cwd();
+     let hasChanges = false;
+     let fullResponse = "";
+     const toolResults = [];
+     const agent = new CodingAgent({
+         provider,
+         cwd,
+         maxTokens: config.defaults.maxTokens,
+         autoApprove: args.autoApprove,
+         onToken: (token) => {
+             if (!args.json) {
+                 process.stdout.write(token);
+             }
+             fullResponse += token;
+         },
+         onToolCall: (name, toolArgs) => {
+             process.stderr.write(`Tool: ${name}(${Object.values(toolArgs).map(v => String(v).slice(0, 60)).join(", ")})\n`);
+             if (name === "write_file")
+                 hasChanges = true;
+         },
+         onToolResult: (name, result) => {
+             const lines = result.split("\n").length;
+             process.stderr.write(`  └ ${lines} lines\n`);
+             toolResults.push({ tool: name, args: {}, result });
+         },
+         onToolApproval: async (name, toolArgs, diff) => {
+             if (args.autoApprove)
+                 return "yes";
+             // In non-interactive mode without auto-approve, deny dangerous tools
+             process.stderr.write(`⚠ Denied ${name} (use --auto-approve to allow)\n`);
+             return "no";
+         },
+         onMCPStatus: (server, status) => {
+             process.stderr.write(`MCP ${server}: ${status}\n`);
+         },
+     });
+     try {
+         await agent.init();
+         const mcpCount = agent.getMCPServerCount();
+         if (mcpCount > 0) {
+             process.stderr.write(`MCP: ${mcpCount} server${mcpCount > 1 ? "s" : ""} connected\n`);
+         }
+         await agent.send(args.prompt);
+         if (!args.json) {
+             // Ensure newline at end of output
+             process.stdout.write("\n");
+         }
+         else {
+             // JSON output mode
+             const output = {
+                 response: fullResponse,
+                 model: provider.model,
+                 tools_used: toolResults.length,
+                 has_changes: hasChanges,
+             };
+             process.stdout.write(JSON.stringify(output, null, 2) + "\n");
+         }
+         await disconnectAll();
+         process.exit(hasChanges ? 0 : 2);
+     }
+     catch (err) {
+         await disconnectAll();
+         process.stderr.write(`Error: ${err.message}\n`);
+         if (args.json) {
+             process.stdout.write(JSON.stringify({ error: err.message }, null, 2) + "\n");
+         }
+         process.exit(1);
+     }
+ }
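The argv handling in `parseExecArgs` above treats every non-dash token as part of the prompt, so flags and prompt words can be freely interleaved. A standalone copy of that logic, verbatim, with a sample argv for illustration:

```javascript
// Standalone copy of the flag parsing from parseExecArgs in exec.js.
function parseExecArgs(argv) {
  const args = { prompt: "", autoApprove: false, json: false };
  const positional = [];
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    const next = argv[i + 1];
    if (arg === "--auto-approve") args.autoApprove = true;
    else if (arg === "--json") args.json = true;
    else if ((arg === "--model" || arg === "-m") && next) { args.model = next; i++; }
    else if ((arg === "--provider" || arg === "-p") && next) { args.provider = next; i++; }
    else if (!arg.startsWith("-")) positional.push(arg); // bare words join into the prompt
  }
  args.prompt = positional.join(" ");
  return args;
}

const parsed = parseExecArgs(["--json", "-m", "gpt-4o", "fix", "the", "tests"]);
// → { prompt: "fix the tests", autoApprove: false, json: true, model: "gpt-4o" }
```

Note there is no `--` separator: a prompt word that begins with a dash would be swallowed as an unknown flag, which is why quoting the whole prompt (as the README examples do) is the safer invocation.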