neckbeard-agent 0.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +106 -0
- package/dist/index.cjs +590 -0
- package/dist/index.d.cts +188 -0
- package/dist/index.d.ts +188 -0
- package/dist/index.js +556 -0
- package/package.json +73 -0
package/README.md
ADDED
@@ -0,0 +1,106 @@
<img width="1024" height="1024" alt="image" src="https://github.com/user-attachments/assets/52b7f9cf-b9c7-4cae-be11-62273cd1489a" />

# neckbeard

There's a weird thing that happens when you try to deploy an AI agent.

Most people think of agents as fancy API calls. You send a prompt, you get a response. But that's not what's actually happening. The agent is running code. It's executing bash commands, writing files, installing packages. It runs for minutes at a time, maintaining state between steps. It's a process, not a request.

This creates an obvious problem: do you really want that process running on your production server?

Anthropic's answer is no. Their [hosting docs](https://docs.claude.com/en/docs/agent-sdk/hosting) say you should run the entire agent inside a sandbox. Not just intercept the dangerous tool calls—put the whole thing in a container where it can't escape.

That sounds simple. It's not.

## The Plumbing Problem

Sandboxes like E2B give you a fresh Linux container. Your agent code lives in your repo. Bridging these two worlds is surprisingly annoying.

First, you have to get your code into the sandbox. You could bake it into a custom template, but then you're rebuilding templates every time you change a line. You could git clone on boot, but that's slow and requires auth. You could bundle and upload at runtime, which works, but now you're writing bundler configs.

Then you have to pass input. How do you get the user's prompt into a process running inside a sandbox? CLI arguments require escaping and have length limits. Environment variables have size limits. Writing to a file works, but adds boilerplate.

The worst part is output. When you run `sandbox.exec("node agent.js")`, you get back everything the process printed. SDK logs, debug output, streaming tokens, and somewhere in there, your actual result. Good luck parsing that reliably.

So you end up writing results to a file, reading it back, parsing JSON, validating the shape, handling errors. Every team building sandboxed agents writes some version of this plumbing. It's tedious.
## What This Does

Neckbeard handles all of that so you can just write your agent:

```typescript
import { Agent } from 'neckbeard-agent';
import { query } from '@anthropic-ai/claude-agent-sdk';
import { z } from 'zod';

const agent = new Agent({
  template: 'code-interpreter-v1',
  inputSchema: z.object({ topic: z.string() }),
  outputSchema: z.object({
    title: z.string(),
    summary: z.string(),
    keyPoints: z.array(z.string()),
  }),
  run: async (input) => {
    for await (const message of query({
      prompt: `Research "${input.topic}" and return JSON`,
      options: { maxTurns: 10 },
    })) {
      if (message.type === 'result') {
        return JSON.parse(message.result ?? '{}');
      }
    }
  },
});

await agent.deploy(); // bundles, uploads to E2B
const result = await agent.run({ topic: 'TypeScript generics' });
```

`deploy()` bundles your code with esbuild and uploads it. `run()` writes input to a file, executes, reads the result back, and validates it against your schema. You don't think about file paths or stdout parsing.
## Setup

```bash
npm install neckbeard-agent
```

```bash
export E2B_API_KEY=your-key
export ANTHROPIC_API_KEY=your-key
```
## The Details

The constructor takes a few options:

```typescript
new Agent({
  template: string,        // E2B template (e.g. 'code-interpreter-v1')
  inputSchema: ZodSchema,
  outputSchema: ZodSchema,
  run: (input, ctx) => Promise,
  maxDuration?: number,    // seconds, default 300
  sandboxId?: string,      // reuse existing sandbox
  dependencies?: {
    apt?: string[],
    commands?: string[],
  },
  files?: [{ url, path }], // pre-download into sandbox
  claudeDir?: string,      // upload .claude/ skills directory
})
```

If you already have a sandbox deployed, pass `sandboxId` and skip `deploy()`.

The `files` option downloads things into the sandbox before your agent runs—useful for models or config files. Relative paths resolve from `/home/user/`.
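The relative-path rule can be stated precisely; this mirrors the destination-path logic in the bundled `deploy()` code (the helper name is ours, not a package export):

```typescript
// Mirrors the destination-path handling in the bundled code: absolute
// paths are used as-is, anything else lands under the sandbox home
// directory. `resolveSandboxPath` is an illustrative name.
const SANDBOX_HOME = "/home/user";

function resolveSandboxPath(path: string): string {
  return path.startsWith("/") ? path : `${SANDBOX_HOME}/${path}`;
}
```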
The `claudeDir` option uploads a local `.claude/` directory to the sandbox, enabling Claude Agent SDK skills. Point it at a directory containing `.claude/skills/*/SKILL.md` files.

Some packages can't be bundled because they spawn child processes or have native modules. The Claude Agent SDK is like this. These get automatically marked as external and installed via npm in the sandbox.
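The externalization rule, as implemented in the bundled esbuild plugin, roughly reduces to: relative imports get bundled, Node builtins are left alone, and every other bare specifier is marked external with its root package name (including `@scope/name`) collected for `npm install` in the sandbox. A simplified standalone sketch (the function name is ours):

```typescript
import { builtinModules } from "node:module";

// Simplified sketch of the bundler plugin's decision in the dist code;
// `classifyImport` is an illustrative name, not a package export.
function classifyImport(specifier: string): { external: boolean; pkg?: string } {
  // Relative and absolute imports stay in the bundle.
  if (specifier.startsWith(".") || specifier.startsWith("/")) return { external: false };
  // Node builtins are resolved by the sandbox's Node runtime directly.
  if (specifier.startsWith("node:")) return { external: false };
  if (builtinModules.includes(specifier)) return { external: false };
  // Everything else is external; collect the root package name.
  const match = specifier.match(/^(@[^/]+\/[^/]+|[^/]+)/);
  return { external: true, pkg: match?.[1] };
}
```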
The `run` function gets a context object with an `executionId`, an `AbortSignal`, environment variables, and a logger.
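A `run` function that exercises each of those context fields might look like this (the `AgentContext` interface below is a standalone sketch of the shape described above, not the package's exported type):

```typescript
// Standalone sketch of the context shape described above. The real types
// ship with the package; this interface is illustrative only.
interface AgentContext {
  executionId: string;
  signal: AbortSignal;
  env: Record<string, string | undefined>;
  logger: { info: (msg: string) => void; error: (msg: string) => void };
}

async function run(
  input: { topic: string },
  ctx: AgentContext
): Promise<{ topic: string; executionId: string }> {
  ctx.logger.info(`researching ${input.topic}`);
  // The signal fires when maxDuration elapses; long loops should check it.
  if (ctx.signal.aborted) throw new Error("timed out before starting");
  return { topic: input.topic, executionId: ctx.executionId };
}
```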
## License

MIT
package/dist/index.cjs
ADDED
@@ -0,0 +1,590 @@
"use strict";
var __create = Object.create;
var __defProp = Object.defineProperty;
var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
var __getOwnPropNames = Object.getOwnPropertyNames;
var __getProtoOf = Object.getPrototypeOf;
var __hasOwnProp = Object.prototype.hasOwnProperty;
var __export = (target, all) => {
  for (var name in all)
    __defProp(target, name, { get: all[name], enumerable: true });
};
var __copyProps = (to, from, except, desc) => {
  if (from && typeof from === "object" || typeof from === "function") {
    for (let key of __getOwnPropNames(from))
      if (!__hasOwnProp.call(to, key) && key !== except)
        __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
  }
  return to;
};
var __toESM = (mod, isNodeMode, target) => (target = mod != null ? __create(__getProtoOf(mod)) : {}, __copyProps(
  // If the importer is in node compatibility mode or this is not an ESM
  // file that has been converted to a CommonJS file using a Babel-
  // compatible transform (i.e. "__esModule" has not been set), then set
  // "default" to the CommonJS "module.exports" for node compatibility.
  isNodeMode || !mod || !mod.__esModule ? __defProp(target, "default", { value: mod, enumerable: true }) : target,
  mod
));
var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);

// src/index.ts
var index_exports = {};
__export(index_exports, {
  Agent: () => Agent,
  ConfigurationError: () => ConfigurationError,
  DEFAULT_DEPENDENCIES: () => DEFAULT_DEPENDENCIES,
  DeploymentError: () => DeploymentError,
  ExecutionError: () => ExecutionError,
  NeckbeardError: () => NeckbeardError,
  ValidationError: () => ValidationError
});
module.exports = __toCommonJS(index_exports);
var import_node_url = require("url");
var import_node_fs = require("fs");
var import_node_path = require("path");
var import_node_module = require("module");
var getEsbuild = () => import("esbuild");
var getE2b = () => import("e2b");
var getPkgTypes = () => import("pkg-types");
var DEFAULT_MAX_DURATION = 300;
var SANDBOX_COMMAND_TIMEOUT_MS = 3e5;
var QUICK_COMMAND_TIMEOUT_MS = 3e4;
var NODE_TARGET = "node20";
var SANDBOX_HOME = "/home/user";
var SANDBOX_AGENT_DIR = `${SANDBOX_HOME}/agent`;
var SANDBOX_PATHS = {
  /** Agent code directory */
  agentDir: SANDBOX_AGENT_DIR,
  /** I/O directory for input/output files */
  ioDir: `${SANDBOX_HOME}/io`,
  /** Task input file */
  inputJson: `${SANDBOX_HOME}/io/input.json`,
  /** Result output file (contains success/error) */
  outputJson: `${SANDBOX_HOME}/io/output.json`,
  /** Bundled agent module */
  agentModule: `${SANDBOX_AGENT_DIR}/agent.mjs`,
  /** Runner module that executes the agent */
  runnerModule: `${SANDBOX_AGENT_DIR}/runner.mjs`,
  /** Claude skills and settings directory */
  claudeDir: `${SANDBOX_AGENT_DIR}/.claude`,
  /** Package.json for npm dependencies */
  packageJson: `${SANDBOX_AGENT_DIR}/package.json`
};
var NeckbeardError = class extends Error {
  constructor(message) {
    super(message);
    this.name = "NeckbeardError";
  }
};
var DeploymentError = class extends NeckbeardError {
  constructor(message, cause) {
    super(message);
    this.cause = cause;
    this.name = "DeploymentError";
  }
};
var ValidationError = class extends NeckbeardError {
  constructor(message, cause) {
    super(message);
    this.cause = cause;
    this.name = "ValidationError";
  }
};
var ExecutionError = class extends NeckbeardError {
  constructor(message, cause) {
    super(message);
    this.cause = cause;
    this.name = "ExecutionError";
  }
};
var ConfigurationError = class extends NeckbeardError {
  constructor(message) {
    super(message);
    this.name = "ConfigurationError";
  }
};
var DEFAULT_DEPENDENCIES = {};
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new ConfigurationError(
      `${name} environment variable is required. Please set it before calling deploy() or run().`
    );
  }
  return value;
}
function shellEscape(str) {
  return `'${str.replace(/'/g, "'\\''")}'`;
}
async function runSandboxCommand(sandbox, cmd, timeoutMs) {
  let stdout = "";
  let stderr = "";
  try {
    const result = await sandbox.commands.run(cmd, {
      timeoutMs,
      onStdout: (data) => {
        stdout += data;
      },
      onStderr: (data) => {
        stderr += data;
      }
    });
    return {
      exitCode: result.exitCode,
      stdout: stdout || result.stdout,
      stderr: stderr || result.stderr
    };
  } catch (error) {
    const exitCode = error.exitCode ?? 1;
    const errorMessage = error instanceof Error ? error.message : String(error);
    return {
      exitCode,
      stdout,
      stderr: stderr || errorMessage
    };
  }
}
async function resolvePackageVersion(pkg, sourceFileDir) {
  try {
    const pkgJsonPath = require.resolve(`${pkg}/package.json`, { paths: [sourceFileDir] });
    const pkgJson = JSON.parse((0, import_node_fs.readFileSync)(pkgJsonPath, "utf-8"));
    if (pkgJson.version) return pkgJson.version;
  } catch {
  }
  try {
    const pkgJsonPath = require.resolve(`${pkg}/package.json`, { paths: [process.cwd()] });
    const pkgJson = JSON.parse((0, import_node_fs.readFileSync)(pkgJsonPath, "utf-8"));
    if (pkgJson.version) return pkgJson.version;
  } catch {
  }
  try {
    const { resolvePackageJSON, readPackageJSON } = await getPkgTypes();
    const pkgDir = (0, import_node_path.join)(process.cwd(), "node_modules", pkg);
    const pkgJsonPath = await resolvePackageJSON(pkgDir);
    if (pkgJsonPath) {
      const pkgJson = await readPackageJSON(pkgJsonPath);
      if (pkgJson.version) return pkgJson.version;
    }
  } catch {
  }
  try {
    const directPath = (0, import_node_path.join)(process.cwd(), "node_modules", pkg, "package.json");
    const pkgJson = JSON.parse((0, import_node_fs.readFileSync)(directPath, "utf-8"));
    if (pkgJson.version) return pkgJson.version;
  } catch {
  }
  let currentDir = sourceFileDir;
  while (currentDir !== (0, import_node_path.dirname)(currentDir)) {
    try {
      const pkgJsonPath = (0, import_node_path.join)(currentDir, "node_modules", pkg, "package.json");
      const pkgJson = JSON.parse((0, import_node_fs.readFileSync)(pkgJsonPath, "utf-8"));
      if (pkgJson.version) return pkgJson.version;
    } catch {
    }
    currentDir = (0, import_node_path.dirname)(currentDir);
  }
  throw new Error(
    `Could not resolve version for package "${pkg}". Ensure it is installed in node_modules. Searched from: ${sourceFileDir} and ${process.cwd()}`
  );
}
function getCallerFile() {
  const stack = new Error().stack?.split("\n") ?? [];
  for (const line of stack.slice(2)) {
    const match = line.match(/\((.+?):\d+:\d+\)/) || line.match(/at (.+?):\d+:\d+/);
    if (match) {
      let file = match[1];
      if (file.startsWith("file://")) file = (0, import_node_url.fileURLToPath)(file);
      if (!file.includes("node:") && !file.includes("node_modules/agent-neckbeard") && !file.includes("agent-neckbeard/dist")) return file;
    }
  }
  throw new Error("Could not determine source file");
}
function readDirectoryRecursively(dirPath) {
  const files = [];
  function walkDir(currentPath) {
    const entries = (0, import_node_fs.readdirSync)(currentPath);
    for (const entry of entries) {
      const fullPath = (0, import_node_path.join)(currentPath, entry);
      const stat = (0, import_node_fs.statSync)(fullPath);
      if (stat.isDirectory()) {
        walkDir(fullPath);
      } else if (stat.isFile()) {
        const relPath = (0, import_node_path.relative)(dirPath, fullPath);
        const isBinary = /\.(png|jpg|jpeg|gif|ico|pdf|zip|tar|gz|bin|exe|dll|so|dylib|wasm)$/i.test(entry);
        if (isBinary) {
          const buffer = (0, import_node_fs.readFileSync)(fullPath);
          const arrayBuffer = buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength);
          files.push({ relativePath: relPath, content: arrayBuffer });
        } else {
          files.push({ relativePath: relPath, content: (0, import_node_fs.readFileSync)(fullPath, "utf-8") });
        }
      }
    }
  }
  walkDir(dirPath);
  return files;
}
var Agent = class {
  template;
  inputSchema;
  outputSchema;
  maxDuration;
  dependencies;
  files;
  claudeDir;
  /** @internal Used by the sandbox runner - must be public for bundled code access */
  _run;
  _sourceFile;
  _sandboxId;
  constructor(config) {
    this.template = config.template;
    this.inputSchema = config.inputSchema;
    this.outputSchema = config.outputSchema;
    this.maxDuration = config.maxDuration ?? DEFAULT_MAX_DURATION;
    this._run = config.run;
    this._sourceFile = getCallerFile();
    this._sandboxId = config.sandboxId;
    this.dependencies = config.dependencies ?? DEFAULT_DEPENDENCIES;
    this.files = config.files ?? [];
    this.claudeDir = config.claudeDir;
  }
  get sandboxId() {
    return this._sandboxId;
  }
  /**
   * Deploys the agent to an E2B sandbox.
   *
   * This method bundles the agent code using esbuild, creates a new E2B sandbox,
   * installs any specified OS dependencies, downloads configured files, and
   * uploads the agent code to the sandbox.
   *
   * The sandbox ID is stored and can be accessed via the `sandboxId` property
   * after deployment completes.
   *
   * @throws {ConfigurationError} If E2B_API_KEY environment variable is not set
   * @throws {DeploymentError} If sandbox creation, dependency installation, or file upload fails
   *
   * @example
   * ```typescript
   * const agent = new Agent({ ... });
   * await agent.deploy();
   * console.log(`Deployed to sandbox: ${agent.sandboxId}`);
   * ```
   */
  async deploy() {
    if (this._sandboxId) return;
    const e2bApiKey = requireEnv("E2B_API_KEY");
    const esbuild = await getEsbuild();
    const { Sandbox, Template } = await getE2b();
    const templateExists = await Template.aliasExists(this.template, { apiKey: e2bApiKey });
    if (!templateExists) {
      throw new DeploymentError(
        `Template "${this.template}" not found. Create it via E2B CLI: e2b template build --name ${this.template} --cpu-count 2 --memory-mb 2048
Or use an existing template like 'code-interpreter-v1' (2 cores, 2048 MB) or 'base' (2 cores, 512 MB).`
      );
    }
    const collectedExternals = /* @__PURE__ */ new Set();
    const result = await esbuild.build({
      entryPoints: [this._sourceFile],
      bundle: true,
      platform: "node",
      target: NODE_TARGET,
      format: "esm",
      write: false,
      minify: true,
      keepNames: true,
      treeShaking: false,
      // Preserve exports for the sandbox runner to import
      banner: {
        js: `import { fileURLToPath as __neckbeard_fileURLToPath } from 'node:url';
import { dirname as __neckbeard_dirname } from 'node:path';
var __filename = __neckbeard_fileURLToPath(import.meta.url);
var __dirname = __neckbeard_dirname(__filename);
`
      },
      plugins: [{
        name: "agent-neckbeard-externals",
        setup(build) {
          build.onResolve({ filter: /^agent-neckbeard$/ }, () => ({
            path: "agent-neckbeard",
            namespace: "agent-shim"
          }));
          build.onLoad({ filter: /.*/, namespace: "agent-shim" }, () => ({
            contents: `
export class Agent {
  constructor(config) {
    this.inputSchema = config.inputSchema;
    this.outputSchema = config.outputSchema;
    this.maxDuration = config.maxDuration ?? 300;
    this._run = config.run;
  }
}
`,
            loader: "js"
          }));
          build.onResolve({ filter: /^[^.\/]|^\.[^.\/]|^\.\.[^\/]/ }, (args) => {
            if (args.path.startsWith("node:")) {
              return null;
            }
            if (import_node_module.builtinModules.includes(args.path.replace("node:", ""))) {
              return null;
            }
            const match = args.path.match(/^(@[^/]+\/[^/]+|[^/]+)/);
            if (match) {
              collectedExternals.add(match[1]);
            }
            return { external: true };
          });
        }
      }]
    });
    const runnerCode = `
import { readFileSync, writeFileSync, mkdirSync } from 'node:fs';

const IO_DIR = '/home/user/io';
const INPUT_FILE = IO_DIR + '/input.json';
const OUTPUT_FILE = IO_DIR + '/output.json';

mkdirSync(IO_DIR, { recursive: true });

let input, executionId;
try {
  const taskData = JSON.parse(readFileSync(INPUT_FILE, 'utf-8'));
  input = taskData.input;
  executionId = taskData.executionId;
} catch (parseError) {
  writeFileSync(OUTPUT_FILE, JSON.stringify({ success: false, error: { message: 'Task parse failed: ' + parseError.message, stack: parseError.stack } }));
  process.exit(0);
}

let mod;
try {
  mod = await import('./agent.mjs');
} catch (importError) {
  writeFileSync(OUTPUT_FILE, JSON.stringify({ success: false, error: { message: 'Import failed: ' + importError.message, stack: importError.stack } }));
  process.exit(0);
}

const agent = mod.default || Object.values(mod).find(v => v instanceof Object && v._run);
if (!agent) {
  writeFileSync(OUTPUT_FILE, JSON.stringify({ success: false, error: { message: 'No agent found in module. Exports: ' + Object.keys(mod).join(', ') } }));
  process.exit(0);
}

const ctx = {
  executionId,
  signal: AbortSignal.timeout(${this.maxDuration * 1e3}),
  env: process.env,
  logger: {
    debug: (msg, ...args) => console.log('[DEBUG]', msg, ...args),
    info: (msg, ...args) => console.log('[INFO]', msg, ...args),
    warn: (msg, ...args) => console.warn('[WARN]', msg, ...args),
    error: (msg, ...args) => console.error('[ERROR]', msg, ...args),
  },
};

try {
  const validated = agent.inputSchema.parse(input);
  const output = await agent._run(validated, ctx);
  const validatedOutput = agent.outputSchema.parse(output);
  writeFileSync(OUTPUT_FILE, JSON.stringify({ success: true, output: validatedOutput }));
} catch (error) {
  writeFileSync(OUTPUT_FILE, JSON.stringify({ success: false, error: { message: error.message, stack: error.stack } }));
}
`;
    const sandbox = await Sandbox.create(this.template, {
      apiKey: e2bApiKey
    });
    await runSandboxCommand(sandbox, `mkdir -p ${SANDBOX_PATHS.agentDir}`, QUICK_COMMAND_TIMEOUT_MS);
    const { apt, commands } = this.dependencies;
    if (apt && apt.length > 0) {
      const aptCmd = `sudo apt-get update && sudo apt-get install -y ${apt.join(" ")}`;
      const aptResult = await runSandboxCommand(sandbox, aptCmd, SANDBOX_COMMAND_TIMEOUT_MS);
      if (aptResult.exitCode !== 0) {
        const details = [
          `Failed to install apt packages: ${apt.join(", ")}`,
          aptResult.stderr ? `stderr: ${aptResult.stderr}` : "",
          aptResult.stdout ? `stdout: ${aptResult.stdout}` : ""
        ].filter(Boolean).join("\n");
        throw new DeploymentError(details);
      }
    }
    if (commands && commands.length > 0) {
      for (const cmd of commands) {
        const cmdResult = await runSandboxCommand(sandbox, cmd, SANDBOX_COMMAND_TIMEOUT_MS);
        if (cmdResult.exitCode !== 0) {
          const details = [
            `Failed to run command: ${cmd}`,
            cmdResult.stderr ? `stderr: ${cmdResult.stderr}` : "",
            cmdResult.stdout ? `stdout: ${cmdResult.stdout}` : ""
          ].filter(Boolean).join("\n");
          throw new DeploymentError(details);
        }
      }
    }
    if (this.files.length > 0) {
      for (const file of this.files) {
        const destPath = file.path.startsWith("/") ? file.path : `${SANDBOX_HOME}/${file.path}`;
        const parentDir = destPath.substring(0, destPath.lastIndexOf("/"));
        if (parentDir) {
          await runSandboxCommand(sandbox, `mkdir -p ${shellEscape(parentDir)}`, QUICK_COMMAND_TIMEOUT_MS);
        }
        const curlCmd = `curl -fsSL -o ${shellEscape(destPath)} ${shellEscape(file.url)}`;
        const downloadResult = await runSandboxCommand(sandbox, curlCmd, SANDBOX_COMMAND_TIMEOUT_MS);
        if (downloadResult.exitCode !== 0) {
          const details = [
            `Failed to download file from ${file.url} to ${destPath}`,
            downloadResult.stderr ? `stderr: ${downloadResult.stderr}` : "",
            downloadResult.stdout ? `stdout: ${downloadResult.stdout}` : ""
          ].filter(Boolean).join("\n");
          throw new DeploymentError(details);
        }
      }
    }
    if (this.claudeDir) {
      const claudeFiles = readDirectoryRecursively(this.claudeDir);
      await runSandboxCommand(sandbox, `mkdir -p ${SANDBOX_PATHS.claudeDir}`, QUICK_COMMAND_TIMEOUT_MS);
      for (const file of claudeFiles) {
        const destPath = `${SANDBOX_PATHS.claudeDir}/${file.relativePath}`;
        const parentDir = destPath.substring(0, destPath.lastIndexOf("/"));
        if (parentDir && parentDir !== SANDBOX_PATHS.claudeDir) {
          await runSandboxCommand(sandbox, `mkdir -p ${shellEscape(parentDir)}`, QUICK_COMMAND_TIMEOUT_MS);
        }
        await sandbox.files.write(destPath, file.content);
      }
    }
    await sandbox.files.write(SANDBOX_PATHS.agentModule, result.outputFiles[0].text);
    await sandbox.files.write(SANDBOX_PATHS.runnerModule, runnerCode);
    if (collectedExternals.size > 0) {
      const dependencies = {};
      const sourceFileDir = (0, import_node_path.dirname)(this._sourceFile);
      for (const pkg of collectedExternals) {
        try {
          dependencies[pkg] = await resolvePackageVersion(pkg, sourceFileDir);
        } catch (err) {
          throw new DeploymentError(
            `Failed to resolve version for package "${pkg}": ${err instanceof Error ? err.message : String(err)}`
          );
        }
      }
      const pkgJson = JSON.stringify({
        name: "agent-sandbox",
        type: "module",
        dependencies
      });
      await sandbox.files.write(SANDBOX_PATHS.packageJson, pkgJson);
      const installResult = await runSandboxCommand(sandbox, `cd ${SANDBOX_PATHS.agentDir} && npm install --legacy-peer-deps`, SANDBOX_COMMAND_TIMEOUT_MS);
      if (installResult.exitCode !== 0) {
        const details = [
          `Failed to install npm packages: ${Object.keys(dependencies).join(", ")}`,
          installResult.stderr ? `stderr: ${installResult.stderr}` : "",
          installResult.stdout ? `stdout: ${installResult.stdout}` : ""
        ].filter(Boolean).join("\n");
        throw new DeploymentError(details);
      }
    }
    this._sandboxId = sandbox.sandboxId;
  }
  /**
   * Executes the agent with the given input.
   *
   * The agent must be deployed before calling this method. Input is validated
   * against the input schema, then the agent runs in the sandbox with the
   * validated input. Output is validated against the output schema before
   * being returned.
   *
   * @param input - The input data for the agent, must conform to inputSchema
   * @returns A result object indicating success or failure with output or error
   *
   * @example
   * ```typescript
   * const result = await agent.run({ prompt: 'Hello, world!' });
   * if (result.ok) {
   *   console.log('Output:', result.output);
   * } else {
   *   console.error('Error:', result.error.message);
   * }
   * ```
   */
  async run(input) {
    const executionId = `exec_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
    if (!this._sandboxId) {
      return {
        ok: false,
        executionId,
        error: new ExecutionError("Agent not deployed. Call agent.deploy() first or pass sandboxId to constructor.")
      };
    }
    try {
      const e2bApiKey = requireEnv("E2B_API_KEY");
      const { Sandbox } = await getE2b();
      const validatedInput = this.inputSchema.parse(input);
      const sandbox = await Sandbox.connect(this._sandboxId, {
        apiKey: e2bApiKey
      });
      await sandbox.files.write(SANDBOX_PATHS.inputJson, JSON.stringify({ input: validatedInput, executionId }));
      let capturedStdout = "";
      let capturedStderr = "";
      const result = await sandbox.commands.run(`cd ${SANDBOX_PATHS.agentDir} && node runner.mjs`, {
        timeoutMs: this.maxDuration * 1e3,
        envs: {
          ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY ?? ""
        },
        onStdout: (data) => {
          capturedStdout += data;
        },
        onStderr: (data) => {
          capturedStderr += data;
        }
      });
      if (result.exitCode !== 0) {
        let resultError = "";
        try {
          const resultJson = await sandbox.files.read(SANDBOX_PATHS.outputJson);
          const parsed = JSON.parse(resultJson);
          if (!parsed.success && parsed.error) {
            resultError = `
Result: ${parsed.error.message}`;
            if (parsed.error.stack) resultError += `
Stack: ${parsed.error.stack}`;
          }
        } catch {
        }
        const errorDetails = [
          `Agent failed with exit code ${result.exitCode}`,
          result.stderr ? `Stderr: ${result.stderr}` : "",
          capturedStderr ? `Captured stderr: ${capturedStderr}` : "",
          capturedStdout ? `Stdout: ${capturedStdout}` : "",
          resultError
        ].filter(Boolean).join("\n");
        return { ok: false, executionId, error: new ExecutionError(errorDetails) };
      }
      const output = JSON.parse(await sandbox.files.read(SANDBOX_PATHS.outputJson));
      if (!output.success) {
        const errorDetails = [
          output.error.message,
          capturedStderr ? `
Captured stderr: ${capturedStderr}` : "",
          capturedStdout ? `
Stdout: ${capturedStdout}` : ""
        ].filter(Boolean).join("");
        const err = new ExecutionError(errorDetails);
        err.stack = output.error.stack;
        return { ok: false, executionId, error: err };
      }
      return { ok: true, executionId, output: this.outputSchema.parse(output.output) };
    } catch (err) {
      return { ok: false, executionId, error: err instanceof Error ? err : new Error(String(err)) };
    }
  }
};
// Annotate the CommonJS export names for ESM import in node:
0 && (module.exports = {
  Agent,
  ConfigurationError,
  DEFAULT_DEPENDENCIES,
  DeploymentError,
  ExecutionError,
  NeckbeardError,
  ValidationError
});