agent-neckbeard 0.1.0 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of agent-neckbeard has been flagged as possibly problematic.
- package/README.md +57 -91
- package/dist/index.d.ts +59 -2
- package/dist/index.js +181 -27
- package/package.json +20 -9
package/README.md
CHANGED

@@ -1,139 +1,105 @@
-
+<img width="1024" height="1024" alt="image" src="https://github.com/user-attachments/assets/52b7f9cf-b9c7-4cae-be11-62273cd1489a" />
 
-
+# neckbeard
 
-
+There's a weird thing that happens when you try to deploy an AI agent.
 
-
+Most people think of agents as fancy API calls. You send a prompt, you get a response. But that's not what's actually happening. The agent is running code. It's executing bash commands, writing files, installing packages. It runs for minutes at a time, maintaining state between steps. It's a process, not a request.
 
-
-- All execution is isolated and secure
-- The agent maintains conversation state across tool calls
+This creates an obvious problem: do you really want that process running on your production server?
 
-
+Anthropic's answer is no. Their [hosting docs](https://docs.claude.com/en/docs/agent-sdk/hosting) say you should run the entire agent inside a sandbox. Not just intercept the dangerous tool calls—put the whole thing in a container where it can't escape.
 
-
+That sounds simple. It's not.
 
-
-
-
+## The Plumbing Problem
+
+Sandboxes like E2B give you a fresh Linux container. Your agent code lives in your repo. Bridging these two worlds is surprisingly annoying.
+
+First, you have to get your code into the sandbox. You could bake it into a custom template, but then you're rebuilding templates every time you change a line. You could git clone on boot, but that's slow and requires auth. You could bundle and upload at runtime, which works, but now you're writing bundler configs.
+
+Then you have to pass input. How do you get the user's prompt into a process running inside a sandbox? CLI arguments require escaping and have length limits. Environment variables have size limits. Writing to a file works, but adds boilerplate.
+
+The worst part is output. When you run `sandbox.exec("node agent.js")`, you get back everything the process printed. SDK logs, debug output, streaming tokens, and somewhere in there, your actual result. Good luck parsing that reliably.
+
+So you end up writing results to a file, reading it back, parsing JSON, validating the shape, handling errors. Every team building sandboxed agents writes some version of this plumbing. It's tedious.
+
+## What This Does
+
+Neckbeard handles all of that so you can just write your agent:
 
 ```typescript
 import { Agent } from 'agent-neckbeard';
-import { query } from '@anthropic-ai/claude-
+import { query } from '@anthropic-ai/claude-agent-sdk';
 import { z } from 'zod';
 
-const
+const agent = new Agent({
   id: 'summary',
-
-  inputSchema: z.object({
-    topic: z.string(),
-  }),
+  inputSchema: z.object({ topic: z.string() }),
   outputSchema: z.object({
     title: z.string(),
     summary: z.string(),
     keyPoints: z.array(z.string()),
   }),
-  run: async (input
-    let result = { title: '', summary: '', keyPoints: [] as string[] };
-
+  run: async (input) => {
     for await (const message of query({
-      prompt: `Research "${input.topic}" and return JSON
-      options: { maxTurns: 10
-      abortSignal: ctx.signal,
+      prompt: `Research "${input.topic}" and return JSON`,
+      options: { maxTurns: 10 },
     })) {
       if (message.type === 'result') {
-
+        return JSON.parse(message.result ?? '{}');
       }
     }
-
-    return result;
   },
 });
 
-//
-await
-
-// Run many times
-const result = await summaryAgent.run({ topic: 'TypeScript generics' });
-console.log(result.title); // string - validated
-console.log(result.keyPoints); // string[] - validated
+await agent.deploy(); // bundles, uploads to E2B
+const result = await agent.run({ topic: 'TypeScript generics' });
 ```
 
-
+`deploy()` bundles your code with esbuild and uploads it. `run()` writes input to a file, executes, reads the result back, and validates it against your schema. You don't think about file paths or stdout parsing.
+
+## Setup
 
 ```bash
 npm install agent-neckbeard
 ```
 
-Set your E2B API key:
-
 ```bash
-export E2B_API_KEY=your-
-
-
-## API
-
-### `new Agent(config)`
-
-```typescript
-const agent = new Agent({
-  id: string, // Unique identifier
-  inputSchema: ZodSchema, // Validates input before run
-  outputSchema: ZodSchema, // Validates output after run
-  run: (input, ctx) => Promise, // Your agent logic
-  maxDuration?: number, // Timeout in seconds (default: 300)
-  sandboxId?: string, // Reuse existing sandbox (skip deploy)
-});
+export E2B_API_KEY=your-key
+export ANTHROPIC_API_KEY=your-key
 ```
 
-
+## The Details
 
-
+The constructor takes a few options:
 
 ```typescript
-
-
-
-
-
-
-
-
-
-
-
+new Agent({
+  id: string,
+  inputSchema: ZodSchema,
+  outputSchema: ZodSchema,
+  run: (input, ctx) => Promise,
+  maxDuration?: number, // seconds, default 300
+  sandboxId?: string, // reuse existing sandbox
+  dependencies?: {
+    apt?: string[],
+    commands?: string[],
+  },
+  files?: [{ url, path }], // pre-download into sandbox
+  claudeDir?: string, // upload .claude/ skills directory
+})
 ```
 
-
+If you already have a sandbox deployed, pass `sandboxId` and skip `deploy()`.
 
-
+The `files` option downloads things into the sandbox before your agent runs—useful for models or config files. Relative paths resolve from `/home/user/`.
 
-
-const agent = new Agent({
-  id: 'summary',
-  sandboxId: 'existing-sandbox-id', // Skip deploy()
-  inputSchema,
-  outputSchema,
-  run,
-});
+The `claudeDir` option uploads a local `.claude/` directory to the sandbox, enabling Claude Agent SDK skills. Point it at a directory containing `.claude/skills/*/SKILL.md` files.
 
-
-const result = await agent.run({ topic: 'hello' });
-```
+Some packages can't be bundled because they spawn child processes or have native modules. The Claude Agent SDK is like this. These get automatically marked as external and installed via npm in the sandbox.
 
-
-
-The `run` function receives a context object:
-
-```typescript
-run: async (input, ctx) => {
-  ctx.executionId // Unique ID for this execution
-  ctx.signal // AbortSignal for cancellation
-  ctx.env // Environment variables
-  ctx.logger // { debug, info, warn, error }
-}
-```
+The `run` function gets a context object with an `executionId`, an `AbortSignal`, environment variables, and a logger.
 
 ## License
 
package/dist/index.d.ts
CHANGED

@@ -12,9 +12,41 @@ interface AgentRunContext {
     error: (message: string, ...args: unknown[]) => void;
   };
 }
+type AgentRunResult<TOutput> = {
+  ok: true;
+  executionId: string;
+  output: TOutput;
+} | {
+  ok: false;
+  executionId: string;
+  error: Error;
+};
 interface SchemaLike<T> {
   parse: (data: unknown) => T;
 }
+interface OsDependencies {
+  /** APT packages to install (e.g., ['curl', 'git']) */
+  apt?: string[];
+  /** Custom shell commands to run during setup */
+  commands?: string[];
+}
+/**
+ * Configuration for downloading a file into the sandbox filesystem.
+ * Files are downloaded during deploy() before the agent runs.
+ */
+interface FileDownload {
+  /** URL to download the file from (supports http/https) */
+  url: string;
+  /**
+   * Destination path in the sandbox.
+   * Can be absolute (e.g., '/home/user/data/model.bin') or
+   * relative to /home/user/ (e.g., 'data/model.bin').
+   * Parent directories are created automatically.
+   */
+  path: string;
+}
+/** Default dependencies - empty by default, specify what you need */
+declare const DEFAULT_DEPENDENCIES: OsDependencies;
 interface AgentConfig<TInput, TOutput> {
   id: string;
   inputSchema: SchemaLike<TInput>;
@@ -22,12 +54,37 @@ interface AgentConfig<TInput, TOutput> {
   run: (input: TInput, ctx: AgentRunContext) => Promise<TOutput>;
   maxDuration?: number;
   sandboxId?: string;
+  /** OS-level dependencies to install in the sandbox. Defaults to Claude Code. Set to {} to skip. */
+  dependencies?: OsDependencies;
+  /**
+   * Files to download and initialize in the sandbox filesystem.
+   * Downloaded during deploy() before the agent code runs.
+   * Useful for pre-loading models, datasets, configuration files, etc.
+   */
+  files?: FileDownload[];
+  /**
+   * Local path to a .claude directory containing skills and settings.
+   * The directory will be uploaded to /home/user/.claude/ in the sandbox.
+   * This enables Claude Agent SDK to access skills defined in SKILL.md files
+   * within .claude/skills/ subdirectories.
+   *
+   * Example: claudeDir: './my-project/.claude'
+   *
+   * The Claude Agent SDK will discover skills when:
+   * - cwd is set to /home/user/ (containing .claude/)
+   * - settingSources includes 'project'
+   * - allowedTools includes 'Skill'
+   */
+  claudeDir?: string;
 }
 declare class Agent<TInput, TOutput> {
   readonly id: string;
   readonly inputSchema: SchemaLike<TInput>;
   readonly outputSchema: SchemaLike<TOutput>;
   readonly maxDuration: number;
+  readonly dependencies: OsDependencies;
+  readonly files: FileDownload[];
+  readonly claudeDir?: string;
   /** @internal Used by the sandbox runner - must be public for bundled code access */
   _run: (input: TInput, ctx: AgentRunContext) => Promise<TOutput>;
   private _sourceFile;
@@ -35,7 +92,7 @@ declare class Agent<TInput, TOutput> {
   constructor(config: AgentConfig<TInput, TOutput>);
   get sandboxId(): string | undefined;
   deploy(): Promise<void>;
-  run(input: TInput): Promise<TOutput
+  run(input: TInput): Promise<AgentRunResult<TOutput>>;
 }
 
-export { Agent, type AgentConfig, type AgentRunContext };
+export { Agent, type AgentConfig, type AgentRunContext, type AgentRunResult, DEFAULT_DEPENDENCIES, type FileDownload, type OsDependencies };
package/dist/index.js
CHANGED

@@ -1,7 +1,10 @@
 // src/index.ts
-import * as esbuild from "esbuild";
-import { Sandbox } from "e2b";
 import { fileURLToPath } from "url";
+import { readdirSync, readFileSync, statSync } from "fs";
+import { join, relative } from "path";
+var getEsbuild = () => import("esbuild");
+var getE2b = () => import("e2b");
+var DEFAULT_DEPENDENCIES = {};
 function getCallerFile() {
   const stack = new Error().stack?.split("\n") ?? [];
   for (const line of stack.slice(2)) {
@@ -9,16 +12,44 @@ function getCallerFile() {
     if (match) {
       let file = match[1];
       if (file.startsWith("file://")) file = fileURLToPath(file);
-      if (!file.includes("node:") && !file.includes("agent-neckbeard")) return file;
+      if (!file.includes("node:") && !file.includes("node_modules/agent-neckbeard") && !file.includes("agent-neckbeard/dist")) return file;
     }
   }
   throw new Error("Could not determine source file");
 }
+function readDirectoryRecursively(dirPath) {
+  const files = [];
+  function walkDir(currentPath) {
+    const entries = readdirSync(currentPath);
+    for (const entry of entries) {
+      const fullPath = join(currentPath, entry);
+      const stat = statSync(fullPath);
+      if (stat.isDirectory()) {
+        walkDir(fullPath);
+      } else if (stat.isFile()) {
+        const relPath = relative(dirPath, fullPath);
+        const isBinary = /\.(png|jpg|jpeg|gif|ico|pdf|zip|tar|gz|bin|exe|dll|so|dylib|wasm)$/i.test(entry);
+        if (isBinary) {
+          const buffer = readFileSync(fullPath);
+          const arrayBuffer = buffer.buffer.slice(buffer.byteOffset, buffer.byteOffset + buffer.byteLength);
+          files.push({ relativePath: relPath, content: arrayBuffer });
+        } else {
+          files.push({ relativePath: relPath, content: readFileSync(fullPath, "utf-8") });
+        }
+      }
+    }
+  }
+  walkDir(dirPath);
+  return files;
+}
 var Agent = class {
   id;
   inputSchema;
   outputSchema;
   maxDuration;
+  dependencies;
+  files;
+  claudeDir;
   /** @internal Used by the sandbox runner - must be public for bundled code access */
   _run;
   _sourceFile;
@@ -31,12 +62,22 @@ var Agent = class {
     this._run = config.run;
     this._sourceFile = getCallerFile();
     this._sandboxId = config.sandboxId;
+    this.dependencies = config.dependencies ?? DEFAULT_DEPENDENCIES;
+    this.files = config.files ?? [];
+    this.claudeDir = config.claudeDir;
   }
   get sandboxId() {
    return this._sandboxId;
   }
   async deploy() {
     if (this._sandboxId) return;
+    const esbuild = await getEsbuild();
+    const { Sandbox } = await getE2b();
+    const collectedExternals = /* @__PURE__ */ new Set();
+    const mustBeExternal = [
+      /^@anthropic-ai\/claude-agent-sdk/
+      // Spawns cli.js as child process
+    ];
     const result = await esbuild.build({
       entryPoints: [this._sourceFile],
       bundle: true,
@@ -45,15 +86,55 @@ var Agent = class {
       format: "esm",
       write: false,
       minify: true,
-      keepNames: true
+      keepNames: true,
+      treeShaking: false,
+      // Preserve exports for the sandbox runner to import
+      plugins: [{
+        name: "agent-neckbeard-externals",
+        setup(build) {
+          build.onResolve({ filter: /^agent-neckbeard$/ }, () => ({
+            path: "agent-neckbeard",
+            namespace: "agent-shim"
+          }));
+          build.onLoad({ filter: /.*/, namespace: "agent-shim" }, () => ({
+            contents: `
+              export class Agent {
+                constructor(config) {
+                  this.id = config.id;
+                  this.inputSchema = config.inputSchema;
+                  this.outputSchema = config.outputSchema;
+                  this.maxDuration = config.maxDuration ?? 300;
+                  this._run = config.run;
+                }
+              }
+            `,
+            loader: "js"
+          }));
+          build.onResolve({ filter: /.*/ }, (args) => {
+            if (args.path.startsWith(".") || args.path.startsWith("/") || args.path.startsWith("node:")) {
+              return null;
+            }
+            for (const pattern of mustBeExternal) {
+              if (pattern.test(args.path)) {
+                const match = args.path.match(/^(@[^/]+\/[^/]+|[^/]+)/);
+                if (match) {
+                  collectedExternals.add(match[1]);
+                }
+                return { external: true };
+              }
+            }
+            return null;
+          });
+        }
+      }]
     });
     const runnerCode = `
 import { readFileSync, writeFileSync, mkdirSync } from 'node:fs';
 
-mkdirSync('/input', { recursive: true });
-mkdirSync('/output', { recursive: true });
+mkdirSync('/home/user/input', { recursive: true });
+mkdirSync('/home/user/output', { recursive: true });
 
-const { input, executionId } = JSON.parse(readFileSync('/input/task.json', 'utf-8'));
+const { input, executionId } = JSON.parse(readFileSync('/home/user/input/task.json', 'utf-8'));
 
 const ctx = {
   executionId,
@@ -74,41 +155,114 @@ try {
   const validated = agent.inputSchema.parse(input);
   const output = await agent._run(validated, ctx);
   const validatedOutput = agent.outputSchema.parse(output);
-  writeFileSync('/output/result.json', JSON.stringify({ success: true, output: validatedOutput }));
+  writeFileSync('/home/user/output/result.json', JSON.stringify({ success: true, output: validatedOutput }));
 } catch (error) {
-  writeFileSync('/output/result.json', JSON.stringify({ success: false, error: { message: error.message, stack: error.stack } }));
+  writeFileSync('/home/user/output/result.json', JSON.stringify({ success: false, error: { message: error.message, stack: error.stack } }));
 }
 `;
     const sandbox = await Sandbox.create("base", {
       apiKey: process.env.E2B_API_KEY
     });
+    const { apt, commands } = this.dependencies;
+    if (apt && apt.length > 0) {
+      const aptCmd = `apt-get update && apt-get install -y ${apt.join(" ")}`;
+      const aptResult = await sandbox.commands.run(aptCmd, { timeoutMs: 3e5 });
+      if (aptResult.exitCode !== 0) {
+        throw new Error(`Failed to install apt packages: ${aptResult.stderr}`);
+      }
+    }
+    if (commands && commands.length > 0) {
+      for (const cmd of commands) {
+        const cmdResult = await sandbox.commands.run(cmd, { timeoutMs: 3e5 });
+        if (cmdResult.exitCode !== 0) {
+          throw new Error(`Failed to run command "${cmd}": ${cmdResult.stderr}`);
+        }
+      }
+    }
+    if (this.files.length > 0) {
+      for (const file of this.files) {
+        const destPath = file.path.startsWith("/") ? file.path : `/home/user/${file.path}`;
+        const parentDir = destPath.substring(0, destPath.lastIndexOf("/"));
+        if (parentDir) {
+          await sandbox.commands.run(`mkdir -p "${parentDir}"`, { timeoutMs: 3e4 });
+        }
+        const curlCmd = `curl -fsSL -o "${destPath}" "${file.url}"`;
+        const downloadResult = await sandbox.commands.run(curlCmd, { timeoutMs: 3e5 });
+        if (downloadResult.exitCode !== 0) {
+          throw new Error(`Failed to download file from ${file.url} to ${destPath}: ${downloadResult.stderr}`);
+        }
+      }
+    }
+    if (this.claudeDir) {
+      const claudeFiles = readDirectoryRecursively(this.claudeDir);
+      await sandbox.commands.run("mkdir -p /home/user/.claude", { timeoutMs: 3e4 });
+      for (const file of claudeFiles) {
+        const destPath = `/home/user/.claude/${file.relativePath}`;
+        const parentDir = destPath.substring(0, destPath.lastIndexOf("/"));
+        if (parentDir && parentDir !== "/home/user/.claude") {
+          await sandbox.commands.run(`mkdir -p "${parentDir}"`, { timeoutMs: 3e4 });
+        }
+        await sandbox.files.write(destPath, file.content);
+      }
+    }
     await sandbox.files.write("/home/user/agent.mjs", result.outputFiles[0].text);
     await sandbox.files.write("/home/user/runner.mjs", runnerCode);
+    if (collectedExternals.size > 0) {
+      const dependencies = {};
+      for (const pkg of collectedExternals) {
+        dependencies[pkg] = "*";
+      }
+      const pkgJson = JSON.stringify({
+        name: "agent-sandbox",
+        type: "module",
+        dependencies
+      });
+      await sandbox.files.write("/home/user/package.json", pkgJson);
+      const installResult = await sandbox.commands.run("cd /home/user && npm install", { timeoutMs: 3e5 });
+      if (installResult.exitCode !== 0) {
+        throw new Error(`Failed to install external packages: ${installResult.stderr}`);
+      }
+    }
     this._sandboxId = sandbox.sandboxId;
   }
   async run(input) {
+    const executionId = `exec_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
     if (!this._sandboxId) {
-
+      return {
+        ok: false,
+        executionId,
+        error: new Error("Agent not deployed. Call agent.deploy() first or pass sandboxId to constructor.")
+      };
     }
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+    try {
+      const { Sandbox } = await getE2b();
+      const validatedInput = this.inputSchema.parse(input);
+      const sandbox = await Sandbox.connect(this._sandboxId, {
+        apiKey: process.env.E2B_API_KEY
+      });
+      await sandbox.files.write("/home/user/input/task.json", JSON.stringify({ input: validatedInput, executionId }));
+      const result = await sandbox.commands.run("node /home/user/runner.mjs", {
+        timeoutMs: this.maxDuration * 1e3,
+        envs: {
+          ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY ?? ""
+        }
+      });
+      if (result.exitCode !== 0) {
+        return { ok: false, executionId, error: new Error(`Agent failed: ${result.stderr}`) };
+      }
+      const output = JSON.parse(await sandbox.files.read("/home/user/output/result.json"));
+      if (!output.success) {
+        const err = new Error(output.error.message);
+        err.stack = output.error.stack;
+        return { ok: false, executionId, error: err };
+      }
+      return { ok: true, executionId, output: this.outputSchema.parse(output.output) };
+    } catch (err) {
+      return { ok: false, executionId, error: err instanceof Error ? err : new Error(String(err)) };
     }
-    return this.outputSchema.parse(output.output);
   }
 };
 export {
-  Agent
+  Agent,
+  DEFAULT_DEPENDENCIES
 };
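The external-marking logic added to `deploy()` reduces to a small pure decision: bundle relative, absolute, and `node:` imports; mark matching packages external and record their npm package name for installation in the sandbox. A sketch (the regex patterns are copied from the diff; `externalPackageName` is a hypothetical standalone helper, not an export of the package):

```typescript
// Packages that must stay external to the bundle (per dist/index.js:
// the Claude Agent SDK spawns cli.js as a child process).
const mustBeExternal: RegExp[] = [/^@anthropic-ai\/claude-agent-sdk/];

// Return the npm package name to install in the sandbox, or null if the
// import should be bundled (relative, absolute, or node: builtin) or does
// not match any external pattern.
function externalPackageName(importPath: string): string | null {
  if (
    importPath.startsWith(".") ||
    importPath.startsWith("/") ||
    importPath.startsWith("node:")
  ) {
    return null;
  }
  for (const pattern of mustBeExternal) {
    if (pattern.test(importPath)) {
      // Reduce a deep import to its package name, handling scoped packages.
      const match = importPath.match(/^(@[^/]+\/[^/]+|[^/]+)/);
      return match ? match[1] : null;
    }
  }
  return null;
}
```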
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "agent-neckbeard",
-  "version": "0.1
+  "version": "1.0.1",
   "description": "Deploy AI agents to E2B sandboxes",
   "type": "module",
   "exports": {
@@ -13,7 +13,9 @@
   },
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",
-  "files": [
+  "files": [
+    "dist"
+  ],
   "engines": {
     "node": ">=20.0.0"
   },
@@ -23,21 +25,30 @@
     "typecheck": "tsc --noEmit"
   },
   "dependencies": {
-    "
-    "
+    "e2b": "^1.0.0",
+    "esbuild": "^0.24.0"
   },
   "peerDependencies": {
-    "zod": "^3.0.0"
+    "zod": "^3.0.0 || ^4.0.0"
   },
   "peerDependenciesMeta": {
-    "zod": {
+    "zod": {
+      "optional": true
+    }
   },
   "devDependencies": {
-    "
+    "@types/node": "^22.0.0",
     "tsup": "^8.3.0",
-    "
+    "typescript": "^5.6.0"
   },
-  "keywords": [
+  "keywords": [
+    "ai",
+    "agent",
+    "deploy",
+    "e2b",
+    "sandbox",
+    "claude"
+  ],
   "license": "MIT",
   "author": "zacwellmer",
   "repository": {