@economic/agents 0.0.1-alpha.10 → 0.0.1-alpha.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (4)
  1. package/README.md +357 -51
  2. package/dist/index.d.mts +80 -522
  3. package/dist/index.mjs +175 -467
  4. package/package.json +12 -16
package/README.md CHANGED
@@ -1,87 +1,393 @@
  # @economic/agents
 
- Base classes and utilities for building LLM agents on Cloudflare's Agents SDK with lazy tool loading.
+ Base class and AI SDK wrappers for building LLM chat agents on Cloudflare's Agents SDK with lazy skill loading and optional message compaction.
 
- ## Exports
+ ```bash
+ npm install @economic/agents ai @cloudflare/ai-chat
+ ```
 
- - **`AIChatAgent`** — base class that owns the full `onChatMessage` lifecycle. Implement `getModel()`, `getTools()`, `getSkills()`, and `getSystemPrompt()`. Compaction is **enabled by default** (uses `getModel()` for summarisation).
- - **`AIChatAgentBase`** — base class for when you need full control over `streamText`. Implement `getTools()`, `getSkills()`, and your own `onChatMessage` decorated with `@withSkills`. Compaction is **disabled by default**.
- - **`withSkills`** — method decorator used with `AIChatAgentBase`.
- - **`createSkills`** — lower-level factory for wiring lazy skill loading into any agent subclass yourself.
- - **`filterEphemeralMessages`**, **`injectGuidance`** — utilities used internally, exported for custom wiring.
- - **`compactIfNeeded`**, **`compactMessages`**, **`estimateMessagesTokens`**, **`COMPACT_TOKEN_THRESHOLD`** — compaction utilities, exported for use with `AIChatAgentBase` or fully custom agents.
- - Types: `Tool`, `Skill`, `SkillsConfig`, `SkillsResult`, `SkillContext`.
+ ---
 
- See [COMPARISON.md](./COMPARISON.md) for a side-by-side code example of both base classes.
+ ## Overview
 
- See [src/features/skills/README.md](./src/features/skills/README.md) for full `createSkills` documentation.
+ `@economic/agents` provides:
 
- ## Development
+ - **`AIChatAgent`** — an abstract Cloudflare Durable Object base class. Implement `onChatMessage` and call our `streamText` wrapper.
+ - **`streamText` / `generateText`** — wrappers around the Vercel AI SDK functions that accept `UIMessage[]` directly, add a `skills` / `activeSkills` pair for lazy tool loading, and a `compact` option for message compaction.
 
- ```bash
- vp install # install dependencies
- vp test # run tests
- vp pack # build
+ Skills and compaction are AI SDK concerns — they control what goes to the LLM. The CF layer is responsible for WebSockets, Durable Objects, and message persistence. These are kept separate.
+
+ ---
+
+ ## Quick start
+
+ ```typescript
+ import { AIChatAgent, streamText } from "@economic/agents";
+ import type { Skill } from "@economic/agents";
+ import { openai } from "@ai-sdk/openai";
+ import { stepCountIs, tool } from "ai";
+ import { z } from "zod";
+
+ const searchSkill: Skill = {
+   name: "search",
+   description: "Web search tools",
+   guidance: "Use search_web for any queries requiring up-to-date information.",
+   tools: {
+     search_web: tool({
+       description: "Search the web",
+       inputSchema: z.object({ query: z.string() }),
+       execute: async ({ query }) => `Results for: ${query}`,
+     }),
+   },
+ };
+
+ export class MyAgent extends AIChatAgent<Env> {
+   async onChatMessage(onFinish, options) {
+     const stream = await streamText({
+       model: openai("gpt-4o"),
+       messages: this.messages,
+       system: "You are a helpful assistant.",
+       skills: [searchSkill],
+       activeSkills: await this.getLoadedSkills(),
+       stopWhen: stepCountIs(20),
+       abortSignal: options?.abortSignal,
+       onFinish,
+     });
+     return stream.toUIMessageStreamResponse();
+   }
+ }
+ ```
+
+ No D1 database needed — skill state is persisted to Durable Object SQLite automatically.
+
+ ---
+
+ ## Prerequisites
+
+ ### Cloudflare environment
+
+ Your agent class is a Durable Object. Declare it in `wrangler.jsonc`:
+
+ ```jsonc
+ {
+   "durable_objects": {
+     "bindings": [{ "name": "MyAgent", "class_name": "MyAgent" }],
+   },
+   "migrations": [{ "tag": "v1", "new_sqlite_classes": ["MyAgent"] }],
+ }
  ```
 
+ Run `wrangler types` afterwards to generate typed `Env` bindings.
+
  ---
 
- ## Implementing your own agent
+ ## `AIChatAgent`
 
- Extend `AIChatAgent` and implement the four required methods:
+ Extend this class and implement `onChatMessage`. Call our `streamText` wrapper, which accepts `UIMessage[]`, skills, and compaction options, and return the response.
 
  ```typescript
- import { AIChatAgent } from "@economic/agents";
+ import { AIChatAgent, streamText } from "@economic/agents";
+ import type { RequestBody } from "./types";
 
- export class MyAgent extends AIChatAgent {
-   getModel() {
-     return openai("gpt-4o");
-   }
-   getTools() {
-     return [myAlwaysOnTool];
-   }
-   getSkills() {
-     return [searchSkill, codeSkill];
-   }
-   getSystemPrompt() {
-     return "You are a helpful assistant.";
-   }
+ export class ChatAgent extends AIChatAgent<Env> {
+   async onChatMessage(onFinish, options) {
+     const body = (options?.body ?? {}) as RequestBody;
+     const model = body.userTier === "pro" ? openai("gpt-4o") : openai("gpt-4o-mini");
 
-   // Return the D1 binding — typed in Cloudflare.Env after `wrangler types`
-   protected getDB() {
-     return this.env.AGENT_DB;
+     const stream = await streamText({
+       model,
+       messages: this.messages,
+       system: "You are a helpful assistant.",
+       skills: [searchSkill, calcSkill], // available for on-demand loading
+       activeSkills: await this.getLoadedSkills(), // loaded from DO SQLite
+       tools: { alwaysOnTool }, // always active
+       stopWhen: stepCountIs(20),
+       abortSignal: options?.abortSignal,
+       onFinish,
+     });
+     return stream.toUIMessageStreamResponse();
    }
  }
  ```
 
- If you need control over the response — custom model options, middleware, varying the model per request — use `AIChatAgentBase` with the `@withSkills` decorator instead. See [COMPARISON.md](./COMPARISON.md) for a side-by-side example and `src/features/skills/README.md` for full `createSkills` documentation.
+ ### `getLoadedSkills()`
+
+ Protected method on `AIChatAgent`. Returns skill names persisted from previous turns (read from DO SQLite). Pass the result as `activeSkills`.
+
+ ### `persistMessages` (automatic)
+
+ When `persistMessages` runs at the end of each turn, it:
+
+ 1. Scans `activate_skill` tool results for newly loaded skill state.
+ 2. Writes the updated skill name list to DO SQLite (no D1 needed).
+ 3. Strips all `activate_skill` and `list_capabilities` messages from history.
+ 4. Delegates to the CF base `persistMessages` for message storage and WS broadcast.
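The hand-off in steps 1–3 can be pictured with a rough sketch. The shapes below (`SimpleMessage`, the `tool-activate_skill` part type, the `__SKILL_STATE__` literal, and both helpers) are simplified stand-ins for illustration, not the package's real internals:

```typescript
// Simplified stand-ins — the real shapes live inside the package.
const SENTINEL = "__SKILL_STATE__"; // placeholder standing in for SKILL_STATE_SENTINEL

interface MessagePart {
  type: string; // e.g. "tool-activate_skill", "text"
  output?: string; // tool result text; may embed skill state after the sentinel
}

interface SimpleMessage {
  role: "user" | "assistant";
  parts: MessagePart[];
}

// Steps 1–2: scan activate_skill results and collect the loaded skill names.
function extractSkillState(messages: SimpleMessage[]): string[] {
  const loaded = new Set<string>();
  for (const msg of messages) {
    for (const part of msg.parts) {
      if (part.type === "tool-activate_skill" && part.output?.includes(SENTINEL)) {
        const json = part.output.slice(part.output.indexOf(SENTINEL) + SENTINEL.length);
        for (const name of JSON.parse(json) as string[]) loaded.add(name);
      }
    }
  }
  return [...loaded];
}

// Step 3: strip ephemeral meta-tool parts before persisting.
function stripEphemeral(messages: SimpleMessage[]): SimpleMessage[] {
  return messages
    .map((m) => ({
      ...m,
      parts: m.parts.filter(
        (p) => p.type !== "tool-activate_skill" && p.type !== "tool-list_capabilities",
      ),
    }))
    .filter((m) => m.parts.length > 0);
}
```

Because the loaded-skill list is re-derived and written to DO SQLite each turn, stripping these messages loses nothing.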
 
- ### Message compaction
+ ### `onConnect` (automatic)
 
- `AIChatAgent` automatically compacts the conversation history when it approaches the token limit (140k tokens). Older messages are summarised by the LLM into a single system message; the most recent messages are kept verbatim. The verbatim tail size is `maxPersistedMessages - 1` (default: 49 messages + 1 summary message).
+ Replays the full message history to newly connected clients. Without this, a page refresh would show an empty UI even though history is in DO SQLite.
 
- The compaction model defaults to `getModel()`. To use a cheaper model for summarisation, override `getCompactionModel()`:
+ ---
+
+ ## `streamText` wrapper
+
+ Drop-in replacement for `streamText` from `ai` with four extra params:
+
+ | Extra param    | Type                                            | Description                                                                    |
+ | -------------- | ----------------------------------------------- | ------------------------------------------------------------------------------ |
+ | `messages`     | `UIMessage[]`                                   | Converted to `ModelMessage[]` internally. Pass `this.messages`.                |
+ | `skills`       | `Skill[]`                                       | Skills available for on-demand loading. Wires up meta-tools automatically.     |
+ | `activeSkills` | `string[]`                                      | Names of skills loaded in previous turns. Pass `await this.getLoadedSkills()`. |
+ | `compact`      | `{ model: LanguageModel; maxMessages: number }` | When provided, compacts old messages before sending to the model.              |
+
+ When `skills` is provided, the wrapper:
+
+ - Registers `activate_skill` and `list_capabilities` meta-tools
+ - Sets initial `activeTools` (meta + always-on + loaded skill tools)
+ - Wires up `prepareStep` to update `activeTools` after each step
+ - Composes `system` with guidance from loaded skills
+ - Merges any `activeTools` / `prepareStep` you also pass (additive)
 
  ```typescript
- protected override getCompactionModel(): LanguageModel {
-   return openai("gpt-4o-mini"); // cheaper model for summarisation
- }
+ // With skills and compaction
+ const stream = await streamText({
+   model: openai("gpt-4o"),
+   messages: this.messages,
+   system: "You are a helpful assistant.",
+   skills: [searchSkill, codeSkill],
+   activeSkills: await this.getLoadedSkills(),
+   tools: { myAlwaysOnTool },
+   compact: { model: openai("gpt-4o-mini"), maxMessages: 30 },
+   onFinish,
+ });
+ return stream.toUIMessageStreamResponse();
+
+ // Without skills — identical to AI SDK, no API breakage
+ const stream = await streamText({
+   model: openai("gpt-4o"),
+   messages: this.messages,
+   tools: { myTool },
+   activeTools: ["myTool"],
+   onFinish,
+ });
+ return stream.toUIMessageStreamResponse();
  ```
 
- To disable compaction entirely, override `getCompactionModel()` to return `undefined`:
+ ---
+
+ ## `generateText` wrapper
+
+ Same additions as `streamText` — accepts `UIMessage[]`, `skills` / `activeSkills`, and `compact`.
 
  ```typescript
- protected override getCompactionModel(): LanguageModel | undefined {
-   return undefined; // no compaction — older messages are dropped at maxPersistedMessages
- }
+ const result = await generateText({
+   model: openai("gpt-4o"),
+   messages: this.messages,
+   skills: [searchSkill],
+   activeSkills: await this.getLoadedSkills(),
+ });
  ```
 
- `AIChatAgentBase` does not enable compaction by default. To add it, override `getCompactionModel()` to return a model — the `persistMessages` override will pick it up automatically:
+ ---
+
+ ## Defining skills
 
  ```typescript
- protected override getCompactionModel(): LanguageModel {
-   return openai("gpt-4o-mini");
- }
+ import { tool } from "ai";
+ import { z } from "zod";
+ import type { Skill } from "@economic/agents";
+
+ // Skill with guidance — injected into the system prompt when the skill is loaded
+ export const calculatorSkill: Skill = {
+   name: "calculator",
+   description: "Mathematical calculation and expression evaluation",
+   guidance:
+     "Use the calculate tool for any arithmetic or algebraic expressions. " +
+     "Always show the expression you are evaluating.",
+   tools: {
+     calculate: tool({
+       description: "Evaluate a mathematical expression and return the result.",
+       inputSchema: z.object({
+         expression: z.string().describe('e.g. "2 + 2", "Math.sqrt(144)"'),
+       }),
+       execute: async ({ expression }) => {
+         // Demo only — never evaluate untrusted input like this in production.
+         const result = new Function(`"use strict"; return (${expression})`)();
+         return `${expression} = ${result}`;
+       },
+     }),
+   },
+ };
+
+ // Skill without guidance — tools are self-explanatory
+ export const datetimeSkill: Skill = {
+   name: "datetime",
+   description: "Current date and time information in any timezone",
+   tools: {
+     get_current_datetime: tool({
+       description: "Get the current date and time in an optional IANA timezone.",
+       inputSchema: z.object({
+         timezone: z.string().optional().describe('e.g. "Europe/Copenhagen"'),
+       }),
+       execute: async ({ timezone = "UTC" }) =>
+         new Date().toLocaleString("en-GB", {
+           timeZone: timezone,
+           dateStyle: "full",
+           timeStyle: "long",
+         }),
+     }),
+   },
+ };
  ```
 
- Alternatively, import `compactIfNeeded` and `COMPACT_TOKEN_THRESHOLD` from `@economic/agents` and call them yourself inside a custom `persistMessages` override for full control over the compaction logic.
+ ### `Skill` fields
+
+ | Field         | Type      | Required | Description                                                                  |
+ | ------------- | --------- | -------- | ---------------------------------------------------------------------------- |
+ | `name`        | `string`  | Yes      | Unique identifier used by `activate_skill` and for DO SQLite persistence.    |
+ | `description` | `string`  | Yes      | One-line description shown in the `activate_skill` schema.                   |
+ | `guidance`    | `string`  | No       | Instructions appended to the `system` prompt when this skill is loaded.      |
+ | `tools`       | `ToolSet` | Yes      | Record of tool names to `tool()` definitions. Names must be globally unique. |
+
+ ---
+
+ ## Compaction
+
+ When `compact` is provided to `streamText` / `generateText`, the wrapper compacts `messages` before converting and sending to the model:
+
+ 1. The message list is split into an older window and a recent verbatim tail (`maxMessages`).
+ 2. A model call generates a concise summary of the older window.
+ 3. That summary + the verbatim tail is what gets sent to the LLM.
+ 4. Full history in DO SQLite is unaffected — compaction is in-memory only.
+
+ ```typescript
+ const stream = await streamText({
+   model: openai("gpt-4o"),
+   messages: this.messages,
+   system: "...",
+   compact: {
+     model: openai("gpt-4o-mini"), // cheaper model for summarisation
+     maxMessages: 30, // keep last 30 messages verbatim
+   },
+   onFinish,
+ });
+ return stream.toUIMessageStreamResponse();
+ ```
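The split in step 1 comes down to a few lines. A sketch, where `splitForCompaction` is a hypothetical helper rather than a package export:

```typescript
// Hypothetical helper illustrating step 1: divide history into an older
// window (summarised) and a recent tail (sent verbatim).
function splitForCompaction<T>(
  messages: T[],
  maxMessages: number,
): { older: T[]; tail: T[] } {
  if (messages.length <= maxMessages) return { older: [], tail: messages };
  const cut = messages.length - maxMessages;
  return { older: messages.slice(0, cut), tail: messages.slice(cut) };
}
```

With `maxMessages: 30` and a 100-message history, `older` holds the first 70 messages for summarisation and `tail` the last 30.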
+
+ ---
+
+ ## Built-in meta tools
+
+ Two meta tools are automatically registered when `skills` are provided. You do not need to define or wire them.
+
+ ### `activate_skill`
+
+ Loads one or more skills by name, making their tools available for the rest of the conversation. The LLM calls this when it needs capabilities it does not currently have.
+
+ - Loading is idempotent — calling it for an already-loaded skill is a no-op.
+ - The skills available are exactly those passed as `skills` — filter by request body to control access.
+ - When skills are successfully loaded, the new state is embedded in the tool result. `persistMessages` extracts it and writes to DO SQLite.
+ - All `activate_skill` messages are stripped from history before persistence — state is restored from DO SQLite, not from message history.
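Per-request gating, as the second bullet suggests, is just filtering the list before passing it as `skills`. A sketch; `userTier` and the tier rules here are illustrative, not part of the package:

```typescript
interface SkillLike {
  name: string;
}

// Illustrative rule: free users may only activate search; pro users get everything.
function permittedSkills<S extends SkillLike>(
  all: S[],
  userTier?: "free" | "pro",
): S[] {
  return userTier === "pro" ? all : all.filter((s) => s.name === "search");
}
```

Pass the filtered result as `skills`; the LLM can only activate what it is given.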
+
+ ### `list_capabilities`
+
+ Returns a summary of active tools, loaded skills, and skills available to load. Always stripped from history before persistence.
+
+ ---
+
+ ## Passing request context to tools
+
+ Pass arbitrary data via the `body` option of `useAgentChat`. It arrives as `experimental_context` in tool `execute` functions:
+
+ ```typescript
+ // Client
+ useAgentChat({ body: { authorization: token, userId: "u_123" } });
+
+ // Tool
+ execute: async (args, { experimental_context }) => {
+   const { authorization } = experimental_context as { authorization: string };
+ };
+
+ // In onChatMessage — forward body to tools
+ const stream = await streamText({
+   ...
+   experimental_context: options?.body,
+ });
+ return stream.toUIMessageStreamResponse();
+ ```
+
318
+ ---
319
+
320
+ ## Advanced: `createSkills` directly
321
+
322
+ `createSkills` is the lower-level factory used internally by the `streamText` wrapper. Use it only if you are building a fully custom agent that does not use the wrappers.
323
+
324
+ ```typescript
325
+ import { createSkills, filterEphemeralMessages } from "@economic/agents";
326
+
327
+ const skillsCtx = createSkills({
328
+ tools: alwaysOnTools,
329
+ skills: permittedSkills,
330
+ initialLoadedSkills: await getLoadedSkills(), // from storage
331
+ systemPrompt: "You are a helpful assistant.",
332
+ });
333
+
334
+ const result = aiStreamText({
335
+ model,
336
+ system: skillsCtx.getSystem(),
337
+ messages: convertToModelMessages(messages),
338
+ tools: skillsCtx.tools,
339
+ activeTools: skillsCtx.activeTools,
340
+ prepareStep: skillsCtx.prepareStep,
341
+ });
342
+ ```
343
+
344
+ ---
345
+
346
+ ## API reference
347
+
348
+ ### Classes
349
+
350
+ | Export | Description |
351
+ | ------------- | --------------------------------------------------------------------------------------------------- |
352
+ | `AIChatAgent` | Abstract CF Durable Object base class. Implement `onChatMessage`. Manages skill state in DO SQLite. |
353
+
354
+ ### Functions
355
+
356
+ | Export | Signature | Description |
357
+ | ------------------------- | ------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
358
+ | `streamText` | `async (params: StreamTextParams) => StreamTextResult` | Wraps AI SDK `streamText`; accepts `UIMessage[]`, `skills`, `compact`. |
359
+ | `generateText` | `async (params: GenerateTextParams) => GenerateTextResult` | Wraps AI SDK `generateText`; same extra params as `streamText`. |
360
+ | `createSkills` | `(config: SkillsConfig) => SkillsResult` | Lower-level factory for building the skill loading system. |
361
+ | `filterEphemeralMessages` | `(messages: UIMessage[]) => UIMessage[]` | Strips all `activate_skill` and `list_capabilities` tool calls. |
362
+ | `injectGuidance` | `(messages: ModelMessage[], guidance: string, prev?: string) => ModelMessage[]` | Inserts guidance just before the last user message. **Deprecated** — use `system` in the wrappers instead. |
363
+ | `compactIfNeeded` | `(messages, model, tailSize) => Promise<UIMessage[]>` | Compacts if token estimate exceeds threshold; no-op if model is `undefined`. |
364
+ | `compactMessages` | `(messages, model, tailSize) => Promise<UIMessage[]>` | Summarises the older window and returns `[summaryMsg, ...verbatimTail]`. |
365
+ | `estimateMessagesTokens` | `(messages: UIMessage[]) => number` | Character-count heuristic (÷ 3.5) over text, reasoning, and tool parts. |
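The ÷ 3.5 heuristic behind `estimateMessagesTokens` can be sketched as below. The real implementation also walks reasoning and tool parts, and the rounding here is an assumption:

```typescript
// Rough sketch: estimate tokens as total characters divided by 3.5.
function estimateTokensFromTexts(texts: string[]): number {
  const chars = texts.reduce((sum, t) => sum + t.length, 0);
  return Math.ceil(chars / 3.5);
}
```

Handy for a quick check against `COMPACT_TOKEN_THRESHOLD` before deciding to compact.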
+
+ ### Constants
+
+ | Export                    | Value     | Description                                          |
+ | ------------------------- | --------- | ---------------------------------------------------- |
+ | `COMPACT_TOKEN_THRESHOLD` | `140_000` | Token count above which compaction is triggered.     |
+ | `SKILL_STATE_SENTINEL`    | `string`  | Delimiter used to embed skill state in tool results. |
+
+ ### Types
+
+ | Export               | Description                                     |
+ | -------------------- | ----------------------------------------------- |
+ | `Skill`              | A named group of tools with optional guidance.  |
+ | `SkillsConfig`       | Configuration object for `createSkills()`.      |
+ | `SkillsResult`       | Return type of `createSkills()`.                |
+ | `StreamTextParams`   | Params for the `streamText` wrapper.            |
+ | `GenerateTextParams` | Params for the `generateText` wrapper.          |
+ | `CompactOptions`     | `{ model: LanguageModel; maxMessages: number }` |
+
+ ---
+
+ ## Development
+
+ ```bash
+ vp install # install dependencies
+ vp test # run tests
+ vp pack # build
+ ```
+ ```