agentick 0.2.0 → 0.2.1

Files changed (2)
  1. package/README.md +148 -213
  2. package/package.json +4 -4
package/README.md CHANGED
@@ -1,82 +1,68 @@
  # agentick

- **React for AI agents.**
+ **Build agents like you build apps.**

- A React reconciler where the render target is a language model. No prompt templates, no YAML chains, no Jinja. You build the context window with JSX — the same components, hooks, and composition you already know — and the framework compiles it into what the model sees.
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg?style=for-the-badge)](LICENSE)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.9-blue?style=for-the-badge&logo=typescript&logoColor=white)](https://www.typescriptlang.org/)
+ [![React](https://img.shields.io/badge/React_19-reconciler-blue?style=for-the-badge&logo=react&logoColor=white)](https://react.dev/)
+ [![Node.js](https://img.shields.io/badge/Node.js-%E2%89%A520-339933?style=for-the-badge&logo=node.js&logoColor=white)](https://nodejs.org/)
+ [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=for-the-badge)](https://github.com/agenticklabs/agentick/pulls)

- You're not configuring a chatbot. You're building the application through which the model sees and experiences the world.
-
- [![License: ISC](https://img.shields.io/badge/License-ISC-blue.svg?style=for-the-badge)](LICENSE)
+ A React reconciler where the render target is a language model. You build the context window with JSX — the same components, hooks, and composition you already know — and the framework compiles it into what the model sees.

  ```tsx
- import { createApp, System, Timeline, Message, Section,
-   createTool, useContinuation } from "@agentick/core";
+ import { createApp, System, Timeline, createTool, useContinuation } from "@agentick/core";
  import { openai } from "@agentick/openai";
  import { z } from "zod";

- // Tools are components — they render state into model context
  const Search = createTool({
    name: "search",
    description: "Search the knowledge base",
    input: z.object({ query: z.string() }),
-   handler: async ({ query }, com) => {
+   handler: async ({ query }) => {
      const results = await knowledgeBase.search(query);
-     const sources = com.getState("sources") ?? [];
-     com.setState("sources", [...sources, ...results.map((r) => r.title)]);
      return [{ type: "text", text: JSON.stringify(results) }];
    },
-   // render() injects live state into the context window every tick
-   render: (tickState, com) => {
-     const sources = com.getState("sources");
-     return sources?.length ? (
-       <Section id="sources" audience="model">
-         Sources found so far: {sources.join(", ")}
-       </Section>
-     ) : null;
-   },
  });

- // Agents are functions that return JSX
- function ResearchAgent({ topic }: { topic: string }) {
-   // The model auto-continues when it makes tool calls.
-   // Hooks add your own stop conditions.
-   useContinuation((result) => {
-     if (result.tick >= 20) result.stop("too-many-ticks");
-   });
+ function ResearchAgent() {
+   useContinuation((result) => result.tick < 10);

    return (
      <>
-       <System>
-         You are a research agent. Search thoroughly, then write a summary.
-       </System>
-
-       {/* You control exactly how conversation history renders */}
-       <Timeline>
-         {(history, pending) => <>
-           {history.map((entry, i) =>
-             i < history.length - 4
-               ? <CompactMessage key={i} entry={entry} />
-               : <Message key={i} {...entry.message} />
-           )}
-           {pending.map((msg, i) => <Message key={`p-${i}`} {...msg.message} />)}
-         </>}
-       </Timeline>
-
+       <System>Search thoroughly, then write a summary.</System>
+       <Timeline />
        <Search />
      </>
    );
  }

- const model = openai({ model: "gpt-4o" });
- const app = createApp(ResearchAgent, { model });
+ const app = createApp(ResearchAgent, { model: openai({ model: "gpt-4o" }) });
  const result = await app.run({
-   props: { topic: "quantum computing" },
-   messages: [{ role: "user", content: [{ type: "text", text: "What's new in quantum computing?" }] }],
+   messages: [
+     { role: "user", content: [{ type: "text", text: "What's new in quantum computing?" }] },
+   ],
  });
-
  console.log(result.response);
  ```

+ ## Quick Start
+
+ ```bash
+ npm install agentick @agentick/openai zod
+ ```
+
+ Add to `tsconfig.json`:
+
+ ```json
+ {
+   "compilerOptions": {
+     "jsx": "react-jsx",
+     "jsxImportSource": "react"
+   }
+ }
+ ```
+
  ## Why Agentick

  Every other AI framework gives you a pipeline. A chain. A graph. You slot your prompt into a template, bolt on some tools, and hope the model figures it out.
@@ -89,10 +75,52 @@ This is application development, not chatbot configuration.

  ## The Context Is Yours

- The core insight: **only what you render gets sent to the model.** `<Timeline>` isn't a magic black box — it accepts a render function with `(history, pending)`, and you decide exactly how every message appears in the context window. Skip a message? The model never sees it. Rewrite it? That's what the model reads.
+ The core insight: **only what you render gets sent to the model.** `<Timeline>` isn't a magic black box — it accepts a render function, and you decide exactly how every message appears in the context window. Skip a message? The model never sees it. Rewrite it? That's what the model reads.
+
+ ```tsx
+ <Timeline>
+   {(history, pending) => (
+     <>
+       {history.map((entry, i) => {
+         const msg = entry.message;
+         const isOld = i < history.length - 6;
+
+         if (isOld && msg.role === "user") {
+           const textOnly = msg.content
+             .filter((b) => b.type === "text")
+             .map((b) => b.text)
+             .join(" ");
+           return (
+             <Message key={i} role="user">
+               [Earlier: {textOnly.slice(0, 100)}...]
+             </Message>
+           );
+         }
+
+         if (isOld && msg.role === "assistant") {
+           return (
+             <Message key={i} role="assistant">
+               [Previous response]
+             </Message>
+           );
+         }
+
+         return <Message key={i} {...msg} />;
+       })}
+       {pending.map((msg, i) => (
+         <Message key={`p-${i}`} {...msg.message} />
+       ))}
+     </>
+   )}
+ </Timeline>
+ ```
+
+ Images from 20 messages ago eating your context window? Render them as `[Image: beach sunset]`. Tool results from early in the conversation? Collapse them. Recent messages? Full detail. You write the function, you decide.

  ### Default — Just Works

+ With no children, `<Timeline />` renders conversation history with sensible defaults:
+
  ```tsx
  function SimpleAgent() {
    return (
@@ -104,80 +132,54 @@ function SimpleAgent() {
  }
  ```

- `<Timeline />` with no children renders conversation history with sensible defaults.
-
- ### Custom Rendering — Control What the Model Sees
-
- The render function receives `history` (completed entries) and `pending` (messages queued this tick). Only what you return from this function enters the model's context:
-
- ```tsx
- <Timeline>
-   {(history, pending) => <>
-     {history.map((entry, i) => {
-       const msg = entry.message;
-       const isOld = i < history.length - 6;
-
-       // Old user messages — drop images, keep text summaries
-       if (isOld && msg.role === "user") {
-         const textOnly = msg.content
-           .filter((b) => b.type === "text")
-           .map((b) => b.text)
-           .join(" ");
-         return <Message key={i} role="user">[Earlier: {textOnly.slice(0, 100)}...]</Message>;
-       }
-
-       // Old assistant messages — collapse
-       if (isOld && msg.role === "assistant") {
-         return <Message key={i} role="assistant">[Previous response]</Message>;
-       }
-
-       // Recent messages — full fidelity
-       return <Message key={i} {...msg} />;
-     })}
-     {pending.map((msg, i) => <Message key={`p-${i}`} {...msg.message} />)}
-   </>}
- </Timeline>
- ```
-
- Images from 20 messages ago eating your context window? Render them as `[Image: beach sunset]`. Tool results from early in the conversation? Collapse them. Recent messages? Full detail. You write the function, you decide.
-
  ### Composability — It's React

- That render logic getting complex? Extract it into a component. It's React — components compose:
+ That render logic getting complex? Extract it into a component:

  ```tsx
- // A reusable component for rendering older messages compactly
  function CompactMessage({ entry }: { entry: COMTimelineEntry }) {
    const msg = entry.message;

-   // Walk content blocks — handle each type differently
-   const summary = msg.content.map((block) => {
-     switch (block.type) {
-       case "text": return block.text.slice(0, 80);
-       case "image": return `[Image: ${block.source?.description ?? "image"}]`;
-       case "tool_use": return `[Called ${block.name}]`;
-       case "tool_result": return `[Result from ${block.name}]`;
-       default: return "";
-     }
-   }).filter(Boolean).join(" | ");
+   const summary = msg.content
+     .map((block) => {
+       switch (block.type) {
+         case "text":
+           return block.text.slice(0, 80);
+         case "image":
+           return `[Image: ${block.source?.description ?? "image"}]`;
+         case "tool_use":
+           return `[Called ${block.name}]`;
+         case "tool_result":
+           return `[Result from ${block.name}]`;
+         default:
+           return "";
+       }
+     })
+     .filter(Boolean)
+     .join(" | ");

    return <Message role={msg.role}>{summary}</Message>;
  }

- // Use it in your Timeline
  function Agent() {
    return (
      <>
        <System>You are helpful.</System>
        <Timeline>
-         {(history, pending) => <>
-           {history.map((entry, i) =>
-             i < history.length - 4
-               ? <CompactMessage key={i} entry={entry} />
-               : <Message key={i} {...entry.message} />
-           )}
-           {pending.map((msg, i) => <Message key={`p-${i}`} {...msg.message} />)}
-         </>}
+         {(history, pending) => (
+           <>
+             {history.map((entry, i) =>
+               i < history.length - 4 ? (
+                 <CompactMessage key={i} entry={entry} />
+               ) : (
+                 <Message key={i} {...entry.message} />
+               ),
+             )}
+             {pending.map((msg, i) => (
+               <Message key={`p-${i}`} {...msg.message} />
+             ))}
+           </>
+         )}
        </Timeline>
      </>
    );
@@ -206,7 +208,7 @@ function NarrativeAgent() {

  The framework doesn't care how you structure the context. Multiple messages, one message, XML, prose — anything that compiles to content blocks gets sent.

- ### Sections — Structured Context for the Model
+ ### Sections — Structured Context

  ```tsx
  function AgentWithContext({ userId }: { userId: string }) {
@@ -215,11 +217,9 @@ function AgentWithContext({ userId }: { userId: string }) {
    return (
      <>
        <System>You are a support agent.</System>
-
        <Section id="user-context" audience="model">
          Customer: {profile?.name}, Plan: {profile?.plan}, Since: {profile?.joinDate}
        </Section>
-
        <Timeline />
        <TicketTool />
      </>
@@ -229,43 +229,39 @@ function AgentWithContext({ userId }: { userId: string }) {

  `<Section>` injects structured context that the model sees every tick — live data, computed state, whatever you need. The `audience` prop controls visibility (`"model"`, `"user"`, or `"all"`).

- ## Hooks Control Everything
+ ## Hooks Control the Loop

- Hooks are where the real power lives. They're real React hooks — `useState`, `useEffect`, `useMemo` — plus lifecycle hooks that fire at each phase of execution.
+ Hooks are real React hooks — `useState`, `useEffect`, `useMemo` — plus lifecycle hooks that fire at each phase of execution.

- ### `useContinuation` — Add Stop Conditions
+ ### Stop Conditions

- The agent loop auto-continues when the model makes tool calls. `useContinuation` lets you add your own stop conditions:
+ The agent loop auto-continues when the model makes tool calls. `useContinuation` adds your own stop conditions:

  ```tsx
- // Stop after a done marker
  useContinuation((result) => !result.text?.includes("<DONE>"));

- // Stop after too many ticks or too many tokens
  useContinuation((result) => {
-   if (result.tick >= 10) { result.stop("max-ticks"); return false; }
+   if (result.tick >= 10) {
+     result.stop("max-ticks");
+     return false;
+   }
    if (result.usage && result.usage.totalTokens > 100_000) {
-     result.stop("token-budget"); return false;
+     result.stop("token-budget");
+     return false;
    }
  });
  ```

- ### `useOnTickEnd` — Run Code After Every Model Response
+ ### Between-Tick Logic

- `useContinuation` is sugar for `useOnTickEnd`. Use the full version when you need to do real work between ticks:
+ `useContinuation` is sugar for `useOnTickEnd`. Use the full version when you need to do real work:

  ```tsx
  function VerifiedAgent() {
    useOnTickEnd(async (result) => {
-     // Log every tick
-     analytics.track("tick", { tokens: result.usage?.totalTokens });
-
-     // When the model is done (no more tool calls), verify before accepting
      if (result.text && !result.toolCalls.length) {
        const quality = await verifyWithModel(result.text);
-       if (!quality.acceptable) {
-         result.continue("failed-verification"); // force another tick
-       }
+       if (!quality.acceptable) result.continue("failed-verification");
      }
    });

@@ -278,7 +274,7 @@ function VerifiedAgent() {
  }
  ```

- ### Build Your Own Hooks
+ ### Custom Hooks

  Custom hooks work exactly like React — they're just functions that call other hooks:
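
The hook behind the `CarefulAgent` example just below is elided from this diff; a minimal sketch of the pattern, using only the documented `useState` and `useOnTickEnd` APIs (`useTokenSpend` is an illustrative name, not a package export):

```tsx
// Illustrative sketch: accumulate token usage across ticks with ordinary
// React state plus the documented useOnTickEnd lifecycle hook.
function useTokenSpend() {
  const [spent, setSpent] = useState(0);
  useOnTickEnd((result) => {
    setSpent((s) => s + (result.usage?.totalTokens ?? 0));
  });
  return spent;
}
```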
@@ -313,48 +309,18 @@ function CarefulAgent() {
    return (
      <>
        <System>You have a token budget. Be concise.</System>
-       <Section id="budget" audience="model">Tokens used: {spent}</Section>
+       <Section id="budget" audience="model">
+         Tokens used: {spent}
+       </Section>
        <Timeline />
      </>
    );
  }
  ```

- ## Everything Is Dual-Use
+ ## Tools Render State

- `createTool` and `createAdapter` (used under the hood by `openai()`, `google()`, etc.) return objects that work both as JSX components and as direct function calls:
-
- ```tsx
- const Search = createTool({ name: "search", ... });
- const model = openai({ model: "gpt-4o" });
-
- // As JSX — self-closing tags in the component tree
- <model temperature={0.2} />
- <Search />
-
- // As direct calls — use programmatically
- const handle = await model.generate(input);
- const output = await Search.run({ query: "test" });
- ```
-
- Context is maintained with AsyncLocalStorage, so tools and hooks can access session state from anywhere — no prop drilling required.
-
- ## More Examples
-
- ### One-Shot Run
-
- ```tsx
- import { run, System, Timeline } from "@agentick/core";
- import { openai } from "@agentick/openai";
-
- const result = await run(
-   <><System>You are helpful.</System><Timeline /></>,
-   { model: openai({ model: "gpt-4o" }), messages: [{ role: "user", content: [{ type: "text", text: "Hello!" }] }] },
- );
- console.log(result.response);
- ```
-
- ### Stateful Tool with Render
+ Tools aren't just functions the model calls: they render their state back into the context window. The model sees the current state _every time it thinks_, not just in the tool response.

  ```tsx
  const TodoTool = createTool({
@@ -365,12 +331,11 @@ const TodoTool = createTool({
      text: z.string().optional(),
      id: z.number().optional(),
    }),
-   handler: async ({ action, text, id }) => {
+   handler: async ({ action, text, id }, ctx) => {
      if (action === "add") todos.push({ id: todos.length, text, done: false });
      if (action === "complete") todos[id!].done = true;
      return [{ type: "text", text: "Done." }];
    },
-   // render() injects live state into the model's context every tick
    render: () => (
      <Section id="todos" audience="model">
        Current todos: {JSON.stringify(todos)}
@@ -379,15 +344,28 @@ const TodoTool = createTool({
  });
  ```

- The model sees the current todo list _every time it thinks_ — not just in the tool response, but as persistent context. When it decides what to do next, the state is right there.
+ Everything is dual-use: tools and models work as JSX components in the tree _and_ as direct function calls:

- ### Multi-Turn Session
+ ```tsx
+ // JSX — in the component tree
+ <Search />
+ <model temperature={0.2} />
+
+ // Direct calls — use programmatically
+ const output = await Search.run({ query: "test" });
+ const handle = await model.generate(input);
+ ```
+
+ ## Sessions

  ```tsx
  const app = createApp(Agent, { model: openai({ model: "gpt-4o" }) });
  const session = await app.session("conv-1");

- const msg = (text: string) => ({ role: "user" as const, content: [{ type: "text" as const, text }] });
+ const msg = (text: string) => ({
+   role: "user" as const,
+   content: [{ type: "text" as const, text }],
+ });

  await session.send({ messages: [msg("Hi there!")] });
  await session.send({ messages: [msg("Tell me a joke")] });
@@ -402,18 +380,16 @@ session.close();

  ### Dynamic Model Selection

- Models are JSX components — conditionally render them to switch models mid-session:
+ Models are JSX components — conditionally render them:

  ```tsx
  const gpt = openai({ model: "gpt-4o" });
  const gemini = google({ model: "gemini-2.5-pro" });

  function AdaptiveAgent({ task }: { task: string }) {
-   const needsCreativity = task.includes("creative");
-
    return (
      <>
-       {needsCreativity ? <gemini temperature={0.9} /> : <gpt temperature={0.2} />}
+       {task.includes("creative") ? <gemini temperature={0.9} /> : <gpt temperature={0.2} />}
        <System>Handle this task: {task}</System>
        <Timeline />
      </>
@@ -441,30 +417,6 @@ function AdaptiveAgent({ task }: { task: string }) {
  | `@agentick/server`    | Server utilities    |
  | `@agentick/socket.io` | Socket.IO transport |

- ```
- ┌─────────────────────────────────────────────────────────────────┐
- │                          Applications                           │
- │                (express, nestjs, cli, user apps)                │
- └──────────────────────────┬──────────────────────────────────────┘
-
- ┌──────────────────────────┴──────────────────────────────────────┐
- │                         Framework Layer                         │
- │     @agentick/core   @agentick/gateway   @agentick/client       │
- │     @agentick/express   @agentick/devtools                      │
- └──────────────────────────┬──────────────────────────────────────┘
-
- ┌──────────────────────────┴──────────────────────────────────────┐
- │                          Adapter Layer                          │
- │     @agentick/openai   @agentick/google   @agentick/ai-sdk      │
- └──────────────────────────┬──────────────────────────────────────┘
-
- ┌──────────────────────────┴──────────────────────────────────────┐
- │                        Foundation Layer                         │
- │     @agentick/kernel          @agentick/shared                  │
- │     (Node.js only)            (Platform-independent)            │
- └─────────────────────────────────────────────────────────────────┘
- ```
-
  ## Adapters

  Three built-in, same interface. Or build your own — implement `prepareInput`, `mapChunk`, `execute`, and `executeStream`. See [`packages/adapters/README.md`](packages/adapters/README.md).
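
A rough sketch of that shape, going only by the four method names above (the import path, every signature, and every field below are assumptions; the linked adapter README is authoritative):

```tsx
import { createAdapter } from "@agentick/core"; // import location assumed

// Hypothetical echo adapter. None of these shapes are the documented
// interface; they only illustrate where each of the four methods fits.
const echo = createAdapter({
  name: "echo",
  // Map the compiled context window to a provider request.
  prepareInput: (context) => ({ prompt: JSON.stringify(context) }),
  // Normalize one raw provider chunk into a content block.
  mapChunk: (chunk) => ({ type: "text", text: String(chunk) }),
  // One-shot completion.
  execute: async (input) => ({ content: [{ type: "text", text: input.prompt }] }),
  // Streaming completion; yields raw chunks for mapChunk.
  executeStream: async function* (input) {
    yield input.prompt;
  },
});
```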
@@ -501,23 +453,6 @@ const gateway = createGateway({
  });
  ```

- ## Quick Start
-
- ```bash
- npm install agentick @agentick/openai zod
- ```
-
- **TypeScript config** — add to `tsconfig.json`:
-
- ```json
- {
-   "compilerOptions": {
-     "jsx": "react-jsx",
-     "jsxImportSource": "react"
-   }
- }
- ```
-
  ## License

  MIT
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "agentick",
-   "version": "0.2.0",
+   "version": "0.2.1",
    "description": "Build agents like you build apps.",
    "keywords": [
      "agent",
@@ -31,9 +31,9 @@
      "access": "public"
    },
    "dependencies": {
-     "@agentick/core": "0.2.0",
-     "@agentick/agent": "0.2.0",
-     "@agentick/guardrails": "0.2.0"
+     "@agentick/agent": "0.2.1",
+     "@agentick/guardrails": "0.2.1",
+     "@agentick/core": "0.2.1"
    },
    "scripts": {
      "build": "tsc -p tsconfig.build.json",