agentick 0.1.9 → 0.2.1

package/README.md ADDED
@@ -0,0 +1,458 @@
# agentick

**Build agents like you build apps.**

[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg?style=for-the-badge)](LICENSE)
[![TypeScript](https://img.shields.io/badge/TypeScript-5.9-blue?style=for-the-badge&logo=typescript&logoColor=white)](https://www.typescriptlang.org/)
[![React](https://img.shields.io/badge/React_19-reconciler-blue?style=for-the-badge&logo=react&logoColor=white)](https://react.dev/)
[![Node.js](https://img.shields.io/badge/Node.js-%E2%89%A520-339933?style=for-the-badge&logo=node.js&logoColor=white)](https://nodejs.org/)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=for-the-badge)](https://github.com/agenticklabs/agentick/pulls)

A React reconciler where the render target is a language model. You build the context window with JSX — the same components, hooks, and composition you already know — and the framework compiles it into what the model sees.

```tsx
import { createApp, System, Timeline, createTool, useContinuation } from "@agentick/core";
import { openai } from "@agentick/openai";
import { z } from "zod";

const Search = createTool({
  name: "search",
  description: "Search the knowledge base",
  input: z.object({ query: z.string() }),
  handler: async ({ query }) => {
    const results = await knowledgeBase.search(query);
    return [{ type: "text", text: JSON.stringify(results) }];
  },
});

function ResearchAgent() {
  useContinuation((result) => result.tick < 10);

  return (
    <>
      <System>Search thoroughly, then write a summary.</System>
      <Timeline />
      <Search />
    </>
  );
}

const app = createApp(ResearchAgent, { model: openai({ model: "gpt-4o" }) });
const result = await app.run({
  messages: [
    { role: "user", content: [{ type: "text", text: "What's new in quantum computing?" }] },
  ],
});
console.log(result.response);
```

## Quick Start

```bash
npm install agentick @agentick/openai zod
```

Add to `tsconfig.json`:

```json
{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "react"
  }
}
```

## Why Agentick

Every other AI framework gives you a pipeline. A chain. A graph. You slot your prompt into a template, bolt on some tools, and hope the model figures it out.

Agentick gives you a **programming language for AI applications.** The context window is your canvas. Components compose into it. Tools render their state back into it. Hooks run arbitrary code between ticks — verify output, summarize history, gate continuation. The model's entire world is JSX that you control, down to how individual content blocks render.

There are no prompt templates because JSX _is_ the template language. There are no special abstractions between you and what the model sees — you build it, the framework compiles it, the model reads it. When the model calls a tool, your component re-renders. When you want older messages compressed, you write a component. When you need to verify output before continuing, you write a hook.

This is application development, not chatbot configuration.

## The Context Is Yours

The core insight: **only what you render gets sent to the model.** `<Timeline>` isn't a magic black box — it accepts a render function, and you decide exactly how every message appears in the context window. Skip a message? The model never sees it. Rewrite it? That's what the model reads.

```tsx
<Timeline>
  {(history, pending) => (
    <>
      {history.map((entry, i) => {
        const msg = entry.message;
        const isOld = i < history.length - 6;

        if (isOld && msg.role === "user") {
          const textOnly = msg.content
            .filter((b) => b.type === "text")
            .map((b) => b.text)
            .join(" ");
          return (
            <Message key={i} role="user">
              [Earlier: {textOnly.slice(0, 100)}...]
            </Message>
          );
        }

        if (isOld && msg.role === "assistant") {
          return (
            <Message key={i} role="assistant">
              [Previous response]
            </Message>
          );
        }

        return <Message key={i} {...msg} />;
      })}
      {pending.map((msg, i) => (
        <Message key={`p-${i}`} {...msg.message} />
      ))}
    </>
  )}
</Timeline>
```

Images from 20 messages ago eating your context window? Render them as `[Image: beach sunset]`. Tool results from early in the conversation? Collapse them. Recent messages? Full detail. You write the function, you decide.

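Stripped of JSX, the placeholder trick is just a mapping over content blocks. A minimal sketch — the block shapes below are assumptions based on this README's examples, not imports from the framework:

```typescript
// Replace heavy blocks with short text placeholders; pass everything else through.
// Block shapes are assumed from the README's other examples, not the real types.
type Block =
  | { type: "text"; text: string }
  | { type: "image"; source?: { description?: string } }
  | { type: "tool_result"; name: string };

function toPlaceholder(block: Block): { type: "text"; text: string } | Block {
  switch (block.type) {
    case "image":
      return { type: "text", text: `[Image: ${block.source?.description ?? "image"}]` };
    case "tool_result":
      return { type: "text", text: `[Result from ${block.name}]` };
    default:
      return block;
  }
}
```

Apply it to old messages in your `<Timeline>` render function and leave recent ones untouched.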
120
+ ### Default — Just Works
121
+
122
+ With no children, `<Timeline />` renders conversation history with sensible defaults:
123
+
124
+ ```tsx
125
+ function SimpleAgent() {
126
+ return (
127
+ <>
128
+ <System>You are helpful.</System>
129
+ <Timeline />
130
+ </>
131
+ );
132
+ }
133
+ ```
134
+
135
+ ### Composability — It's React
136
+
137
+ That render logic getting complex? Extract it into a component:
138
+
139
+ ```tsx
140
+ function CompactMessage({ entry }: { entry: COMTimelineEntry }) {
141
+ const msg = entry.message;
142
+
143
+ const summary = msg.content
144
+ .map((block) => {
145
+ switch (block.type) {
146
+ case "text":
147
+ return block.text.slice(0, 80);
148
+ case "image":
149
+ return `[Image: ${block.source?.description ?? "image"}]`;
150
+ case "tool_use":
151
+ return `[Called ${block.name}]`;
152
+ case "tool_result":
153
+ return `[Result from ${block.name}]`;
154
+ default:
155
+ return "";
156
+ }
157
+ })
158
+ .filter(Boolean)
159
+ .join(" | ");
160
+
161
+ return <Message role={msg.role}>{summary}</Message>;
162
+ }
163
+
164
+ function Agent() {
165
+ return (
166
+ <>
167
+ <System>You are helpful.</System>
168
+ <Timeline>
169
+ {(history, pending) => (
170
+ <>
171
+ {history.map((entry, i) =>
172
+ i < history.length - 4 ? (
173
+ <CompactMessage key={i} entry={entry} />
174
+ ) : (
175
+ <Message key={i} {...entry.message} />
176
+ ),
177
+ )}
178
+ {pending.map((msg, i) => (
179
+ <Message key={`p-${i}`} {...msg.message} />
180
+ ))}
181
+ </>
182
+ )}
183
+ </Timeline>
184
+ </>
185
+ );
186
+ }
187
+ ```
188
+
189
+ Or go further — you don't even need `<Timeline>`. Render the entire conversation as a single user message:
190
+
191
+ ```tsx
192
+ function NarrativeAgent() {
193
+ return (
194
+ <>
195
+ <System>Continue the conversation.</System>
196
+ <Timeline>
197
+ {(history) => (
198
+ <User>
199
+ Here's what happened so far:{"\n"}
200
+ {history.map((e) => `${e.message.role}: ${extractText(e)}`).join("\n")}
201
+ </User>
202
+ )}
203
+ </Timeline>
204
+ </>
205
+ );
206
+ }
207
+ ```
208
+
209
+ The framework doesn't care how you structure the context. Multiple messages, one message, XML, prose — anything that compiles to content blocks gets sent.
210
+
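`extractText` in the NarrativeAgent example is left undefined. A minimal sketch, with the entry and block shapes assumed from the other examples rather than taken from the real types:

```typescript
// Minimal sketch of the extractText helper used in the NarrativeAgent example.
// Entry/block shapes are assumptions based on the other examples in this README.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image" }
  | { type: "tool_use"; name: string }
  | { type: "tool_result"; name: string };

interface TimelineEntry {
  message: { role: string; content: ContentBlock[] };
}

function extractText(entry: TimelineEntry): string {
  return entry.message.content
    .filter((b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text")
    .map((b) => b.text)
    .join(" ");
}
```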
### Sections — Structured Context

```tsx
function AgentWithContext({ userId }: { userId: string }) {
  const profile = useData("profile", () => fetchProfile(userId), [userId]);

  return (
    <>
      <System>You are a support agent.</System>
      <Section id="user-context" audience="model">
        Customer: {profile?.name}, Plan: {profile?.plan}, Since: {profile?.joinDate}
      </Section>
      <Timeline />
      <TicketTool />
    </>
  );
}
```

`<Section>` injects structured context that the model sees every tick — live data, computed state, whatever you need. The `audience` prop controls visibility (`"model"`, `"user"`, or `"all"`).

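The gating is easy to picture — a conceptual, framework-free sketch of how `audience` filtering behaves, not the real implementation:

```typescript
// Conceptual sketch only: which sections reach the model's context.
type Audience = "model" | "user" | "all";
interface SectionNode { id: string; audience: Audience; text: string }

// Only sections addressed to the model (or to everyone) get compiled in.
function visibleToModel(sections: SectionNode[]): SectionNode[] {
  return sections.filter((s) => s.audience === "model" || s.audience === "all");
}
```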
232
+ ## Hooks Control the Loop
233
+
234
+ Hooks are real React hooks — `useState`, `useEffect`, `useMemo` — plus lifecycle hooks that fire at each phase of execution.
235
+
236
+ ### Stop Conditions
237
+
238
+ The agent loop auto-continues when the model makes tool calls. `useContinuation` adds your own stop conditions:
239
+
240
+ ```tsx
241
+ useContinuation((result) => !result.text?.includes("<DONE>"));
242
+
243
+ useContinuation((result) => {
244
+ if (result.tick >= 10) {
245
+ result.stop("max-ticks");
246
+ return false;
247
+ }
248
+ if (result.usage && result.usage.totalTokens > 100_000) {
249
+ result.stop("token-budget");
250
+ return false;
251
+ }
252
+ });
253
+ ```
254
+
255
+ ### Between-Tick Logic
256
+
257
+ `useContinuation` is sugar for `useOnTickEnd`. Use the full version when you need to do real work:
258
+
259
+ ```tsx
260
+ function VerifiedAgent() {
261
+ useOnTickEnd(async (result) => {
262
+ if (result.text && !result.toolCalls.length) {
263
+ const quality = await verifyWithModel(result.text);
264
+ if (!quality.acceptable) result.continue("failed-verification");
265
+ }
266
+ });
267
+
268
+ return (
269
+ <>
270
+ <System>Be accurate. Your responses will be verified.</System>
271
+ <Timeline />
272
+ </>
273
+ );
274
+ }
275
+ ```
276
+
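Conceptually, the sugar is a few lines. This is a framework-free sketch with a toy handler registry standing in for the real `useOnTickEnd` — not Agentick's actual implementation:

```typescript
// Conceptual sketch only: a toy tick-end registry, not the real hook system.
type TickResult = { tick: number; text?: string; stop: (reason: string) => void };
type TickEndHandler = (result: TickResult) => boolean | void;

const handlers: TickEndHandler[] = [];
function useOnTickEnd(fn: TickEndHandler) {
  handlers.push(fn);
}

// useContinuation as sugar: when the predicate says "don't continue", stop.
function useContinuation(predicate: (result: TickResult) => boolean) {
  useOnTickEnd((result) => {
    if (!predicate(result)) result.stop("continuation-predicate");
  });
}
```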
### Custom Hooks

Custom hooks work exactly like React — they're just functions that call other hooks:

```tsx
// Reusable hook: stop after a token budget
function useTokenBudget(maxTokens: number) {
  const [spent, setSpent] = useState(0);

  useOnTickEnd((result) => {
    const total = spent + (result.usage?.totalTokens ?? 0);
    setSpent(total);
    if (total > maxTokens) result.stop("budget-exceeded");
  });

  return spent;
}

// Reusable hook: verify output before finishing
function useVerifiedOutput(verifier: (text: string) => Promise<boolean>) {
  useOnTickEnd(async (result) => {
    if (!result.text || result.toolCalls.length > 0) return;
    const ok = await verifier(result.text);
    if (!ok) result.continue("failed-verification");
  });
}

// Compose them — it's just functions
function CarefulAgent() {
  const spent = useTokenBudget(50_000);
  useVerifiedOutput(myVerifier);

  return (
    <>
      <System>You have a token budget. Be concise.</System>
      <Section id="budget" audience="model">
        Tokens used: {spent}
      </Section>
      <Timeline />
    </>
  );
}
```

## Tools Render State

Tools aren't just functions the model calls — they render their state back into the context window. The model sees the current state _every time it thinks_, not just in the tool response.

```tsx
const TodoTool = createTool({
  name: "manage_todos",
  description: "Add, complete, or list todos",
  input: z.object({
    action: z.enum(["add", "complete", "list"]),
    text: z.string().optional(),
    id: z.number().optional(),
  }),
  handler: async ({ action, text, id }, ctx) => {
    if (action === "add") todos.push({ id: todos.length, text, done: false });
    if (action === "complete") todos[id!].done = true;
    if (action === "list") return [{ type: "text", text: JSON.stringify(todos) }];
    return [{ type: "text", text: "Done." }];
  },
  render: () => (
    <Section id="todos" audience="model">
      Current todos: {JSON.stringify(todos)}
    </Section>
  ),
});
```

Everything is dual-use — tools and models work as JSX components in the tree _and_ as direct function calls:

```tsx
// JSX — in the component tree
<Search />
<model temperature={0.2} />

// Direct calls — use programmatically
const output = await Search.run({ query: "test" });
const handle = await model.generate(input);
```

## Sessions

```tsx
const app = createApp(Agent, { model: openai({ model: "gpt-4o" }) });
const session = await app.session("conv-1");

const msg = (text: string) => ({
  role: "user" as const,
  content: [{ type: "text" as const, text }],
});

await session.send({ messages: [msg("Hi there!")] });
await session.send({ messages: [msg("Tell me a joke")] });

// Stream responses
for await (const event of session.send({ messages: [msg("Another one")] })) {
  if (event.type === "content_delta") process.stdout.write(event.delta);
}

session.close();
```

### Dynamic Model Selection

Models are JSX components — conditionally render them:

```tsx
const gpt = openai({ model: "gpt-4o" });
const gemini = google({ model: "gemini-2.5-pro" });

function AdaptiveAgent({ task }: { task: string }) {
  return (
    <>
      {task.includes("creative") ? <gemini temperature={0.9} /> : <gpt temperature={0.2} />}
      <System>Handle this task: {task}</System>
      <Timeline />
    </>
  );
}
```

## Packages

| Package               | Description                                                  |
| --------------------- | ------------------------------------------------------------ |
| `@agentick/core`      | Reconciler, components, hooks, tools, sessions               |
| `@agentick/kernel`    | Execution kernel — procedures, context, middleware, channels |
| `@agentick/shared`    | Platform-independent types and utilities                     |
| `@agentick/openai`    | OpenAI adapter (GPT-4o, o1, etc.)                            |
| `@agentick/google`    | Google AI adapter (Gemini)                                   |
| `@agentick/ai-sdk`    | Vercel AI SDK adapter (any provider)                         |
| `@agentick/gateway`   | Multi-app server with auth, routing, and channels            |
| `@agentick/express`   | Express.js integration                                       |
| `@agentick/nestjs`    | NestJS integration                                           |
| `@agentick/client`    | TypeScript client for gateway connections                    |
| `@agentick/react`     | React hooks for building UIs over sessions                   |
| `@agentick/devtools`  | Fiber tree inspector, tick scrubber, token tracker           |
| `@agentick/cli`       | CLI for running agents                                       |
| `@agentick/server`    | Server utilities                                             |
| `@agentick/socket.io` | Socket.IO transport                                          |

420
+ ## Adapters
421
+
422
+ Three built-in, same interface. Or build your own — implement `prepareInput`, `mapChunk`, `execute`, and `executeStream`. See [`packages/adapters/README.md`](packages/adapters/README.md).
423
+
424
+ ```tsx
425
+ import { openai } from "@agentick/openai";
426
+ import { google } from "@agentick/google";
427
+ import { aiSdk } from "@agentick/ai-sdk";
428
+
429
+ const gpt = openai({ model: "gpt-4o" });
430
+ const gemini = google({ model: "gemini-2.5-pro" });
431
+ const sdk = aiSdk({ model: yourAiSdkModel });
432
+ ```
433
+
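A custom adapter is just an object with those four methods. Here's a toy, provider-free sketch — the `Adapter` interface and the input/output/chunk shapes are assumptions for illustration, not the real `@agentick` types; the actual contract lives in the adapters README:

```typescript
// Hypothetical shapes for illustration only; see packages/adapters for the real contract.
interface ModelInput {
  messages: { role: string; content: { type: "text"; text: string }[] }[];
}
interface ModelOutput { text: string }
interface Chunk { delta: string }

interface Adapter<Raw, RawChunk> {
  prepareInput(input: ModelInput): Raw;             // framework input -> provider request
  execute(raw: Raw): Promise<ModelOutput>;          // one-shot call
  executeStream(raw: Raw): AsyncIterable<RawChunk>; // streaming call
  mapChunk(chunk: RawChunk): Chunk;                 // provider chunk -> framework chunk
}

// A toy "echo" adapter wired to no provider at all.
const echoAdapter: Adapter<string, string> = {
  prepareInput: (input) =>
    input.messages.map((m) => m.content.map((b) => b.text).join(" ")).join("\n"),
  execute: async (prompt) => ({ text: `echo: ${prompt}` }),
  executeStream: async function* (prompt) {
    for (const word of prompt.split(" ")) yield word;
  },
  mapChunk: (chunk) => ({ delta: chunk }),
};
```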
## DevTools

```tsx
const app = createApp(Agent, { model, devTools: true });
```

Fiber tree inspector, tick-by-tick scrubber, token usage tracking, real-time execution timeline. Record full sessions for replay with `session({ recording: 'full' })`.

## Gateway

Deploy multiple apps behind a single server with auth, routing, and channel adapters:

```tsx
import { createGateway } from "@agentick/gateway";

const gateway = createGateway({
  apps: { support: supportApp, sales: salesApp },
  defaultApp: "support",
  auth: { type: "token", token: process.env.API_TOKEN! },
});
```

## License

MIT
package/dist/index.d.ts CHANGED
@@ -1,2 +1,4 @@
  export * from "@agentick/core";
+ export * from "@agentick/agent";
+ export * from "@agentick/guardrails";
  //# sourceMappingURL=index.d.ts.map
package/dist/index.d.ts.map CHANGED
@@ -1 +1 @@
- {"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA,cAAc,gBAAgB,CAAC"}
+ {"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA,cAAc,gBAAgB,CAAC;AAC/B,cAAc,iBAAiB,CAAC;AAChC,cAAc,sBAAsB,CAAC"}
package/dist/index.js CHANGED
@@ -1,2 +1,4 @@
  export * from "@agentick/core";
+ export * from "@agentick/agent";
+ export * from "@agentick/guardrails";
  //# sourceMappingURL=index.js.map
package/dist/index.js.map CHANGED
@@ -1 +1 @@
- {"version":3,"file":"index.js","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA,cAAc,gBAAgB,CAAC"}
+ {"version":3,"file":"index.js","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA,cAAc,gBAAgB,CAAC;AAC/B,cAAc,iBAAiB,CAAC;AAChC,cAAc,sBAAsB,CAAC"}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "agentick",
-   "version": "0.1.9",
+   "version": "0.2.1",
    "description": "Build agents like you build apps.",
    "keywords": [
      "agent",
@@ -31,7 +31,9 @@
      "access": "public"
    },
    "dependencies": {
-     "@agentick/core": "0.1.9"
+     "@agentick/agent": "0.2.1",
+     "@agentick/guardrails": "0.2.1",
+     "@agentick/core": "0.2.1"
    },
    "scripts": {
      "build": "tsc -p tsconfig.build.json",