@a3s-lab/code 0.3.1 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -5,7 +5,7 @@
  </p>

  <p align="center">
- <em>Full-featured gRPC client for building AI coding agent applications</em>
+ <em>Session-centric AI coding agent SDK with Vercel AI SDK-compatible convenience API</em>
  </p>

  <p align="center">
@@ -16,9 +16,10 @@
  </p>

  <p align="center">
- <a href="#features">Features</a> •
- <a href="#installation">Installation</a> •
  <a href="#quick-start">Quick Start</a> •
+ <a href="#session-api">Session API</a> •
+ <a href="#tool-calling">Tool Calling</a> •
+ <a href="#convenience-api">Convenience API</a> •
  <a href="#api-reference">API Reference</a> •
  <a href="./examples">Examples</a>
  </p>
@@ -27,572 +28,438 @@

  ## Overview

- **@a3s-lab/code** is the official TypeScript SDK for [A3S Code](https://github.com/a3s-lab/a3s), providing a complete gRPC client implementation for the CodeAgentService API. Build AI-powered coding assistants, IDE integrations, and automation tools with full access to A3S Code's capabilities.
-
- ### Why This SDK?
-
- - **100% API Coverage**: All 53 RPCs from CodeAgentService fully implemented
- - **Type-Safe**: Full TypeScript definitions for all types and enums
- - **Async/Await**: Modern Promise-based API with streaming support
- - **Flexible Configuration**: Environment variables, config files, or programmatic setup
- - **Real-World Examples**: Comprehensive examples with real LLM API integration
-
- ## Features
-
- | Category | Features |
- |----------|----------|
- | **Lifecycle** | Health check, capabilities, initialization, shutdown |
- | **Sessions** | Create, list, get, configure, destroy with persistence |
- | **Generation** | Streaming/non-streaming responses, structured output, context compaction |
- | **Tools** | Load/unload skills, list available tools, custom tool registration |
- | **Context** | Add/clear context, manage conversation history, usage monitoring |
- | **Control** | Abort operations, pause/resume, cancel confirmations |
- | **Events** | Subscribe to real-time agent events, tool execution tracking |
- | **HITL** | Human-in-the-loop confirmations, approval workflows |
- | **Permissions** | Fine-grained permission policies, tool access control |
- | **Todos** | Task tracking for multi-step workflows, goal management |
- | **Providers** | Multi-provider LLM configuration (Anthropic, OpenAI, KIMI, etc.) |
- | **Planning** | Execution plans, goal extraction, achievement checking |
- | **Memory** | Episodic/semantic/procedural memory storage and retrieval |
- | **Storage** | Configurable session storage (memory, file, custom) |
+ **@a3s-lab/code** is the official TypeScript SDK for [A3S Code](https://github.com/a3s-lab/a3s). The SDK is session-centric — every interaction goes through a `Session` object:
+
+ 1. **Session API** — The core pattern. Create a session with `client.createSession()`, then call `session.generateText()`, `session.streamText()`, etc. Model and workspace are bound at creation time and immutable. Supports `await using` for automatic cleanup.
+
+ 2. **Convenience API** — Standalone functions (`generateText`, `streamText`, etc.) that create temporary sessions under the hood. Great for one-shot operations.

  ## Installation

  ```bash
  npm install @a3s-lab/code
- # or
- yarn add @a3s-lab/code
- # or
- pnpm add @a3s-lab/code
  ```

  ## Quick Start

- ### Basic Usage
-
  ```typescript
- import { A3sClient } from '@a3s-lab/code';
+ import { A3sClient, createProvider } from '@a3s-lab/code';

- // Create client with default config
- const client = new A3sClient();
-
- // Or with explicit address
  const client = new A3sClient({ address: 'localhost:4088' });
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

- // Or load from config file
- const client = new A3sClient({ configDir: '/path/to/.a3s' });
-
- async function main() {
- // Check health
- const health = await client.healthCheck();
- console.log('Agent status:', health.status);
-
- // Create a session
- const session = await client.createSession({
- name: 'my-session',
- workspace: '/path/to/project',
- systemPrompt: 'You are a helpful coding assistant.',
- });
+ // Create session — model and workspace bound here, immutable after
+ await using session = await client.createSession({
+ model: openai('gpt-4o'),
+ workspace: '/project',
+ system: 'You are a helpful coding assistant.',
+ });

- // Generate a response (streaming)
- for await (const chunk of client.streamGenerate(session.sessionId, [
- { role: 'user', content: 'Explain this codebase structure' }
- ])) {
- if (chunk.type === 'CHUNK_TYPE_CONTENT' && chunk.content) {
- process.stdout.write(chunk.content);
- }
- }
+ // Generate text
+ const { text } = await session.generateText({
+ prompt: 'Explain this codebase',
+ });
+ console.log(text);

- // Clean up
- await client.destroySession(session.sessionId);
- client.close();
+ // Stream text
+ const { textStream } = session.streamText({
+ prompt: 'Now refactor it',
+ });
+ for await (const chunk of textStream) {
+ process.stdout.write(chunk);
  }

- main().catch(console.error);
- ```
-
- ### 📚 Complete Examples
-
- See the [examples](./examples) directory for comprehensive, runnable examples:
-
- | Example | Description | Run |
- |---------|-------------|-----|
- | [kimi-test.ts](./examples/src/kimi-test.ts) | Test with KIMI K2.5 model | `npm run kimi-test` |
- | [chat-simulation.ts](./examples/src/chat-simulation.ts) | Multi-turn chat with skills | `npm run chat` |
- | [code-generation-interactive.ts](./examples/src/code-generation-interactive.ts) | Interactive code generation | `npm run code-gen` |
- | [skill-usage-demo.ts](./examples/src/skill-usage-demo.ts) | Skill loading and usage | `npm run skill-demo` |
- | [simple-test.ts](./examples/src/simple-test.ts) | Basic SDK usage | `npm run dev` |
- | [storage-configuration.ts](./examples/src/storage-configuration.ts) | Memory vs file storage | `npm run storage` |
- | [hitl-confirmation.ts](./examples/src/hitl-confirmation.ts) | Human-in-the-loop | `npm run hitl` |
- | [provider-config.ts](./examples/src/provider-config.ts) | Provider management | `npm run provider` |
- | [context-management.ts](./examples/src/context-management.ts) | Context monitoring | `npm run context` |
- | [code-review-agent.ts](./examples/src/code-review-agent.ts) | Complete production example | `npm run code-review` |
-
- **Quick start with examples:**
-
- ```bash
- cd examples
- npm install
-
- # Test with KIMI model (recommended)
- npm run kimi-test
+ // Multi-turn: session remembers context
+ const { text: followUp } = await session.generateText({
+ prompt: 'What about error handling?',
+ });

- # Try chat simulation
- npm run chat
+ // Context management
+ const usage = await session.getContextUsage();
+ await session.compactContext();
+ // session.close() called automatically via `await using`
  ```

- See [examples/README.md](./examples/README.md) for detailed documentation and [TESTING_WITH_REAL_MODELS.md](./examples/TESTING_WITH_REAL_MODELS.md) for API configuration guide.
+ ## Session API

- ## Usage Examples
+ Sessions are the core concept in A3S Code. Each session binds a model and workspace at creation time (immutable). The session maintains conversation history, context, permissions, and tool state.

- ### Multi-Turn Conversations
+ ### Session Lifecycle

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
-
- const client = new A3sClient({ configDir: '~/.a3s' });
-
- async function multiTurnChat() {
- await client.connect();
+ import { A3sClient, createProvider } from '@a3s-lab/code';

- const { sessionId } = await client.createSession({
- name: 'chat-session',
- workspace: '/path/to/project',
- });
+ const client = new A3sClient({ address: 'localhost:4088' });
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

- // First turn
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'List all TypeScript files in this project' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
- }
+ // Create session — model and workspace are immutable after creation
+ const session = await client.createSession({
+ model: openai('gpt-4o'),
+ workspace: '/project',
+ system: 'You are a code reviewer.',
+ });

- // Second turn - context is preserved
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'Now analyze the main entry point' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
- }
+ // Multi-turn conversation (session remembers context)
+ await session.generateText({ prompt: 'Review the auth module' });
+ await session.generateText({ prompt: 'What about error handling?' });

- // Get conversation history
- const { messages } = await client.getMessages(sessionId, { limit: 10 });
- console.log(`\nConversation has ${messages.length} messages`);
+ // Context management
+ const usage = await session.getContextUsage();
+ console.log(`Tokens: ${usage?.totalTokens}`);
+ await session.compactContext(); // Compress when large

- await client.destroySession(sessionId);
- client.close();
- }
+ // Cleanup
+ await session.close();
  ```

- ### Event Subscription
+ ### Auto-Cleanup with `using`

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
-
- const client = new A3sClient();
-
- async function subscribeToEvents() {
- await client.connect();
-
- const { sessionId } = await client.createSession({
- name: 'event-demo',
- workspace: '/tmp/workspace',
+ // `await using` calls session.close() automatically when the block exits
+ {
+ await using session = await client.createSession({
+ model: openai('gpt-4o'),
+ workspace: '/project',
  });

- // Subscribe to all events
- const eventStream = client.subscribeEvents(sessionId);
+ const { text } = await session.generateText({ prompt: 'Hello' });
+ // session.close() called automatically here
+ }
+ ```

- // Handle events in background
- (async () => {
- for await (const event of eventStream) {
- console.log(`[${event.type}] ${event.message}`);
+ ### Streaming

- if (event.type === 'EVENT_TYPE_TOOL_CALLED') {
- console.log(` Tool: ${event.metadata?.tool_name}`);
- }
+ ```typescript
+ const { textStream, fullStream, toolStream, text, steps } = session.streamText({
+ prompt: 'Explain this codebase',
+ tools: { weather: weatherTool },
+ maxSteps: 5,
+ });

- if (event.type === 'EVENT_TYPE_CONFIRMATION_REQUIRED') {
- console.log(` Confirmation needed for: ${event.metadata?.tool_name}`);
- }
- }
- })();
+ // Text-only stream
+ for await (const chunk of textStream) {
+ process.stdout.write(chunk);
+ }

- // Generate with tool usage
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'Read the README.md file' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
+ // Or full event stream
+ for await (const chunk of fullStream) {
+ switch (chunk.type) {
+ case 'content':
+ process.stdout.write(chunk.content);
+ break;
+ case 'tool_call':
+ console.log(`Tool: ${chunk.toolCall?.name}`);
+ break;
+ case 'done':
+ console.log(`\nFinish: ${chunk.finishReason}`);
+ break;
  }
-
- await client.destroySession(sessionId);
- client.close();
  }
  ```

- ### Human-in-the-Loop (HITL)
+ ### Structured Output

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
-
- const client = new A3sClient();
-
- async function hitlDemo() {
- await client.connect();
-
- const { sessionId } = await client.createSession({
- name: 'hitl-demo',
- workspace: '/path/to/project',
- });
-
- // Set confirmation policy - require approval for bash commands
- await client.setConfirmationPolicy(sessionId, {
- defaultAction: 'TIMEOUT_ACTION_REJECT',
- timeoutMs: 30000,
- rules: [
- {
- toolPattern: 'bash',
- action: 'TIMEOUT_ACTION_REJECT',
- requireConfirmation: true,
- }
- ]
- });
-
- // Subscribe to events to detect confirmation requests
- const eventStream = client.subscribeEvents(sessionId);
-
- (async () => {
- for await (const event of eventStream) {
- if (event.type === 'EVENT_TYPE_CONFIRMATION_REQUIRED') {
- const toolName = event.metadata?.tool_name;
- const toolArgs = event.metadata?.tool_args;
-
- console.log(`\nConfirmation required:`);
- console.log(` Tool: ${toolName}`);
- console.log(` Args: ${toolArgs}`);
-
- // Auto-approve for demo (in real app, prompt user)
- const approved = true;
+ // Unary
+ const { object, data, usage } = await session.generateObject({
+ schema: JSON.stringify({
+ type: 'object',
+ properties: { summary: { type: 'string' }, files: { type: 'array' } },
+ }),
+ prompt: 'Analyze this project',
+ });

- await client.confirmToolExecution(sessionId, {
- approved,
- reason: approved ? 'User approved' : 'User rejected',
- });
- }
- }
- })();
+ // Streaming
+ const { partialStream, object: finalObject } = session.streamObject({
+ schema: '{"type":"object","properties":{"items":{"type":"array"}}}',
+ prompt: 'List project dependencies',
+ });
+ for await (const partial of partialStream) {
+ process.stdout.write(partial);
+ }
+ const result = await finalObject;
+ ```

- // This will trigger confirmation
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'Run "ls -la" command' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
- }
+ ### Events (Low-Level Client)

- await client.destroySession(sessionId);
- client.close();
+ ```typescript
+ for await (const event of client.subscribeEvents(session.id)) {
+ console.log(`[${event.type}] ${event.message}`);
  }
  ```

- ### Permission Policies
+ ## Tool Calling
+
+ Define client-side tools with `tool()` and enable multi-step agent behavior with `maxSteps`:

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
+ import { A3sClient, createProvider, tool } from '@a3s-lab/code';

  const client = new A3sClient();
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });
+
+ const weather = tool({
+ description: 'Get weather for a city',
+ parameters: {
+ type: 'object',
+ properties: {
+ city: { type: 'string', description: 'City name' },
+ },
+ required: ['city'],
+ },
+ execute: async ({ city }) => ({
+ city,
+ temperature: 72,
+ condition: 'sunny',
+ }),
+ });

- async function permissionDemo() {
- await client.connect();
-
- const { sessionId } = await client.createSession({
- name: 'permission-demo',
- workspace: '/path/to/project',
- });
+ await using session = await client.createSession({
+ model: openai('gpt-4o'),
+ system: 'You are a helpful assistant with weather tools.',
+ });

- // Set permission policy - read-only mode
- await client.setPermissionPolicy(sessionId, {
- defaultDecision: 'PERMISSION_DECISION_DENY',
- rules: [
- {
- toolPattern: 'read',
- decision: 'PERMISSION_DECISION_ALLOW',
- },
- {
- toolPattern: 'grep',
- decision: 'PERMISSION_DECISION_ALLOW',
- },
- {
- toolPattern: 'glob',
- decision: 'PERMISSION_DECISION_ALLOW',
- },
- {
- toolPattern: 'ls',
- decision: 'PERMISSION_DECISION_ALLOW',
- },
- {
- toolPattern: 'write',
- decision: 'PERMISSION_DECISION_DENY',
- },
- {
- toolPattern: 'bash',
- decision: 'PERMISSION_DECISION_ASK',
- }
- ]
- });
+ // Multi-step: model calls tools, gets results, continues reasoning
+ const { text, steps } = await session.generateText({
+ prompt: 'What is the weather in Tokyo and Paris?',
+ tools: { weather },
+ maxSteps: 5,
+ onStepFinish: (step) => {
+ console.log(`Step ${step.stepIndex}: ${step.toolCalls.length} tool calls`);
+ },
+ onToolCall: ({ toolName, args }) => {
+ console.log(`Calling ${toolName}`, args);
+ },
+ });

- // Check permission before operation
- const canWrite = await client.checkPermission(sessionId, {
- toolName: 'write',
- args: { file_path: '/tmp/test.txt' }
- });
+ console.log(text);
+ console.log(`Completed in ${steps.length} steps`);
+ ```

- console.log(`Can write: ${canWrite.decision === 'PERMISSION_DECISION_ALLOW'}`);
+ ### Streaming with Tools

- // This will be allowed (read-only tools)
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'List all files in the current directory' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
- }
+ ```typescript
+ const { textStream, toolStream } = session.streamText({
+ prompt: 'Check the weather everywhere',
+ tools: { weather },
+ maxSteps: 5,
+ });

- await client.destroySession(sessionId);
- client.close();
+ for await (const chunk of textStream) {
+ process.stdout.write(chunk);
  }
  ```

- ### Provider Configuration
+ ### Tools Without Execute (onToolCall)

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
-
- const client = new A3sClient();
-
- async function providerDemo() {
- await client.connect();
-
- // List available providers
- const { providers } = await client.listProviders();
- console.log('Available providers:', providers.map(p => p.name));
-
- // Add a new provider
- await client.addProvider({
- name: 'openai',
- apiKey: 'sk-...',
- baseUrl: 'https://api.openai.com/v1',
- models: [
- {
- id: 'gpt-4',
- name: 'GPT-4',
- family: 'gpt',
- toolCall: true,
- }
- ]
- });
-
- // Set default model
- await client.setDefaultModel('openai', 'gpt-4');
-
- // Get current default
- const { provider, model } = await client.getDefaultModel();
- console.log(`Default: ${provider}/${model}`);
-
- // Create session with specific model
- const { sessionId } = await client.createSession({
- name: 'openai-session',
- workspace: '/tmp/workspace',
- llmConfig: {
- provider: 'openai',
- model: 'gpt-4',
- temperature: 0.7,
+ const { text } = await session.generateText({
+ prompt: 'Look up the user profile',
+ tools: {
+ getUser: tool({
+ description: 'Get user profile by ID',
+ parameters: {
+ type: 'object',
+ properties: { userId: { type: 'string' } },
+ },
+ // No execute — handled by onToolCall
+ }),
+ },
+ maxSteps: 3,
+ onToolCall: async ({ toolName, args }) => {
+ if (toolName === 'getUser') {
+ return { name: 'Alice', role: 'admin' };
  }
- });
-
- await client.destroySession(sessionId);
- client.close();
- }
+ },
+ });
  ```

- ### Context Management
+ ## Convenience API
+
+ Standalone functions that create temporary sessions under the hood. Useful for one-shot operations:

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
+ import { generateText, streamText, createProvider } from '@a3s-lab/code';

- const client = new A3sClient();
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

- async function contextDemo() {
- await client.connect();
+ // One-shot generation (auto session)
+ const { text } = await generateText({
+ model: openai('gpt-4o'),
+ prompt: 'Explain this codebase',
+ workspace: '/project',
+ });

- const { sessionId } = await client.createSession({
- name: 'context-demo',
- workspace: '/path/to/project',
- });
+ // Streaming (auto session)
+ const { textStream } = streamText({
+ model: openai('gpt-4o'),
+ prompt: 'Explain this codebase',
+ });
+ for await (const chunk of textStream) {
+ process.stdout.write(chunk);
+ }
+ ```

- // Have a long conversation...
- for (let i = 0; i < 10; i++) {
- await client.generate(sessionId, [
- { role: 'user', content: `Question ${i + 1}: Tell me about this project` }
- ]);
- }
+ ### createChat (Convenience Wrapper)

- // Check context usage
- const usage = await client.getContextUsage(sessionId);
- console.log(`Context tokens: ${usage.totalTokens}/${usage.maxTokens}`);
- console.log(`Messages: ${usage.messageCount}`);
+ `createChat()` is a convenience wrapper that manages a session internally:

- if (usage.totalTokens > usage.maxTokens * 0.8) {
- console.log('Context is getting full, compacting...');
+ ```typescript
+ import { createChat, createProvider } from '@a3s-lab/code';

- // Compact context using LLM summarization
- const result = await client.compactContext(sessionId);
- console.log(`Compacted: ${result.originalMessages} → ${result.compactedMessages} messages`);
- console.log(`Saved: ${result.tokensSaved} tokens`);
- }
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

- await client.destroySession(sessionId);
- client.close();
+ const chat = createChat({
+ model: openai('gpt-4o'),
+ workspace: '/project',
+ system: 'You are a helpful code assistant',
+ });
+
+ const { text } = await chat.send('What does main.rs do?');
+ const { textStream } = chat.stream('Now refactor it');
+ for await (const chunk of textStream) {
+ process.stdout.write(chunk);
  }
+ await chat.close();
  ```

- ### Skills Management
+ For new code, prefer using `Session` directly — it provides the same functionality with more control.

- ```typescript
- import { A3sClient } from '@a3s-lab/code';
- import { readFileSync } from 'fs';
+ ## Message Conversion (UIMessage ↔ ModelMessage)

- const client = new A3sClient();
+ The SDK provides Vercel AI SDK-style message types for frontend ↔ backend conversion:

- async function skillsDemo() {
- await client.connect();
-
- const { sessionId } = await client.createSession({
- name: 'skills-demo',
- workspace: '/path/to/project',
- });
+ - `UIMessage` — Frontend format with `id`, `createdAt`, `parts` (for rendering in chat UIs)
+ - `ModelMessage` — Backend format with `role`, `content` (for LLM / generateText / streamText)

- // Load a custom skill from markdown file
- const skillContent = readFileSync('./my-skill.md', 'utf-8');
+ ### Frontend → Backend

- await client.loadSkill(sessionId, 'my-custom-tool', skillContent);
-
- // List all available skills
- const { skills } = await client.listSkills(sessionId);
- console.log('Available skills:', skills.map(s => s.name));
-
- // Use the custom skill
- for await (const chunk of client.streamGenerate(sessionId, [
- { role: 'user', content: 'Use my-custom-tool to process data' }
- ])) {
- if (chunk.content) process.stdout.write(chunk.content);
- }
+ ```typescript
+ import { convertToModelMessages, generateText, createProvider } from '@a3s-lab/code';
+ import type { UIMessage } from '@a3s-lab/code';
+
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });
+
+ // UIMessages from your frontend (e.g., useChat hook, database, etc.)
+ const uiMessages: UIMessage[] = [
+ {
+ id: 'msg-1',
+ role: 'user',
+ content: 'What does main.rs do?',
+ parts: [{ type: 'text', text: 'What does main.rs do?' }],
+ createdAt: new Date(),
+ },
+ ];
+
+ // Convert to model format before calling generateText/streamText
+ const modelMessages = convertToModelMessages(uiMessages);
+ const { text } = await generateText({
+ model: openai('gpt-4o'),
+ messages: modelMessages,
+ });
+ ```

- // Unload the skill
- await client.unloadSkill(sessionId, 'my-custom-tool');
+ ### Backend → Frontend

- await client.destroySession(sessionId);
- client.close();
- }
+ ```typescript
+ import { convertToUIMessages } from '@a3s-lab/code';
+ import type { ModelMessage } from '@a3s-lab/code';
+
+ // ModelMessages from LLM response or database
+ const modelMessages: ModelMessage[] = [
+ { role: 'user', content: 'Hello' },
+ { role: 'assistant', content: 'Hi! How can I help?' },
+ ];
+
+ // Convert to UIMessage format for rendering
+ const uiMessages = convertToUIMessages(modelMessages);
+ // uiMessages[0].parts → [{ type: 'text', text: 'Hello' }]
+ // uiMessages[1].parts → [{ type: 'text', text: 'Hi! How can I help?' }]
  ```

- ### Todo/Task Tracking
+ ### A3S Message ↔ UIMessage (Shorthand)

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
+ import { a3sMessagesToUI, uiMessagesToA3s } from '@a3s-lab/code';

- const client = new A3sClient();
+ // A3S session messages → UIMessage (for rendering)
+ const messages = await client.getMessages(sessionId);
+ const uiMessages = a3sMessagesToUI(messages.messages);

- async function todoDemo() {
- await client.connect();
+ // UIMessage → A3S messages (for session generation)
+ const a3sMessages = uiMessagesToA3s(uiMessages);
+ await client.generate(sessionId, a3sMessages);
+ ```

- const { sessionId } = await client.createSession({
- name: 'todo-demo',
- workspace: '/path/to/project',
- });
+ ### Tool Invocations in UIMessage

- // Set initial todos
- await client.setTodos(sessionId, [
+ UIMessages support rich tool invocation parts for rendering tool calls in chat UIs:
+
+ ```typescript
+ const assistantMessage: UIMessage = {
+ id: 'msg-2',
+ role: 'assistant',
+ content: 'The weather in Tokyo is 22°C.',
+ parts: [
  {
- id: '1',
- title: 'Analyze codebase structure',
- description: 'Understand the project layout',
- completed: false,
+ type: 'tool-invocation',
+ toolInvocation: {
+ toolCallId: 'call-1',
+ toolName: 'weather',
+ args: { city: 'Tokyo' },
+ state: 'result',
+ result: { temperature: 22, condition: 'sunny' },
+ },
  },
- {
- id: '2',
- title: 'Fix bug in authentication',
- description: 'User login fails with invalid token',
- completed: false,
- }
- ]);
-
- // Agent works on tasks...
- await client.generate(sessionId, [
- { role: 'user', content: 'Complete the first todo item' }
- ]);
-
- // Get updated todos
- const { todos } = await client.getTodos(sessionId);
- console.log('Todos:');
- todos.forEach(todo => {
- const status = todo.completed ? '✓' : '○';
- console.log(` ${status} ${todo.title}`);
- });
+ { type: 'text', text: 'The weather in Tokyo is 22°C and sunny.' },
+ ],
+ };

- await client.destroySession(sessionId);
- client.close();
- }
+ // Converts to: assistant message with toolCalls + tool result message
+ const modelMessages = convertToModelMessages([assistantMessage]);
  ```

- ### Operation Control
+ ## Structured Output
544
423
 
545
424
  ```typescript
546
- import { A3sClient } from '@a3s-lab/code';
547
-
548
- const client = new A3sClient();
549
-
550
- async function controlDemo() {
551
- await client.connect();
552
-
553
- const { sessionId } = await client.createSession({
554
- name: 'control-demo',
555
- workspace: '/path/to/project',
556
- });
557
-
558
- // Start a long-running operation
559
- const generatePromise = (async () => {
560
- for await (const chunk of client.streamGenerate(sessionId, [
561
- { role: 'user', content: 'Analyze all files in this large project' }
562
- ])) {
563
- if (chunk.content) process.stdout.write(chunk.content);
564
- }
565
- })();
566
-
567
- // Cancel after 5 seconds
568
- setTimeout(async () => {
569
- console.log('\nCancelling operation...');
570
- await client.cancel(sessionId);
571
- }, 5000);
572
-
573
- try {
574
- await generatePromise;
575
- } catch (error) {
576
- console.log('Operation was cancelled');
577
- }
578
-
579
- // Pause and resume
580
- await client.pause(sessionId);
581
- console.log('Session paused');
425
+ import { generateObject, streamObject, createProvider } from '@a3s-lab/code';
426
+
427
+ const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });
428
+
429
+ // Generate a typed object
430
+ const { object } = await generateObject({
431
+ model: openai('gpt-4o'),
432
+ schema: JSON.stringify({
433
+ type: 'object',
434
+ properties: {
435
+ summary: { type: 'string' },
436
+ files: { type: 'array', items: { type: 'string' } },
437
+ complexity: { type: 'string', enum: ['low', 'medium', 'high'] },
438
+ },
439
+ required: ['summary', 'files', 'complexity'],
440
+ }),
441
+ prompt: 'Analyze this project structure',
442
+ workspace: '/project',
443
+ });
582
444
 
583
- await client.resume(sessionId);
584
- console.log('Session resumed');
445
+ // Stream partial results
446
+ const { partialStream, object: finalObject } = streamObject({
447
+ model: openai('gpt-4o'),
448
+ schema: '{"type":"object","properties":{"items":{"type":"array"}}}',
449
+ prompt: 'List all project dependencies',
450
+ });
585
451
 
586
- await client.destroySession(sessionId);
587
- client.close();
452
+ for await (const partial of partialStream) {
453
+ process.stdout.write(partial);
588
454
  }
455
+ const result = await finalObject;
589
456
  ```
590
457

  ## Configuration

  ### Using Real LLM APIs

- The SDK requires a running A3S Code service configured with real LLM API credentials. See [examples/TESTING_WITH_REAL_MODELS.md](./examples/TESTING_WITH_REAL_MODELS.md) for detailed setup instructions.
+ The SDK requires a running A3S Code service. See [examples/TESTING_WITH_REAL_MODELS.md](./examples/TESTING_WITH_REAL_MODELS.md) for detailed setup.

  **Quick setup:**

@@ -639,31 +506,28 @@ cd /path/to/a3s
  ./target/debug/a3s-code -d .a3s -w /tmp/a3s-workspace
  ```

- 3. **Use SDK with config:**
+ 3. **Use SDK:**

  ```typescript
- import { A3sClient } from '@a3s-lab/code';
+ import { A3sClient, loadConfigFromDir } from '@a3s-lab/code';

- // Load configuration from A3S Code config directory
+ const config = loadConfigFromDir('/path/to/a3s/.a3s');
  const client = new A3sClient({
-   address: 'localhost:4088',
-   configDir: '/path/to/a3s/.a3s'
+   address: config.address || 'localhost:4088',
+   configDir: '/path/to/a3s/.a3s',
  });

- // Create session - will use default model from config
- const session = await client.createSession({
+ // createSession uses the default model from the config
+ const { sessionId } = await client.createSession({
    name: 'my-session',
-   workspace: '/tmp/workspace'
+   workspace: '/tmp/workspace',
  });

  // Or specify model explicitly
- const session = await client.createSession({
+ const { sessionId: s2 } = await client.createSession({
    name: 'my-session',
    workspace: '/tmp/workspace',
-   llm: {
-     provider: 'openai',
-     model: 'kimi-k2.5'
-   }
+   llm: { provider: 'openai', model: 'kimi-k2.5' },
  });
  ```

@@ -674,48 +538,46 @@ const session = await client.createSession({
  | `A3S_ADDRESS` | gRPC server address | `localhost:4088` |
  | `A3S_CONFIG_DIR` | Configuration directory | - |

677
- ### Programmatic Configuration
678
-
679
- ```typescript
680
- import { A3sClient, loadConfigFromDir } from '@a3s-lab/code';
681
-
682
- // Load config from directory
683
- const config = loadConfigFromDir('/path/to/.a3s');
684
-
685
- // Create client with loaded config
686
- const client = new A3sClient({
687
- address: config.address || 'localhost:4088',
688
- configDir: '/path/to/.a3s'
689
- });
690
-
691
- // Access config values
692
- console.log('Default provider:', config.defaultProvider);
693
- console.log('Default model:', config.defaultModel);
694
- console.log('API key:', config.apiKey ? '(set)' : '(not set)');
695
- ```
696
-
697
- ### Legacy Config File Format
698
-
699
- ```json
700
- {
701
- "address": "localhost:4088",
702
- "defaultProvider": "anthropic",
703
- "defaultModel": "claude-sonnet-4-20250514",
704
- "providers": [
705
- {
706
- "name": "anthropic",
707
- "apiKey": "sk-ant-...",
708
- "models": [
709
- { "id": "claude-sonnet-4-20250514", "name": "Claude Sonnet 4" }
710
- ]
711
- }
712
- ]
713
- }
714
- ```
715
-
716
541
  ## API Reference

- ### Client Methods
+ ### Session (Core)
+
+ | Method | Description |
+ |--------|-------------|
+ | `session.generateText(options)` | Generate text; supports tools and `maxSteps` |
+ | `session.streamText(options)` | Stream text; returns `textStream`/`fullStream`/`toolStream` |
+ | `session.generateObject(options)` | Generate structured JSON output |
+ | `session.streamObject(options)` | Stream structured JSON output |
+ | `session.getContextUsage()` | Get context token usage |
+ | `session.compactContext()` | Compact the session context |
+ | `session.clearContext()` | Clear conversation history |
+ | `session.getMessages(limit?)` | Get conversation messages |
+ | `session.close()` | Close the session and release resources |
+ | `session.id` | Session ID (read-only) |
+ | `session.closed` | Whether the session is closed (read-only) |
+
+ ### Convenience Functions
560
+
561
+ | Function | Description |
562
+ |----------|-------------|
563
+ | `generateText(options)` | Generate text (auto session) |
564
+ | `streamText(options)` | Stream text (auto session) |
565
+ | `generateObject(options)` | Generate structured output (auto session) |
566
+ | `streamObject(options)` | Stream structured output (auto session) |
567
+ | `createChat(options)` | Create multi-turn chat (auto session) |
568
+ | `createProvider(options)` | Create provider factory for model selection |
569
+ | `tool(definition)` | Define a client-side tool |
570
+
571
+ ### Message Conversion
572
+
573
+ | Function | Description |
574
+ |----------|-------------|
575
+ | `convertToModelMessages(uiMessages)` | UIMessage[] → ModelMessage[] |
576
+ | `convertToUIMessages(modelMessages)` | ModelMessage[] → UIMessage[] |
577
+ | `a3sMessagesToUI(messages)` | A3S Message[] → UIMessage[] |
578
+ | `uiMessagesToA3s(uiMessages)` | UIMessage[] → A3S Message[] |
579
+
580
+ ### Low-Level Client (A3sClient)
719
581
 
720
582
  #### Lifecycle (4 methods)
721
583