@a3s-lab/code 0.6.0 → 0.7.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (52)
  1. package/index.d.ts +87 -0
  2. package/index.darwin-arm64.node +0 -0
  3. package/index.darwin-x64.node +0 -0
  4. package/index.js +128 -0
  5. package/index.linux-arm64-gnu.node +0 -0
  6. package/index.linux-arm64-musl.node +0 -0
  7. package/index.linux-x64-gnu.node +0 -0
  8. package/index.linux-x64-musl.node +0 -0
  9. package/package.json +27 -34
  10. package/LICENSE +0 -21
  11. package/README.md +0 -764
  12. package/dist/chat.d.ts +0 -97
  13. package/dist/chat.d.ts.map +0 -1
  14. package/dist/chat.js +0 -179
  15. package/dist/chat.js.map +0 -1
  16. package/dist/client.d.ts +0 -1356
  17. package/dist/client.d.ts.map +0 -1
  18. package/dist/client.js +0 -1014
  19. package/dist/client.js.map +0 -1
  20. package/dist/config.d.ts +0 -106
  21. package/dist/config.d.ts.map +0 -1
  22. package/dist/config.js +0 -142
  23. package/dist/config.js.map +0 -1
  24. package/dist/generate.d.ts +0 -130
  25. package/dist/generate.d.ts.map +0 -1
  26. package/dist/generate.js +0 -283
  27. package/dist/generate.js.map +0 -1
  28. package/dist/index.d.ts +0 -54
  29. package/dist/index.d.ts.map +0 -1
  30. package/dist/index.js +0 -60
  31. package/dist/index.js.map +0 -1
  32. package/dist/message.d.ts +0 -157
  33. package/dist/message.d.ts.map +0 -1
  34. package/dist/message.js +0 -279
  35. package/dist/message.js.map +0 -1
  36. package/dist/openai-compat.d.ts +0 -186
  37. package/dist/openai-compat.d.ts.map +0 -1
  38. package/dist/openai-compat.js +0 -263
  39. package/dist/openai-compat.js.map +0 -1
  40. package/dist/provider.d.ts +0 -64
  41. package/dist/provider.d.ts.map +0 -1
  42. package/dist/provider.js +0 -60
  43. package/dist/provider.js.map +0 -1
  44. package/dist/session.d.ts +0 -573
  45. package/dist/session.d.ts.map +0 -1
  46. package/dist/session.js +0 -1153
  47. package/dist/session.js.map +0 -1
  48. package/dist/tool.d.ts +0 -106
  49. package/dist/tool.d.ts.map +0 -1
  50. package/dist/tool.js +0 -71
  51. package/dist/tool.js.map +0 -1
  52. package/proto/code_agent.proto +0 -1851
package/README.md DELETED
@@ -1,764 +0,0 @@
# A3S Code TypeScript SDK

<p align="center">
  <strong>TypeScript/JavaScript Client for A3S Code Agent</strong>
</p>

<p align="center">
  <em>Session-centric AI coding agent SDK with Vercel AI SDK-compatible convenience API</em>
</p>

<p align="center">
  <img src="https://img.shields.io/badge/API_Coverage-100%25-brightgreen" alt="API Coverage">
  <img src="https://img.shields.io/badge/Methods-53-blue" alt="Methods">
  <img src="https://img.shields.io/badge/TypeScript-5.3+-blue" alt="TypeScript">
  <img src="https://img.shields.io/badge/License-MIT-green" alt="License">
</p>

<p align="center">
  <a href="#quick-start">Quick Start</a> •
  <a href="#session-api">Session API</a> •
  <a href="#tool-calling">Tool Calling</a> •
  <a href="#convenience-api">Convenience API</a> •
  <a href="#api-reference">API Reference</a> •
  <a href="./examples">Examples</a>
</p>

---

## Overview

**@a3s-lab/code** is the official TypeScript SDK for [A3S Code](https://github.com/a3s-lab/a3s). The SDK is session-centric — every interaction goes through a `Session` object:

1. **Session API** — The core pattern. Create a session with `client.createSession()`, then call `session.generateText()`, `session.streamText()`, etc. Model and workspace are bound at creation time and immutable. Supports `await using` for automatic cleanup.

2. **Convenience API** — Standalone functions (`generateText`, `streamText`, etc.) that create temporary sessions under the hood. Great for one-shot operations.

## Installation

```bash
npm install @a3s-lab/code
```

## Quick Start

```typescript
import { A3sClient, createProvider } from '@a3s-lab/code';

const client = new A3sClient({ address: 'localhost:4088' });
const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

// Create session — model and workspace bound here, immutable after
await using session = await client.createSession({
  model: openai('gpt-4o'),
  workspace: '/project',
  system: 'You are a helpful coding assistant.',
});

// Generate text
const { text } = await session.generateText({
  prompt: 'Explain this codebase',
});
console.log(text);

// Stream text
const { textStream } = session.streamText({
  prompt: 'Now refactor it',
});
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

// Multi-turn: session remembers context
const { text: followUp } = await session.generateText({
  prompt: 'What about error handling?',
});

// Context management
const usage = await session.getContextUsage();
await session.compactContext();
// session.close() called automatically via `await using`
```

## Session API

Sessions are the core concept in A3S Code. Each session binds a model and workspace at creation time (immutable). The session maintains conversation history, context, permissions, and tool state.

### Session Lifecycle

```typescript
import { A3sClient, createProvider } from '@a3s-lab/code';

const client = new A3sClient({ address: 'localhost:4088' });
const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

// Create session — model and workspace are immutable after creation
const session = await client.createSession({
  model: openai('gpt-4o'),
  workspace: '/project',
  system: 'You are a code reviewer.',
});

// Multi-turn conversation (session remembers context)
await session.generateText({ prompt: 'Review the auth module' });
await session.generateText({ prompt: 'What about error handling?' });

// Context management
const usage = await session.getContextUsage();
console.log(`Tokens: ${usage?.totalTokens}`);
await session.compactContext(); // Compress when large

// Cleanup
await session.close();
```

### Auto-Cleanup with `await using`

```typescript
// `await using` calls session.close() automatically when the block exits
{
  await using session = await client.createSession({
    model: openai('gpt-4o'),
    workspace: '/project',
  });

  const { text } = await session.generateText({ prompt: 'Hello' });
  // session.close() called automatically here
}
```

### Streaming

```typescript
const { textStream, fullStream, toolStream, text, steps } = session.streamText({
  prompt: 'Explain this codebase',
  tools: { weather: weatherTool },
  maxSteps: 5,
});

// Text-only stream
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

// Or full event stream
for await (const chunk of fullStream) {
  switch (chunk.type) {
    case 'content':
      process.stdout.write(chunk.content);
      break;
    case 'tool_call':
      console.log(`Tool: ${chunk.toolCall?.name}`);
      break;
    case 'done':
      console.log(`\nFinish: ${chunk.finishReason}`);
      break;
  }
}
```
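The `fullStream` switch above can be factored into a small pure reducer, which makes UI code testable without a live session. The `Chunk` shape below is inferred from the example and is a simplified assumption, not the SDK's full chunk type:

```typescript
// Chunk shape inferred from the fullStream example above (assumed, simplified).
type Chunk =
  | { type: 'content'; content: string }
  | { type: 'tool_call'; toolCall?: { name: string } }
  | { type: 'done'; finishReason?: string };

// Fold an already-collected sequence of chunks into final text + finish reason.
function foldStream(chunks: Chunk[]): { text: string; finishReason?: string } {
  let text = '';
  let finishReason: string | undefined;
  for (const chunk of chunks) {
    if (chunk.type === 'content') text += chunk.content;
    else if (chunk.type === 'done') finishReason = chunk.finishReason;
  }
  return { text, finishReason };
}
```

In application code you would collect chunks with `for await` as shown above and feed them to a reducer like this one.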

### Structured Output

```typescript
// Unary
const { object, data, usage } = await session.generateObject({
  schema: JSON.stringify({
    type: 'object',
    properties: { summary: { type: 'string' }, files: { type: 'array' } },
  }),
  prompt: 'Analyze this project',
});

// Streaming
const { partialStream, object: finalObject } = session.streamObject({
  schema: '{"type":"object","properties":{"items":{"type":"array"}}}',
  prompt: 'List project dependencies',
});
for await (const partial of partialStream) {
  process.stdout.write(partial);
}
const result = await finalObject;
```

### Events (Low-Level Client)

```typescript
for await (const event of client.subscribeEvents(session.id)) {
  console.log(`[${event.type}] ${event.message}`);
}
```

## Tool Calling

Define client-side tools with `tool()` and enable multi-step agent behavior with `maxSteps`:

```typescript
import { A3sClient, createProvider, tool } from '@a3s-lab/code';

const client = new A3sClient();
const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

const weather = tool({
  description: 'Get weather for a city',
  parameters: {
    type: 'object',
    properties: {
      city: { type: 'string', description: 'City name' },
    },
    required: ['city'],
  },
  execute: async ({ city }) => ({
    city,
    temperature: 72,
    condition: 'sunny',
  }),
});

await using session = await client.createSession({
  model: openai('gpt-4o'),
  system: 'You are a helpful assistant with weather tools.',
});

// Multi-step: model calls tools, gets results, continues reasoning
const { text, steps } = await session.generateText({
  prompt: 'What is the weather in Tokyo and Paris?',
  tools: { weather },
  maxSteps: 5,
  onStepFinish: (step) => {
    console.log(`Step ${step.stepIndex}: ${step.toolCalls.length} tool calls`);
  },
  onToolCall: ({ toolName, args }) => {
    console.log(`Calling ${toolName}`, args);
  },
});

console.log(text);
console.log(`Completed in ${steps.length} steps`);
```

### Streaming with Tools

```typescript
const { textStream, toolStream } = session.streamText({
  prompt: 'Check the weather everywhere',
  tools: { weather },
  maxSteps: 5,
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```

### Tools Without Execute (onToolCall)

```typescript
const { text } = await session.generateText({
  prompt: 'Look up the user profile',
  tools: {
    getUser: tool({
      description: 'Get user profile by ID',
      parameters: {
        type: 'object',
        properties: { userId: { type: 'string' } },
      },
      // No execute — handled by onToolCall
    }),
  },
  maxSteps: 3,
  onToolCall: async ({ toolName, args }) => {
    if (toolName === 'getUser') {
      return { name: 'Alice', role: 'admin' };
    }
  },
});
```

## Convenience API

Standalone functions that create temporary sessions under the hood. Useful for one-shot operations:

```typescript
import { generateText, streamText, createProvider } from '@a3s-lab/code';

const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

// One-shot generation (auto session)
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Explain this codebase',
  workspace: '/project',
});

// Streaming (auto session)
const { textStream } = streamText({
  model: openai('gpt-4o'),
  prompt: 'Explain this codebase',
});
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```

### createChat (Convenience Wrapper)

`createChat()` is a convenience wrapper that manages a session internally:

```typescript
import { createChat, createProvider } from '@a3s-lab/code';

const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

const chat = createChat({
  model: openai('gpt-4o'),
  workspace: '/project',
  system: 'You are a helpful code assistant',
});

const { text } = await chat.send('What does main.rs do?');
const { textStream } = chat.stream('Now refactor it');
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
await chat.close();
```

For new code, prefer using `Session` directly — it provides the same functionality with more control.

## Message Conversion (UIMessage ↔ ModelMessage)

The SDK provides Vercel AI SDK-style message types for frontend ↔ backend conversion:

- `UIMessage` — Frontend format with `id`, `createdAt`, and `parts` (for rendering in chat UIs)
- `ModelMessage` — Backend format with `role` and `content` (for the LLM via `generateText` / `streamText`)

### Frontend → Backend

```typescript
import { convertToModelMessages, generateText, createProvider } from '@a3s-lab/code';
import type { UIMessage } from '@a3s-lab/code';

const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

// UIMessages from your frontend (e.g., useChat hook, database, etc.)
const uiMessages: UIMessage[] = [
  {
    id: 'msg-1',
    role: 'user',
    content: 'What does main.rs do?',
    parts: [{ type: 'text', text: 'What does main.rs do?' }],
    createdAt: new Date(),
  },
];

// Convert to model format before calling generateText/streamText
const modelMessages = convertToModelMessages(uiMessages);
const { text } = await generateText({
  model: openai('gpt-4o'),
  messages: modelMessages,
});
```

### Backend → Frontend

```typescript
import { convertToUIMessages } from '@a3s-lab/code';
import type { ModelMessage } from '@a3s-lab/code';

// ModelMessages from LLM response or database
const modelMessages: ModelMessage[] = [
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Hi! How can I help?' },
];

// Convert to UIMessage format for rendering
const uiMessages = convertToUIMessages(modelMessages);
// uiMessages[0].parts → [{ type: 'text', text: 'Hello' }]
// uiMessages[1].parts → [{ type: 'text', text: 'Hi! How can I help?' }]
```
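For text-only messages, the mapping that `convertToUIMessages` performs can be sketched in a few lines. This is an illustration of the shape only, under local simplified types — the SDK's actual implementation also handles tool invocations, and its ID and timestamp strategy may differ:

```typescript
// Minimal local types mirroring the text-only subset of the message formats.
interface ModelMessage { role: 'system' | 'user' | 'assistant'; content: string }
interface UIMessage {
  id: string;
  role: ModelMessage['role'];
  content: string;
  parts: Array<{ type: 'text'; text: string }>;
  createdAt: Date;
}

// Wrap each ModelMessage's content in a single text part.
// The `msg-N` IDs are synthetic, for illustration only.
function toUIMessages(messages: ModelMessage[]): UIMessage[] {
  return messages.map((msg, i) => ({
    id: `msg-${i + 1}`,
    role: msg.role,
    content: msg.content,
    parts: [{ type: 'text', text: msg.content }],
    createdAt: new Date(),
  }));
}
```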

### A3S Message ↔ UIMessage (Shorthand)

```typescript
import { a3sMessagesToUI, uiMessagesToA3s } from '@a3s-lab/code';

// A3S session messages → UIMessage (for rendering)
const messages = await client.getMessages(sessionId);
const uiMessages = a3sMessagesToUI(messages.messages);

// UIMessage → A3S messages (for session generation)
const a3sMessages = uiMessagesToA3s(uiMessages);
await client.generate(sessionId, a3sMessages);
```

### Tool Invocations in UIMessage

UIMessages support rich tool invocation parts for rendering tool calls in chat UIs:

```typescript
const assistantMessage: UIMessage = {
  id: 'msg-2',
  role: 'assistant',
  content: 'The weather in Tokyo is 22°C.',
  parts: [
    {
      type: 'tool-invocation',
      toolInvocation: {
        toolCallId: 'call-1',
        toolName: 'weather',
        args: { city: 'Tokyo' },
        state: 'result',
        result: { temperature: 22, condition: 'sunny' },
      },
    },
    { type: 'text', text: 'The weather in Tokyo is 22°C and sunny.' },
  ],
};

// Converts to: assistant message with toolCalls + tool result message
const modelMessages = convertToModelMessages([assistantMessage]);
```

## Structured Output

```typescript
import { generateObject, streamObject, createProvider } from '@a3s-lab/code';

const openai = createProvider({ name: 'openai', apiKey: 'sk-xxx' });

// Generate a typed object
const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: JSON.stringify({
    type: 'object',
    properties: {
      summary: { type: 'string' },
      files: { type: 'array', items: { type: 'string' } },
      complexity: { type: 'string', enum: ['low', 'medium', 'high'] },
    },
    required: ['summary', 'files', 'complexity'],
  }),
  prompt: 'Analyze this project structure',
  workspace: '/project',
});

// Stream partial results
const { partialStream, object: finalObject } = streamObject({
  model: openai('gpt-4o'),
  schema: '{"type":"object","properties":{"items":{"type":"array"}}}',
  prompt: 'List all project dependencies',
});

for await (const partial of partialStream) {
  process.stdout.write(partial);
}
const result = await finalObject;
```

## Configuration

### Using Real LLM APIs

The SDK requires a running A3S Code service. See [examples/TESTING_WITH_REAL_MODELS.md](./examples/TESTING_WITH_REAL_MODELS.md) for detailed setup.

**Quick setup:**

1. **Configure A3S Code** - Edit `a3s/.a3s/config.json`:

```json
{
  "defaultProvider": "openai",
  "defaultModel": "kimi-k2.5",
  "providers": [
    {
      "name": "anthropic",
      "apiKey": "sk-ant-xxx",
      "baseUrl": "https://api.anthropic.com",
      "models": [
        {
          "id": "claude-sonnet-4-20250514",
          "name": "Claude Sonnet 4",
          "family": "claude-sonnet",
          "toolCall": true
        }
      ]
    },
    {
      "name": "openai",
      "models": [
        {
          "id": "kimi-k2.5",
          "name": "KIMI K2.5",
          "apiKey": "sk-xxx",
          "baseUrl": "http://your-endpoint/v1",
          "toolCall": true
        }
      ]
    }
  ]
}
```

2. **Start the A3S Code service:**

```bash
cd /path/to/a3s
./target/debug/a3s-code -d .a3s -w /tmp/a3s-workspace
```

3. **Use the SDK:**

```typescript
import { A3sClient, loadConfigFromDir } from '@a3s-lab/code';

const config = loadConfigFromDir('/path/to/a3s/.a3s');
const client = new A3sClient({
  address: config.address || 'localhost:4088',
  configDir: '/path/to/a3s/.a3s',
});

// Create session — uses default model from config
const { sessionId } = await client.createSession({
  name: 'my-session',
  workspace: '/tmp/workspace',
});

// Or specify model explicitly
const { sessionId: s2 } = await client.createSession({
  name: 'my-session',
  workspace: '/tmp/workspace',
  llm: { provider: 'openai', model: 'kimi-k2.5' },
});
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `A3S_ADDRESS` | gRPC server address | `localhost:4088` |
| `A3S_CONFIG_DIR` | Configuration directory | - |
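Whether the client reads these variables automatically is not documented here, so a safe pattern is to resolve them explicitly when constructing the client. A minimal sketch, with defaults matching the table above:

```typescript
// Resolve client options from the environment, falling back to the
// documented defaults. Explicit resolution avoids depending on any
// implicit env handling inside the SDK (an assumption either way).
function resolveClientOptions(env: Record<string, string | undefined> = process.env) {
  return {
    address: env.A3S_ADDRESS ?? 'localhost:4088',
    configDir: env.A3S_CONFIG_DIR, // undefined when unset (no default)
  };
}

// const client = new A3sClient(resolveClientOptions());
```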

## API Reference

### Session (Core)

| Method | Description |
|--------|-------------|
| `session.generateText(options)` | Generate text, supports tools + maxSteps |
| `session.streamText(options)` | Stream text, returns `textStream`/`fullStream`/`toolStream` |
| `session.generateObject(options)` | Generate structured JSON output |
| `session.streamObject(options)` | Stream structured JSON output |
| `session.getContextUsage()` | Get context token usage |
| `session.compactContext()` | Compact session context |
| `session.clearContext()` | Clear conversation history |
| `session.getMessages(limit?)` | Get conversation messages |
| `session.close()` | Close session and release resources |
| `session.id` | Session ID (readonly) |
| `session.closed` | Whether session is closed (readonly) |

### Convenience Functions

| Function | Description |
|----------|-------------|
| `generateText(options)` | Generate text (auto session) |
| `streamText(options)` | Stream text (auto session) |
| `generateObject(options)` | Generate structured output (auto session) |
| `streamObject(options)` | Stream structured output (auto session) |
| `createChat(options)` | Create multi-turn chat (auto session) |
| `createProvider(options)` | Create provider factory for model selection |
| `tool(definition)` | Define a client-side tool |

### Message Conversion

| Function | Description |
|----------|-------------|
| `convertToModelMessages(uiMessages)` | UIMessage[] → ModelMessage[] |
| `convertToUIMessages(modelMessages)` | ModelMessage[] → UIMessage[] |
| `a3sMessagesToUI(messages)` | A3S Message[] → UIMessage[] |
| `uiMessagesToA3s(uiMessages)` | UIMessage[] → A3S Message[] |

### Low-Level Client (A3sClient)

#### Lifecycle (4 methods)

| Method | Description |
|--------|-------------|
| `healthCheck()` | Check agent health status |
| `getCapabilities()` | Get agent capabilities |
| `initialize(workspace, env?)` | Initialize the agent |
| `shutdown()` | Shutdown the agent |

#### Sessions (6 methods)

| Method | Description |
|--------|-------------|
| `createSession(config)` | Create a new session |
| `destroySession(id)` | Destroy a session |
| `listSessions()` | List all sessions |
| `getSession(id)` | Get session by ID |
| `configureSession(id, config)` | Update session configuration |
| `getMessages(id, limit?)` | Get conversation history |

#### Generation (4 methods)

| Method | Description |
|--------|-------------|
| `generate(sessionId, messages)` | Generate response (non-streaming) |
| `streamGenerate(sessionId, messages)` | Generate response (streaming) |
| `generateStructured(sessionId, messages, schema)` | Generate structured output |
| `streamGenerateStructured(sessionId, messages, schema)` | Stream structured output |

#### Context Management (3 methods)

| Method | Description |
|--------|-------------|
| `getContextUsage(sessionId)` | Get context token usage |
| `compactContext(sessionId)` | Compact session context |
| `clearContext(sessionId)` | Clear session context |

#### Skills (3 methods)

| Method | Description |
|--------|-------------|
| `loadSkill(sessionId, name)` | Load a skill |
| `unloadSkill(sessionId, name)` | Unload a skill |
| `listSkills(sessionId?)` | List available/loaded skills |

#### Control (3 methods)

| Method | Description |
|--------|-------------|
| `cancel(sessionId)` | Cancel running operation |
| `pause(sessionId)` | Pause session |
| `resume(sessionId)` | Resume session |

#### Events (1 method)

| Method | Description |
|--------|-------------|
| `subscribeEvents(sessionId)` | Subscribe to real-time events |

#### HITL (3 methods)

| Method | Description |
|--------|-------------|
| `confirmToolExecution(sessionId, response)` | Respond to confirmation request |
| `setConfirmationPolicy(sessionId, policy)` | Set confirmation policy |
| `getConfirmationPolicy(sessionId)` | Get confirmation policy |

#### Permissions (4 methods)

| Method | Description |
|--------|-------------|
| `setPermissionPolicy(sessionId, policy)` | Set permission policy |
| `getPermissionPolicy(sessionId)` | Get permission policy |
| `checkPermission(sessionId, request)` | Check tool permission |
| `addPermissionRule(sessionId, rule)` | Add permission rule |

#### External Tasks (4 methods)

| Method | Description |
|--------|-------------|
| `setLaneHandler(sessionId, lane, handler)` | Set lane handler |
| `getLaneHandler(sessionId, lane)` | Get lane handler |
| `completeExternalTask(sessionId, taskId, result)` | Complete external task |
| `listPendingExternalTasks(sessionId)` | List pending tasks |

#### Todos (2 methods)

| Method | Description |
|--------|-------------|
| `getTodos(sessionId)` | Get todo list |
| `setTodos(sessionId, todos)` | Set todo list |

#### Providers (7 methods)

| Method | Description |
|--------|-------------|
| `listProviders()` | List available providers |
| `getProvider(name)` | Get provider details |
| `addProvider(provider)` | Add a provider |
| `updateProvider(name, provider)` | Update provider |
| `removeProvider(name)` | Remove provider |
| `setDefaultModel(provider, model)` | Set default model |
| `getDefaultModel()` | Get default model |

#### Planning & Goals (4 methods)

| Method | Description |
|--------|-------------|
| `createPlan(sessionId, prompt, context?)` | Create execution plan |
| `getPlan(sessionId, planId)` | Get existing plan |
| `extractGoal(sessionId, prompt)` | Extract goal from prompt |
| `checkGoalAchievement(sessionId, goal, state)` | Check goal completion |

#### Memory System (5 methods)

| Method | Description |
|--------|-------------|
| `storeMemory(sessionId, memory)` | Store memory item |
| `retrieveMemory(sessionId, memoryId)` | Retrieve memory by ID |
| `searchMemories(sessionId, query, tags?, limit?)` | Search memories |
| `getMemoryStats(sessionId)` | Get memory statistics |
| `clearMemories(sessionId, type?)` | Clear memories |

**Total: 53 methods (100% API coverage)**

### Types

See `ts/types.ts` for complete type definitions including:

- `SessionConfig`, `Session`
- `Message`, `MessageRole`
- `GenerateResponse`, `GenerateChunk`
- `HealthStatus`, `HealthStatusCode`
- `ProviderInfo`, `ModelInfo`
- `Todo`, `Skill`, `AgentEvent`

## Development

```bash
# Install dependencies
just install

# Build
just build

# Run tests
just test

# Run tests with coverage
just test-cov

# Type check
just check

# Lint
just lint

# Format
just fmt

# All checks
just ci
```

## A3S Ecosystem

This SDK is part of the A3S ecosystem:

| Project | Package | Purpose |
|---------|---------|---------|
| [a3s](https://github.com/a3s-lab/a3s) | `a3s-code` | AI coding agent framework |
| [sdk/typescript](.) | `@a3s-lab/code` | TypeScript SDK (this package) |
| [sdk/python](../python) | `a3s-code` | Python SDK |

## License

MIT License - see [LICENSE](LICENSE) for details.

---

<p align="center">
  Built by <a href="https://github.com/a3s-lab">A3S Lab</a>
</p>