@genui-a3/create 0.1.8 → 0.1.9

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,104 @@
# Logging

A3 uses [LogLayer](https://loglayer.dev) as its logging abstraction and [tslog](https://tslog.js.org) as the default output backend.

## For package users

If you want A3's logs to flow through your own logging infrastructure, see [Custom Logging](./CUSTOM_LOGGING.md).

---

## Architecture

### The `log` singleton

All logging within the A3 package is done through a single `log` object exported from `src/utils/logger/`.

```typescript
import { log } from '@utils/logger'
```

Internally, `log` is a JavaScript `Proxy` that delegates every property access to `getLogger()` at call time.
This means:

- Any file can import `log` once at the top and use it directly — no need to call `getLogger()` at each use site.
- If a user calls `configureLogger()` at application startup (before the first log statement fires), the new logger takes effect automatically.
  The `log` reference does not need to be re-imported.
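
The lazy-delegation pattern described above can be sketched in plain TypeScript. The names below are illustrative stand-ins, not A3's actual implementation:

```typescript
// A minimal sketch of a Proxy-backed logger singleton (illustrative only).
interface Logger {
  info(message: string): void
}

let active: Logger | undefined

// Lazily create a default logger on first use.
function getLogger(): Logger {
  if (!active) {
    active = { info: (message) => console.log(`[default] ${message}`) }
  }
  return active
}

// Swap the active logger; takes effect on the very next `log` call.
function configureLogger(logger: Logger): void {
  active = logger
}

// Every property access on `log` is resolved against the *current* logger,
// so importers never need to re-fetch it after configuration changes.
const log = new Proxy({} as Logger, {
  get: (_target, prop: string | symbol) => (getLogger() as any)[prop],
})
```

Because the `get` trap runs on every access, `configureLogger()` takes effect immediately for every module that has already imported `log`.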

### `getLogger()` and `configureLogger()`

- `getLogger()` — lazily initialises and returns the active [`ILogLayer`](https://loglayer.dev) instance.
  On first call, if no custom logger has been set, it creates the default tslog-backed logger.
- `configureLogger(logger: ILogLayer)` — replaces the active logger.
  It should be called once at application startup, before any `ChatSession` is created.

Both are exported from `@genui-a3/core` as part of the public API.

### Default logger

The default logger is a `LogLayer` instance wrapping a `tslog` transport:

- Pretty, human-readable output when `NODE_ENV !== 'production'`
- Structured JSON output when `NODE_ENV === 'production'`
- Log level controlled by the `A3_LOG_LEVEL` environment variable (default: `info`)

---

## Adding logs in A3 code

Import `log` and use the [LogLayer API](https://loglayer.dev):

```typescript
import { log } from '@utils/logger'

// Basic logging
log.info('Hello world!')

// Logging with metadata (attached to this log entry only)
log.withMetadata({ agentId: 'greeting', sessionId: 'abc' }).debug('Agent selected')

// Logging with context (persists across subsequent log calls on this instance)
log.withContext({ sessionId: 'abc' })
log.info('Processing request')

// Logging errors
log.withError(new Error('Something went wrong')).error('Failed to process request')
```

For the full LogLayer API — including `withPrefix`, child loggers, plugins, and multi-transport — see the [LogLayer documentation](https://loglayer.dev).

### ⚠️ `withContext()` and shared server state

`log` is a module-level singleton shared across all requests in a running process.
`withContext()` mutates the logger instance and **persists across all subsequent log calls** — including those from other users' requests on the same pod.

Use `withMetadata()` for any request-scoped data (agentId, sessionId, etc.).
It applies only to the single log call it's chained on.

```typescript
// ✅ Safe — applies to this log call only
log.withMetadata({ agentId: 'greeting', sessionId: 'abc' }).debug('Agent selected')

// ❌ Dangerous in a server — persists on the shared instance across all requests
log.withContext({ sessionId: 'abc' })
```

---

## Log levels

The `A3_LOG_LEVEL` environment variable accepts the following values (lowest to highest):

`silly` → `trace` → `debug` → `info` _(default)_ → `warn` → `error` → `fatal`

```bash
A3_LOG_LEVEL=debug node your-app.js
```
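
Internally, a level threshold is just an ordering comparison. The sketch below is illustrative (not tslog's implementation) and uses the ordering documented above:

```typescript
// Level ordering from lowest to highest, as documented above.
const LEVELS = ['silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'] as const
type Level = (typeof LEVELS)[number]

// A message is emitted only if its level is at or above the threshold.
function shouldLog(threshold: Level, level: Level): boolean {
  return LEVELS.indexOf(level) >= LEVELS.indexOf(threshold)
}
```

With the default `info` threshold, `debug` and below are dropped while `warn` and above pass through.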

---

## Key files

- `src/utils/logger/index.ts` — logger module: `log`, `getLogger()`, `configureLogger()`, default tslog setup
- `src/index.ts` — public exports: `configureLogger`, `getLogger`, `ILogLayer`
- `jest.setup.ts` — global Jest mock for `@utils/logger` (replaces `log` with a mock object in tests)
@@ -0,0 +1,217 @@
# Providers

LLM provider implementations for the A3 agentic framework.

A3 ships with **AWS Bedrock**, **Anthropic**, and **OpenAI** providers out of the box.
All three support blocking and streaming modes, model fallback, and structured output via Zod schemas.

## Quick Start

### AWS Bedrock

```typescript
import { createBedrockProvider } from '@genui-a3/providers/bedrock'

const provider = createBedrockProvider({
  models: ['us.anthropic.claude-sonnet-4-5-20250929-v1:0'],
  region: 'us-east-1', // optional, defaults to AWS SDK default
})
```

### Anthropic

```typescript
import { createAnthropicProvider } from '@genui-a3/providers/anthropic'

const provider = createAnthropicProvider({
  models: ['claude-sonnet-4-5-20250929', 'claude-haiku-4-5-20251001'],
  apiKey: process.env.ANTHROPIC_API_KEY, // optional, defaults to ANTHROPIC_API_KEY env var
})
```

### OpenAI

```typescript
import { createOpenAIProvider } from '@genui-a3/providers/openai'

const provider = createOpenAIProvider({
  models: ['gpt-4o', 'gpt-4o-mini'],
  apiKey: process.env.OPENAI_API_KEY, // optional, defaults to OPENAI_API_KEY env var
})
```

### Use with A3

```typescript
import { ChatSession, MemorySessionStore } from '@genui-a3/core'

const session = new ChatSession({
  sessionId: 'user-123',
  store: new MemorySessionStore(),
  initialAgentId: 'greeting',
  initialState: {},
  provider, // any provider from above
})

// Blocking
const response = await session.send({ message: 'Hello!' })

// Streaming
for await (const event of session.send({ message: 'Hello!', stream: true })) {
  console.log(event)
}
```

## Provider Reference

### Bedrock — `createBedrockProvider(config)`

Communicates with AWS Bedrock via the [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html).

| Option | Type | Required | Description |
|---|---|---|---|
| `models` | `string[]` | Yes | Model IDs in preference order (first = primary, rest = fallbacks) |
| `region` | `string` | No | AWS region. Defaults to AWS SDK default |
| `resilience` | `ResilienceConfig` | No | Retry, backoff, and timeout settings. Uses defaults if omitted |

**Behaviour:**

- Uses **tool-based JSON extraction** (`structuredResponse` tool) for reliable structured output
- **Streaming** yields text deltas in real time, then emits a validated tool-call result at the end
- **Merges sequential same-role messages** to satisfy Bedrock's alternating-role requirement
- **Prepends an initial user message** (`"Hi"`) so the conversation always starts with a user turn

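The role-merging step can be pictured with a small sketch (illustrative types, not A3's actual code): consecutive messages with the same role are folded into one before the request is sent.

```typescript
interface Message {
  role: 'user' | 'assistant'
  content: string
}

// Fold runs of same-role messages into a single message so the
// conversation strictly alternates user/assistant turns.
function mergeSameRole(messages: Message[]): Message[] {
  const merged: Message[] = []
  for (const message of messages) {
    const previous = merged[merged.length - 1]
    if (previous && previous.role === message.role) {
      previous.content += `\n${message.content}`
    } else {
      merged.push({ ...message })
    }
  }
  return merged
}
```
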
84
+ **Prerequisites:** AWS credentials configured via environment variables, IAM role, or AWS profile — the same setup the AWS SDK expects.
85
+
86
+ ---
87
+
88
+ ### Anthropic — `createAnthropicProvider(config)`
89
+
90
+ Communicates with the Anthropic Messages API using the [Vercel AI SDK](https://sdk.vercel.ai/providers/ai-sdk-providers/anthropic) (`@ai-sdk/anthropic`) for structured output.
91
+
92
+ | Option | Type | Required | Description |
93
+ |---|---|---|---|
94
+ | `models` | `string[]` | Yes | Model IDs in preference order (first = primary, rest = fallbacks) |
95
+ | `apiKey` | `string` | No | API key. Defaults to `ANTHROPIC_API_KEY` env var |
96
+ | `baseURL` | `string` | No | Custom base URL for the Anthropic API |
97
+ | `resilience` | `ResilienceConfig` | No | Retry, backoff, and timeout settings. Uses defaults if omitted |
98
+
99
+ **Behaviour:**
100
+
101
+ - Uses the Vercel AI SDK's `Output.object()` for structured output — Zod schema conversion and partial JSON parsing handled internally
102
+ - **Streaming** yields text deltas in real-time via partial object tracking, then emits a validated tool-call result at the end
103
+ - **Appends a `"Continue"` user message** if the last message has an assistant role, to satisfy the alternating-role requirement
104
+
105
+ **Prerequisites:** An Anthropic API key, either passed directly or set as `ANTHROPIC_API_KEY`.
106
+
107
+ ---
108
+
109
+ ### OpenAI — `createOpenAIProvider(config)`
110
+
111
+ Communicates with the OpenAI Chat Completions API using [structured output](https://platform.openai.com/docs/guides/structured-outputs) (`response_format: json_schema`).
112
+
113
+ | Option | Type | Required | Description |
114
+ |---|---|---|---|
115
+ | `models` | `string[]` | Yes | Model IDs in preference order (first = primary, rest = fallbacks) |
116
+ | `apiKey` | `string` | No | API key. Defaults to `OPENAI_API_KEY` env var |
117
+ | `baseURL` | `string` | No | Custom base URL for Azure OpenAI or compatible endpoints |
118
+ | `organization` | `string` | No | OpenAI organization ID |
119
+ | `resilience` | `ResilienceConfig` | No | Retry, backoff, and timeout settings. Uses defaults if omitted |
120
+
121
+ **Behaviour:**
122
+
123
+ - Uses **structured output** (`response_format` with `json_schema`) — no tool calls required
124
+ - **Enforces strict schemas** automatically (`additionalProperties: false`, all properties `required`)
125
+ - **Streaming** extracts `chatbotMessage` text progressively from the JSON response via a character-level state machine, yielding text deltas in real-time
126
+ - Detects **truncated responses** (`finish_reason: length`) and surfaces them as errors
127
+
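The strict-schema rewrite amounts to a recursive walk over a JSON schema. A sketch, assuming a plain JSON-schema object as input (A3 actually derives this from the agent's Zod schema):

```typescript
// Hypothetical, simplified JSON-schema shape for illustration.
type JsonSchema = {
  type?: string
  properties?: Record<string, JsonSchema>
  [key: string]: unknown
}

// Recursively mark every object as closed (no extra keys) and make all
// of its declared properties required, as OpenAI strict mode expects.
function toStrictSchema(schema: JsonSchema): JsonSchema {
  if (schema.type === 'object' && schema.properties) {
    return {
      ...schema,
      additionalProperties: false,
      required: Object.keys(schema.properties),
      properties: Object.fromEntries(
        Object.entries(schema.properties).map(([key, value]) => [key, toStrictSchema(value)]),
      ),
    }
  }
  return schema
}
```
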
**Prerequisites:** An OpenAI API key, either passed directly or set as `OPENAI_API_KEY`.

## Model Fallback

All providers support automatic model fallback.
List models in order of preference:

```typescript
const provider = createBedrockProvider({
  models: [
    'us.anthropic.claude-sonnet-4-5-20250929-v1:0', // primary
    'us.anthropic.claude-haiku-4-5-20251001-v1:0', // fallback
  ],
})
```

If the primary model fails, the provider automatically retries with the next model in the list.
If all models fail, the last error is thrown.

All providers include built-in resilience: automatic retries with exponential backoff, per-request and total timeouts, and model fallback.
See the [Resilience documentation](./RESILIENCE.md) for configuration options and defaults.
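
The fallback behaviour can be sketched as a simple loop (illustrative, not the package's actual resilience code):

```typescript
// Try each model in preference order; if one attempt rejects, move on to
// the next, and rethrow the last error once every model has failed.
async function withModelFallback<T>(
  models: string[],
  attempt: (model: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown
  for (const model of models) {
    try {
      return await attempt(model)
    } catch (error) {
      lastError = error
    }
  }
  throw lastError
}
```
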

## Per-Agent Provider Override

Each agent can override the session-level provider:

```typescript
import { createOpenAIProvider } from '@genui-a3/providers/openai'
import { createBedrockProvider } from '@genui-a3/providers/bedrock'

// Session uses Bedrock by default
const session = new ChatSession({
  provider: createBedrockProvider({ models: ['us.anthropic.claude-sonnet-4-5-20250929-v1:0'] }),
  // ...
})

// This agent uses OpenAI instead
const premiumAgent = {
  id: 'premium',
  description: 'Handles premium tier requests using GPT-4o',
  provider: createOpenAIProvider({ models: ['gpt-4o'] }),
  // ...
}
```

## Provider Interface

All providers implement the `Provider` interface from `@genui-a3/core`:

| Member | Description |
|---|---|
| `sendRequest(request)` | Blocking request → `Promise<ProviderResponse>` |
| `sendRequestStream(request)` | Streaming request → `AsyncGenerator<StreamEvent>` |
| `name` | Human-readable name (`'bedrock'`, `'anthropic'`, or `'openai'`) |

To create a custom provider, implement this interface and pass it to `ChatSession` or an individual agent.
See [Creating a Custom Provider](./CUSTOM_PROVIDERS.md) for a step-by-step guide.
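
To make the shape concrete, here is a toy provider that just echoes the last user message. The request, response, and event types below are hypothetical stand-ins for the real ones in `@genui-a3/core`:

```typescript
// Hypothetical stand-in types; the real ones live in @genui-a3/core.
interface ProviderRequest {
  messages: { role: string; content: string }[]
}
interface ProviderResponse {
  text: string
}
interface StreamEvent {
  delta: string
}

interface Provider {
  name: string
  sendRequest(request: ProviderRequest): Promise<ProviderResponse>
  sendRequestStream(request: ProviderRequest): AsyncGenerator<StreamEvent>
}

// A toy provider: blocking mode returns the last message's content,
// streaming mode yields it word by word.
async function echo(request: ProviderRequest): Promise<ProviderResponse> {
  const last = request.messages[request.messages.length - 1]
  return { text: last ? last.content : '' }
}

const echoProvider: Provider = {
  name: 'echo',
  sendRequest: echo,
  async *sendRequestStream(request) {
    const { text } = await echo(request)
    for (const word of text.split(' ')) {
      yield { delta: word }
    }
  },
}
```
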

## Exports

This package uses [subpath exports](https://nodejs.org/api/packages.html#subpath-exports).
Import from the specific provider entry point:

```typescript
// ✅ Correct
import { createBedrockProvider } from '@genui-a3/providers/bedrock'
import { createAnthropicProvider } from '@genui-a3/providers/anthropic'
import { createOpenAIProvider } from '@genui-a3/providers/openai'

// ❌ Incorrect: there is no root entry point
import { ... } from '@genui-a3/providers'
```

| Entry point | Export | Description |
|---|---|---|
| `@genui-a3/providers/bedrock` | `createBedrockProvider` | Factory function returning a Bedrock `Provider` |
| `@genui-a3/providers/bedrock` | `BedrockProviderConfig` | TypeScript config interface |
| `@genui-a3/providers/anthropic` | `createAnthropicProvider` | Factory function returning an Anthropic `Provider` |
| `@genui-a3/providers/anthropic` | `AnthropicProviderConfig` | TypeScript config interface |
| `@genui-a3/providers/openai` | `createOpenAIProvider` | Factory function returning an OpenAI `Provider` |
| `@genui-a3/providers/openai` | `OpenAIProviderConfig` | TypeScript config interface |

## Requirements

- Node.js 20.19.0+
- TypeScript 5.9+
- `@genui-a3/core` (peer dependency)
- **Bedrock**: AWS credentials configured in the environment
- **Anthropic**: `ANTHROPIC_API_KEY` environment variable or `apiKey` config option
- **OpenAI**: `OPENAI_API_KEY` environment variable or `apiKey` config option
@@ -0,0 +1,197 @@
# Quick Start

The fastest way to get started with A3 is the interactive CLI. It scaffolds a full Next.js application, configures your LLM providers, and installs dependencies.

```bash
npx @genui-a3/create@latest
```

Follow the prompts to name your project and provide your API keys. Once finished:

```bash
cd your-project-name
npm run dev
```

For more details on the CLI, see the [@genui-a3/create README](https://www.npmjs.com/package/@genui-a3/create).

---

## Manual Installation

If you are adding A3 to an existing project or prefer a manual setup, follow these steps.

### Install

```bash
npm install @genui-a3/core
```

### Define an agent

```typescript
import { z } from 'zod'
import { Agent, BaseState } from '@genui-a3/core'

interface State extends BaseState {
  userName?: string
}

export const greetingAgent: Agent<State> = {
  id: 'greeting',
  name: 'Greeting Agent',
  description: 'Greets the user and collects their name',
  prompt: async () => `
    You are a friendly greeting agent. Your goal is to greet the user
    and learn their name. Once you have their name, set goalAchieved to true.
  `,
  outputSchema: z.object({
    userName: z.string().optional(),
  }),
  transition: (_state, goalAchieved) =>
    goalAchieved ? 'end' : 'greeting',
}
```

### Register and run

```typescript
import { AgentRegistry, ChatSession, MemorySessionStore } from '@genui-a3/core'
import { createBedrockProvider } from '@genui-a3/providers/bedrock'

const registry = AgentRegistry.getInstance<State>()
registry.register(greetingAgent)

const provider = createBedrockProvider({
  models: ['us.anthropic.claude-sonnet-4-5-20250929-v1:0'],
})

const session = new ChatSession<State>({
  sessionId: 'demo',
  store: new MemorySessionStore(),
  initialAgentId: 'greeting',
  initialState: { userName: undefined },
  provider,
})

const response = await session.send({ message: 'Hi there!' })
console.log(response.responseMessage)
// => "Hello! I'd love to get to know you. What's your name?"
```

That's it.
One agent, one session, one function call.
84
+
85
+ ## Multi-Agent Example
86
+
87
+ Here's a pattern with three agents that route between each other, demonstrating how state flows across agent boundaries.
88
+
89
+ ### Define the agents
90
+
91
+ ```typescript
92
+ import { z } from 'zod'
93
+ import { Agent, BaseState } from '@genui-a3/core'
94
+
95
+ interface AppState extends BaseState {
96
+ userName?: string
97
+ isAuthenticated: boolean
98
+ issueCategory?: string
99
+ }
100
+
101
+ // Agent 1: Greeting -- collects the user's name, then routes to auth
102
+ const greetingAgent: Agent<AppState> = {
103
+ id: 'greeting',
104
+ name: 'Greeting Agent',
105
+ description: 'Greets the user and collects their name',
106
+ prompt: async () => `
107
+ Greet the user warmly. Ask for their name.
108
+ Once you have it, set goalAchieved to true.
109
+ `,
110
+ outputSchema: z.object({ userName: z.string().optional() }),
111
+ transition: (_state, goalAchieved) =>
112
+ goalAchieved ? 'auth' : 'greeting',
113
+ }
114
+
115
+ // Agent 2: Auth -- verifies identity, then routes to support
116
+ const authAgent: Agent<AppState> = {
117
+ id: 'auth',
118
+ name: 'Auth Agent',
119
+ description: 'Verifies user identity',
120
+ prompt: async ({ sessionData }) => `
121
+ The user's name is ${sessionData.state.userName}.
122
+ Ask them to confirm their email to verify identity.
123
+ Set goalAchieved to true once verified.
124
+ `,
125
+ outputSchema: z.object({ isAuthenticated: z.boolean() }),
126
+ transition: (_state, goalAchieved) =>
127
+ goalAchieved ? 'support' : 'auth',
128
+ }
129
+
130
+ // Agent 3: Support -- handles the user's issue
131
+ const supportAgent: Agent<AppState> = {
132
+ id: 'support',
133
+ name: 'Support Agent',
134
+ description: 'Helps resolve user issues',
135
+ prompt: async ({ sessionData }) => `
136
+ The user ${sessionData.state.userName} is authenticated.
137
+ Help them with their issue. Categorize it.
138
+ Set goalAchieved when resolved.
139
+ `,
140
+ outputSchema: z.object({
141
+ issueCategory: z.string().optional(),
142
+ }),
143
+ transition: (_state, goalAchieved) =>
144
+ goalAchieved ? 'end' : 'support',
145
+ }
146
+ ```

### Wire them up

```typescript
import { AgentRegistry, ChatSession, MemorySessionStore } from '@genui-a3/core'
import { createBedrockProvider } from '@genui-a3/providers/bedrock'

const registry = AgentRegistry.getInstance<AppState>()
registry.register([greetingAgent, authAgent, supportAgent])

const provider = createBedrockProvider({
  models: ['us.anthropic.claude-sonnet-4-5-20250929-v1:0'],
})

const session = new ChatSession<AppState>({
  sessionId: 'user-456',
  store: new MemorySessionStore(),
  initialAgentId: 'greeting',
  initialState: { isAuthenticated: false },
  provider,
})
```
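
The hand-off between agents can be sketched as a loop over `transition` results. This is illustrative only; A3's real loop also calls the LLM and merges validated output into state:

```typescript
// Illustrative chaining loop: run the active agent, follow its transition,
// and keep going in the same request while control moves to a new agent.
interface MiniAgent {
  id: string
  run: () => boolean // returns goalAchieved
  transition: (goalAchieved: boolean) => string
}

function runRequest(startId: string, agents: Record<string, MiniAgent>): string[] {
  const ran: string[] = []
  let id = startId
  while (id !== 'end') {
    const agent = agents[id]
    ran.push(agent.id)
    const goalAchieved = agent.run()
    const next = agent.transition(goalAchieved)
    if (next === id) break // agent keeps control; wait for the next user message
    id = next
  }
  return ran
}
```
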

### Conversation flow

```typescript
// Turn 1: User greets, greeting agent responds
await session.send({ message: 'Hello!' })
// => Greeting agent asks for name

// Turn 2: User provides name, greeting agent completes and chains to auth
await session.send({ message: "I'm Alex" })
// => Auth agent asks for email verification
// (greeting → auth happened automatically in one request)

// Turn 3: User verifies, auth completes and chains to support
await session.send({ message: 'alex@example.com' })
// => Support agent asks how it can help
// State now: { userName: 'Alex', isAuthenticated: true }

// Turn 4: Support agent handles the issue
await session.send({ message: 'I need help with my billing' })
// => Support agent resolves the issue
// State: { userName: 'Alex', isAuthenticated: true, issueCategory: 'billing' }
```

Notice that:

- **State persists across agents**: `userName` set by the greeting agent is available to auth and support
- **Agent chaining is automatic**: when greeting completes, auth starts in the same request
- **Each agent has its own prompt and schema**: they extract different data but share the same state
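
The first point, state persisting across agents, comes down to each agent's schema-validated output being merged into the shared session state. A sketch of the idea (not A3's internals):

```typescript
interface AppState {
  userName?: string
  isAuthenticated: boolean
  issueCategory?: string
}

// Each turn, the agent's schema-validated output is merged into the
// session state, so later agents see what earlier agents extracted.
function mergeAgentOutput(state: AppState, output: Partial<AppState>): AppState {
  return { ...state, ...output }
}

let state: AppState = { isAuthenticated: false }
state = mergeAgentOutput(state, { userName: 'Alex' }) // greeting agent
state = mergeAgentOutput(state, { isAuthenticated: true }) // auth agent
state = mergeAgentOutput(state, { issueCategory: 'billing' }) // support agent
```
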