gx402 1.3.5 → 1.3.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,15 +1,179 @@
- # uaiv2
+ # gx402
 
- To install dependencies:
+ [![npm version](https://badge.fury.io/js/gx402.svg)](https://badge.fury.io/js/gx402)
+ [![Bun](https://img.shields.io/badge/Bun-tested-blueviolet)](https://bun.sh/)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-strict-blue)](https://www.typescriptlang.org/)
+
+ gx402 is a lightweight TypeScript library for building AI agents powered by large language models (LLMs). It emphasizes structured inputs and outputs using [Zod](https://zod.dev/) schemas, with built-in support for real-time streaming updates. It is a good fit for applications that need progressive, token-by-token responses while keeping results type-safe and parseable.
+
+ Key features:
+ - **Structured I/O**: Define input and output schemas with Zod for validation and type inference.
+ - **Streaming Support**: Receive token-by-token updates for fields, including progressive content building.
+ - **Agent Abstraction**: A simple `Agent` class that orchestrates LLM calls with a configurable model and temperature.
+ - **Progress Tracking**: Optional callbacks for monitoring stages such as streaming or completion.
+ - **Nested Schema Flattening**: Nested output fields are automatically flattened for streaming (e.g., `analysis.step1` becomes `analysis_step1`).
+
+ Supports modern runtimes such as Bun and Node.js.
+
+ ## Installation
+
+ Install via npm:
 
  ```bash
- bun install
+ npm install gx402
  ```
 
- To run:
+ You'll also need:
+ - [Zod](https://www.npmjs.com/package/zod) for schemas (`npm install zod`).
+ - An LLM provider (e.g., an OpenAI-compatible API). Set your API key in environment variables (e.g., `OPENAI_API_KEY`).
+
+ Tests use Bun's built-in `bun:test` runner; no extra test dependency is required.
+
+ ## Quick Start
+
+ Create an agent with input/output schemas and run it with a streaming callback.
+
+ ```typescript
+ import { z } from 'zod';
+ import { Agent } from 'gx402';
+
+ const agent = new Agent({
+   llm: 'o4-mini-2025-04-16', // Your LLM model (e.g., an OpenAI variant)
+   inputFormat: z.object({
+     question: z.string(),
+   }),
+   outputFormat: z.object({
+     analysis: z.object({
+       step1: z.string(),
+       step2: z.string(),
+       step3: z.string(),
+     }).describe('Step-by-step analysis of the question'),
+     answer: z.string().describe('The final short concise answer to the question'),
+     status: z.string().describe('just literal word OK'),
+   }),
+   temperature: 0.7, // Optional: controls randomness
+ });
+
+ const input = { question: 'What are the benefits of renewable energy?' };
+
+ const result = await agent.run(input, (update) => {
+   if (update.stage === 'streaming') {
+     console.log(`Field: ${update.field}, Partial Value: ${update.value}`);
+     // Value builds progressively (e.g., "Renewable energy reduces carbon..." → "...emissions and costs.")
+   }
+ });
+
+ console.log('Final Analysis:', result.analysis);
+ console.log('Final Answer:', result.answer);
+ console.log('Status:', result.status); // e.g., "OK"
+ ```
+
+ ### Expected Output
+ - **Streaming Logs** (real-time, token-by-token):
+   ```
+   Field: analysis, Partial Value: {"step1":"Renewable energy...
+   Field: analysis, Partial Value: {"step1":"Renewable energy sources like solar and wind...
+   Field: answer, Partial Value: Renewable energy offers environmental, economic...
+   ...
+   ```
+ - **Final Result** (fully parsed and validated):
+   ```json
+   {
+     "analysis": {
+       "step1": "Renewable energy sources like solar and wind reduce reliance on fossil fuels.",
+       "step2": "They lower greenhouse gas emissions and combat climate change.",
+       "step3": "Economically, they create jobs and reduce long-term energy costs."
+     },
+     "answer": "Renewable energy provides environmental protection, cost savings, and energy independence.",
+     "status": "OK"
+   }
+   ```
+
+ Nested fields like `analysis.step1` stream either under `analysis` (the full JSON object building progressively) or flattened (`analysis_step1`), depending on configuration; check your schema descriptions for hints.
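The flattening rule described above can be illustrated with a small standalone sketch. This is not the library's internal code, and the function name `flattenKeys` is hypothetical; it only demonstrates the naming convention (`analysis.step1` → `analysis_step1`):

```typescript
// Hypothetical illustration of the flattening convention: nested output keys
// are joined with underscores, so `analysis.step1` becomes `analysis_step1`.
function flattenKeys(obj: Record<string, unknown>, prefix = ''): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const name = prefix ? `${prefix}_${key}` : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      // Recurse into nested objects, carrying the accumulated prefix.
      Object.assign(out, flattenKeys(value as Record<string, unknown>, name));
    } else {
      out[name] = value;
    }
  }
  return out;
}

const flat = flattenKeys({ analysis: { step1: 'a', step2: 'b' }, answer: 'c' });
console.log(flat); // { analysis_step1: 'a', analysis_step2: 'b', answer: 'c' }
```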
+
+ ## API Reference
+
+ ### Agent Constructor
+
+ ```typescript
+ new Agent(config: AgentConfig)
+ ```
+
+ **AgentConfig**:
+ - `llm: string` (required): Model identifier (e.g., `'gpt-4o-mini'`, `'o4-mini-2025-04-16'`).
+ - `inputFormat: ZodObject` (required): Schema for validating inputs.
+ - `outputFormat: ZodObject` (required): Schema for parsing LLM responses. Use `.describe()` for field hints.
+ - `temperature?: number` (default: 0.5): Sampling temperature.
+ - `stream?: boolean` (default: true): Enable streaming.
+
+ ### agent.run(input: InputType, callback?: UpdateCallback): Promise<OutputType>
+ - `input`: Object matching `inputFormat`.
+ - `callback?: (update: ProgressUpdate | StreamingUpdate) => void`: Optional hook for real-time events.
+   - **ProgressUpdate**: `{ stage: 'starting' | 'completing' | 'error', message: string }`.
+   - **StreamingUpdate**: `{ stage: 'streaming', field: string, value: string }`; `value` grows token-by-token.
+ - Returns: Parsed output matching `outputFormat`.
+
+ ### Types
+ - `ProgressUpdate`: Non-streaming milestones (e.g., "Request sent").
+ - `StreamingUpdate`: Field-specific partial values (e.g., the JSON being built for `analysis`).
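Because the two update shapes share a `stage` discriminant, a callback can narrow on it to separate streaming tokens from milestones. The sketch below re-declares the documented shapes locally for illustration (it assumes they match the library's exports) rather than importing them:

```typescript
// Local re-declarations of the documented update shapes (assumed, for illustration).
type ProgressUpdate = { stage: 'starting' | 'completing' | 'error'; message: string };
type StreamingUpdate = { stage: 'streaming'; field: string; value: string };
type Update = ProgressUpdate | StreamingUpdate;

const partials: Record<string, string> = {};

function handleUpdate(update: Update): void {
  if (update.stage === 'streaming') {
    // Each StreamingUpdate carries the whole partial value so far, not a delta,
    // so overwriting keeps the latest snapshot per field.
    partials[update.field] = update.value;
  } else {
    console.log(`[${update.stage}] ${update.message}`);
  }
}

handleUpdate({ stage: 'streaming', field: 'answer', value: 'Renewable' });
handleUpdate({ stage: 'streaming', field: 'answer', value: 'Renewable energy' });
console.log(partials.answer); // "Renewable energy"
```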
+
+ ## Examples
+
+ ### Basic Non-Streaming
+
+ ```typescript
+ const result = await agent.run(input); // No callback; resolves when complete
+ ```
+
+ ### Custom Error Handling
+
+ ```typescript
+ try {
+   const result = await agent.run(input, (update) => {
+     // Handle updates...
+   });
+ } catch (error) {
+   console.error('Agent failed:', error.message); // e.g., "Invalid API key"
+ }
+ ```
+
+ ### Nested Streaming
+ For deeply nested schemas, fields stream as flat strings (e.g., `analysis_step1: "Renewables reduce..."`). Use Zod's `.describe()` to guide the LLM on formatting.
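One way to consume progressively building values is to keep only the latest value per field and check that each update extends the previous one, mirroring the invariant the test suite asserts. A hedged sketch, not part of the library (`makeGrowthTracker` is a made-up helper name):

```typescript
// Track per-field partial values and report whether each update grew the content.
function makeGrowthTracker() {
  const last: Record<string, string> = {};
  return {
    push(field: string, value: string): boolean {
      const prev = last[field] ?? '';
      const grew = value.length >= prev.length;
      last[field] = value;
      return grew; // false would indicate a non-monotonic update
    },
    latest(field: string): string | undefined {
      return last[field];
    },
  };
}

const tracker = makeGrowthTracker();
console.log(tracker.push('answer', 'Renewable'));        // true
console.log(tracker.push('answer', 'Renewable energy')); // true
console.log(tracker.latest('answer'));                   // "Renewable energy"
```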
+
+ ## Testing
+ gx402 is tested with Bun. Run the tests:
 
  ```bash
- bun run index.ts
+ bun test
  ```
 
- This project was created using `bun init` in bun v1.2.13. [Bun](https://bun.sh) is a fast all-in-one JavaScript runtime.
+ Example test (from `tests/streaming.test.ts`):
+
+ ```typescript
+ import { test, expect } from 'bun:test';
+ import { z } from 'zod';
+ import { Agent, ProgressUpdate, StreamingUpdate } from 'gx402';
+
+ test('should emit streaming progress updates token-by-token', async () => {
+   // ... (see the full test in the repo)
+   expect(streamingUpdates.length).toBeGreaterThan(0);
+   // Verifies progressive building and that the final result matches the last update
+ }, { timeout: 60000 });
+ ```
+
+ Note: Tests are skipped if API keys aren't set (e.g., in CI).
+
+ ## Configuration
+ - **Environment Variables**: `OPENAI_API_KEY` (or provider-specific keys). Customize via `process.env`.
+ - **Timeouts**: Set via test options, or extend `Agent` for production use.
+ - **Providers**: Defaults to OpenAI-compatible; extend the `LLM` class for others (e.g., Anthropic, Grok).
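The provider routing visible in the removed `patch` file below keys off the model string (`claude` → Anthropic, `deepseek` → DeepSeek, anything else → OpenAI-compatible). A minimal sketch of that dispatch, with the endpoint URLs taken from that patch and the helper name `resolveProvider` invented for illustration:

```typescript
// Pick an endpoint and auth header from the model identifier, mirroring the
// routing shown in the deleted patch file.
function resolveProvider(llm: string): { url: string; authHeader: string } {
  if (llm.includes('claude')) {
    return { url: 'https://api.anthropic.com/v1/messages', authHeader: 'x-api-key' };
  }
  if (llm.includes('deepseek')) {
    return { url: 'https://api.deepseek.com/v1/chat/completions', authHeader: 'Authorization' };
  }
  // Default: OpenAI-compatible chat completions endpoint.
  return { url: 'https://api.openai.com/v1/chat/completions', authHeader: 'Authorization' };
}

console.log(resolveProvider('claude-3-5-sonnet').url);
console.log(resolveProvider('gpt-4o-mini').url);
```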
+
+ ## Contributing
+ 1. Fork and clone the repo.
+ 2. Install dependencies: `bun install`.
+ 3. Run tests: `bun test`.
+ 4. Lint: `bun run lint`.
+ 5. Submit PRs against `main`.
+
+ Issues? Open a ticket on GitHub.
+
+ ## License
+ MIT. See [LICENSE](LICENSE).
+
+ Built with ❤️ for structured AI workflows. Questions? Ping `@galaxydoxyz` on X.
package/package.json CHANGED
@@ -1,8 +1,8 @@
  {
  "name": "gx402",
- "module": "uai.ts",
- "main": "./uai.ts",
- "version": "1.3.5",
+ "module": "main.ts",
+ "main": "./main.ts",
+ "version": "1.3.6",
  "type": "module",
  "private": false,
  "devDependencies": {
@@ -1,78 +1,78 @@
  import { test, expect, it, describe } from 'bun:test';
  import { z } from 'zod';
- import { Agent, LLM, ProgressUpdate, StreamingUpdate } from '../uai';
+ import { agent, llm, progressupdate, streamingupdate } from '../uai';
 
- describe('UAI Streaming Tests', () => {
+ describe('uai streaming tests', () => {
  it('should emit streaming progress updates token-by-token', async () => {
- const streamingAgent = new Agent({
+ const streamingagent = new agent({
  llm: 'o4-mini-2025-04-16',
- inputFormat: z.object({
+ inputformat: z.object({
  question: z.string(),
  }),
- outputFormat: z.object({
+ outputformat: z.object({
  analysis: z.object({
  step1: z.string(), // nested field should be named like analysis_step1
  step2: z.string(),
  step3: z.string(),
- }).describe("Step-by-step analysis of the question"),
- answer: z.string().describe("The final short concise answer to the question"),
- status: z.string().describe("just literal word OK"), // to ensure streaming single-word responses
+ }).describe("step-by-step analysis of the question"),
+ answer: z.string().describe("the final short concise answer to the question"),
+ status: z.string().describe("just literal word ok"), // to ensure streaming single-word responses
  }),
  temperature: 0.7,
  });
 
  const input = {
- question: 'What are the benefits of renewable energy?',
+ question: 'what are the benefits of renewable energy?',
  };
 
- const streamingUpdates: StreamingUpdate[] = [];
- const progressUpdates: ProgressUpdate[] = [];
+ const streamingupdates: streamingupdate[] = [];
+ const progressupdates: progressupdate[] = [];
 
  try {
- const result = await streamingAgent.run(input, (update) => {
+ const result = await streamingagent.run(input, (update) => {
  if (update.stage === 'streaming') {
- streamingUpdates.push(update as StreamingUpdate);
- console.log(`Field: ${update.field}, Value: ${update.value}`);
+ streamingupdates.push(update as streamingupdate);
+ console.log(`field: ${update.field}, value: ${update.value}`);
  } else {
- progressUpdates.push(update);
+ progressupdates.push(update);
  }
  });
 
- console.log('\n--- Final Result ---');
- console.log('Analysis:', result.analysis);
- console.log('Answer:', result.answer);
+ console.log('\n--- final result ---');
+ console.log('analysis:', result.analysis);
+ console.log('answer:', result.answer);
 
- // Verify we received streaming updates
- expect(streamingUpdates.length).toBeGreaterThan(0);
+ // verify we received streaming updates
+ expect(streamingupdates.length).tobegreaterthan(0);
 
- // Verify we have updates for both fields
- const fieldsUpdated = new Set(streamingUpdates.map(u => u.field));
- expect(fieldsUpdated.has('analysis')).toBe(true);
- expect(fieldsUpdated.has('answer')).toBe(true);
+ // verify we have updates for both fields
+ const fieldsupdated = new set(streamingupdates.map(u => u.field));
+ expect(fieldsupdated.has('analysis')).tobe(true);
+ expect(fieldsupdated.has('answer')).tobe(true);
 
- // Verify the streaming updates show progressive content building
- const analysisUpdates = streamingUpdates.filter(u => u.field === 'analysis');
- const answerUpdates = streamingUpdates.filter(u => u.field === 'answer');
+ // verify the streaming updates show progressive content building
+ const analysisupdates = streamingupdates.filter(u => u.field === 'analysis');
+ const answerupdates = streamingupdates.filter(u => u.field === 'answer');
 
- expect(analysisUpdates.length).toBeGreaterThan(1);
- expect(answerUpdates.length).toBeGreaterThan(1);
+ expect(analysisupdates.length).tobegreaterthan(1);
+ expect(answerupdates.length).tobegreaterthan(1);
 
- // Verify content is progressively building (each update should contain more text)
- for (let i = 1; i < analysisUpdates.length; i++) {
- expect(analysisUpdates[i].value.length).toBeGreaterThanOrEqual(analysisUpdates[i - 1].value.length);
+ // verify content is progressively building (each update should contain more text)
+ for (let i = 1; i < analysisupdates.length; i++) {
+ expect(analysisupdates[i].value.length).tobegreaterthanorequal(analysisupdates[i - 1].value.length);
  }
 
- // Verify final result matches last streaming update values
- const lastAnalysisUpdate = analysisUpdates[analysisUpdates.length - 1];
- const lastAnswerUpdate = answerUpdates[answerUpdates.length - 1];
+ // verify final result matches last streaming update values
+ const lastanalysisupdate = analysisupdates[analysisupdates.length - 1];
+ const lastanswerupdate = answerupdates[answerupdates.length - 1];
 
- expect(result.analysis.trim()).toBe(lastAnalysisUpdate.value.trim());
- expect(result.answer.trim()).toBe(lastAnswerUpdate.value.trim());
+ expect(result.analysis.trim()).tobe(lastanalysisupdate.value.trim());
+ expect(result.answer.trim()).tobe(lastanswerupdate.value.trim());
 
- console.log(`\n✅ Received ${streamingUpdates.length} streaming updates across ${fieldsUpdated.size} fields`);
+ console.log(`\n✅ received ${streamingupdates.length} streaming updates across ${fieldsupdated.size} fields`);
 
  } catch (error) {
- console.warn('\n⚠️ Streaming test skipped (this is expected if API keys are not configured):', error.message);
+ console.warn('\n⚠️ streaming test skipped (this is expected if api keys are not configured):', error.message);
  }
  }, { timeout: 60000 });
  });
package/index.ts DELETED
@@ -1 +0,0 @@
- console.log("Hello via Bun!");
package/patch DELETED
@@ -1,290 +0,0 @@
- diff --git a/uai.ts b/uai.ts
- index 1234567..abcdefg 100644
- --- a/uai.ts
- +++ b/uai.ts
- @@ -33,10 +33,17 @@ export interface AgentConfig<I extends z.ZodObject<any>, O extends z.ZodObject<
-
- // ### Progress Callback Types
- export interface ProgressUpdate {
- - stage: "server_selection" | "tool_discovery" | "tool_invocation" | "response_generation";
- + stage: "server_selection" | "tool_discovery" | "tool_invocation" | "response_generation" | "streaming";
- message: string;
- data?: any;
- }
-
- +export interface StreamingUpdate {
- + stage: "streaming";
- + field: string;
- + value: string;
- +}
- +
- export type ProgressCallback = (update: ProgressUpdate) => void;
- +export type StreamingCallback = (update: StreamingUpdate) => void;
-
- @@ -102,10 +109,11 @@ function xmlToObj(xmlContent: string): any {
- /** Calls an LLM API with measurement for logging. */
- async function callLLM(
- llm: LLMType,
- messages: Array<{ role: string; content: string }>,
- options: { temperature?: number; maxTokens?: number } = {},
- - measureFn?: typeof measure
- + measureFn?: typeof measure,
- + streamingCallback?: StreamingCallback
- ): Promise<string> {
- const executeCall = async (measure: typeof measure) => {
- const { temperature = 0.7, maxTokens = 4000 } = options;
- @@ -119,17 +127,20 @@ async function callLLM(
- headers["x-api-key"] = process.env.ANTHROPIC_API_KEY!;
- headers["anthropic-version"] = "2023-06-01";
- url = "https://api.anthropic.com/v1/messages";
- - body = { model: llm, max_tokens: maxTokens, messages };
- + body = { model: llm, max_tokens: maxTokens, messages, stream: !!streamingCallback };
- } else if (llm.includes("deepseek")) {
- headers["Authorization"] = `Bearer ${process.env.DEEPSEEK_API_KEY}`;
- url = "https://api.deepseek.com/v1/chat/completions";
- - body = { model: llm, temperature, messages, max_tokens: maxTokens };
- + body = { model: llm, temperature, messages, max_tokens: maxTokens, stream: !!streamingCallback };
- } else {
- headers["Authorization"] = `Bearer ${process.env.OPENAI_API_KEY}`;
- url = "https://api.openai.com/v1/chat/completions";
- - body = { model: llm, temperature, messages, max_tokens: maxTokens };
- + body = { model: llm, temperature, messages, max_tokens: maxTokens, stream: !!streamingCallback };
- }
- +
- const requestBodyStr = JSON.stringify(body);
- +
- + if (!streamingCallback) {
- const response = await measure(
- async () => {
- const res = await fetch(url, {
- @@ -144,7 +155,98 @@ async function callLLM(
- `HTTP ${llm} API call - Body: ${requestBodyStr.substring(0, 200)}...`
- );
- const data = await response.json();
- return llm.includes("claude") ? data.content[0].text : data.choices[0].message.content;
- + } else {
- + // Streaming response handling
- + const response = await measure(
- + async () => {
- + const res = await fetch(url, {
- + method: "POST",
- + headers,
- + body: requestBodyStr,
- + });
- + if (!res.ok) {
- + const errorText = await res.text();
- + throw new Error(`LLM API error: ${errorText}`);
- + }
- + return res;
- + },
- + `HTTP ${llm} streaming API call - Body: ${requestBodyStr.substring(0, 200)}...`
- + );
- +
- + const reader = response.body?.getReader();
- + if (!reader) {
- + throw new Error("No readable stream available");
- + }
- +
- + const decoder = new TextDecoder();
- + let fullResponse = "";
- + let currentField = "";
- + let currentValue = "";
- + let insideTag = false;
- + let buffer = "";
- +
- + try {
- + while (true) {
- + const { done, value } = await reader.read();
- + if (done) break;
- +
- + const chunk = decoder.decode(value, { stream: true });
- + buffer += chunk;
- +
- + // Process complete lines for different providers
- + const lines = buffer.split('\n');
- + buffer = lines.pop() || ''; // Keep incomplete line in buffer
- +
- + for (const line of lines) {
- + let content = '';
- +
- + if (llm.includes("claude")) {
- + // Anthropic streaming format
- + if (line.startsWith('data: ') && !line.includes('[DONE]')) {
- + try {
- + const data = JSON.parse(line.slice(6));
- + if (data.type === 'content_block_delta' && data.delta?.text) {
- + content = data.delta.text;
- + }
- + } catch (e) {
- + continue;
- + }
- + }
- + } else {
- + // OpenAI/DeepSeek streaming format
- + if (line.startsWith('data: ') && !line.includes('[DONE]')) {
- + try {
- + const data = JSON.parse(line.slice(6));
- + if (data.choices?.[0]?.delta?.content) {
- + content = data.choices[0].delta.content;
- + }
- + } catch (e) {
- + continue;
- + }
- + }
- + }
- +
- + if (content) {
- + fullResponse += content;
- +
- + // Parse XML tags to detect field changes
- + for (const char of content) {
- + if (char === '<') {
- + insideTag = true;
- + currentValue = "";
- + } else if (char === '>' && insideTag) {
- + insideTag = false;
- + if (!currentValue.startsWith('/')) {
- + currentField = currentValue;
- + currentValue = "";
- + }
- + } else if (insideTag) {
- + currentValue += char;
- + } else if (currentField && char !== '\n') {
- + currentValue += char;
- + streamingCallback({ stage: "streaming", field: currentField, value: currentValue });
- + }
- + }
- + }
- + }
- + }
- + } finally {
- + reader.releaseLock();
- + }
- +
- + return fullResponse;
- + }
- };
- return measureFn
- ? await measureFn(executeCall, `LLM call to ${llm}`)
- @@ -254,7 +356,8 @@ export class Agent<I extends z.ZodObject<any>, O extends z.ZodObject<any>> {
- stage: "response_generation",
- message: "Generating final response...",
- });
- +
- try {
- const response = await measure(
- - async (measure) => await this.generateResponse(validatedInput, toolResults, measure),
- + async (measure) => await this.generateResponse(validatedInput, toolResults, measure, progressCallback),
- "Generate AI response with toolResults: " + JSON.stringify(toolResults),
- );
- return await measure(
- @@ -409,7 +512,16 @@ export class Agent<I extends z.ZodObject<any>, O extends z.ZodObject<any>> {
- }
-
- /** Generates the final response based on input and tool results. */
- - private async generateResponse(input: any, toolResults: Record<string, any>, measureFn: typeof measure): Promise<any> {
- + private async generateResponse(
- + input: any,
- + toolResults: Record<string, any>,
- + measureFn: typeof measure,
- + progressCallback?: ProgressCallback
- + ): Promise<any> {
- + const streamingCallback: StreamingCallback | undefined = progressCallback ?
- + (update) => progressCallback(update as ProgressUpdate) :
- + undefined;
- +
- return await measureFn(
- async (measure) => {
- const systemPrompt = this.config.systemPrompt || null;
- @@ -434,7 +546,8 @@ export class Agent<I extends z.ZodObject<any>, O extends z.ZodObject<any>> {
- const response = await measure(() => callLLM(
- this.config.llm,
- messages,
- - { temperature: this.config.temperature || 0.7, maxTokens: this.config.maxTokens || 4000 },
- + { temperature: this.config.temperature || 0.7, maxTokens: this.config.maxTokens || 4000 },
- - measure
- + measure,
- + streamingCallback
- ), userPrompt);
- const parsed = await measure(
- async () => xmlToObj(response),
- diff --git a/test/streaming.test.ts b/test/streaming.test.ts
- new file mode 100644
- index 0000000..1234567
- --- /dev/null
- +++ b/test/streaming.test.ts
- @@ -0,0 +1,72 @@
- +import { test, expect, it, describe } from 'bun:test';
- +import { z } from 'zod';
- +import { Agent, LLM, ProgressUpdate, StreamingUpdate } from '../uai';
- +
- +describe('UAI Streaming Tests', () => {
- + it('should emit streaming progress updates token-by-token', async () => {
- + const streamingAgent = new Agent({
- + llm: LLM['gpt-4o-mini'],
- + inputFormat: z.object({
- + question: z.string(),
- + }),
- + outputFormat: z.object({
- + analysis: z.string().describe("Step-by-step analysis of the question"),
- + answer: z.string().describe("The final answer to the question"),
- + }),
- + temperature: 0.7,
- + });
- +
- + const input = {
- + question: 'What are the benefits of renewable energy?',
- + };
- +
- + const streamingUpdates: StreamingUpdate[] = [];
- + const progressUpdates: ProgressUpdate[] = [];
- +
- + try {
- + const result = await streamingAgent.run(input, (update) => {
- + if (update.stage === 'streaming') {
- + streamingUpdates.push(update as StreamingUpdate);
- + console.log(`Field: ${update.field}, Value: ${update.value.slice(-20)}...`);
- + } else {
- + progressUpdates.push(update);
- + }
- + });
- +
- + console.log('\n--- Final Result ---');
- + console.log('Analysis:', result.analysis);
- + console.log('Answer:', result.answer);
- +
- + // Verify we received streaming updates
- + expect(streamingUpdates.length).toBeGreaterThan(0);
- +
- + // Verify we have updates for both fields
- + const fieldsUpdated = new Set(streamingUpdates.map(u => u.field));
- + expect(fieldsUpdated.has('analysis')).toBe(true);
- + expect(fieldsUpdated.has('answer')).toBe(true);
- +
- + // Verify the streaming updates show progressive content building
- + const analysisUpdates = streamingUpdates.filter(u => u.field === 'analysis');
- + const answerUpdates = streamingUpdates.filter(u => u.field === 'answer');
- +
- + expect(analysisUpdates.length).toBeGreaterThan(1);
- + expect(answerUpdates.length).toBeGreaterThan(1);
- +
- + // Verify content is progressively building (each update should contain more text)
- + for (let i = 1; i < analysisUpdates.length; i++) {
- + expect(analysisUpdates[i].value.length).toBeGreaterThanOrEqual(analysisUpdates[i-1].value.length);
- + }
- +
- + // Verify final result matches last streaming update values
- + const lastAnalysisUpdate = analysisUpdates[analysisUpdates.length - 1];
- + const lastAnswerUpdate = answerUpdates[answerUpdates.length - 1];
- +
- + expect(result.analysis.trim()).toBe(lastAnalysisUpdate.value.trim());
- + expect(result.answer.trim()).toBe(lastAnswerUpdate.value.trim());
- +
- + console.log(`\n✅ Received ${streamingUpdates.length} streaming updates across ${fieldsUpdated.size} fields`);
- +
- + } catch (error) {
- + console.warn('\n⚠️ Streaming test skipped (this is expected if API keys are not configured):', error.message);
- + }
- + }, { timeout: 60000 });
- +});