gx402 1.3.6 → 1.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,179 +1,242 @@
- # gx402
-
- [![npm version](https://badge.fury.io/js/gx402.svg)](https://badge.fury.io/js/gx402)
- [![Bun](https://img.shields.io/badge/Bun-tested-blueviolet)](https://bun.sh/)
- [![TypeScript](https://img.shields.io/badge/TypeScript-strict-blue)](https://www.typescriptlang.org/)
-
- gx402 is a lightweight TypeScript library for building AI agents powered by large language models (LLMs). It emphasizes structured inputs and outputs using [Zod](https://zod.dev/) schemas, with built-in support for real-time streaming updates. Perfect for applications needing progressive, token-by-token responses while maintaining type safety and parseable results.
-
- Key features:
- - **Structured I/O**: Define input and output schemas with Zod for validation and type inference.
- - **Streaming Support**: Receive token-by-token updates for fields, including progressive content building.
- - **Agent Abstraction**: Simple `Agent` class to orchestrate LLM calls with customizable temperature and models.
- - **Progress Tracking**: Optional callbacks for monitoring stages like streaming or completion.
- - **Nested Schema Flattening**: Nested output fields are automatically flattened for streaming (e.g., `analysis.step1` becomes `analysis_step1`).
-
- Supports modern runtimes like Bun and Node.js.
-
- ## Installation
-
- Install via npm:
-
- ```bash
- npm install gx402
- ```
-
- You'll also need:
- - [Zod](https://www.npmjs.com/package/zod) for schemas (`npm install zod`).
- - An LLM provider (e.g., OpenAI-compatible API). Set your API key in environment variables (e.g., `OPENAI_API_KEY`).
-
- For testing, use Bun's test runner: `bun add -d bun:test`.
-
- ## Quick Start
-
- Create an agent with input/output schemas and run it with a streaming callback.
-
- ```typescript
- import { z } from 'zod';
- import { Agent } from 'gx402';
-
- const agent = new Agent({
-   llm: 'o4-mini-2025-04-16', // Your LLM model (e.g., OpenAI GPT variant)
-   inputFormat: z.object({
-     question: z.string(),
-   }),
-   outputFormat: z.object({
-     analysis: z.object({
-       step1: z.string(),
-       step2: z.string(),
-       step3: z.string(),
-     }).describe('Step-by-step analysis of the question'),
-     answer: z.string().describe('The final short concise answer to the question'),
-     status: z.string().describe('just literal word OK'),
-   }),
-   temperature: 0.7, // Optional: Controls creativity (0-1)
- });
-
- const input = { question: 'What are the benefits of renewable energy?' };
-
- const result = await agent.run(input, (update) => {
-   if (update.stage === 'streaming') {
-     console.log(`Field: ${update.field}, Partial Value: ${update.value}`);
-     // Value builds progressively (e.g., "Renewable energy reduces carbon..." → "...emissions and costs.")
-   }
- });
-
- console.log('Final Analysis:', result.analysis);
- console.log('Final Answer:', result.answer);
- console.log('Status:', result.status); // e.g., "OK"
- ```
-
- ### Expected Output
- - **Streaming Logs** (real-time, token-by-token):
- ```
- Field: analysis, Partial Value: {"step1":"Renewable energy...
- Field: analysis, Partial Value: {"step1":"Renewable energy sources like solar and wind...
- Field: answer, Partial Value: Renewable energy offers environmental, economic...
- ...
- ```
- - **Final Result** (fully parsed and validated):
- ```json
- {
-   "analysis": {
-     "step1": "Renewable energy sources like solar and wind reduce reliance on fossil fuels.",
-     "step2": "They lower greenhouse gas emissions and combat climate change.",
-     "step3": "Economically, they create jobs and reduce long-term energy costs."
-   },
-   "answer": "Renewable energy provides environmental protection, cost savings, and energy independence.",
-   "status": "OK"
- }
- ```
-
- Nested fields like `analysis.step1` stream as `analysis` (full JSON object building progressively) or flattened (`analysis_step1`) based on config—check your schema descriptions for hints.
-
- ## API Reference
-
- ### Agent Constructor
- ```typescript
- new Agent(config: AgentConfig)
- ```
-
- **AgentConfig**:
- - `llm: string` (required): Model identifier (e.g., `'gpt-4o-mini'`, `'o4-mini-2025-04-16'`).
- - `inputFormat: ZodObject` (required): Schema for validating inputs.
- - `outputFormat: ZodObject` (required): Schema for parsing LLM responses. Use `.describe()` for field hints.
- - `temperature?: number` (default: 0.5): Sampling temperature.
- - `stream?: boolean` (default: true): Enable streaming.
-
- ### agent.run(input: InputType, callback?: UpdateCallback): Promise<OutputType>
- - `input`: Object matching `inputFormat`.
- - `callback?: (update: ProgressUpdate | StreamingUpdate) => void`: Optional hook for real-time events.
-   - **ProgressUpdate**: `{ stage: 'starting' | 'completing' | 'error', message: string }`.
-   - **StreamingUpdate**: `{ stage: 'streaming', field: string, value: string }` – `value` grows token-by-token.
- - Returns: Parsed output matching `outputFormat`.
-
- ### Types
- - `ProgressUpdate`: Non-streaming milestones (e.g., "Request sent").
- - `StreamingUpdate`: Field-specific partial values (e.g., building JSON for `analysis`).
-
- ## Examples
-
- ### Basic Non-Streaming
- ```typescript
- const result = await agent.run(input); // No callback, blocks until complete
- ```
-
- ### Custom Error Handling
- ```typescript
- try {
-   const result = await agent.run(input, (update) => {
-     // Handle updates...
-   });
- } catch (error) {
-   console.error('Agent failed:', error.message); // e.g., "Invalid API key"
- }
- ```
-
- ### Nested Streaming
- For deeply nested schemas, fields stream as flat strings (e.g., `analysis_step1: "Renewables reduce..."`). Use Zod's `.describe()` to guide the LLM on formatting.
-
- ## Testing
- gx402 is tested with Bun. Run tests:
-
- ```bash
- bun test
- ```
-
- Example test (from `tests/streaming.test.ts`):
- ```typescript
- import { test, expect } from 'bun:test';
- import { z } from 'zod';
- import { Agent, ProgressUpdate, StreamingUpdate } from 'gx402';
-
- test('should emit streaming progress updates token-by-token', async () => {
-   // ... (see full test in repo)
-   expect(streamingUpdates.length).toBeGreaterThan(0);
-   // Verifies progressive building and final match
- }, { timeout: 60000 });
- ```
-
- Note: Tests skip if API keys aren't set (e.g., for CI).
-
- ## Configuration
- - **Environment Vars**: `OPENAI_API_KEY` (or provider-specific). Customize via `process.env`.
- - **Timeouts**: Set via test options or extend `Agent` for production.
- - **Providers**: Defaults to OpenAI-compatible; extend `LLM` class for others (e.g., Anthropic, Grok).
-
- ## Contributing
- 1. Fork and clone.
- 2. Install deps: `bun install`.
- 3. Run tests: `bun test`.
- 4. Lint: `bun run lint`.
- 5. Submit PRs to `main`.
-
- Issues? Open a ticket on GitHub.
-
- ## License
- MIT. See [LICENSE](LICENSE).
-
- Built with ❤️ for structured AI workflows. Questions? Ping `@galaxydoxyz` on X.
+ # gx402
+
+ [![npm version](https://badge.fury.io/js/gx402.svg)](https://badge.fury.io/js/gx402)
+ [![Bun](https://img.shields.io/badge/Bun-tested-blueviolet)](https://bun.sh/)
+ [![TypeScript](https://img.shields.io/badge/TypeScript-strict-blue)](https://www.typescriptlang.org/)
+
+ gx402 is a lightweight TypeScript library for building AI agents powered by large language models (LLMs). It emphasizes structured inputs and outputs using [Zod](https://zod.dev/) schemas, with built-in support for real-time streaming updates. Perfect for applications needing progressive, token-by-token responses while maintaining type safety and parseable results.
+
+ Key features:
+ - **Structured I/O**: Define input and output schemas with Zod for validation and type inference.
+ - **Streaming Support**: Receive token-by-token updates for fields, including progressive content building.
+ - **Agent Abstraction**: Simple `Agent` class to orchestrate LLM calls with customizable temperature and models.
+ - **Progress Tracking**: Optional callbacks for monitoring stages like streaming or completion.
+ - **Nested Schema Flattening**: Nested output fields are automatically flattened for streaming (e.g., `analysis.step1` becomes `analysis_step1`).
+
+ Supports modern runtimes like Bun and Node.js.
+
+ ## Installation
+
+ Install via npm:
+
+ ```bash
+ npm install gx402
+ ```
+
+ You'll also need:
+ - [Zod](https://www.npmjs.com/package/zod) for schemas (`npm install zod`).
+ - An LLM provider API key set in environment variables.
+
+ ### Supported LLMs
+
+ | Provider | Models | Env Variable |
+ |----------|--------|--------------|
+ | OpenAI | `gpt-4o-mini`, `gpt-4o`, `gpt-4`, `o4-mini-*` | `OPENAI_API_KEY` |
+ | Google | `gemini-2.0-flash`, `gemini-2.5-pro` | `GEMINI_API_KEY` |
+ | Anthropic | `claude-3-sonnet-*` | `ANTHROPIC_API_KEY` |
+ | DeepSeek | `deepseek-chat` | `DEEPSEEK_API_KEY` |
+
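Switching providers is just a matter of changing the `llm` string and exporting the matching key. As a rough sketch, the model-prefix-to-variable mapping implied by the table above could be written like this (`envVarFor` is a hypothetical helper for illustration, not part of gx402):

```typescript
// Which env variable a given model string needs, per the table above.
// Assumption: prefixes inferred from the listed model names.
function envVarFor(model: string): string {
  if (model.startsWith('gemini-')) return 'GEMINI_API_KEY';
  if (model.startsWith('claude-')) return 'ANTHROPIC_API_KEY';
  if (model.startsWith('deepseek-')) return 'DEEPSEEK_API_KEY';
  return 'OPENAI_API_KEY'; // gpt-* and o4-mini-* models
}
```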
+ ## Quick Start
+
+ Create an agent with input/output schemas and run it with a streaming callback.
+
+ ```typescript
+ import { z } from 'zod';
+ import { Agent } from 'gx402';
+
+ const agent = new Agent({
+   llm: 'o4-mini-2025-04-16', // Your LLM model (e.g., OpenAI GPT variant)
+   inputFormat: z.object({
+     question: z.string(),
+   }),
+   outputFormat: z.object({
+     analysis: z.object({
+       step1: z.string(),
+       step2: z.string(),
+       step3: z.string(),
+     }).describe('Step-by-step analysis of the question'),
+     answer: z.string().describe('The final short concise answer to the question'),
+     status: z.string().describe('just literal word OK'),
+   }),
+   temperature: 0.7, // Optional: Controls creativity (0-1)
+ });
+
+ const input = { question: 'What are the benefits of renewable energy?' };
+
+ const result = await agent.run(input, (update) => {
+   if (update.stage === 'streaming') {
+     console.log(`Field: ${update.field}, Partial Value: ${update.value}`);
+     // Value builds progressively (e.g., "Renewable energy reduces carbon..." → "...emissions and costs.")
+   }
+ });
+
+ console.log('Final Analysis:', result.analysis);
+ console.log('Final Answer:', result.answer);
+ console.log('Status:', result.status); // e.g., "OK"
+ ```
+
+ ### Expected Output
+ - **Streaming Logs** (real-time, token-by-token):
+ ```
+ Field: analysis, Partial Value: {"step1":"Renewable energy...
+ Field: analysis, Partial Value: {"step1":"Renewable energy sources like solar and wind...
+ Field: answer, Partial Value: Renewable energy offers environmental, economic...
+ ...
+ ```
+ - **Final Result** (fully parsed and validated):
+ ```json
+ {
+   "analysis": {
+     "step1": "Renewable energy sources like solar and wind reduce reliance on fossil fuels.",
+     "step2": "They lower greenhouse gas emissions and combat climate change.",
+     "step3": "Economically, they create jobs and reduce long-term energy costs."
+   },
+   "answer": "Renewable energy provides environmental protection, cost savings, and energy independence.",
+   "status": "OK"
+ }
+ ```
+
+ Nested fields like `analysis.step1` may stream either as the parent `analysis` field (a JSON object built up progressively) or as flattened keys (`analysis_step1`), depending on configuration; use your schema descriptions to guide the LLM's formatting.
+
+ ## API Reference
+
+ ### Agent Constructor
+ ```typescript
+ new Agent(config: AgentConfig)
+ ```
+
+ **AgentConfig**:
+ - `llm: string` (required): Model identifier (e.g., `'gpt-4o-mini'`, `'o4-mini-2025-04-16'`).
+ - `inputFormat: ZodObject` (required): Schema for validating inputs.
+ - `outputFormat: ZodObject` (required): Schema for parsing LLM responses. Use `.describe()` for field hints.
+ - `temperature?: number` (default: 0.5): Sampling temperature.
+ - `stream?: boolean` (default: true): Enable streaming.
+
+ ### agent.run(input: InputType, callback?: UpdateCallback): Promise<OutputType>
+ - `input`: Object matching `inputFormat`.
+ - `callback?: (update: ProgressUpdate | StreamingUpdate) => void`: Optional hook for real-time events.
+   - **ProgressUpdate**: `{ stage: 'starting' | 'completing' | 'error', message: string }`.
+   - **StreamingUpdate**: `{ stage: 'streaming', field: string, value: string }` – `value` grows token-by-token.
+ - Returns: Parsed output matching `outputFormat`.
+
+ ### Types
+ - `ProgressUpdate`: Non-streaming milestones (e.g., "Request sent").
+ - `StreamingUpdate`: Field-specific partial values (e.g., building JSON for `analysis`).
+
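Since both update types share a `stage` discriminant, a callback can narrow on it. A minimal handler sketch (the type aliases below mirror the shapes documented above; they are illustrative, not imported from gx402):

```typescript
// Aliases mirroring the documented update shapes (illustrative only)
type ProgressUpdate = { stage: 'starting' | 'completing' | 'error'; message: string };
type StreamingUpdate = { stage: 'streaming'; field: string; value: string };

function describeUpdate(update: ProgressUpdate | StreamingUpdate): string {
  if (update.stage === 'streaming') {
    // Partial value grows token-by-token; report how much has arrived
    return `${update.field}: ${update.value.length} chars so far`;
  }
  // Everything else is a milestone with a message
  return `[${update.stage}] ${update.message}`;
}
```

A function like this can be passed (or called from) the second argument of `agent.run` to log both milestone and streaming events.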
+ ## Examples
+
+ ### Basic Non-Streaming
+ ```typescript
+ const result = await agent.run(input); // No callback, blocks until complete
+ ```
+
+ ### Custom Error Handling
+ ```typescript
+ try {
+   const result = await agent.run(input, (update) => {
+     // Handle updates...
+   });
+ } catch (error) {
+   console.error('Agent failed:', error.message); // e.g., "Invalid API key"
+ }
+ ```
+
+ ### Nested Streaming
+ For deeply nested schemas, fields stream as flat strings (e.g., `analysis_step1: "Renewables reduce..."`). Use Zod's `.describe()` to guide the LLM on formatting.
+
+ ## Testing
+ gx402 is tested with Bun. Run tests:
+
+ ```bash
+ bun test
+ ```
+
+ Example test (from `tests/streaming.test.ts`):
+ ```typescript
+ import { test, expect } from 'bun:test';
+ import { z } from 'zod';
+ import { Agent, ProgressUpdate, StreamingUpdate } from 'gx402';
+
+ test('should emit streaming progress updates token-by-token', async () => {
+   // ... (see full test in repo)
+   expect(streamingUpdates.length).toBeGreaterThan(0);
+   // Verifies progressive building and final match
+ }, { timeout: 60000 });
+ ```
+
+ ## Gemini Multimodal
+
+ gx402 includes first-class Gemini multimodal capabilities:
+
+ ```typescript
+ import { gemini } from 'gx402';
+
+ // Image generation (Imagen 4)
+ const images = await gemini.generateImage({
+   prompt: 'A futuristic city at sunset',
+   aspectRatio: '16:9',
+   imageSize: '2K',
+ });
+
+ // Video generation (Veo 3.1)
+ const video = await gemini.generateVideo({
+   prompt: 'A drone flying over mountains',
+   videoResolution: '1080p',
+   onProgress: (status) => console.log(status),
+ });
+
+ // Music generation (Lyria)
+ const music = await gemini.generateMusic({
+   prompt: 'Upbeat electronic with synth leads',
+   bpm: 128,
+   durationSeconds: 30,
+ });
+
+ // Deep Research
+ const research = await gemini.deepResearch({
+   query: 'Latest advances in quantum computing',
+   onProgress: (status) => console.log(status),
+ });
+ console.log(research.report, research.citations);
+ ```
+
+ ## x402 Payments
+
+ MCP servers can require payment (HTTP 402). gx402 handles this automatically with Solana:
+
+ ```typescript
+ const agent = new Agent({
+   llm: 'gpt-4o-mini',
+   inputFormat: z.object({ query: z.string() }),
+   outputFormat: z.object({ answer: z.string() }),
+   solanaWallet: {
+     privateKey: process.env.SOLANA_PRIVATE_KEY!,
+     rpcUrl: 'https://api.mainnet-beta.solana.com',
+   },
+   servers: [{ name: 'paid-api', description: 'Premium data', url: 'https://api.example.com' }],
+ });
+ ```
+
+ When a server returns 402, gx402 automatically sends SOL payment and retries.
+
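The pay-and-retry flow can be pictured roughly like this (a standalone sketch of the protocol, not gx402's actual internals; the helper name and callback parameters are hypothetical):

```typescript
// Hypothetical sketch of the x402 flow: on HTTP 402, settle payment and retry once.
async function withX402<T>(
  doFetch: () => Promise<{ status: number; body?: T }>, // the original request
  pay: () => Promise<void>,                             // e.g., send the quoted SOL amount
): Promise<{ status: number; body?: T }> {
  let res = await doFetch();
  if (res.status === 402) {
    await pay();           // settle the payment the server asked for
    res = await doFetch(); // retry the original request
  }
  return res;
}
```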
+ ## Analytics Dashboard
+
+ Built-in analytics dashboard for tracking agent performance:
+
+ ```typescript
+ const agent = new Agent({
+   // ...config
+   analyticsUrl: 'http://localhost:3001/api/record',
+ });
+ ```
+
+ Each `agent.run()` call records timing, input/output, tool invocations, and success/failure status.
+
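The `/api/record` endpoint (its route handler appears later in this diff) requires `id`, `agentName`, and `llm`; the remaining fields are defaulted server-side. If you post records yourself, a standalone sketch of that validation (the interface and helper below are illustrative, written to mirror the route's check):

```typescript
// Minimal shape accepted by POST /api/record; optional fields are defaulted server-side
interface RecordPayload {
  id: string;
  agentName: string;
  llm: string;
  timestamp?: number;
  duration?: number;
  status?: string;
}

// Mirrors the route's required-field check; returns an error message or null if valid
function validateRecord(body: Partial<RecordPayload>): string | null {
  if (!body.id || !body.agentName || !body.llm) {
    return 'Missing required fields: id, agentName, llm';
  }
  return null;
}
```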
+ ## Contributing
+ 1. Fork and clone.
+ 2. Install deps: `bun install`.
+ 3. Run tests: `bun test`.
+ 4. Submit PRs to `main`.
+
+ ## License
+ MIT. See [LICENSE](LICENSE).
+
+ Built with ❤️ for structured AI workflows. Questions? Ping `@galaxydoxyz` on X.
@@ -0,0 +1,11 @@
+ import { getAgentStats } from "../../../src/analytics";
+
+ export async function GET() {
+   try {
+     const agents = getAgentStats();
+     return Response.json(agents);
+   } catch (error) {
+     console.error("Error fetching agents:", error);
+     return Response.json({ error: "Failed to fetch agents" }, { status: 500 });
+   }
+ }
@@ -0,0 +1,40 @@
+ import { addRequest, type InferenceRequest } from "../../../src/analytics";
+
+ export async function POST(request: Request) {
+   try {
+     const body: any = await request.json();
+
+     // Validate required fields
+     if (!body.id || !body.agentName || !body.llm) {
+       return Response.json(
+         { error: "Missing required fields: id, agentName, llm" },
+         { status: 400 }
+       );
+     }
+
+     const inferenceRequest: InferenceRequest = {
+       id: body.id,
+       agentName: body.agentName,
+       llm: body.llm,
+       timestamp: body.timestamp || Date.now(),
+       duration: body.duration || 0,
+       status: body.status || 'success',
+       input: body.input || {},
+       output: body.output || {},
+       rawPrompt: body.rawPrompt,
+       rawResponse: body.rawResponse,
+       toolInvocations: body.toolInvocations,
+       error: body.error
+     };
+
+     addRequest(inferenceRequest);
+
+     return Response.json({ success: true, id: inferenceRequest.id });
+   } catch (error) {
+     console.error("Error recording request:", error);
+     return Response.json(
+       { error: "Failed to record request" },
+       { status: 500 }
+     );
+   }
+ }
@@ -0,0 +1,11 @@
+ import { getAllRequests } from "../../../src/analytics";
+
+ export async function GET() {
+   try {
+     const requests = getAllRequests();
+     return Response.json(requests);
+   } catch (error) {
+     console.error("Error fetching requests:", error);
+     return Response.json({ error: "Failed to fetch requests" }, { status: 500 });
+   }
+ }