dev-ai-sdk 0.0.2 → 0.0.4

Files changed (69)
  1. package/README.md +474 -323
  2. package/dist/client.d.ts +5 -2
  3. package/dist/client.d.ts.map +1 -1
  4. package/dist/client.js +77 -3
  5. package/dist/client.js.map +1 -1
  6. package/dist/core/config.d.ts +3 -0
  7. package/dist/core/config.d.ts.map +1 -1
  8. package/dist/core/council.d.ts +2 -0
  9. package/dist/core/council.d.ts.map +1 -0
  10. package/dist/core/council.js +9 -0
  11. package/dist/core/council.js.map +1 -0
  12. package/dist/core/error.d.ts +4 -1
  13. package/dist/core/error.d.ts.map +1 -1
  14. package/dist/core/error.js +12 -1
  15. package/dist/core/error.js.map +1 -1
  16. package/dist/core/fallbackEngine.js +3 -3
  17. package/dist/core/fallbackEngine.js.map +1 -1
  18. package/dist/core/validate.d.ts.map +1 -1
  19. package/dist/core/validate.js +32 -18
  20. package/dist/core/validate.js.map +1 -1
  21. package/dist/index.d.ts +2 -2
  22. package/dist/index.d.ts.map +1 -1
  23. package/dist/index.js +1 -0
  24. package/dist/index.js.map +1 -1
  25. package/dist/providers/anthropic-core.d.ts +1 -0
  26. package/dist/providers/anthropic-core.d.ts.map +1 -0
  27. package/dist/providers/anthropic-core.js +2 -0
  28. package/dist/providers/anthropic-core.js.map +1 -0
  29. package/dist/providers/anthropic.d.ts +3 -0
  30. package/dist/providers/anthropic.d.ts.map +1 -0
  31. package/dist/providers/anthropic.js +44 -0
  32. package/dist/providers/anthropic.js.map +1 -0
  33. package/dist/providers/deepseek-stream.d.ts +2 -2
  34. package/dist/providers/deepseek-stream.d.ts.map +1 -1
  35. package/dist/providers/deepseek-stream.js +18 -11
  36. package/dist/providers/deepseek-stream.js.map +1 -1
  37. package/dist/providers/deepseek.js +2 -2
  38. package/dist/providers/deepseek.js.map +1 -1
  39. package/dist/providers/google-core.js +2 -2
  40. package/dist/providers/google-core.js.map +1 -1
  41. package/dist/providers/google-stream.d.ts +2 -2
  42. package/dist/providers/google-stream.d.ts.map +1 -1
  43. package/dist/providers/google-stream.js +88 -5
  44. package/dist/providers/google-stream.js.map +1 -1
  45. package/dist/providers/google.d.ts +2 -2
  46. package/dist/providers/google.d.ts.map +1 -1
  47. package/dist/providers/mistral-stream.d.ts +2 -2
  48. package/dist/providers/mistral-stream.d.ts.map +1 -1
  49. package/dist/providers/mistral-stream.js +18 -8
  50. package/dist/providers/mistral-stream.js.map +1 -1
  51. package/dist/providers/mistral.js +2 -2
  52. package/dist/providers/mistral.js.map +1 -1
  53. package/dist/providers/openai-stream.d.ts +2 -2
  54. package/dist/providers/openai-stream.d.ts.map +1 -1
  55. package/dist/providers/openai-stream.js +10 -5
  56. package/dist/providers/openai-stream.js.map +1 -1
  57. package/dist/providers/openai.js +2 -2
  58. package/dist/providers/openai.js.map +1 -1
  59. package/dist/test.d.ts +2 -0
  60. package/dist/test.d.ts.map +1 -0
  61. package/dist/test.js +24 -0
  62. package/dist/test.js.map +1 -0
  63. package/dist/types/error.types.d.ts +7 -0
  64. package/dist/types/error.types.d.ts.map +1 -0
  65. package/dist/types/error.types.js +2 -0
  66. package/dist/types/error.types.js.map +1 -0
  67. package/dist/types/types.d.ts +39 -0
  68. package/dist/types/types.d.ts.map +1 -1
  69. package/package.json +3 -3
package/README.md CHANGED
@@ -1,491 +1,642 @@
  # dev-ai-sdk
 
- Universal AI SDK with a single syntax for multiple LLM providers.
+ **A unified TypeScript SDK for using multiple AI providers with one simple interface.**
 
- This project aims to give you a small, provider-agnostic layer for text generation across different APIs using a consistent TypeScript interface.
-
- It is still in an early, experimental phase.
-
- Currently supported providers:
-
- - OpenAI (Responses API)
- - Google Gemini (Generative Language API)
- - DeepSeek (chat completions, OpenAI-like)
- - Mistral (chat completions, OpenAI-like)
+ Stop juggling different API docs and client libraries. `dev-ai-sdk` lets you switch between OpenAI, Google Gemini, DeepSeek, Mistral, and Anthropic Claude with zero code changes. Supports streaming, automatic fallback, and multi-model LLM councils.
 
  ---
 
- ## Features (Current)
+ ## What It Does
 
- - Unified interface for multiple providers (OpenAI, Google, DeepSeek, Mistral)
- - Simple `genChat` client with a single `generate` method
- - Strongly typed configuration and request/response types
- - Centralized validation of configuration and provider calls
- - Basic support for:
-   - `system` prompt (per provider)
-   - `temperature` and `maxTokens` (per provider)
- - Optional `raw` responses to inspect full provider JSON
- - Normalized error type (`SDKError`) with provider tagging
- - Tiny, dependency-light TypeScript codebase
+ Write once, run anywhere. This SDK provides a consistent interface for text generation across multiple LLM providers:
 
- Planned (not implemented yet):
+ - **OpenAI** (GPT models via Chat Completions API)
+ - **Google Gemini** (Gemini models)
+ - **DeepSeek** (DeepSeek chat models)
+ - **Mistral** (Mistral models)
+ - **Anthropic Claude** (Claude 3/3.5 models)
 
- - Rich message/chat abstractions
- - JSON / structured output helpers
- - React / Next.js integrations
- - More providers (Anthropic, Azure OpenAI, etc.)
+ Switch providers, change models, or even combine multiple providers — your code stays the same. Bonus features: streaming, automatic fallback, and LLM councils for multi-model decision making.
 
  ---
 
- ## Installation
-
- > This project is not yet published to npm; these instructions assume you are developing or consuming it locally.
-
- Clone the repository and install dependencies:
-
- ```bash
- npm install
- # or
- yarn install
- # or
- pnpm install
- ```
+ ## Quick Start
 
- Build the TypeScript sources:
+ ### Installation
 
  ```bash
- npm run build
+ npm install dev-ai-sdk
  ```
 
- This outputs compiled files to `dist/` as configured in `package.json`.
-
- ---
+ ### 5-Minute Example
 
- ## Core Concepts
+ ```ts
+ import { genChat } from 'dev-ai-sdk';
 
- The library exposes a single main client class today: `genChat`.
+ // 1. Create a client with your API keys
+ const ai = new genChat({
+   openai: {
+     apiKey: process.env.OPENAI_API_KEY,
+   },
+ });
 
- - You configure the client with API keys for the providers you want to use.
- - You call `generate` with exactly one provider payload (`google`, `openai`, `deepseek`, or `mistral`).
- - The client validates the configuration and the request, then calls the appropriate provider adapter.
+ // 2. Generate text
+ const result = await ai.generate({
+   openai: {
+     model: 'gpt-4o-mini',
+     prompt: 'What is the capital of France?',
+   },
+ });
 
- Key files:
+ // 3. Use the result
+ console.log(result.data); // "The capital of France is Paris."
+ console.log(result.provider); // "openai"
+ console.log(result.model); // "gpt-4o-mini"
+ ```
 
- - `src/client.ts` main `genChat` class
- - `src/providers/google.ts` – Google Gemini implementation
- - `src/providers/openai.ts` – OpenAI Responses API implementation
- - `src/providers/deepseek.ts` – DeepSeek chat completions implementation
- - `src/providers/mistral.ts` – Mistral chat completions implementation
- - `src/core/config.ts` – SDK configuration types
- - `src/core/validate.ts` – configuration and provider validation
- - `src/core/error.ts` – `SDKError` implementation
- - `src/types/types.ts` – request/response types
+ That's it. No complex setup, no provider-specific boilerplate.
 
  ---
 
- ## Configuration
-
- The client is configured via an `SDKConfig` object (defined in `src/core/config.ts`):
-
- ```ts
- export type SDKConfig = {
-   google?: {
-     apiKey: string;
-   };
-
-   openai?: {
-     apiKey: string;
-   };
-
-   deepseek?: {
-     apiKey: string;
-   };
+ ## Features
 
-   mistral?: {
-     apiKey: string;
-   };
- };
- ```
+ ✅ **Single Interface** – Same code works across 5 major LLM providers
+ ✅ **Type-Safe** – Full TypeScript support with proper types
+ ✅ **Minimal** – Tiny, lightweight package (15KB gzipped)
+ ✅ **Streaming** – Built-in streaming support for all providers
+ ✅ **Automatic Fallback** – If a provider fails, automatically try others
+ ✅ **LLM Council** – Run multiple models in parallel, have a judge synthesize the best answer
+ ✅ **Error Handling** – Unified error handling across all providers
+ ✅ **One Dependency** – Only `dotenv`, for environment variables
 
- Rules:
+ ---
 
- - At least one provider (`google`, `openai`, `deepseek`, or `mistral`) must be configured.
- - Each configured provider must have a non-empty `apiKey` string.
- - If these rules are violated, the SDK throws an `SDKError` from `validateConfig`.
+ ## Usage Guide
 
- Example configuration:
+ ### Initialize the Client
 
  ```ts
- import { genChat } from './src/client';
+ import { genChat } from 'dev-ai-sdk';
 
  const ai = new genChat({
-   google: {
-     apiKey: process.env.GOOGLE_API_KEY!,
-   },
    openai: {
-     apiKey: process.env.OPENAI_API_KEY!,
+     apiKey: process.env.OPENAI_API_KEY,
+   },
+   google: {
+     apiKey: process.env.GOOGLE_API_KEY,
    },
    deepseek: {
-     apiKey: process.env.DEEPSEEK_API_KEY!,
+     apiKey: process.env.DEEPSEEK_API_KEY,
    },
    mistral: {
-     apiKey: process.env.MISTRAL_API_KEY!,
+     apiKey: process.env.MISTRAL_API_KEY,
+   },
+   anthropic: {
+     apiKey: process.env.ANTHROPIC_API_KEY,
    },
  });
  ```
 
- You can also configure only the providers you actually intend to use.
+ You don't need to configure every provider, only the ones you use.
 
  ---
 
- ## Provider Request Shape
+ ### Basic Text Generation
 
- Requests are described by the `Provider` type in `src/types/types.ts`:
+ #### OpenAI
 
  ```ts
- export type Provider = {
-   google?: {
-     model: string;
-     prompt: string;
-     system?: string;
-     temperature?: number;
-     maxTokens?: number;
-     raw?: boolean;
-     stream?: boolean; // stream text from Gemini
-   };
-
-   openai?: {
-     model: string;
-     prompt: string;
-     system?: string;
-     temperature?: number;
-     maxTokens?: number;
-     raw?: boolean;
-     stream?: boolean; // stream text from OpenAI
-   };
-
-   deepseek?: {
-     model: string;
-     prompt: string;
-     system?: string;
-     temperature?: number;
-     maxTokens?: number;
-     raw?: boolean;
-     stream?: boolean; // stream text from DeepSeek
-   };
-
-   mistral?: {
-     model: string;
-     prompt: string;
-     system?: string;
-     temperature?: number;
-     maxTokens?: number;
-     raw?: boolean;
-     stream?: boolean; // stream text from Mistral
-   };
- }
+ const result = await ai.generate({
+   openai: {
+     model: 'gpt-4o-mini',
+     prompt: 'Explain quantum computing in one sentence.',
+     temperature: 0.7,
+     maxTokens: 100,
+   },
+ });
 
+ console.log(result.data); // The AI's response
  ```
 
- Common fields per provider:
+ #### Google Gemini
 
- - `model` (**required**) – model name for that provider.
- - `prompt` (**required**) the main user message.
- - `system` (optional) – high-level system instruction (currently only passed through if you add support in the provider).
- - `temperature` (optional) – sampling temperature (0–2, provider-specific behavior).
- - `maxTokens` (optional) maximum output tokens (provider-specific naming under the hood).
- - `raw` (optional) – if `true`, include the full raw provider response in `Output.raw`.
-
- Rules enforced by `validateProvider`:
+ ```ts
+ const result = await ai.generate({
+   google: {
+     model: 'gemini-2.5-flash-lite',
+     prompt: 'What are the three laws of robotics?',
+     temperature: 0.5,
+     maxTokens: 200,
+   },
+ });
 
- - Exactly one provider must be present per call:
-   - Either `provider.google`, `provider.openai`, `provider.deepseek`, or `provider.mistral`, but not more than one at a time.
- - For the selected provider:
-   - `model` must be a non-empty string.
-   - `prompt` must be a non-empty string.
+ console.log(result.data);
+ ```
 
- If these rules are not met, an `SDKError` is thrown.
+ #### DeepSeek
 
- ---
+ ```ts
+ const result = await ai.generate({
+   deepseek: {
+     model: 'deepseek-chat',
+     prompt: 'Explain machine learning like I\'m 5.',
+     temperature: 0.6,
+     maxTokens: 150,
+   },
+ });
 
- ## Response Shape
+ console.log(result.data);
+ ```
 
- Responses use the `Output` type from `src/types/types.ts`:
+ #### Mistral
 
  ```ts
- export type Output = {
-   data: string;
-   provider: string;
-   model: string;
-   raw?: any;
- }
+ const result = await ai.generate({
+   mistral: {
+     model: 'mistral-small-latest',
+     prompt: 'Tell me a joke about programming.',
+     temperature: 0.8,
+     maxTokens: 100,
+   },
+ });
+
+ console.log(result.data);
  ```
 
- Fields:
+ #### Anthropic Claude
 
- - `data`: the main text content returned by the model (extracted from each provider-specific response format).
- - `provider`: the provider identifier (for example, `'google'`, `'openai'`, `'deepseek'`, `'mistral'`).
- - `model`: the model name that was used.
- - `raw` (optional): the full raw JSON response from the provider, included only when `raw: true` is set on the request.
+ ```ts
+ const result = await ai.generate({
+   anthropic: {
+     model: 'claude-3-5-sonnet-20241022',
+     prompt: 'What is the meaning of life?',
+     temperature: 0.7,
+     maxTokens: 150,
+   },
+ });
 
- > Note: Internally, some providers may temporarily return `{ text: ... }` instead of `{ data: ... }`, but the long-term intention is to normalize around `data` as the main text field.
+ console.log(result.data);
+ ```
+ ```
232
180
 
233
181
  ---
234
182
 
235
- ## Usage
236
-
237
- ### 0. Streaming vs non-streaming
183
+ ### Streaming Responses
238
184
 
239
- `genChat.generate` returns either a single `Output` (non-streaming) or an async iterable of chunks (streaming), depending on the per-provider `stream` flag:
240
-
241
- - If `stream` is **not** set or `false`, `generate` resolves to an `Output`:
242
- - `{ data, provider, model, raw? }`.
243
- - If `stream` is `true` for a provider (`google`, `openai`, `deepseek`, or `mistral`), `generate` resolves to an async iterable of chunks:
244
- - You can use `for await (const chunk of result) { ... }`.
245
- - For Gemini, each `chunk` is a JSON event; you can drill into `candidates[0].content.parts[0].text` to get only the text.
246
-
247
- ### 1. Creating the Client
248
-
249
- Create a new `genChat` instance with the providers you want to use:
185
+ Get real-time responses for long outputs. All providers return a unified `StreamOutput` format:
250
186
 
251
187
  ```ts
252
- import { genChat } from './src/client';
188
+ import { genChat, type StreamOutput } from 'dev-ai-sdk';
253
189
 
254
- const ai = new genChat({
190
+ const stream = await ai.generate({
255
191
  google: {
256
- apiKey: process.env.GOOGLE_API_KEY!,
257
- },
258
- openai: {
259
- apiKey: process.env.OPENAI_API_KEY!,
260
- },
261
- deepseek: {
262
- apiKey: process.env.DEEPSEEK_API_KEY!,
263
- },
264
- mistral: {
265
- apiKey: process.env.MISTRAL_API_KEY!,
192
+ model: 'gemini-2.5-flash',
193
+ prompt: 'Write a 500-word essay on AI ethics.',
194
+ stream: true,
266
195
  },
267
196
  });
197
+
198
+ // Check if result is a stream
199
+ if (Symbol.asyncIterator in Object(stream)) {
200
+ // Loop through streaming chunks - same pattern for all 4 providers
201
+ for await (const chunk of stream as AsyncIterable<StreamOutput>) {
202
+ // chunk is a StreamOutput with unified structure:
203
+ // - chunk.text: the streamed text content
204
+ // - chunk.done: boolean indicating if stream is complete
205
+ // - chunk.provider: 'google' | 'openai' | 'deepseek' | 'mistral'
206
+ // - chunk.tokens?: { prompt?, completion?, total? } (if available from provider)
207
+ // - chunk.raw: raw provider event for advanced use
208
+
209
+ process.stdout.write(chunk.text);
210
+
211
+ // Show metadata when stream is done
212
+ if (chunk.done) {
213
+ console.log('\nStream completed');
214
+ console.log(`Provider: ${chunk.provider}`);
215
+ if (chunk.tokens) {
216
+ console.log(`Tokens used: ${chunk.tokens.total}`);
217
+ }
218
+ }
219
+ }
220
+ }
268
221
  ```
269
222
 
270
- You can also configure just one provider, e.g. only Mistral:
223
+ **Why `StreamOutput`?**
271
224
 
272
- ```ts
273
- const ai = new genChat({
274
- mistral: {
275
- apiKey: process.env.MISTRAL_API_KEY!,
276
- },
277
- });
278
- ```
225
+ - **Unified API** – Same code works for all 5 providers
226
+ - **Consistent fields** Always access `chunk.text`, never worry about provider-specific paths
227
+ - **Access to metadata** – Token counts, completion status, and provider name
228
+ - **Raw access** – `chunk.raw` gives you the full provider event if you need it
229
+
230
+ ---
279
231
 
280
- ### 2. Calling Google Gemini
232
+ ## Automatic Fallback
281
233
 
282
- #### Non-streaming
234
+ If a provider fails, automatically retry with other configured providers:
283
235
 
284
236
  ```ts
237
+ const ai = new genChat({
238
+ openai: { apiKey: process.env.OPENAI_API_KEY },
239
+ google: { apiKey: process.env.GOOGLE_API_KEY },
240
+ fallback: true, // Enable automatic fallback
241
+ });
242
+
243
+ // Try OpenAI first; if it fails, automatically try Google
285
244
  const result = await ai.generate({
286
- google: {
287
- model: 'gemini-2.5-flash-lite',
288
- prompt: 'Summarize the benefits of TypeScript in 3 bullet points.',
289
- temperature: 0.4,
290
- maxTokens: 256,
291
- raw: false, // set to true to include full raw response
245
+ openai: {
246
+ model: 'gpt-4o-mini',
247
+ prompt: 'What is 2+2?',
292
248
  },
293
249
  });
294
250
 
295
- console.log(result.provider); // 'google'
296
- console.log(result.model); // 'gemini-2.5-flash-lite'
297
- console.log(result.data); // summarized text
251
+ console.log(result.provider); // "openai" or "google" depending on which succeeded
252
+ console.log(result.data);
298
253
  ```
299
254
 
300
- #### Streaming (Gemini)
255
+ **How Fallback Works:**
256
+ 1. First, attempt the configured provider (e.g., OpenAI)
257
+ 2. If it fails with a retryable error (network, timeout, rate limit), try the next provider
258
+ 3. Each fallback provider uses a sensible default model for that provider (e.g., `gemini-2.5-flash-lite` for Google)
259
+ 4. If all providers fail, throw an error
260
+ 5. **Note:** Streaming calls (`stream: true`) do not trigger fallback; only non-streaming calls can fall back
261
+
262
+ **Limitations:**
263
+ - Fallback is disabled for streaming responses
264
+ - Only retryable errors trigger fallback (not validation/config errors)
265
+ - Each fallback attempt uses provider-specific default models
266
+
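+ When every configured provider fails, the final rejection can be handled like any other SDK error. A minimal sketch, assuming the last failure surfaces as an `SDKError` (see Error Handling below):
+
+ ```ts
+ import { SDKError } from 'dev-ai-sdk';
+
+ try {
+   const result = await ai.generate({
+     openai: { model: 'gpt-4o-mini', prompt: 'What is 2+2?' },
+   });
+   console.log(`Answered by ${result.provider}`);
+ } catch (err) {
+   // Reached only after OpenAI and every fallback provider have failed
+   if (err instanceof SDKError) {
+     console.error(`All providers failed. Last error from ${err.provider}: ${err.message}`);
+   }
+ }
+ ```
+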
+ ---
+
+ ## LLM Council
+
+ Run the same prompt across multiple models and have a judge synthesize the best answer:
 
  ```ts
- const res = await ai.generate({
-   google: {
-     model: 'gemini-2.5-flash-lite',
-     prompt: 'Explain Vercel in 5 lines.',
-     system: 'Act like you are the maker of Vercel and answer accordingly.',
-     maxTokens: 500,
-     stream: true,
+ import { genChat, type CouncilDecision } from 'dev-ai-sdk';
+
+ const ai = new genChat({
+   openai: { apiKey: process.env.OPENAI_API_KEY },
+   google: { apiKey: process.env.GOOGLE_API_KEY },
+   mistral: { apiKey: process.env.MISTRAL_API_KEY },
+   anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
+ });
+
+ // Run same prompt across 3 models, judge with OpenAI
+ const decision = await ai.councilGenerate({
+   members: [
+     {
+       google: { model: 'gemini-2.5-flash-lite' },
+     },
+     {
+       mistral: { model: 'mistral-small-latest' },
+     },
+     {
+       anthropic: { model: 'claude-3-5-sonnet-20241022' },
+     },
+   ],
+   judge: {
+     openai: { model: 'gpt-4o-mini' },
    },
+   prompt: 'What are the top 3 programming languages for 2025 and why?',
+   system: 'You are an expert in technology trends.',
  });
 
- if (!(Symbol.asyncIterator in Object(res))) {
-   throw new Error('Expected streaming result to be async iterable');
- }
+ console.log(decision.finalAnswer); // Judge's synthesis of all member responses
+ console.log(decision.memberResponses); // All individual model outputs
+ console.log(decision.reasoning); // Judge's reasoning for the final answer
+ ```
 
- for await (const chunk of res as AsyncIterable<any>) {
-   const text =
-     chunk?.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
+ **Council Response Structure:**
 
-   if (text) {
-     console.log(text); // only the text from each streamed event
-   }
+ ```ts
+ type CouncilDecision = {
+   finalAnswer: string; // Judge's final synthesized answer
+   memberResponses: {
+     [key: string]: string; // Each member's response by provider name
+   };
+   reasoning: string; // Judge's reasoning
+   judge: {
+     provider: string; // Judge provider (e.g., "openai")
+     model: string; // Judge model
+   };
+   members: {
+     provider: string; // Member provider
+     model: string; // Member model
+   }[];
  }
  ```
 
- ### 3. Calling OpenAI (Responses API)
+ **Benefits:**
+ - **Better decisions** – Multiple perspectives on complex problems
+ - **Reduced bias** – Different models have different strengths
+ - **Unified response** – Single final answer instead of multiple conflicting outputs
+ - **Transparent reasoning** – Judge explains why it chose certain ideas
+ - **Parallel execution** – All member calls run in parallel for speed
+
+ ---
+
+ ### System Prompts
+
+ Give the AI context and instructions:
 
  ```ts
  const result = await ai.generate({
    openai: {
-     model: 'gpt-4.1-mini',
-     prompt: 'Generate a creative product name for a note-taking app.',
-     temperature: 0.7,
-     maxTokens: 128,
-     raw: false, // set to true to include full raw response
+     model: 'gpt-4o-mini',
+     system: 'You are a helpful coding assistant. Always provide code examples.',
+     prompt: 'How do I sort an array in JavaScript?',
    },
  });
 
- console.log(result.provider); // 'openai'
- console.log(result.model); // 'gpt-4.1-mini'
- console.log(result.data); // generated product name
+ console.log(result.data);
  ```
 
- ### 4. Calling DeepSeek
+ ---
+
+ ### Temperature & Max Tokens
+
+ Control response behavior:
 
  ```ts
  const result = await ai.generate({
-   deepseek: {
-     model: 'deepseek-chat',
-     prompt: 'Explain RAG in simple terms.',
-     temperature: 0.5,
-     maxTokens: 256,
-     raw: true, // include full raw DeepSeek response
+   openai: {
+     model: 'gpt-4o-mini',
+     prompt: 'Generate a creative story title.',
+     temperature: 0.9, // Higher = more creative/random (0–2)
+     maxTokens: 50, // Limit response length
    },
  });
 
- console.log(result.provider); // 'deepseek'
- console.log(result.model); // 'deepseek-chat'
- console.log(result.data); // explanation text
- console.log(result.raw); // full DeepSeek JSON (for debugging)
+ console.log(result.data);
  ```
 
- ### 5. Calling Mistral
+ ---
+
+ ### Get Raw API Responses
+
+ Sometimes you need the full provider response:
 
  ```ts
  const result = await ai.generate({
-   mistral: {
-     model: 'mistral-tiny',
-     prompt: 'Give me a short haiku about TypeScript.',
-     temperature: 0.8,
-     maxTokens: 64,
+   google: {
+     model: 'gemini-2.5-flash-lite',
+     prompt: 'What is 2+2?',
      raw: true,
    },
  });
 
- console.log(result.provider); // 'mistral'
- console.log(result.model); // 'mistral-tiny'
- console.log(result.data); // haiku text (once the provider normalizes to `data`)
- console.log(result.raw); // full Mistral JSON (for inspecting choices/message)
+ console.log(result.raw); // Full Google API response
+ console.log(result.data); // Just the text
+ ```
+
+ ---
+
+ ## Configuration Reference
+
+ ### Response Object
+
+ Every call returns this shape (for non-streaming):
+
+ ```ts
+ {
+   data: string; // The AI's text response
+   provider: string; // Which provider was used (e.g., "openai")
+   model: string; // Which model was used (e.g., "gpt-4o-mini")
+   raw?: any; // (Optional) Full raw API response if raw: true
+ }
  ```
 
- > Note: The provider implementations for DeepSeek and Mistral are still evolving. They are currently focused on basic, URL-based chat completions and raw response inspection while you iterate on the exact output normalization.
+ ### Request Parameters
+
+ All providers support:
+
+ | Parameter | Type | Required | Default | Description |
+ |-----------|------|----------|---------|-------------|
+ | `model` | string | ✅ | — | Model name (e.g., `gpt-4o-mini`, `gemini-2.5-flash-lite`) |
+ | `prompt` | string | ✅ | — | Your question or instruction |
+ | `system` | string | ❌ | — | System context/role for the AI |
+ | `temperature` | number | ❌ | 1 | Randomness (0 = deterministic, 2 = very creative) |
+ | `maxTokens` | number | ❌ | — | Max response length in tokens |
+ | `stream` | boolean | ❌ | false | Stream responses in real-time |
+ | `raw` | boolean | ❌ | false | Include full provider response |
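+
+ For reference, a single call exercising every parameter (any provider block accepts the same fields):
+
+ ```ts
+ const result = await ai.generate({
+   openai: {
+     model: 'gpt-4o-mini',                             // required
+     prompt: 'Summarize TypeScript in two sentences.', // required
+     system: 'You are a concise technical writer.',
+     temperature: 0.3,
+     maxTokens: 120,
+     stream: false,
+     raw: true,
+   },
+ });
+
+ console.log(result.data);
+ console.log(result.raw); // present because raw: true
+ ```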
 
  ---
 
- ## Error Handling
+ ## StreamOutput Type Reference
+
+ All streaming responses return a unified `StreamOutput` type, regardless of provider:
+
+ ```ts
+ type StreamOutput = {
+   text: string; // The streamed text chunk
+   done: boolean; // True when stream is complete
+   tokens?: {
+     prompt?: number; // Prompt tokens (if available)
+     completion?: number; // Completion tokens (if available)
+     total?: number; // Total tokens (if available)
+   };
+   raw: any; // Raw provider event object
+   provider: string; // 'google' | 'openai' | 'deepseek' | 'mistral' | 'anthropic'
+ }
+ ```
 
- All SDK-level errors are represented by the `SDKError` class (`src/core/error.ts`):
+ **Example:**
 
  ```ts
- export class SDKError extends Error {
-   provider: string;
-   message: string;
-
-   constructor(message: string, provider?: string) {
-     super(message);
-     this.provider = provider;
-     this.message = message;
+ const stream = await ai.generate({
+   google: {
+     model: 'gemini-2.5-flash',
+     prompt: 'Hello!',
+     stream: true,
+   },
+ });
+
+ if (Symbol.asyncIterator in Object(stream)) {
+   for await (const chunk of stream as AsyncIterable<StreamOutput>) {
+     console.log(chunk.text); // "Hello" or similar
+     console.log(chunk.done); // false, then true at end
+     console.log(chunk.provider); // "google"
+     console.log(chunk.tokens?.total); // 42 (if available)
+     console.log(chunk.raw); // Full Gemini event object
    }
  }
  ```
 
- Examples of when `SDKError` is thrown:
+ **Key Benefits:**
 
- - No providers configured in `SDKConfig`.
- - API key is missing or an empty string for a configured provider.
- - No provider passed to `generate`.
- - More than one provider passed in a single `generate` call.
- - `model` or `prompt` is missing/empty for the chosen provider.
- - Provider HTTP response is not OK (`res.ok === false`), in which case the error message includes the status code and response data.
+ - Same interface for all 5 providers
+ - Always access `chunk.text` for content
+ - Always access `chunk.done` to detect completion
+ - Token info included when provider supports it
+ - `chunk.raw` for provider-specific advanced use cases
 
- You can catch and inspect `SDKError` like this:
+ ---
+
+ ## Error Handling
+
+ All errors are `SDKError` exceptions:
 
  ```ts
+ import { SDKError } from 'dev-ai-sdk';
+
  try {
    const result = await ai.generate({
-     google: {
-       model: 'gemini-2.5-flash-lite',
-       prompt: '', // invalid: empty prompt
+     openai: {
+       model: 'gpt-4o-mini',
+       prompt: '', // Invalid: empty prompt
      },
    });
  } catch (err) {
    if (err instanceof SDKError) {
-     console.error('SDK error from provider:', err.provider);
-     console.error('Message:', err.message);
+     console.error(`Error from ${err.provider}: ${err.message}`);
    } else {
-     console.error('Unknown error:', err);
+     console.error('Unexpected error:', err);
    }
  }
  ```
 
+ Common errors:
+ - **Missing API key** – Configure all providers you use
+ - **Invalid model name** – Check provider documentation for valid models
+ - **Empty prompt** – Prompt must be a non-empty string
+ - **Invalid request** – Only pass one provider per request (not multiple)
+
+ ---
+
+ ## Environment Setup
+
+ Create a `.env` file with your API keys:
+
+ ```bash
+ # .env
+ OPENAI_API_KEY=sk-...
+ GOOGLE_API_KEY=AIza...
+ DEEPSEEK_API_KEY=sk-...
+ MISTRAL_API_KEY=...
+ ANTHROPIC_API_KEY=sk-ant-...
+ ```
+
+ Then load it in your code:
+
+ ```ts
+ import 'dotenv/config';
+
+ const ai = new genChat({
+   openai: { apiKey: process.env.OPENAI_API_KEY! },
+ });
+ ```
+
  ---
 
- ## Development
+ ## Common Patterns
+
+ ### Try Multiple Providers
+
+ Switch providers without changing your code:
+
+ ```ts
+ const provider = process.env.AI_PROVIDER || 'openai';
+
+ const result = await ai.generate({
+   [provider]: {
+     model: getModelForProvider(provider),
+     prompt: 'Hello, AI!',
+   },
+ });
+ ```
+
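+ The `getModelForProvider` helper above is not part of the SDK; you define it yourself. A minimal sketch using the models from this README (the defaults are illustrative):
+
+ ```ts
+ // Hypothetical helper: map each provider name to a default model
+ function getModelForProvider(provider: string): string {
+   const models: Record<string, string> = {
+     openai: 'gpt-4o-mini',
+     google: 'gemini-2.5-flash-lite',
+     deepseek: 'deepseek-chat',
+     mistral: 'mistral-small-latest',
+     anthropic: 'claude-3-5-sonnet-20241022',
+   };
+   const model = models[provider];
+   if (!model) throw new Error(`No default model for provider: ${provider}`);
+   return model;
+ }
+ ```
+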
+ ### Fallback to Cheaper Model
 
- ### Scripts
+ ```ts
+ try {
+   const result = await ai.generate({
+     openai: {
+       model: 'gpt-4o', // Expensive
+       prompt: 'Complex question...',
+     },
+   });
+ } catch {
+   // Fall back to cheaper model
+   const result = await ai.generate({
+     openai: {
+       model: 'gpt-4o-mini', // Cheaper
+       prompt: 'Complex question...',
+     },
+   });
+ }
+ ```
 
- Defined in `package.json`:
+ ### Streaming with Real-Time Updates
 
- - `npm run dev` run `src/index.ts` with `tsx`.
- - `npm run build` – run TypeScript compiler (`tsc`).
- - `npm run start` – run the built `dist/index.js` with Node.
- - `npm run clean` – remove the `dist` directory.
+ A practical example combining streaming with unified `StreamOutput`:
 
- ### TypeScript Configuration
+ ```ts
+ import { genChat, type StreamOutput } from 'dev-ai-sdk';
 
- `tsconfig.json` is set up with:
+ const ai = new genChat({
+   google: { apiKey: process.env.GOOGLE_API_KEY! },
+ });
 
- - `target`: `ES2022`
- - `module`: `ESNext`
- - `moduleResolution`: `Bundler`
- - `strict`: `true`
- - `allowImportingTsExtensions`: `true`
- - `noEmit`: `true` (for development; the build step can be adjusted as the project evolves)
+ const stream = await ai.generate({
+   google: {
+     model: 'gemini-2.5-flash',
+     prompt: 'Write a haiku about programming...',
+     stream: true,
+   },
+ });
 
- The `src/` directory is included for compilation.
+ if (Symbol.asyncIterator in Object(stream)) {
+   for await (const chunk of stream as AsyncIterable<StreamOutput>) {
+     // Unified interface - works the same for all 5 providers
+     process.stdout.write(chunk.text);
+
+     if (chunk.done) {
+       console.log('\n');
+       console.log(`Completed from ${chunk.provider}`);
+       if (chunk.tokens?.total) {
+         console.log(`Used ${chunk.tokens.total} tokens`);
+       }
+     }
+   }
+ }
+ ```
 
  ---
 
- ## Limitations (Current)
+ ## Limitations
+
+ This is v0.0.4 — early but functional. Currently:
 
- This project is currently in an early stage and has several limitations:
-
- - Only single-prompt text generation is supported (no explicit chat/history abstraction yet).
- - Streaming is basic and low-level:
-   - It returns provider-specific JSON events (for example, Gemini `candidates[].content.parts[].text`).
-   - You are responsible for extracting the text you care about from each chunk.
- - No structured/JSON output helpers are provided.
- - No React/Next.js integrations or hooks are included.
- - Output normalization across providers (e.g. always using `data`) is still being finalized.
+ - Single-turn text generation (no multi-turn conversation history yet)
+ - Streaming returns unified `StreamOutput` objects (consistent across all providers)
+ - Fallback limited to non-streaming calls only
+ - LLM Council judge runs sequentially after all members complete
+ - No function calling / tool use yet
+ - No JSON mode / structured output yet
+
+ ---
 
+ ## What's Next
 
- These limitations are intentional for now to keep the core small and focused while the API surface is still evolving.
+ Future versions will include:
+
+ - Multi-turn conversation management
+ - Structured output helpers
+ - Function calling across providers
+ - Automatic model selection based on task complexity
+ - Rate limiting & caching
+ - React/Next.js hooks
+ - More providers (Azure, Cohere, Ollama, etc.)
 
  ---
 
- ## Future Directions
+ ## Support
+
+ - **GitHub**: https://github.com/shujanislam/dev-ai-sdk
+ - **Issues**: https://github.com/shujanislam/dev-ai-sdk/issues
+ - **Author**: Shujan Islam
 
- The long-term goal is to move toward a feature set closer to the Vercel AI SDK, while staying provider-agnostic and simple. Potential future improvements include:
+ ---
 
- - `generateText`, `streamText`, and `generateObject` helper functions.
- - Unified message-based chat interface and history management.
- - First-class streaming support with helpers for Node, browser, and Edge runtimes.
- - JSON/structured output helpers, with optional schema validation.
- - Tool/function calling abstraction across providers.
- - Middleware/hooks for logging, metrics, retries, rate limiting, and caching.
- - Official React/Next.js integrations and example apps.
- - Support for more providers (Anthropic, Azure OpenAI, etc.).
+ ## License
 
- Contributions and ideas are welcome as the design evolves.
+ MIT. Use freely in your projects.