@everworker/oneringai 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,1228 @@
1
+ # @everworker/oneringai
2
+
3
+ > **A unified AI agent library with multi-provider support for text generation, image/video generation, audio (TTS/STT), and agentic workflows.**
4
+
5
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
6
+ [![TypeScript](https://img.shields.io/badge/TypeScript-5.3-blue.svg)](https://www.typescriptlang.org/)
7
+ [![Node.js](https://img.shields.io/badge/Node.js-18+-green.svg)](https://nodejs.org/)
8
+
9
+ ## Features
10
+
11
+ - ✨ **Unified API** - One interface for 10+ AI providers (OpenAI, Anthropic, Google, Groq, DeepSeek, and more)
12
+ - 🔑 **Connector-First Architecture** - Single auth system with support for multiple keys per vendor
13
+ - 📊 **Model Registry** - Complete metadata for 23+ latest (2026) models with pricing and features
14
+ - 🎤 **Audio Capabilities** - Text-to-Speech (TTS) and Speech-to-Text (STT) with OpenAI and Groq
15
+ - 🖼️ **Image Generation** - DALL-E 3, gpt-image-1, Google Imagen 4 with editing and variations
16
+ - 🎬 **Video Generation** - NEW: OpenAI Sora 2 and Google Veo 3 for AI video creation
17
+ - 🔍 **Web Search** - Connector-based search with Serper, Brave, Tavily, and RapidAPI providers
18
+ - 🔌 **NextGen Context** - Clean, plugin-based context management with `AgentContextNextGen`
19
+ - 🎛️ **Dynamic Tool Management** - Enable/disable tools at runtime, namespaces, priority-based selection
20
+ - 🔌 **Tool Execution Plugins** - NEW: Pluggable pipeline for logging, analytics, UI updates, custom behavior
21
+ - 💾 **Session Persistence** - Save and resume conversations with full state restoration
22
+ - 🤖 **Universal Agent** - ⚠️ *Deprecated* - Use `Agent` with plugins instead
23
+ - 🤖 **Task Agents** - ⚠️ *Deprecated* - Use `Agent` with `WorkingMemoryPluginNextGen`
24
+ - 🔬 **Research Agent** - ⚠️ *Deprecated* - Use `Agent` with search tools
25
+ - 🎯 **Context Management** - Smart strategies (proactive, aggressive, lazy, rolling-window, adaptive)
26
+ - 📌 **InContextMemory** - NEW: Live key-value storage directly in LLM context for instant access
27
+ - 📝 **Persistent Instructions** - NEW: Agent-level custom instructions that persist across sessions on disk
28
+ - 🛠️ **Agentic Workflows** - Built-in tool calling and multi-turn conversations
29
+ - 🔧 **Developer Tools** - NEW: Filesystem and shell tools for coding assistants (read, write, edit, grep, glob, bash)
30
+ - 🔌 **MCP Integration** - NEW: Model Context Protocol client for seamless tool discovery from local and remote servers
31
+ - 👁️ **Vision Support** - Analyze images with AI across all providers
32
+ - 📋 **Clipboard Integration** - Paste screenshots directly (like Claude Code!)
33
+ - 🔐 **OAuth 2.0** - Full OAuth support for external APIs with encrypted token storage
34
+ - 📦 **Vendor Templates** - NEW: Pre-configured auth templates for 43+ services (GitHub, Slack, Stripe, etc.)
35
+ - 🔄 **Streaming** - Real-time responses with event streams
36
+ - 📝 **TypeScript** - Full type safety and IntelliSense support
37
+
38
+ ## Quick Start
39
+
40
+ ### Installation
41
+
42
+ ```bash
43
+ npm install @everworker/oneringai
44
+ ```
45
+
46
+ ### Basic Usage
47
+
48
+ ```typescript
49
+ import { Connector, Agent, Vendor } from '@everworker/oneringai';
50
+
51
+ // 1. Create a connector (authentication)
52
+ Connector.create({
53
+ name: 'openai',
54
+ vendor: Vendor.OpenAI,
55
+ auth: { type: 'api_key', apiKey: process.env.OPENAI_API_KEY! },
56
+ });
57
+
58
+ // 2. Create an agent
59
+ const agent = Agent.create({
60
+ connector: 'openai',
61
+ model: 'gpt-4',
62
+ });
63
+
64
+ // 3. Run
65
+ const response = await agent.run('What is the capital of France?');
66
+ console.log(response.output_text);
67
+ // Output: "The capital of France is Paris."
68
+ ```
69
+
70
+ ### With Tools
71
+
72
+ ```typescript
73
+ import { ToolFunction } from '@everworker/oneringai';
74
+
75
+ const weatherTool: ToolFunction = {
76
+ definition: {
77
+ type: 'function',
78
+ function: {
79
+ name: 'get_weather',
80
+ description: 'Get current weather',
81
+ parameters: {
82
+ type: 'object',
83
+ properties: {
84
+ location: { type: 'string' },
85
+ },
86
+ required: ['location'],
87
+ },
88
+ },
89
+ },
90
+ execute: async (args) => {
91
+ return { temp: 72, location: args.location };
92
+ },
93
+ };
94
+
95
+ const agent = Agent.create({
96
+ connector: 'openai',
97
+ model: 'gpt-4',
98
+ tools: [weatherTool],
99
+ });
100
+
101
+ await agent.run('What is the weather in Paris?');
102
+ ```
103
+
104
+ ### Vision
105
+
106
+ ```typescript
107
+ import { createMessageWithImages } from '@everworker/oneringai';
108
+
109
+ const agent = Agent.create({
110
+ connector: 'openai',
111
+ model: 'gpt-4o',
112
+ });
113
+
114
+ const response = await agent.run(
115
+ createMessageWithImages('What is in this image?', ['./photo.jpg'])
116
+ );
117
+ ```
118
+
119
+ ### Audio (NEW)
120
+
121
+ ```typescript
122
+ import { TextToSpeech, SpeechToText } from '@everworker/oneringai';
123
+
124
+ // Text-to-Speech
125
+ const tts = TextToSpeech.create({
126
+ connector: 'openai',
127
+ model: 'tts-1-hd',
128
+ voice: 'nova',
129
+ });
130
+
131
+ await tts.toFile('Hello, world!', './output.mp3');
132
+
133
+ // Speech-to-Text
134
+ const stt = SpeechToText.create({
135
+ connector: 'openai',
136
+ model: 'whisper-1',
137
+ });
138
+
139
+ const result = await stt.transcribeFile('./audio.mp3');
140
+ console.log(result.text);
141
+ ```
142
+
143
+ ### Image Generation (NEW)
144
+
145
+ ```typescript
146
+ import { promises as fs } from 'node:fs';
+ import { ImageGeneration } from '@everworker/oneringai';
147
+
148
+ // OpenAI DALL-E
149
+ const imageGen = ImageGeneration.create({ connector: 'openai' });
150
+
151
+ const result = await imageGen.generate({
152
+ prompt: 'A futuristic city at sunset',
153
+ model: 'dall-e-3',
154
+ size: '1024x1024',
155
+ quality: 'hd',
156
+ });
157
+
158
+ // Save to file
159
+ const buffer = Buffer.from(result.data[0].b64_json!, 'base64');
160
+ await fs.writeFile('./output.png', buffer);
161
+
162
+ // Google Imagen
163
+ const googleGen = ImageGeneration.create({ connector: 'google' });
164
+
165
+ const googleResult = await googleGen.generate({
166
+ prompt: 'A colorful butterfly in a garden',
167
+ model: 'imagen-4.0-generate-001',
168
+ });
169
+ ```
170
+
171
+ ### Video Generation (NEW)
172
+
173
+ ```typescript
174
+ import { promises as fs } from 'node:fs';
+ import { VideoGeneration } from '@everworker/oneringai';
175
+
176
+ // OpenAI Sora
177
+ const videoGen = VideoGeneration.create({ connector: 'openai' });
178
+
179
+ // Start video generation (async - returns a job)
180
+ const job = await videoGen.generate({
181
+ prompt: 'A cinematic shot of a sunrise over mountains',
182
+ model: 'sora-2',
183
+ duration: 8,
184
+ resolution: '1280x720',
185
+ });
186
+
187
+ // Wait for completion
188
+ const result = await videoGen.waitForCompletion(job.jobId);
189
+
190
+ // Download the video
191
+ const videoBuffer = await videoGen.download(job.jobId);
192
+ await fs.writeFile('./output.mp4', videoBuffer);
193
+
194
+ // Google Veo
195
+ const googleVideo = VideoGeneration.create({ connector: 'google' });
196
+
197
+ const veoJob = await googleVideo.generate({
198
+ prompt: 'A butterfly flying through a garden',
199
+ model: 'veo-3.0-generate-001',
200
+ duration: 8,
201
+ });
202
+ ```
203
+
204
+ ### Web Search
205
+
206
+ Connector-based web search with multiple providers:
207
+
208
+ ```typescript
209
+ import { Connector, SearchProvider, Services, webSearch, Agent } from '@everworker/oneringai';
210
+
211
+ // Create search connector
212
+ Connector.create({
213
+ name: 'serper-main',
214
+ serviceType: Services.Serper,
215
+ auth: { type: 'api_key', apiKey: process.env.SERPER_API_KEY! },
216
+ baseURL: 'https://google.serper.dev',
217
+ });
218
+
219
+ // Option 1: Use SearchProvider directly
220
+ const search = SearchProvider.create({ connector: 'serper-main' });
221
+ const results = await search.search('latest AI developments 2026', {
222
+ numResults: 10,
223
+ country: 'us',
224
+ language: 'en',
225
+ });
226
+
227
+ // Option 2: Use with Agent
228
+ const agent = Agent.create({
229
+ connector: 'openai',
230
+ model: 'gpt-4',
231
+ tools: [webSearch],
232
+ });
233
+
234
+ await agent.run('Search for quantum computing news and summarize');
235
+ ```
236
+
237
+ **Supported Search Providers:**
238
+ - **Serper** - Google search via Serper.dev (2,500 free queries)
239
+ - **Brave** - Independent search index (privacy-focused)
240
+ - **Tavily** - AI-optimized search with summaries
241
+ - **RapidAPI** - Real-time web search (various pricing)
242
+
243
+ ### Web Scraping
244
+
245
+ Enterprise web scraping with automatic fallback and bot protection bypass:
246
+
247
+ ```typescript
248
+ import { Connector, ScrapeProvider, Services, webScrape, Agent } from '@everworker/oneringai';
249
+
250
+ // Create ZenRows connector for bot-protected sites
251
+ Connector.create({
252
+ name: 'zenrows',
253
+ serviceType: Services.Zenrows,
254
+ auth: { type: 'api_key', apiKey: process.env.ZENROWS_API_KEY! },
255
+ baseURL: 'https://api.zenrows.com/v1',
256
+ });
257
+
258
+ // Option 1: Use ScrapeProvider directly
259
+ const scraper = ScrapeProvider.create({ connector: 'zenrows' });
260
+ const result = await scraper.scrape('https://protected-site.com', {
261
+ includeMarkdown: true,
262
+ vendorOptions: {
263
+ jsRender: true, // JavaScript rendering
264
+ premiumProxy: true, // Residential IPs
265
+ },
266
+ });
267
+
268
+ // Option 2: Use webScrape tool with Agent
269
+ const agent = Agent.create({
270
+ connector: 'openai',
271
+ model: 'gpt-4',
272
+ tools: [webScrape],
273
+ });
274
+
275
+ // webScrape auto-falls back: native → JS → API
276
+ await agent.run('Scrape https://example.com and summarize');
277
+ ```
278
+
279
+ **Supported Scrape Providers:**
280
+ - **ZenRows** - Enterprise scraping with JS rendering, residential proxies, anti-bot bypass
281
+
282
+ ## Supported Providers
283
+
284
+ | Provider | Text | Vision | TTS | STT | Image | Video | Tools | Context |
285
+ |----------|------|--------|-----|-----|-------|-------|-------|---------|
286
+ | **OpenAI** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | 128K |
287
+ | **Anthropic (Claude)** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 200K |
288
+ | **Google (Gemini)** | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | 1M |
289
+ | **Google Vertex AI** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 1M |
290
+ | **Grok (xAI)** | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | 128K |
291
+ | **Groq** | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | 128K |
292
+ | **Together AI** | ✅ | Some | ❌ | ❌ | ❌ | ❌ | ✅ | 128K |
293
+ | **DeepSeek** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 64K |
294
+ | **Mistral** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 32K |
295
+ | **Ollama** | ✅ | Varies | ❌ | ❌ | ❌ | ❌ | ✅ | Varies |
296
+ | **Custom** | ✅ | Varies | ❌ | ❌ | ❌ | ❌ | ✅ | Varies |
297
+
298
+ ## Key Features
299
+
300
+ ### 1. Agent with Plugins
301
+
302
+ The **Agent** class is the primary agent type, supporting all features through composable plugins:
303
+
304
+ ```typescript
305
+ import { Agent, createFileContextStorage } from '@everworker/oneringai';
306
+
307
+ // Create storage for session persistence
308
+ const storage = createFileContextStorage('my-assistant');
309
+
310
+ const agent = Agent.create({
311
+ connector: 'openai',
312
+ model: 'gpt-4',
313
+ tools: [weatherTool, emailTool],
314
+ context: {
315
+ features: {
316
+ workingMemory: true, // Store/retrieve data across turns
317
+ inContextMemory: true, // Key-value pairs directly in context
318
+ persistentInstructions: true, // Agent instructions that persist to disk
319
+ },
320
+ agentId: 'my-assistant',
321
+ storage,
322
+ },
323
+ });
324
+
325
+ // Run the agent
326
+ const response = await agent.run('Check weather and email me the report');
327
+ console.log(response.output_text);
328
+
329
+ // Save session for later
330
+ await agent.context.save('session-001');
331
+ ```
332
+
333
+ **Features:**
334
+ - 🔧 **Plugin Architecture** - Enable/disable features via `context.features`
335
+ - 💾 **Session Persistence** - Save/load full state with `ctx.save()` and `ctx.load()`
336
+ - 📝 **Working Memory** - Store findings with automatic eviction
337
+ - 📌 **InContextMemory** - Key-value pairs visible directly to LLM
338
+ - 🔄 **Persistent Instructions** - Agent instructions that persist across sessions
339
+
340
+ ### 2. Dynamic Tool Management (NEW)
341
+
342
+ Control tools at runtime. **AgentContextNextGen is the single source of truth** - `agent.tools` and `agent.context.tools` are the same ToolManager instance:
343
+
344
+ ```typescript
345
+ import { Agent } from '@everworker/oneringai';
346
+
347
+ const agent = Agent.create({
348
+ connector: 'openai',
349
+ model: 'gpt-4',
350
+ tools: [weatherTool, emailTool, databaseTool],
351
+ });
352
+
353
+ // Disable tool temporarily
354
+ agent.tools.disable('database_tool');
355
+
356
+ // Enable later
357
+ agent.tools.enable('database_tool');
358
+
359
+ // UNIFIED ACCESS: Both paths access the same ToolManager
360
+ console.log(agent.tools === agent.context.tools); // true
361
+
362
+ // Changes via either path are immediately reflected
363
+ agent.context.tools.disable('email_tool');
364
+ console.log(agent.tools.listEnabled().includes('email_tool')); // false
365
+
366
+ // Context-aware selection
367
+ const selected = agent.tools.selectForContext({
368
+ mode: 'interactive',
369
+ priority: 'high',
370
+ });
371
+
372
+ // Backward compatible
373
+ agent.addTool(newTool); // Still works!
374
+ agent.removeTool('old_tool'); // Still works!
375
+ ```
376
+
377
+ ### 3. Tool Execution Plugins (NEW)
378
+
379
+ Extend tool execution with custom behavior through a pluggable pipeline architecture. Add logging, analytics, UI updates, permission prompts, or any custom logic:
380
+
381
+ ```typescript
382
+ import { Agent, LoggingPlugin, type IToolExecutionPlugin } from '@everworker/oneringai';
383
+
384
+ const agent = Agent.create({
385
+ connector: 'openai',
386
+ model: 'gpt-4',
387
+ tools: [weatherTool],
388
+ });
389
+
390
+ // Add built-in logging plugin
391
+ agent.tools.executionPipeline.use(new LoggingPlugin());
392
+
393
+ // Create a custom plugin
394
+ const analyticsPlugin: IToolExecutionPlugin = {
395
+ name: 'analytics',
396
+ priority: 100,
397
+
398
+ async beforeExecute(ctx) {
399
+ console.log(`Starting ${ctx.toolName}`);
400
+ },
401
+
402
+ async afterExecute(ctx, result) {
403
+ const duration = Date.now() - ctx.startTime;
404
+ trackToolUsage(ctx.toolName, duration); // placeholder: replace with your analytics call
405
+ return result; // Must return result (can transform it)
406
+ },
407
+
408
+ async onError(ctx, error) {
409
+ reportError(ctx.toolName, error); // placeholder: replace with your error reporting
410
+ return undefined; // Let error propagate (or return value to recover)
411
+ },
412
+ };
413
+
414
+ agent.tools.executionPipeline.use(analyticsPlugin);
415
+ ```
416
+
417
+ **Plugin Lifecycle:**
418
+ 1. `beforeExecute` - Modify args, abort execution, or pass through
419
+ 2. Tool execution
420
+ 3. `afterExecute` - Transform results (runs in reverse priority order)
421
+ 4. `onError` - Handle/recover from errors
422
+
423
+ **Plugin Context (`PluginExecutionContext`):**
424
+ ```typescript
425
+ interface PluginExecutionContext {
426
+ toolName: string; // Name of the tool being executed
427
+ args: unknown; // Original arguments (read-only)
428
+ mutableArgs: unknown; // Modifiable arguments
429
+ metadata: Map<string, unknown>; // Share data between plugins
430
+ startTime: number; // Execution start timestamp
431
+ tool: ToolFunction; // The tool being executed
432
+ executionId: string; // Unique ID for this execution
433
+ }
434
+ ```
435
+
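+ Putting the lifecycle and context together, here is a minimal sketch (not taken from the library docs) of a plugin that rewrites arguments in `beforeExecute` and reports timing in `afterExecute`; the `apiKey` argument it redacts is purely hypothetical:
+ 
+ ```typescript
+ import type { IToolExecutionPlugin } from '@everworker/oneringai';
+ 
+ const redactionPlugin: IToolExecutionPlugin = {
+   name: 'redaction',
+   priority: 50,
+ 
+   async beforeExecute(ctx) {
+     // Modify arguments in place via mutableArgs (args stays read-only).
+     const args = ctx.mutableArgs as Record<string, unknown> | undefined;
+     if (args && typeof args === 'object' && 'apiKey' in args) {
+       args.apiKey = '[REDACTED]'; // hypothetical sensitive field
+     }
+     // Share data with later hooks through the metadata map.
+     ctx.metadata.set('redaction:start', Date.now());
+   },
+ 
+   async afterExecute(ctx, result) {
+     const start = ctx.metadata.get('redaction:start') as number | undefined;
+     if (start !== undefined) {
+       console.log(`${ctx.toolName} (${ctx.executionId}) took ${Date.now() - start}ms`);
+     }
+     return result; // pass the result through unchanged
+   },
+ };
+ 
+ agent.tools.executionPipeline.use(redactionPlugin);
+ ```
+ 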
436
+ **Built-in Plugins:**
437
+ - `LoggingPlugin` - Logs tool execution with timing and result summaries
438
+
439
+ **Pipeline Management:**
440
+ ```typescript
441
+ // Add plugin
442
+ agent.tools.executionPipeline.use(myPlugin);
443
+
444
+ // Remove plugin
445
+ agent.tools.executionPipeline.remove('plugin-name');
446
+
447
+ // Check if registered
448
+ agent.tools.executionPipeline.has('plugin-name');
449
+
450
+ // Get plugin
451
+ const plugin = agent.tools.executionPipeline.get('plugin-name');
452
+
453
+ // List all plugins
454
+ const plugins = agent.tools.executionPipeline.list();
455
+ ```
456
+
457
+ ### 4. Session Persistence
458
+
459
+ Save and resume full context state including conversation history and plugin states:
460
+
461
+ ```typescript
462
+ import { AgentContextNextGen, createFileContextStorage } from '@everworker/oneringai';
463
+
464
+ // Create storage for the agent
465
+ const storage = createFileContextStorage('my-assistant');
466
+
467
+ // Create context with storage
468
+ const ctx = AgentContextNextGen.create({
469
+ model: 'gpt-4',
470
+ features: { workingMemory: true },
471
+ storage,
472
+ });
473
+
474
+ // Build up state
475
+ ctx.addUserMessage('Remember: my favorite color is blue');
476
+ await ctx.memory?.store('user_color', 'User favorite color', 'blue');
477
+
478
+ // Save session with metadata
479
+ await ctx.save('session-001', { title: 'User Preferences' });
480
+
481
+ // Later... load session
482
+ const ctx2 = AgentContextNextGen.create({ model: 'gpt-4', storage });
483
+ const loaded = await ctx2.load('session-001');
484
+
485
+ if (loaded) {
486
+ // Full state restored: conversation, plugin states, etc.
487
+ const color = await ctx2.memory?.retrieve('user_color');
488
+ console.log(color); // 'blue'
489
+ }
490
+ ```
491
+
492
+ **What's Persisted:**
493
+ - Complete conversation history
494
+ - All plugin states (WorkingMemory entries, InContextMemory, etc.)
495
+ - System prompt
496
+
497
+ **Storage Location:** `~/.oneringai/agents/<agentId>/sessions/<sessionId>.json`
498
+
499
+ ### 5. Working Memory
500
+
501
+ Use the `WorkingMemoryPluginNextGen` for agents that need to store and retrieve data:
502
+
503
+ ```typescript
504
+ import { Agent } from '@everworker/oneringai';
505
+
506
+ const agent = Agent.create({
507
+ connector: 'openai',
508
+ model: 'gpt-4',
509
+ tools: [weatherTool, emailTool],
510
+ context: {
511
+ features: { workingMemory: true },
512
+ },
513
+ });
514
+
515
+ // Agent now has memory_store, memory_retrieve, memory_delete, memory_list tools
516
+ await agent.run('Check weather for SF and remember the result');
517
+ ```
518
+
519
+ **Features:**
520
+ - 📝 **Working Memory** - Store and retrieve data with priority-based eviction
521
+ - 🏗️ **Hierarchical Memory** - Raw → Summary → Findings tiers for research tasks
522
+ - 🧠 **Context Management** - Automatic handling of context limits
523
+ - 💾 **Session Persistence** - Save/load via `ctx.save()` and `ctx.load()`
524
+
525
+ ### 6. Research with Search Tools
526
+
527
+ Use `Agent` with search tools and `WorkingMemoryPluginNextGen` for research workflows:
528
+
529
+ ```typescript
530
+ import { Agent, webSearch, SearchProvider, Connector, Services } from '@everworker/oneringai';
531
+
532
+ // Setup search connector
533
+ Connector.create({
534
+ name: 'serper-main',
535
+ serviceType: Services.Serper,
536
+ auth: { type: 'api_key', apiKey: process.env.SERPER_API_KEY! },
537
+ baseURL: 'https://google.serper.dev',
538
+ });
539
+
540
+ // Create agent with search and memory
541
+ const agent = Agent.create({
542
+ connector: 'openai',
543
+ model: 'gpt-4',
544
+ tools: [webSearch],
545
+ context: {
546
+ features: { workingMemory: true },
547
+ },
548
+ });
549
+
550
+ // Agent can search and store findings in memory
551
+ await agent.run('Research AI developments in 2026 and store key findings');
552
+ ```
553
+
554
+ **Features:**
555
+ - 🔍 **Web Search** - SearchProvider with Serper, Brave, Tavily, RapidAPI
556
+ - 📝 **Working Memory** - Store findings with priority-based eviction
557
+ - 🏗️ **Tiered Memory** - Raw → Summary → Findings pattern
558
+
559
+ ### 7. Context Management
560
+
561
+ **AgentContextNextGen** is the modern, plugin-based context manager. It provides clean separation of concerns with composable plugins:
562
+
563
+ ```typescript
564
+ import { Agent, AgentContextNextGen } from '@everworker/oneringai';
565
+
566
+ // Option 1: Use AgentContextNextGen directly (standalone)
567
+ const ctx = AgentContextNextGen.create({
568
+ model: 'gpt-4',
569
+ systemPrompt: 'You are a helpful assistant.',
570
+ features: { workingMemory: true, inContextMemory: true },
571
+ });
572
+
573
+ ctx.addUserMessage('What is the weather in Paris?');
574
+ const { input, budget } = await ctx.prepare(); // Ready for LLM call
575
+
576
+ // Option 2: Via Agent.create
577
+ const agent = Agent.create({
578
+ connector: 'openai',
579
+ model: 'gpt-4',
580
+ context: {
581
+ strategy: 'balanced', // proactive, balanced, lazy
582
+ features: { workingMemory: true },
583
+ },
584
+ });
585
+
586
+ // Agent uses AgentContextNextGen internally
587
+ await agent.run('Check the weather');
588
+ ```
589
+
590
+ #### Feature Configuration
591
+
592
+ Enable or disable features independently; when a feature is disabled, its associated tools are not registered:
593
+
594
+ ```typescript
595
+ // Minimal stateless agent (no memory)
596
+ const agent = Agent.create({
597
+ connector: 'openai',
598
+ model: 'gpt-4',
599
+ context: {
600
+ features: { workingMemory: false }
601
+ }
602
+ });
603
+
604
+ // Full-featured agent with all plugins
605
+ const agent = Agent.create({
606
+ connector: 'openai',
607
+ model: 'gpt-4',
608
+ context: {
609
+ features: {
610
+ workingMemory: true,
611
+ inContextMemory: true,
612
+ persistentInstructions: true
613
+ },
614
+ agentId: 'my-assistant', // Required for persistentInstructions
615
+ }
616
+ });
617
+ ```
618
+
619
+ **Available Features:**
620
+ | Feature | Default | Plugin | Associated Tools |
621
+ |---------|---------|--------|------------------|
622
+ | `workingMemory` | `true` | WorkingMemoryPluginNextGen | `memory_store/retrieve/delete/list` |
623
+ | `inContextMemory` | `false` | InContextMemoryPluginNextGen | `context_set/delete/list` |
624
+ | `persistentInstructions` | `false` | PersistentInstructionsPluginNextGen | `instructions_set/get/append/clear` |
625
+
626
+ **AgentContextNextGen architecture:**
627
+ - **Plugin-first design** - All features are composable plugins
628
+ - **ToolManager** - Tool registration, execution, circuit breakers
629
+ - **Single system message** - All context components combined
630
+ - **Smart compaction** - Happens once, right before LLM call
631
+
632
+ **Three compaction strategies:**
633
+ - **proactive** - Compact at 70% usage
634
+ - **balanced** (default) - Compact at 80% usage
635
+ - **lazy** - Compact at 90% usage
636
+
637
+ **Context preparation:**
638
+ ```typescript
639
+ const { input, budget, compacted, compactionLog } = await ctx.prepare();
640
+
641
+ console.log(budget.totalUsed); // Total tokens used
642
+ console.log(budget.available); // Remaining tokens
643
+ console.log(budget.utilizationPercent); // Usage percentage
644
+ ```
645
+
646
+ ### 8. InContextMemory
647
+
648
+ Store key-value pairs **directly in context** for instant LLM access without retrieval calls:
649
+
650
+ ```typescript
651
+ import { AgentContextNextGen } from '@everworker/oneringai';
652
+
653
+ const ctx = AgentContextNextGen.create({
654
+ model: 'gpt-4',
655
+ features: { inContextMemory: true },
656
+ plugins: {
657
+ inContextMemory: { maxEntries: 20 },
658
+ },
659
+ });
660
+
661
+ // Access the plugin
662
+ const plugin = ctx.getPlugin('in_context_memory');
663
+
664
+ // Store data - immediately visible to LLM
665
+ plugin.set('current_state', 'Task processing state', { step: 2, status: 'active' });
666
+ plugin.set('user_prefs', 'User preferences', { verbose: true }, 'high');
667
+
668
+ // LLM can use context_set/context_delete/context_list tools
669
+ // Or access directly via plugin API
670
+ const state = plugin.get('current_state'); // { step: 2, status: 'active' }
671
+ ```
672
+
673
+ **Key Difference from WorkingMemory:**
674
+ - **WorkingMemory**: External storage + index → requires `memory_retrieve()` for values
675
+ - **InContextMemory**: Full values in context → instant access, no retrieval needed
676
+
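+ A small sketch of the difference in practice, using the plugin APIs shown above on a context created with both features enabled:
+ 
+ ```typescript
+ const both = AgentContextNextGen.create({
+   model: 'gpt-4',
+   features: { workingMemory: true, inContextMemory: true },
+ });
+ 
+ // WorkingMemory: the value lives outside the context window;
+ // it must be read back explicitly (memory_retrieve tool or the plugin API).
+ await both.memory?.store('report', 'Quarterly report draft', 'long draft text...');
+ const report = await both.memory?.retrieve('report');
+ 
+ // InContextMemory: the full value is injected into the prepared context,
+ // so the LLM sees it on every turn with no retrieval step.
+ const icm = both.getPlugin('in_context_memory');
+ icm.set('status', 'Current task status', { step: 3, phase: 'analysis' });
+ ```
+ 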
677
+ **Use cases:** Session state, user preferences, counters, flags, small accumulated results.
678
+
679
+ ### 9. Persistent Instructions
680
+
681
+ Store agent-level custom instructions that persist across sessions on disk:
682
+
683
+ ```typescript
684
+ import { Agent } from '@everworker/oneringai';
685
+
686
+ const agent = Agent.create({
687
+ connector: 'openai',
688
+ model: 'gpt-4',
689
+ context: {
690
+ agentId: 'my-assistant', // Required for storage path
691
+ features: {
692
+ persistentInstructions: true,
693
+ },
694
+ },
695
+ });
696
+
697
+ // LLM can now use instructions_set/append/get/clear tools
698
+ // Instructions persist to ~/.oneringai/agents/my-assistant/custom_instructions.md
699
+ ```
700
+
701
+ **Key Features:**
702
+ - 📁 **Disk Persistence** - Instructions survive process restarts and sessions
703
+ - 🔧 **LLM-Modifiable** - Agent can update its own instructions during execution
704
+ - 🔄 **Auto-Load** - Instructions loaded automatically on agent start
705
+ - 🛡️ **Never Compacted** - Critical instructions always preserved in context
706
+
707
+ **Available Tools:**
708
+ - `instructions_set` - Replace all custom instructions
709
+ - `instructions_append` - Add a new section to existing instructions
710
+ - `instructions_get` - Read current instructions
711
+ - `instructions_clear` - Remove all instructions (requires confirmation)
712
+
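+ A short sketch of how these tools are exercised in practice: the agent is asked to persist a rule, and a later process with the same `agentId` picks it up automatically. Whether the model actually calls `instructions_set`/`instructions_append` depends on the model and the prompt:
+ 
+ ```typescript
+ // Ask the agent (created above) to persist a rule via the instructions_* tools.
+ await agent.run('From now on, always answer with a short bullet list. Save that as a permanent instruction.');
+ 
+ // Later, in a new process: same agentId, so instructions auto-load from
+ // ~/.oneringai/agents/my-assistant/custom_instructions.md
+ const later = Agent.create({
+   connector: 'openai',
+   model: 'gpt-4',
+   context: {
+     agentId: 'my-assistant',
+     features: { persistentInstructions: true },
+   },
+ });
+ 
+ await later.run('Summarize what this library does'); // should follow the bullet-list rule
+ ```
+ 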
713
+ **Use cases:** Agent personality/behavior, user preferences, learned rules, tool usage patterns.
714
+
715
+ ### 10. Direct LLM Access
716
+
717
+ Bypass all context management for simple, stateless LLM calls:
718
+
719
+ ```typescript
720
+ const agent = Agent.create({ connector: 'openai', model: 'gpt-4' });
721
+
722
+ // Direct call - no history tracking, no memory, no context preparation
723
+ const response = await agent.runDirect('What is 2 + 2?');
724
+ console.log(response.output_text); // "4"
725
+
726
+ // With options
727
+ const summary = await agent.runDirect('Summarize this', {
728
+ instructions: 'Be concise',
729
+ temperature: 0.5,
730
+ maxOutputTokens: 100,
731
+ });
732
+
733
+ // Multimodal (text + image)
734
+ const visionResponse = await agent.runDirect([
735
+ { type: 'message', role: 'user', content: [
736
+ { type: 'input_text', text: 'What is in this image?' },
737
+ { type: 'input_image', image_url: 'https://example.com/image.png' }
738
+ ]}
739
+ ]);
740
+
741
+ // Streaming
742
+ for await (const event of agent.streamDirect('Tell me a story')) {
743
+ if (event.type === 'output_text_delta') {
744
+ process.stdout.write(event.delta);
745
+ }
746
+ }
747
+ ```
748
+
749
+ **Comparison:**
750
+
751
+ | Aspect | `run()` / `chat()` | `runDirect()` |
752
+ |--------|-------------------|---------------|
753
+ | History tracking | ✅ | ❌ |
754
+ | Memory/Cache | ✅ | ❌ |
755
+ | Context preparation | ✅ | ❌ |
756
+ | Agentic loop (tool execution) | ✅ | ❌ |
757
+ | Overhead | Full context management | Minimal |
758
+
759
+ **Use cases:** Quick one-off queries, embeddings-like simplicity, testing, hybrid workflows.
760
+
761
+ ### 11. Audio Capabilities
762
+
763
+ Text-to-Speech and Speech-to-Text with multiple providers:
764
+
765
+ ```typescript
766
+ import { TextToSpeech, SpeechToText } from '@everworker/oneringai';
767
+
768
+ // === Text-to-Speech ===
769
+ const tts = TextToSpeech.create({
770
+ connector: 'openai',
771
+ model: 'tts-1-hd', // or 'gpt-4o-mini-tts' for instruction steering
772
+ voice: 'nova',
773
+ });
774
+
775
+ // Synthesize to file
776
+ await tts.toFile('Hello, world!', './output.mp3');
777
+
778
+ // Synthesize with options
779
+ const audio = await tts.synthesize('Speak slowly', {
780
+ format: 'wav',
781
+ speed: 0.75,
782
+ });
783
+
784
+ // Introspection
785
+ const voices = await tts.listVoices();
786
+ const models = tts.listAvailableModels();
787
+
788
+ // === Speech-to-Text ===
789
+ const stt = SpeechToText.create({
790
+ connector: 'openai',
791
+ model: 'whisper-1', // or 'gpt-4o-transcribe'
792
+ });
793
+
794
+ // Transcribe
795
+ const result = await stt.transcribeFile('./audio.mp3');
796
+ console.log(result.text);
797
+
798
+ // With timestamps
799
+ const detailed = await stt.transcribeWithTimestamps(audioBuffer, 'word'); // audioBuffer: a Buffer you loaded elsewhere
800
+ console.log(detailed.words); // [{ word, start, end }, ...]
801
+
802
+ // Translation
803
+ const english = await stt.translate(frenchAudio); // frenchAudio: Buffer of non-English speech
804
+ ```
805
+
806
+ **Available Models:**
807
+ - **TTS**: OpenAI (`tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`), Google (`gemini-tts`)
808
+ - **STT**: OpenAI (`whisper-1`, `gpt-4o-transcribe`), Groq (`whisper-large-v3` - 12x cheaper!)
809
+
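+ The same `SpeechToText` interface works against Groq; a hedged sketch, assuming the vendor enum exposes `Vendor.Groq` (only `Vendor.OpenAI` appears elsewhere in this README):
+ 
+ ```typescript
+ import { Connector, SpeechToText, Vendor } from '@everworker/oneringai';
+ 
+ // Assumption: Vendor.Groq exists alongside Vendor.OpenAI.
+ Connector.create({
+   name: 'groq',
+   vendor: Vendor.Groq,
+   auth: { type: 'api_key', apiKey: process.env.GROQ_API_KEY! },
+ });
+ 
+ const groqStt = SpeechToText.create({
+   connector: 'groq',
+   model: 'whisper-large-v3',
+ });
+ 
+ const transcript = await groqStt.transcribeFile('./audio.mp3');
+ console.log(transcript.text);
+ ```
+ 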
810
+ ### 12. Model Registry
811
+
812
+ Complete metadata for 23+ models:
813
+
814
+ ```typescript
815
+ import { getModelInfo, calculateCost, LLM_MODELS, Vendor } from '@everworker/oneringai';
816
+
817
+ // Get model information
818
+ const model = getModelInfo('gpt-5.2-thinking');
819
+ console.log(model.features.input.tokens); // 400000
820
+ console.log(model.features.input.cpm); // 1.75 (cost per million)
821
+
822
+ // Calculate costs
823
+ const cost = calculateCost('gpt-5.2-thinking', 50_000, 2_000);
824
+ console.log(`Cost: $${cost}`); // $0.1155
825
+
826
+ // With caching
827
+ const cachedCost = calculateCost('gpt-5.2-thinking', 50_000, 2_000, {
828
+ useCachedInput: true
829
+ });
830
+ console.log(`Cached: $${cachedCost}`); // $0.0293 (90% discount)
831
+ ```
832
+
833
+ **Available Models:**
834
+ - **OpenAI (11)**: GPT-5.2 series, GPT-5 family, GPT-4.1, o3-mini
835
+ - **Anthropic (5)**: Claude 4.5 series, Claude 4.x
836
+ - **Google (7)**: Gemini 3, Gemini 2.5
837
+
838
+ ### 13. Streaming
839
+
840
+ Real-time responses:
841
+
842
+ ```typescript
843
+ import { StreamHelpers } from '@everworker/oneringai';
844
+
845
+ for await (const text of StreamHelpers.textOnly(agent.stream('Hello'))) {
846
+ process.stdout.write(text);
847
+ }
848
+ ```
849
+
850
+ ### 14. OAuth for External APIs
851
+
852
+ ```typescript
853
+ import { OAuthManager, FileStorage } from '@everworker/oneringai';
854
+
855
+ const oauth = new OAuthManager({
856
+ flow: 'authorization_code',
857
+ clientId: process.env.GITHUB_CLIENT_ID!,
858
+ clientSecret: process.env.GITHUB_CLIENT_SECRET!,
859
+ authorizationUrl: 'https://github.com/login/oauth/authorize',
860
+ tokenUrl: 'https://github.com/login/oauth/access_token',
861
+ storage: new FileStorage({ directory: './tokens' }),
862
+ });
863
+
864
+ const authUrl = await oauth.startAuthFlow('user123');
865
+ ```
866
+
867
+ ### 15. Developer Tools
868
+
869
+ File system and shell tools for building coding assistants:
870
+
871
+ ```typescript
872
+ import { developerTools } from '@everworker/oneringai';
873
+
874
+ const agent = Agent.create({
875
+ connector: 'openai',
876
+ model: 'gpt-4',
877
+ tools: developerTools, // Includes all 7 tools
878
+ });
879
+
880
+ // Agent can now:
881
+ // - Read files (read_file)
882
+ // - Write files (write_file)
883
+ // - Edit files with surgical precision (edit_file)
884
+ // - Search files by pattern (glob)
885
+ // - Search content with regex (grep)
886
+ // - List directories (list_directory)
887
+ // - Execute shell commands (bash)
888
+
889
+ await agent.run('Read package.json and tell me the dependencies');
890
+ await agent.run('Find all TODO comments in the src directory');
891
+ await agent.run('Run npm test and report any failures');
892
+ ```
893
+
894
+ **Available Tools:**
895
+ - **read_file** - Read file contents with line numbers
896
+ - **write_file** - Create/overwrite files
897
+ - **edit_file** - Surgical find/replace edits
898
+ - **glob** - Find files by pattern (`**/*.ts`)
899
+ - **grep** - Search content with regex
900
+ - **list_directory** - List directory contents
901
+ - **bash** - Execute shell commands with safety guards
902
+
903
+ **Safety Features:**
904
+ - Blocked dangerous commands (`rm -rf /`, fork bombs)
905
+ - Configurable blocked directories (`node_modules`, `.git`)
906
+ - Timeout protection (default 2 min)
907
+ - Output truncation for large outputs
908
+
909
+ ### 16. External API Integration
910
+
911
+ Connect your AI agents to 35+ external services with enterprise-grade resilience:
912
+
913
+ ```typescript
914
+ import { Connector, ConnectorTools, Services, Agent } from '@everworker/oneringai';
915
+
916
+ // Create a connector for an external service
917
+ Connector.create({
918
+ name: 'github',
919
+ serviceType: Services.Github,
920
+ auth: { type: 'api_key', apiKey: process.env.GITHUB_TOKEN! },
921
+ baseURL: 'https://api.github.com',
922
+
923
+ // Enterprise resilience features
924
+ timeout: 30000,
925
+ retry: { maxRetries: 3, baseDelayMs: 1000 },
926
+ circuitBreaker: { enabled: true, failureThreshold: 5 },
927
+ });
928
+
929
+ // Generate tools from the connector
930
+ const tools = ConnectorTools.for('github');
931
+
932
+ // Use with an agent
933
+ const agent = Agent.create({
934
+ connector: 'openai',
935
+ model: 'gpt-4',
936
+ tools: tools,
937
+ });
938
+
939
+ await agent.run('List all open issues in owner/repo');
940
+ ```
941
+
942
+ **Supported Services (35+):**
943
+ - **Communication**: Slack, Discord, Microsoft Teams, Twilio
944
+ - **Development**: GitHub, GitLab, Jira, Linear, Bitbucket
945
+ - **Productivity**: Notion, Asana, Monday, Airtable, Trello
946
+ - **CRM**: Salesforce, HubSpot, Zendesk, Intercom
947
+ - **Payments**: Stripe, PayPal, Square
948
+ - **Cloud**: AWS, Azure, GCP, DigitalOcean
949
+ - And more...
950
+
951
+ **Enterprise Features:**
952
+ - 🔄 **Automatic retry** with exponential backoff
953
+ - ⚡ **Circuit breaker** for failing services
954
+ - ⏱️ **Configurable timeout**
955
+ - 📊 **Metrics tracking** (requests, latency, success rate)
956
+ - 🔐 **Protected auth headers** (cannot be overridden)
957
+
958
+ ```typescript
959
+ // Direct fetch with connector
960
+ const connector = Connector.get('github');
961
+ const data = await connector.fetchJSON('/repos/owner/repo/issues');
962
+
963
+ // Metrics
964
+ const metrics = connector.getMetrics();
965
+ console.log(`Success rate: ${metrics.successCount / metrics.requestCount * 100}%`);
966
+ ```
967
+
968
+ #### Vendor Templates (NEW)
969
+
970
+ Quickly set up connectors for 43+ services with pre-configured authentication templates:
971
+
972
+ ```typescript
973
+ import {
974
+ createConnectorFromTemplate,
975
+ listVendors,
976
+ getVendorTemplate,
977
+ ConnectorTools
978
+ } from '@everworker/oneringai';
979
+
980
+ // List all available vendors
981
+ const vendors = listVendors();
982
+ // [{ id: 'github', name: 'GitHub', authMethods: ['pat', 'oauth-user', 'github-app'], ... }]
983
+
984
+ // Create connector from template (just provide credentials!)
985
+ const connector = createConnectorFromTemplate(
986
+ 'my-github', // Connector name
987
+ 'github', // Vendor ID
988
+ 'pat', // Auth method
989
+ { apiKey: process.env.GITHUB_TOKEN! }
990
+ );
991
+
992
+ // Get tools for the connector
993
+ const tools = ConnectorTools.for('my-github');
994
+
995
+ // Use with agent
996
+ const agent = Agent.create({
997
+ connector: 'openai',
998
+ model: 'gpt-4',
999
+ tools,
1000
+ });
1001
+
1002
+ await agent.run('List my GitHub repositories');
1003
+ ```
1004
+
1005
+ **Supported Categories (43 vendors):**
1006
+ | Category | Vendors |
1007
+ |----------|---------|
1008
+ | Communication | Slack, Discord, Telegram, Microsoft Teams |
1009
+ | Development | GitHub, GitLab, Bitbucket, Jira, Linear, Asana, Trello |
1010
+ | Productivity | Notion, Airtable, Google Workspace, Microsoft 365, Confluence |
1011
+ | CRM | Salesforce, HubSpot, Pipedrive |
1012
+ | Payments | Stripe, PayPal |
1013
+ | Cloud | AWS, GCP, Azure |
1014
+ | Storage | Dropbox, Box, Google Drive, OneDrive |
1015
+ | Email | SendGrid, Mailchimp, Postmark |
1016
+ | Monitoring | Datadog, PagerDuty, Sentry |
1017
+ | Search | Serper, Brave, Tavily, RapidAPI |
1018
+ | Scrape | ZenRows |
1019
+ | Other | Twilio, Zendesk, Intercom, Shopify |
1020
+
1021
+ Each vendor includes:
1022
+ - **Credentials setup URL** - Direct link to where you create API keys
1023
+ - **Multiple auth methods** - API keys, OAuth, service accounts
1024
+ - **Pre-configured URLs** - Authorization, token endpoints pre-filled
1025
+ - **Common scopes** - Recommended scopes for each auth method
1026
+
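+ To inspect a template before wiring it up, the `getVendorTemplate` helper imported in the example above can be used; the exact shape of the returned object is not documented in this README, so treat the logged fields as a sketch:
+ 
+ ```typescript
+ import { getVendorTemplate } from '@everworker/oneringai';
+ 
+ // Look up the GitHub template by vendor ID (same IDs that listVendors() returns).
+ const template = getVendorTemplate('github');
+ 
+ // Expected to describe auth methods, the credentials setup URL,
+ // pre-configured endpoints, and recommended scopes (see the list above).
+ console.log(template);
+ ```
+ 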
1027
+ See the [User Guide](./USER_GUIDE.md#vendor-templates) for complete vendor reference.
1028
+
1029
+ **Vendor Logos:**
1030
+ ```typescript
1031
+ import { getVendorLogo, getVendorLogoSvg, getVendorColor } from '@everworker/oneringai';
1032
+
1033
+ // Get logo with metadata
1034
+ const logo = getVendorLogo('github');
1035
+ if (logo) {
1036
+ console.log(logo.svg); // SVG content
1037
+ console.log(logo.hex); // Brand color: "181717"
1038
+ console.log(logo.isPlaceholder); // false (has official icon)
1039
+ }
1040
+
1041
+ // Get just the SVG (with optional color override)
1042
+ const svg = getVendorLogoSvg('slack', 'FFFFFF'); // White icon
1043
+
1044
+ // Get brand color
1045
+ const color = getVendorColor('stripe'); // "635BFF"
1046
+ ```
1047
+
1048
+ #### Tool Discovery with ToolRegistry
1049
+
1050
+ For UIs or tool inventory, use `ToolRegistry` to get all available tools:
1051
+
1052
+ ```typescript
1053
+ import { ToolRegistry } from '@everworker/oneringai';
1054
+
1055
+ const allTools = ToolRegistry.getAllTools();
1056
+
1057
+ for (const tool of allTools) {
1058
+ if (ToolRegistry.isConnectorTool(tool)) {
1059
+ console.log(`API: ${tool.displayName} (${tool.connectorName})`);
1060
+ } else {
1061
+ console.log(`Built-in: ${tool.displayName}`);
1062
+ }
1063
+ }
1064
+ ```
1065
+
1066
+ ## MCP (Model Context Protocol) Integration
1067
+
1068
+ Connect to MCP servers for automatic tool discovery and seamless integration:
1069
+
1070
+ ```typescript
1071
+ import { MCPRegistry, Agent, Connector, Vendor } from '@everworker/oneringai';
1072
+
1073
+ // Setup authentication
1074
+ Connector.create({
1075
+ name: 'openai',
1076
+ vendor: Vendor.OpenAI,
1077
+ auth: { type: 'api_key', apiKey: process.env.OPENAI_API_KEY! },
1078
+ });
1079
+
1080
+ // Connect to local MCP server (stdio)
1081
+ const fsClient = MCPRegistry.create({
1082
+ name: 'filesystem',
1083
+ transport: 'stdio',
1084
+ transportConfig: {
1085
+ command: 'npx',
1086
+ args: ['-y', '@modelcontextprotocol/server-filesystem', process.cwd()],
1087
+ },
1088
+ });
1089
+
1090
+ // Connect to remote MCP server (HTTP/HTTPS)
1091
+ const remoteClient = MCPRegistry.create({
1092
+ name: 'remote-api',
1093
+ transport: 'https',
1094
+ transportConfig: {
1095
+ url: 'https://mcp.example.com/api',
1096
+ token: process.env.MCP_TOKEN,
1097
+ },
1098
+ });
1099
+
1100
+ // Connect and discover tools
1101
+ await fsClient.connect();
1102
+ await remoteClient.connect();
1103
+
1104
+ // Create agent and register MCP tools
1105
+ const agent = Agent.create({ connector: 'openai', model: 'gpt-4' });
1106
+ fsClient.registerTools(agent.tools);
1107
+ remoteClient.registerTools(agent.tools);
1108
+
1109
+ // Agent can now use tools from both MCP servers!
1110
+ await agent.run('List files and analyze them');
1111
+ ```
1112
+
1113
+ **Features:**
1114
+ - 🔌 **Stdio & HTTP/HTTPS transports** - Local and remote server support
1115
+ - 🔍 **Automatic tool discovery** - Tools are discovered and registered automatically
1116
+ - 🏷️ **Namespaced tools** - `mcp:{server}:{tool}` prevents conflicts
1117
+ - 🔄 **Auto-reconnect** - Exponential backoff with configurable retry
1118
+ - 📊 **Session management** - Persistent connections with session IDs
1119
+ - 🔐 **Permission integration** - All MCP tools require user approval
1120
+ - ⚙️ **Configuration file** - Declare servers in `oneringai.config.json`
1121
+
1122
+ **Available MCP Servers:**
1123
+ - [@modelcontextprotocol/server-filesystem](https://github.com/modelcontextprotocol/servers) - File system access
1124
+ - [@modelcontextprotocol/server-github](https://github.com/modelcontextprotocol/servers) - GitHub API
1125
+ - [@modelcontextprotocol/server-google-drive](https://github.com/modelcontextprotocol/servers) - Google Drive
1126
+ - [@modelcontextprotocol/server-slack](https://github.com/modelcontextprotocol/servers) - Slack integration
1127
+ - [@modelcontextprotocol/server-postgres](https://github.com/modelcontextprotocol/servers) - PostgreSQL database
1128
+ - [And many more...](https://github.com/modelcontextprotocol/servers)
1129
+
1130
+ See [MCP_INTEGRATION.md](./MCP_INTEGRATION.md) for complete documentation.
1131
+
1132
+ ## Documentation
1133
+
1134
+ 📖 **[Complete User Guide](./USER_GUIDE.md)** - Comprehensive guide covering all features
1135
+
1136
+ ### Additional Resources
1137
+
1138
+ - **[MCP_INTEGRATION.md](./MCP_INTEGRATION.md)** - Model Context Protocol integration guide
1139
+ - **[CLAUDE.md](./CLAUDE.md)** - Architecture guide for AI assistants
1140
+ - **[MULTIMODAL_ARCHITECTURE.md](./MULTIMODAL_ARCHITECTURE.md)** - Multimodal implementation details
1141
+ - **[MICROSOFT_GRAPH_SETUP.md](./MICROSOFT_GRAPH_SETUP.md)** - Microsoft Graph OAuth setup
1142
+ - **[TESTING.md](./TESTING.md)** - Testing guide for contributors
1143
+
1144
+ ## Examples
1145
+
1146
+ ```bash
1147
+ # Basic examples
1148
+ npm run example:basic # Simple text generation
1149
+ npm run example:streaming # Streaming responses
1150
+ npm run example:vision # Image analysis
1151
+ npm run example:tools # Tool calling
1152
+
1153
+ # Audio examples
1154
+ npm run example:audio # TTS and STT demo
1155
+
1156
+ # Task Agent examples
1157
+ npm run example:task-agent # Basic task agent
1158
+ npm run example:task-agent-demo # Full demo with memory
1159
+ npm run example:planning-agent # AI-driven planning
1160
+
1161
+ # Context management
1162
+ npm run example:context-management # All strategies demo
1163
+ ```
1164
+
1165
+ ## Development
1166
+
1167
+ ```bash
1168
+ # Install dependencies
1169
+ npm install
1170
+
1171
+ # Build
1172
+ npm run build
1173
+
1174
+ # Watch mode
1175
+ npm run dev
1176
+
1177
+ # Run tests
1178
+ npm test
1179
+
1180
+ # Type check
1181
+ npm run typecheck
1182
+ ```
1183
+
1184
+ ## Architecture
1185
+
1186
+ The library uses **Connector-First Architecture**:
1187
+
1188
+ ```
1189
+ User Code → Connector Registry → Agent → Provider → LLM
1190
+ ```
1191
+
1192
+ **Benefits:**
1193
+ - ✅ Single source of truth for authentication
1194
+ - ✅ Multiple keys per vendor
1195
+ - ✅ Named connectors for easy reference
1196
+ - ✅ No API key management in agent code
1197
+ - ✅ Same pattern for AI providers AND external APIs
1198
+
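+ As a concrete illustration of "multiple keys per vendor" and "named connectors", two connectors can point at the same vendor with different keys; the environment variable names below are placeholders:
+ 
+ ```typescript
+ import { Connector, Agent, Vendor } from '@everworker/oneringai';
+ 
+ // Two named connectors, same vendor, different keys (e.g. prod vs. experiments).
+ Connector.create({
+   name: 'openai-prod',
+   vendor: Vendor.OpenAI,
+   auth: { type: 'api_key', apiKey: process.env.OPENAI_PROD_KEY! },
+ });
+ 
+ Connector.create({
+   name: 'openai-research',
+   vendor: Vendor.OpenAI,
+   auth: { type: 'api_key', apiKey: process.env.OPENAI_RESEARCH_KEY! },
+ });
+ 
+ // Agents reference connectors by name; no key handling in agent code.
+ const prodAgent = Agent.create({ connector: 'openai-prod', model: 'gpt-4' });
+ const researchAgent = Agent.create({ connector: 'openai-research', model: 'gpt-4' });
+ ```
+ 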
1199
+ ## Troubleshooting
1200
+
1201
+ ### "Connector not found"
1202
+ Make sure you created the connector with `Connector.create()` before using it.
1203
+
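+ For example, the connector must be registered before any `Agent.create()` call that references it by name:
+ 
+ ```typescript
+ // 1. Register the connector first...
+ Connector.create({
+   name: 'openai',
+   vendor: Vendor.OpenAI,
+   auth: { type: 'api_key', apiKey: process.env.OPENAI_API_KEY! },
+ });
+ 
+ // 2. ...then agents can resolve it by name.
+ const agent = Agent.create({ connector: 'openai', model: 'gpt-4' });
+ ```
+ 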
1204
+ ### "Invalid API key"
1205
+ Check your `.env` file and ensure the key is correct for that vendor.
1206
+
1207
+ ### "Model not found"
1208
+ Each vendor has different model names. Check the [User Guide](./USER_GUIDE.md) for supported models.
1209
+
1210
+ ### Vision not working
1211
+ Use a vision-capable model: `gpt-4o`, `claude-opus-4-5-20251101`, `gemini-3-flash-preview`.
1212
+
1213
+ ## Contributing
1214
+
1215
+ Contributions are welcome! Please see our [Contributing Guide](./CONTRIBUTING.md) (coming soon).
1216
+
1217
+ ## License
1218
+
1219
+ MIT License - See [LICENSE](./LICENSE) file.
1220
+
1221
+ ---
1222
+
1223
+ **Version:** 0.1.0
1224
+ **Last Updated:** 2026-02-05
1225
+
1226
+ For detailed documentation on all features, see the **[Complete User Guide](./USER_GUIDE.md)**.
1227
+
1228
+ For internal development and architecture improvement plans, see **[IMPROVEMENT_PLAN.md](./IMPROVEMENT_PLAN.md)**.