confused-ai-core 0.1.1 → 0.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/Readme.md +237 -84
  2. package/package.json +1 -1
package/Readme.md CHANGED
@@ -1,29 +1,146 @@
- # Core Framework Features
-
- | Category | Features |
- | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
- | **Learning** | User profiles, persistent memory stores, RAG (Retrieval-Augmented Generation), always/agentic learning modes. |
- | **Core** | Model-agnostic (OpenAI/OpenRouter/Ollama via strings), Type-safe I/O with Zod, async-first, long-running operations with retries, multimodal messages. |
- | **Knowledge** | RAGEngine, hybrid search, reranking, session/state persistence, pluggable vector stores. |
- | **Orchestration** | Human-in-the-loop hooks, guardrails (sensitive data/schema validation), MCP & A2A support, supervisors & sub-agents. |
- | **Production** | Circuit breaker, rate limiter, health checks, graceful shutdown, OTLP export, LLM caching, resumable streaming. |
- | **Artifacts** | Typed artifacts (text, data, reasoning, plan), versioned storage, media support (images, audio, video). |
- | **Voice** | TTS/STT with OpenAI and ElevenLabs, voice ID selection, streaming audio. |
+ # @confused-ai/core
+
+ **Production-grade TypeScript framework for orchestrating multi-agent workflows.**
+
+ - **One-line agents** `createAgent({ name, instructions })` with auto LLM, session, tools, guardrails
+ - **Model-agnostic** OpenAI, OpenRouter, Ollama via `model: "openai:gpt-4o"` or env
+ - **Production-ready** Circuit breaker, rate limiter, health checks, graceful shutdown, OTLP
+ - **Pluggable** Session stores, vector stores, RAG, tools, guardrails, voice (TTS/STT)
+
+ ---
+
+ ## Install
+
+ ```bash
+ npm install @confused-ai/core
+ # or
+ pnpm add @confused-ai/core
+ # or
+ bun add @confused-ai/core
+ ```
+
+ **Peer dependencies (optional):** `openai` (for OpenAIProvider), `better-sqlite3` (for SQLite session store). Install if you use those features.
+
+ **Requirements:** Node.js >= 18.

  ---

- ## Quick Start Examples
+ ## Quick Start
+
+ ### Minimal agent (uses `OPENAI_API_KEY` + default tools)
+
+ ```typescript
+ import { createAgent } from '@confused-ai/core';
+
+ const agent = createAgent({
+ name: 'Assistant',
+ instructions: 'You are helpful and concise.',
+ });
+
+ const result = await agent.run('What is 2 + 2?');
+ console.log(result.text);
+ ```
+
+ ### With conversation memory (session)
+
+ ```typescript
+ const agent = createAgent({
+ name: 'Assistant',
+ instructions: 'You are helpful.',
+ });
+
+ const sessionId = await agent.createSession('user-1');
+ const result = await agent.run('What did I just say?', { sessionId });
+ console.log(result.text);
+ ```
+
+ ### Custom model (OpenRouter, Ollama, etc.)
+
+ ```typescript
+ const agent = createAgent({
+ name: 'Assistant',
+ instructions: 'You are helpful.',
+ model: 'openrouter:anthropic/claude-3.5-sonnet',
+ // or Ollama: model: 'ollama:llama3.2', baseURL: 'http://localhost:11434/v1'
+ });
+ const result = await agent.run('Hello!');
+ ```

- ### 🔧 Circuit Breaker
+ ### Streaming
+
+ ```typescript
+ await agent.run('Explain TypeScript in 3 sentences.', {
+ sessionId,
+ onChunk: (chunk) => process.stdout.write(chunk),
+ });
+ ```
+
+ ---
+
+ ## Feature overview
+
+ | Category | Features |
+ | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | **Learning** | User profiles, persistent memory stores, RAG, always/agentic learning modes. |
+ | **Core** | Model-agnostic (OpenAI/OpenRouter/Ollama), type-safe I/O with Zod, async-first, retries, multimodal messages. |
+ | **Knowledge** | RAGEngine, hybrid search, reranking, session/state persistence, pluggable vector stores. |
+ | **Orchestration** | Human-in-the-loop hooks, guardrails (sensitive data/schema), MCP & A2A, supervisors & sub-agents. |
+ | **Production** | Circuit breaker, rate limiter, health checks, graceful shutdown, OTLP export, LLM caching, resumable streaming. |
+ | **Artifacts** | Typed artifacts (text, data, reasoning, plan), versioned storage, media (images, audio, video). |
+ | **Voice** | TTS/STT with OpenAI and ElevenLabs, voice ID selection, streaming audio. |
+
+ ---
+
+ ## API reference
+
+ ### Main entry: `createAgent`
+
+ ```typescript
+ import {
+ createAgent,
+ type CreateAgentOptions,
+ type CreateAgentResult,
+ type AgentRunOptions,
+ } from '@confused-ai/core';
+
+ const agent: CreateAgentResult = createAgent({
+ name: string;
+ instructions: string;
+ model?: string; // e.g. 'gpt-4o', 'openrouter:...', 'ollama:...'
+ apiKey?: string;
+ baseURL?: string;
+ tools?: Tool[] | ToolRegistry;
+ sessionStore?: SessionStore;
+ guardrails?: GuardrailEngine | false;
+ maxSteps?: number;
+ timeoutMs?: number;
+ dev?: boolean;
+ // ... see CreateAgentOptions
+ });
+
+ // Run once
+ const result = await agent.run(prompt, { sessionId?, onChunk?, onToolCall? });
+
+ // Sessions
+ const sessionId = await agent.createSession(userId?);
+ const messages = await agent.getSessionMessages(sessionId);
+ ```
+
+ ---
+
+ ## Subpath imports and examples
+
+ Use subpath imports for smaller bundles and clear separation.
+
+ ### Production: circuit breaker

  ```typescript
  import { createLLMCircuitBreaker } from '@confused-ai/core/production';

  const breaker = createLLMCircuitBreaker('openai');

- // Wrap LLM calls - automatically opens circuit on repeated failures
  const result = await breaker.execute(async () => {
- return await llm.generateText({ messages });
+ return await llm.generateText(messages);
  });

  if (result.success) {
@@ -33,12 +150,11 @@ if (result.success) {
  }
  ```

- ### ⏱️ Rate Limiter
+ ### Production: rate limiter

  ```typescript
  import { createOpenAIRateLimiter } from '@confused-ai/core/production';

- // Create limiter matching OpenAI tier limits
  const limiter = createOpenAIRateLimiter('tier1'); // 60 RPM

  await limiter.execute(async () => {
@@ -46,22 +162,64 @@ await limiter.execute(async () => {
  });
  ```

- ### 🏥 Health Checks
+ ### Production: health checks

  ```typescript
- import { HealthCheckManager, createLLMHealthCheck } from '@confused-ai/core/production';
+ import {
+ HealthCheckManager,
+ createLLMHealthCheck,
+ } from '@confused-ai/core/production';

  const health = new HealthCheckManager({ version: '1.0.0' });
  health.addComponent(createLLMHealthCheck(llmProvider));

- // Express endpoint
+ // Express
  app.get('/health', async (req, res) => {
  const result = await health.check();
  res.status(result.status === 'HEALTHY' ? 200 : 503).json(result);
  });
  ```

- ### 💾 LLM Response Cache
+ ### Production: graceful shutdown
+
+ ```typescript
+ import { GracefulShutdown, createGracefulShutdown } from '@confused-ai/core/production';
+
+ const shutdown = createGracefulShutdown({ timeoutMs: 30000 });
+
+ shutdown.addHandler('database', async () => {
+ await db.close();
+ });
+ shutdown.addHandler('http-server', async () => {
+ await server.close();
+ });
+
+ shutdown.listen(); // Handles SIGTERM/SIGINT
+ ```
+
+ ### Production: resumable streaming
+
+ ```typescript
+ import {
+ ResumableStreamManager,
+ formatSSE,
+ } from '@confused-ai/core/production';
+
+ const manager = new ResumableStreamManager();
+ const streamId = manager.createStream();
+
+ // On each chunk from LLM
+ manager.saveChunk(streamId, { type: 'text', content: 'Hello' });
+
+ // Client reconnects
+ const checkpoint = manager.getCheckpoint(streamId);
+ const missed = manager.getChunksSince(streamId, clientPosition);
+ for (const chunk of missed) {
+ res.write(formatSSE(chunk));
+ }
+ ```
+
+ ### LLM: caching

  ```typescript
  import { LLMCache, withCache } from '@confused-ai/core/llm';
@@ -69,36 +227,45 @@ import { LLMCache, withCache } from '@confused-ai/core/llm';
  const cache = new LLMCache({ maxEntries: 1000, ttlMs: 60000 });
  const cachedLLM = withCache(llm, cache);

- // Identical requests return cached responses
- const response = await cachedLLM.generateText({ messages });
+ const response = await cachedLLM.generateText(messages);
  ```

- ### 📦 Artifacts
+ ### Artifacts

  ```typescript
- import { InMemoryArtifactStorage, createPlanArtifact } from '@confused-ai/core/artifacts';
+ import {
+ InMemoryArtifactStorage,
+ createPlanArtifact,
+ createTextArtifact,
+ createReasoningArtifact,
+ } from '@confused-ai/core/artifacts';

  const storage = new InMemoryArtifactStorage();

- // Create a plan artifact
  const plan = await storage.save(
  createPlanArtifact('project-plan', 'Build a chatbot', [
  { description: 'Design conversation flows' },
  { description: 'Implement intent detection' },
- { description: 'Add response generation' },
  ])
  );

- // Retrieve with versioning
+ const text = await storage.save(
+ createTextArtifact('readme', 'markdown', '# Hello')
+ );
+
  const retrieved = await storage.get(plan.id);
  ```

- ### 🎤 Voice (TTS/STT)
+ ### Voice (TTS / STT)

  ```typescript
- import { OpenAIVoiceProvider } from '@confused-ai/core/voice';
+ import {
+ OpenAIVoiceProvider,
+ ElevenLabsVoiceProvider,
+ createVoiceProvider,
+ } from '@confused-ai/core/voice';

- const voice = new OpenAIVoiceProvider();
+ const voice = createVoiceProvider({ provider: 'openai' }); // or { provider: 'elevenlabs' }

  // Text-to-Speech
  const { audio } = await voice.textToSpeech('Hello, world!', {
@@ -110,69 +277,55 @@ const { audio } = await voice.textToSpeech('Hello, world!', {
  const { text } = await voice.speechToText(audioBuffer);
  ```

- ### 🔄 Resumable Streaming
+ ### Observability (OTLP)

  ```typescript
- import { ResumableStreamManager, formatSSE } from '@confused-ai/core/production';
-
- const manager = new ResumableStreamManager();
-
- // Create stream
- const streamId = manager.createStream();
-
- // On each chunk from LLM
- manager.saveChunk(streamId, { type: 'text', content: 'Hello' });
-
- // Client reconnects after disconnect
- const checkpoint = manager.getCheckpoint(streamId);
- const missed = manager.getChunksSince(streamId, clientPosition);
+ import {
+ OTLPTraceExporter,
+ OTLPMetricsExporter,
+ } from '@confused-ai/core/observability';

- for (const chunk of missed) {
- res.write(formatSSE(chunk)); // SSE compatible
- }
+ // Configure OTLP exporters for traces and metrics
+ const traceExporter = new OTLPTraceExporter({ endpoint: 'http://localhost:4318' });
+ const metricsExporter = new OTLPMetricsExporter({ endpoint: 'http://localhost:4318' });
  ```

- ### 🛡️ Graceful Shutdown
-
- ```typescript
- import { GracefulShutdown } from '@confused-ai/core/production';
-
- const shutdown = new GracefulShutdown({ timeoutMs: 30000 });
-
- shutdown.addHandler('database', async () => {
- await db.close();
- });
-
- shutdown.addHandler('http-server', async () => {
- await server.close();
- });
+ ---

- shutdown.listen(); // Handles SIGTERM/SIGINT
- ```
+ ## Module map (subpaths)
+
+ | Import path | Contents |
+ | --------------------------- | ------------------------------------------------------------------------ |
+ | `@confused-ai/core` | Main API: `createAgent`, Agent, tools, session, guardrails, errors, etc. |
+ | `@confused-ai/core/production` | Circuit breaker, rate limiter, health, graceful shutdown, resumable stream |
+ | `@confused-ai/core/llm` | OpenAIProvider, OpenRouter, model resolver, LLMCache, withCache |
+ | `@confused-ai/core/artifacts` | InMemoryArtifactStorage, createPlanArtifact, createTextArtifact, MediaManager |
+ | `@confused-ai/core/voice` | OpenAIVoiceProvider, ElevenLabsVoiceProvider, createVoiceProvider |
+ | `@confused-ai/core/observability` | ConsoleLogger, InMemoryTracer, OTLPTraceExporter, OTLPMetricsExporter |
+ | `@confused-ai/core/session` | InMemorySessionStore, SQLiteSessionStore, SessionStore types |
+ | `@confused-ai/core/guardrails` | GuardrailValidator, createSensitiveDataRule, allowlist |
+ | `@confused-ai/core/tools` | HttpClientTool, BrowserTool, registry, base tools |
+ | `@confused-ai/core/memory` | Vector store, in-memory memory store |
+ | `@confused-ai/core/orchestration` | Orchestrator, pipeline, supervisor, swarm, MCP types |
+ | `@confused-ai/core/agentic` | Agentic runner and types (used by createAgent) |
+ | `@confused-ai/core/core` | Base agent, context builder, schemas |
+ | `@confused-ai/core/planner` | Classical planner, LLM planner |
+ | `@confused-ai/core/execution` | Execution engine, graph builder, worker pool |

  ---

- ## Module Imports
+ ## Environment variables

- ```typescript
- // Production resilience
- import {
- CircuitBreaker,
- RateLimiter,
- HealthCheckManager,
- GracefulShutdown,
- ResumableStreamManager,
- } from '@confused-ai/core/production';
-
- // Observability
- import { OTLPTraceExporter, OTLPMetricsExporter } from '@confused-ai/core/observability';
+ | Variable | Purpose |
+ | ------------------------ | -------------------------------- |
+ | `OPENAI_API_KEY` | Default OpenAI API key |
+ | `OPENAI_MODEL` | Default model (e.g. `gpt-4o`) |
+ | `OPENAI_BASE_URL` | Custom API base (e.g. Ollama) |
+ | `OPENROUTER_API_KEY` | OpenRouter API key |
+ | `OPENROUTER_MODEL` | OpenRouter model |

- // LLM with caching
- import { LLMCache, withCache } from '@confused-ai/core/llm';
+ ---

- // Artifacts
- import { InMemoryArtifactStorage, MediaManager } from '@confused-ai/core/artifacts';
+ ## License

- // Voice
- import { OpenAIVoiceProvider, ElevenLabsVoiceProvider } from '@confused-ai/core/voice';
- ```
+ MIT
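Editor's note on the circuit-breaker sections above: `createLLMCircuitBreaker` is this package's API, but the pattern itself is generic. Below is a minimal self-contained sketch of that pattern, not the library's implementation; the `SimpleCircuitBreaker` class, its failure threshold, and the `success`/`error` result shape are illustrative assumptions.

```typescript
// Illustrative sketch only (not @confused-ai/core's implementation).
// After `threshold` consecutive failures the circuit opens and further
// calls are rejected immediately instead of invoking the wrapped function.
type BreakerResult<T> =
  | { success: true; data: T }
  | { success: false; error: Error };

class SimpleCircuitBreaker {
  private failures = 0;

  constructor(private readonly threshold: number = 3) {}

  async execute<T>(fn: () => Promise<T>): Promise<BreakerResult<T>> {
    if (this.failures >= this.threshold) {
      // Circuit is open: fail fast without calling fn.
      return { success: false, error: new Error('circuit open') };
    }
    try {
      const data = await fn();
      this.failures = 0; // any success closes the circuit again
      return { success: true, data };
    } catch (err) {
      this.failures += 1;
      return { success: false, error: err as Error };
    }
  }
}
```

The key property mirrored from the README usage is that `execute` never throws: callers branch on `result.success`, which keeps error handling explicit around flaky LLM calls.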
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "confused-ai-core",
- "version": "0.1.1",
+ "version": "0.1.2",
  "description": "Confused AI - Production-grade TypeScript framework for orchestrating multi-agent workflows",
  "type": "module",
  "main": "./dist/index.js",