agentic-api 2.0.636 β†’ 2.0.642

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +293 -329
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,490 +1,454 @@
1
1
  # @agentic-api
2
2
 
3
- Comprehensive framework for intelligent agent orchestration, document processing, and enterprise workflow management. (inspired from [OpenAI Swarm](https://github.com/openai/openai-realtime-agents)) (01/2025)
3
+ Comprehensive framework for intelligent agent orchestration, document processing, and enterprise workflow management. Inspired by [OpenAI Swarm](https://github.com/openai/openai-realtime-agents).
4
4
 
5
- > **Design Philosophy**:
6
- > This project is not meant to be better than Vercel or LangChain. It's simply less generic and optimized for a specific set of enterprise problems.
7
- > It focuses on specific features that required too many dependencies with other frameworks.
5
+ > **Design Philosophy**: This project is optimized for specific enterprise problems rather than being a generic framework. It follows a minimalist approach, with the OpenAI SDK as the only core dependency, and focuses on features that would require too many dependencies in other frameworks such as Vercel AI or LangChain.
8
6
 
9
- ## πŸš€ Core Capabilities
7
+ ---
10
8
 
11
- ### πŸ€– Agent Orchestration
12
- - Multi-agent conversations with automatic transfers
13
- - State machine-driven workflows for complex processes
14
- - Confidence-based escalation between specialized agents
9
+ ## Core Capabilities
10
+
11
+ ### Agent Orchestration
12
+ - Multi-agent conversations with automatic transfers and confidence-based escalation
15
13
  - **StateGraph Architecture**: Modern conversation state management with automatic persistence
14
+ - **Context Injection**: Profile, instructions, and context-trail for coordination
15
+ - **Multi-Provider LLM**: Support for OpenAI and xAI providers
16
16
 
17
- ### πŸ“„ Document Processing
17
+ ### Document Processing
18
18
  - **MapLLM**: Advanced map-reduce pattern for large document analysis
19
19
  - **Structured Outputs**: OpenAI JSON schema validation support
20
20
  - **Flexible Loaders**: File and string content processing with chunking strategies
21
- - **Callback-Driven**: User-defined logic for accumulation and flow control
22
21
 
23
- ### πŸ§ͺ Testing & Validation
22
+ ### RAG (Retrieval-Augmented Generation)
23
+ - **Multi-RAG Architecture**: Multiple indexes for different contexts (stable, draft, user emails, user projects)
24
+ - **Incremental Fork**: 80-90% performance gain by building RAG only on document diff (SHA-based)
25
+ - **Reactive Updates**: Auto-reload after modifications
26
+ - **Semantic Search**: Vector-based search with HNSW indexing
27
+
28
+ ### Testing & Validation
24
29
  - **Agent Simulator**: Comprehensive testing framework with scenario-based simulations
25
- - **Realistic Personas**: Authentic user behavior simulation for thorough testing
26
- - **Automatic Validation**: Built-in success/failure detection and reporting
30
+ - **Built-in Personas**: Authentic user behavior simulation (patient, rushed, frustrated)
31
+ - **Tool Validation**: Ensures tools are used correctly with `equal`, `gte`, `lte` constraints
27
32
 
28
- ### πŸ“‹ Enterprise Workflow
33
+ ### Enterprise Workflow
34
+ A virtuous cycle for company procedures and guidelines: when a result is incorrect, users can correct it and submit a validation request to the responsible person.
29
35
  - **Rules Management**: Git-based workflow for business rules and procedures
30
36
  - **Pull Request Validation**: Structured review and approval processes
31
- - **Multi-File Operations**: Atomic operations across related documents
32
- - **Search Integration**: Full-text search with RAG embeddings
33
-
34
- ## 🎯 Key Advantages
35
-
36
- 1. **Enterprise-Ready**
37
- - Production-grade document workflows
38
- - Comprehensive testing capabilities
39
- - Git-based version control integration
40
-
41
- 2. **Developer-Friendly**
42
- - Minimal configuration required
43
- - TypeScript-first with full type safety
44
- - Modular architecture for easy extension
45
-
46
- 3. **Performance-Optimized**
47
- - Efficient chunking strategies
48
- - Intelligent caching systems
49
- - Parallel processing support
50
-
51
- ### Recommended Use Cases
52
-
53
- - **Enterprise Document Management**: Rules, procedures, and knowledge base management
54
- - **Agent Development & Testing**: Comprehensive agent behavior validation
55
- - **Large Document Processing**: Analysis and synthesis of complex documents
56
- - **Conversational AI Systems**: Multi-agent workflows with state management
37
+ - **Stable IDs**: Unique document identifiers that persist across branches and renames
57
38
 
39
+ ---
58
40
 
59
- [![](https://mermaid.ink/img/pako:eNpVkcluwjAQhl_FGgmJSoDIQoAcKhEQ7aGXNkiVmqDKTYYkUmJHxu5GeZaq575cH6EmC8sc7PnG_4xn7B1EPEZwYZPztyilQpLVImREmy81dYNqW1_VsVlwIxAlinXNi24Qwt_37w-5VQVlIbTCet2ql0TQMiU-itcswudZgkxuSdAwqbkpdrA4ExjJjDNy93CKekbg0xzPhZ4ZzNVW8gIF8VVZct3k2akV3CsujxnI4pDV7jx4XC3XLVXTkX7_msyaCSvwjAsyL8hqkzsdsuQi0h3QPEsYFnoYknKRfXImaV6LPIP0B2dFPLNhq2Gr5kVbtUn4WtIsVwK_yLxp_HD7KXrUQw_0IxQ0i_U37g6xEGSqmwnB1W6MG6pyGULI9lpKleT-B4vAlUJhDwRXSQruhuZbTaqMqcRFRvW3Fa2kpOyJ8yMm4nBTk60fFsWcKybBHVVScHfwrmFiDgzTGJvToWFY2nrwAe54OBg5pu1Yjj21Dcd29j34rGoPB5OxPT23_T90g8Qf?type=png)](https://mermaid.live/edit#pako:eNpVkcluwjAQhl_FGgmJSoDIQoAcKhEQ7aGXNkiVmqDKTYYkUmJHxu5GeZaq575cH6EmC8sc7PnG_4xn7B1EPEZwYZPztyilQpLVImREmy81dYNqW1_VsVlwIxAlinXNi24Qwt_37w-5VQVlIbTCet2ql0TQMiU-itcswudZgkxuSdAwqbkpdrA4ExjJjDNy93CKekbg0xzPhZ4ZzNVW8gIF8VVZct3k2akV3CsujxnI4pDV7jx4XC3XLVXTkX7_msyaCSvwjAsyL8hqkzsdsuQi0h3QPEsYFnoYknKRfXImaV6LPIP0B2dFPLNhq2Gr5kVbtUn4WtIsVwK_yLxp_HD7KXrUQw_0IxQ0i_U37g6xEGSqmwnB1W6MG6pyGULI9lpKleT-B4vAlUJhDwRXSQruhuZbTaqMqcRFRvW3Fa2kpOyJ8yMm4nBTk60fFsWcKybBHVVScHfwrmFiDgzTGJvToWFY2nrwAe54OBg5pu1Yjj21Dcd29j34rGoPB5OxPT23_T90g8Qf)
60
-
61
- ## πŸ“¦ Installation
41
+ ## Installation
62
42
 
63
43
  ```bash
64
44
  npm install @agentic-api
65
45
  ```
66
46
 
67
- ## πŸ’‘ Quick Start
47
+ ---
48
+
49
+ ## Quick Start
68
50
 
69
51
  ### Configuration `.env`
70
52
 
71
53
  ```bash
72
- # Provider LLM (openai | xai)
54
+ # LLM Provider (openai | xai)
73
55
  LLM_PROVIDER=openai
74
56
 
75
- # ClΓ©s API
76
- OPENAI_API_KEY=sk-... # Requis pour OpenAI + embeddings + whisper
77
- XAI_API_KEY=xai-... # Requis si LLM_PROVIDER=xai
57
+ # API Keys
58
+ OPENAI_API_KEY=sk-... # Required for OpenAI + embeddings + whisper
59
+ XAI_API_KEY=xai-... # Required if LLM_PROVIDER=xai
78
60
  ```
79
61
 
80
- ### Usage
62
+ ### Basic Usage
81
63
 
82
64
  ```typescript
83
- import { llmInstance, executeAgentSet, AgenticContext, AgentStateGraph } from '@agentic-api';
65
+ import { llmInstance, executeAgentSet, AgenticContext } from '@agentic-api';
84
66
 
85
- // Initialiser le LLM (utilise LLM_PROVIDER depuis .env)
67
+ // Initialize LLM (uses LLM_PROVIDER from .env)
86
68
  llmInstance();
87
69
 
88
70
  // Create context with user information
89
71
  const context: AgenticContext = {
90
- user: {
91
- id: "user123",
92
- role: "user"
93
- },
94
- credential: "your-api-key"
72
+ user: { id: "user123", role: "user" },
73
+ credential: "msal-or-google-delegation-token" // MSAL or Google delegation token
95
74
  };
96
75
 
97
76
  // Execute agent with StateGraph (automatically managed)
98
77
  const stream = await executeAgentSet(agents, context, {
99
78
  query: "Hello, what can you do?",
100
- home: "welcome", // Starting agent
79
+ home: "welcome",
101
80
  verbose: true,
102
- enrichWithMemory: async (role) => {
103
- // Memory enrichment logic
81
+ enrichWithMemory: async (role, agent, context) => {
82
+ // Injects user profile, global/session instructions, and history
83
+ // See "Context Injection" section below for details
104
84
  return "";
105
85
  }
106
86
  });
107
87
  ```
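As a non-authoritative sketch, `enrichWithMemory` can simply delegate to the context-injection helper documented later in this README; the `loadUserProfile` lookup and the parameter types are assumptions, not part of the published API:

```typescript
import { renderContextInjection, AgenticContext } from '@agentic-api';

// Hypothetical application-side profile store (not part of @agentic-api).
declare function loadUserProfile(userId: string): Promise<string>;

const enrichWithMemory = async (role: string, agent: unknown, ctx: AgenticContext) => {
  // Build the <profile>/<instructions> system injection from application data.
  const profile = await loadUserProfile(ctx.user.id);
  return renderContextInjection(
    profile,              // user profile block
    '- Prefers French',   // global instructions
    '- Budget max 50k'    // session instructions
  );
};
```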
108
88
 
109
- ## πŸ€– Custom Agent Creation
89
+ ---
90
+
91
+ ## Model Levels
92
+
93
+ The framework supports multiple LLM providers with unified model levels:
94
+
95
+ | Level | OpenAI | xAI | Usage |
96
+ |-------|--------|-----|-------|
97
+ | **LOW** | gpt-5-nano | grok-4-fast-non-reasoning | Simple tasks, economical |
98
+ | **LOW-fast** | gpt-5-nano | grok-code-fast-1 | Ultra fast |
99
+ | **MEDIUM** | gpt-5-mini | grok-4-fast-reasoning | Balanced performance/cost |
100
+ | **HIGH** | gpt-5.1 | grok-4 | Advanced reasoning |
101
+ | **SEARCH** | gpt-5-mini + web_search | grok-4-fast-reasoning + web_search | Web search |
102
+ | **VISION** | gpt-5-mini | grok-4 | Image analysis |
103
+ | **EMBEDDING-small** | text-embedding-3-small | OpenAI only | Embeddings |
104
+ | **WHISPER** | whisper-1 | OpenAI only | Audio transcription |
110
105
 
111
106
  ```typescript
112
- import { AgentConfig, modelConfig } from '@agentic-api';
107
+ import { modelConfig, llmInstance } from '@agentic-api';
108
+
109
+ // Initialize with explicit configuration
110
+ // Note: provider could be a user preference instead of server-side config
111
+ const openai = llmInstance({ provider: 'openai', apiKey: process.env.OPENAI_API_KEY });
112
+ const xai = llmInstance({ provider: 'xai', apiKey: process.env.XAI_API_KEY });
113
113
 
114
- // Example specialized agent with thinking tool
114
+ // Get model config for specific provider
115
+ const config = modelConfig("MEDIUM", { provider: 'openai' });
116
+ const xaiConfig = modelConfig("HIGH", { provider: 'xai' });
117
+ const embedConfig = modelConfig("EMBEDDING-small", { provider: 'openai' });
118
+ ```
119
+
120
+ ---
121
+
122
+ ## Custom Agent Creation
123
+
124
+ ```typescript
125
+ import { AgentConfig, injectTransferTools } from '@agentic-api';
126
+
127
+ // Specialized agent
115
128
  const haiku: AgentConfig = {
116
129
  name: "haiku",
117
130
  publicDescription: "Agent that writes haikus.",
118
131
  instructions: "Ask the user for a subject, then respond with a haiku.",
119
- model: modelConfig("LOW"),
132
+ model: 'LOW', // String alias, converted at runtime
120
133
  tools: [],
121
- maxSteps: 3 // Limit thinking steps
134
+ maxSteps: 3
122
135
  };
123
136
 
124
- // Welcome agent with transfer
137
+ // Router agent with transfers
125
138
  const welcome: AgentConfig = {
126
139
  name: "welcome",
127
140
  publicDescription: "Agent that welcomes users.",
128
- instructions: "Welcome the user and suggest options.",
129
- model: modelConfig("LOW"),
141
+ instructions: "Welcome the user and route to appropriate specialist.",
142
+ model: 'LOW',
130
143
  tools: [],
131
144
  downstreamAgents: [haiku]
132
145
  };
133
146
 
134
- // Inject transfer and thinking tools
135
- import { injectTransferTools } from '@agentic-api';
136
- const myAgents = injectTransferTools([welcome, haiku]);
147
+ // Inject transfer tools automatically
148
+ const agents = injectTransferTools([welcome, haiku]);
137
149
  ```
138
150
 
139
- ## 🧠 StateGraph Memory Management
140
-
141
- The new StateGraph architecture provides automatic conversation state management:
142
-
143
- ```typescript
144
- import { AgentStateGraph, sessionStateGraphGet, sessionStateGraphSet } from '@agentic-api';
151
+ ---
145
152
 
146
- // StateGraph is automatically managed during executeAgentSet
147
- // But you can also work with it directly:
153
+ ## Agent Transfer System
148
154
 
149
- function setupStateGraph(req: Request) {
150
- // Get existing StateGraph from session (with automatic migration)
151
- let stateGraph = sessionStateGraphGet(req);
152
- if (!stateGraph) {
153
- stateGraph = new AgentStateGraph();
154
- }
155
-
156
- // Create or restore discussion for specific agent
157
- const discussion = stateGraph.createOrRestore("welcome");
158
-
159
- // Add messages to discussion
160
- stateGraph.push("welcome", {
161
- role: "user",
162
- content: "Hello!"
163
- });
164
-
165
- // Update token usage
166
- stateGraph.updateTokens("welcome", {
167
- prompt: 10,
168
- completion: 20,
169
- total: 30,
170
- cost: 0.001
171
- });
172
-
173
- // Save back to session with gzip compression
174
- sessionStateGraphSet(req, stateGraph);
175
-
176
- return stateGraph;
177
- }
155
+ The multi-agent transfer system enables specialized agents to collaborate:
178
156
 
179
- // Client-safe view (filters system messages and tools)
180
- const clientDiscussion = stateGraph.toClientView("welcome");
157
+ ```
158
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
159
+ β”‚ Router Agent β”‚ ─── Qualifies and routes
160
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
161
+ β”‚
162
+ β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
163
+ β–Ό β–Ό β–Ό
164
+ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”
165
+ β”‚ Info β”‚ β”‚ Action β”‚ β”‚ Admin β”‚
166
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜
181
167
  ```
182
168
 
183
- ## 🧠 Legacy Memory Management (MemoriesLite)
169
+ **Key Features:**
170
+ - Confidence threshold (0.7) for transfer decisions
171
+ - Automatic context preservation via `<context-trail>`
172
+ - Specialized agent tracking for return after temporary transfers
184
173
 
185
174
  ```typescript
186
- import { MemoriesLite } from '@memories-lite';
187
-
188
- const memory = new MemoriesLite({
189
- // Memory configuration
190
- });
191
-
192
- async function chatWithMemories(message: string, userId = "default_user") {
193
- // Semantic memory search
194
- const relevantMemories = await memory.retrieve(message, userId, {
195
- limit: 5,
196
- filters: { type: "conversation" }
197
- });
198
-
199
- const systemPrompt = `You are a helpful AI. Answer based on query and memories.
200
- User Memories:
201
- ${relevantMemories.results.map(entry => `- ${entry.memory}`).join("\n")}`;
202
-
203
- const messages = [
204
- { role: "system", content: systemPrompt },
205
- { role: "user", content: message },
206
- ];
207
-
208
- // Capture new conversation
209
- await memory.capture(messages, userId, {
210
- metadata: { type: "conversation" }
211
- });
175
+ // Transfer tool schema (auto-generated)
176
+ {
177
+ rationale_for_transfer: string, // Why this transfer
178
+ conversation_context: string, // Context for destination
179
+ destination_agent: string, // Target agent name
180
+ confidence: number // >= 0.7 required
212
181
  }
213
182
  ```
214
183
 
215
- ## βš™οΈ Model Levels
184
+ **Documentation:** [Agent Transfer](./docs/11.AGENT-TRANSFER.md)
216
185
 
217
- - **LOW**: gpt-4o-mini (simple tasks)
218
- - **MEDIUM**: gpt-4o (balanced performance/cost)
219
- - **HIGH**: gpt-4o (advanced reasoning)
220
- - **SEARCH**: gpt-4o-mini-search-preview (web search with localization)
186
+ ---
221
187
 
222
- ## πŸ”„ Agent Transfer
188
+ ## Context Injection
223
189
 
224
- Agent transfer is automatically managed with:
225
- - **Temporary transfers**: Agents transfer for single transactions and return to specialized agent
226
- - Confidence threshold (0.7) for transfer
227
- - Transfer justification
228
- - Conversation context preservation
229
- - Automatic system instruction updates
230
- - **Specialized agent tracking**: Each discussion remembers its starting agent
190
+ Architecture based on OpenAI best practices for profile and memory injection:
191
+
192
+ | Priority | Tag | Location | Description |
193
+ |----------|-----|----------|-------------|
194
+ | 2 (Required) | Agent instructions | System | Non-modifiable by user |
195
+ | 1 (High) | `<profile>`, `<instructions>`, `<context>` | System/User | User context |
196
+ | 0 (Low) | `<history>` | System | Informational only |
197
+ | Auto | `<context-trail>` | System | Tool calls and transfers tracking |
231
198
 
232
199
  ```typescript
233
- // Transfer logic (handled automatically)
234
- // 1. Agent A processes user message
235
- // 2. If tool calls indicate transfer to Agent B
236
- // 3. Agent B handles the specific task
237
- // 4. Control returns to Agent A (specialized agent)
200
+ import { renderContextInjection, renderUserContextInjection } from '@agentic-api';
201
+
202
+ // System message injection (profile + instructions)
203
+ const systemInjection = renderContextInjection(
204
+ 'date: 24/01/2026\nname: John', // userProfile
205
+ '- Prefers French', // globalInstructions
206
+ '- Budget max 50k', // sessionInstructions
207
+ 'Previous discussion summary' // history (optional)
208
+ );
209
+
210
+ // User message injection (attached assets)
211
+ const userMessage = renderUserContextInjection(assets) + userQuery;
238
212
  ```
239
213
 
240
- ## πŸ”„ StateGraph Features
214
+ ---
241
215
 
242
- ### **Core Operations**
243
- ```typescript
244
- // Create or restore agent discussion
245
- const discussion = stateGraph.createOrRestore("agentName");
246
-
247
- // Add messages with auto-generated ID and timestamp
248
- stateGraph.push("agentName", {
249
- role: "assistant",
250
- content: "Hello!",
251
- name: "functionName" // For OpenAI tool calls
252
- });
216
+ ## StateGraph Memory Management
253
217
 
254
- // Set system message (overwrites existing)
255
- stateGraph.set("agentName", "You are a helpful assistant");
218
+ StateGraph manages conversation state across multiple discussions. It enables switching between agent discussions, serializing the state, and exporting a client-safe view:
256
219
 
257
- // Update token usage (cumulative)
258
- stateGraph.updateTokens("agentName", {
259
- prompt: 10,
260
- completion: 20,
261
- cost: 0.001
262
- });
220
+ ```typescript
221
+ import { AgentStateGraph, sessionStateGraphGet, sessionStateGraphSet } from '@agentic-api';
263
222
 
264
- // Clear discussion (keeps system message)
265
- stateGraph.clearDiscussion("agentName");
266
- ```
223
+ // Get or create StateGraph
224
+ let stateGraph = sessionStateGraphGet(req) || new AgentStateGraph();
267
225
 
268
- ### **Utility Functions**
269
- ```typescript
270
- // Get specialized (starting) agent for discussion
271
- import { getSpecializedAgent } from '@agentic-api';
272
- const specializedAgent = getSpecializedAgent(discussion);
226
+ // Create or restore discussion
227
+ const discussion = stateGraph.createOrRestore("welcome");
273
228
 
274
- // Find discussion by ID
275
- const discussion = stateGraph.findDiscussionById("discussion-123");
229
+ // Add messages
230
+ stateGraph.push("welcome", { role: "user", content: "Hello!" });
276
231
 
277
- // Rename discussion
278
- stateGraph.renameDiscussion("agentName", "New Name", "Description");
232
+ // Track token usage
233
+ stateGraph.updateTokens("welcome", {
234
+ prompt: 10, completion: 20, total: 30, cost: 0.001
235
+ });
236
+
237
+ // Save with gzip compression
238
+ sessionStateGraphSet(req, stateGraph);
279
239
 
280
- // Delete discussion
281
- stateGraph.deleteDiscussion("agentName");
240
+ // Client-safe view (filters system messages and tools)
241
+ const clientView = stateGraph.toClientView("welcome");
282
242
  ```
283
243
 
284
- ## πŸ—‚οΈ MapLLM Document Processing
244
+ **Documentation:** [StateGraph](./docs/13.STATEGRAPH.md)
285
245
 
286
- Modern map-reduce pattern for processing large documents with flexible content handling and OpenAI structured outputs.
246
+ ---
287
247
 
288
- - **Flexible Loaders**: `FileNativeLoader` and `StringNativeLoader` with configurable chunking strategies
289
- - **Callback-Driven**: User-defined logic for accumulation, termination, and flow control
290
- - **Structured Outputs**: Optional JSON schema validation via OpenAI structured outputs
291
- - **Robust EOF Handling**: Proper chunk boundary detection and clean termination
248
+ ## RAG System
292
249
 
293
- πŸ“– **[Complete MapLLM Documentation β†’](./docs/README-AGENT-REDUCE.md)**
250
+ Multi-RAG architecture with semantic search:
294
251
 
295
252
  ```typescript
296
- import { MapLLM, FileNativeLoader, modelConfig } from '@agentic-api';
297
-
298
- // Basic text processing
299
- const config = {
300
- digestPrompt: "Analyze this chunk for key information.",
301
- reducePrompt: "Merge analysis with previous results."
302
- };
303
-
304
- const loader = new FileNativeLoader('document.pdf', { type: 'pages', size: 1 });
305
- const mapper = new MapLLM(loader);
253
+ import { RAGManager } from '@agentic-api';
306
254
 
307
- const callback = (result, currentValue) => {
308
- result.acc = result.acc + `\n${currentValue}\n`;
309
-
310
- // Stop conditions
311
- if (result.acc.length > 5000) {
312
- result.continue = true;
313
- }
314
- if (result.metadata.iterations > 20) {
315
- result.maxIterations = true;
316
- }
317
-
318
- return result;
319
- };
255
+ // Singleton pattern per baseDir
256
+ const ragManager = RAGManager.get({
257
+ baseDir: './rag-data',
258
+ maxArchives: 5
259
+ });
320
260
 
321
- const init = {
322
- acc: "",
323
- model: modelConfig("LOW-fast"),
324
- verbose: true
325
- };
261
+ // Create RAG
262
+ await ragManager.create('procedures', { description: 'Validated procedures' });
326
263
 
327
- const result = await mapper.reduce(config, callback, init);
328
- console.log('Final result:', result.acc);
329
-
330
- // Structured output processing with JSON schema
331
- const myJsonSchema = {
332
- type: "object",
333
- properties: {
334
- summary: { type: "string" },
335
- key_points: { type: "array", items: { type: "string" } }
336
- },
337
- required: ["summary", "key_points"]
338
- };
264
+ // Incremental fork (80-90% performance gain)
265
+ await ragManager.create('procedures-v2', {
266
+ fork: 'procedures',
267
+ exclude: ['modified-1.md', 'modified-2.md']
268
+ });
339
269
 
340
- const structuredCallback = (result, currentValue) => {
341
- // Set structured output format
342
- result.format = {
343
- name: "analysis",
344
- schema: myJsonSchema,
345
- strict: true
346
- };
347
-
348
- // For structured output, accumulate as object or merged object
349
- result.acc = { ...result.acc, ...currentValue };
350
-
351
- if (result.metadata.iterations > 5) {
352
- result.maxIterations = true;
353
- }
354
-
355
- return result;
356
- };
270
+ // Build and search
271
+ await ragManager.build('procedures');
272
+ const embeddings = ragManager.load('procedures');
357
273
 
358
- const structuredResult = await mapper.reduce(config, structuredCallback, {
359
- acc: {},
360
- verbose: true
361
- });
362
- // structuredResult.acc is validated JSON matching the schema
274
+ if (embeddings.isReady()) {
275
+ const results = await embeddings.semanticSearch('How to validate budget?', {
276
+ neighbors: 5
277
+ });
278
+ }
363
279
  ```
364
280
 
365
- ## πŸ€– Agent Simulator
281
+ **Documentation:** [RAG Overview](./docs/30.RAG-OVERVIEW.md) | [RAG Manager](./docs/31.RAG-MANAGER.md)
366
282
 
367
- Advanced testing framework for agent behavior validation with scenario-based simulations.
283
+ ---
368
284
 
369
- - **Clean API**: Separated `scenario` (context) and `testCase` (test parameters)
370
- - **Oneshot by Default**: `maxExchanges=1` for simple single-response tests
371
- - **Automatic Tool Validation**: Built-in validation with `expectedTools`
372
- - **Exchange Limiting**: Control simulation length with configurable exchange limits
285
+ ## Agent Simulator
373
286
 
374
- πŸ“– **[Complete Agent Simulator Documentation β†’](./docs/README-AGENT-SIMULATOR.md)**
287
+ Testing framework with scenario-based simulations:
375
288
 
376
289
  ```typescript
377
290
  import { AgentSimulator, PERSONA_PATIENT } from '@agentic-api';
378
291
 
379
- // Configure simulator
380
292
  const simulator = new AgentSimulator({
381
- agents: [haikuAgent, welcomeAgent],
293
+ agents: [welcomeAgent, specialistAgent],
382
294
  start: "welcome",
383
295
  verbose: true
384
296
  });
385
297
 
386
- // Define test scenario (context)
298
+ // Define scenario (context)
387
299
  const scenario = {
388
- goals: "Verify that the agent can help with haiku creation. Agent provides a complete haiku poem.",
300
+ goals: "Verify agent handles requests correctly and transfers when needed.",
389
301
  persona: PERSONA_PATIENT
390
- // result defaults to '{"success": boolean, "explain": string, "error": string}'
391
302
  };
392
303
 
393
304
  // Run test case
394
305
  const result = await simulator.testCase(scenario, {
395
- query: "I want to write a haiku about nature. Can you help me?",
306
+ query: "I need help with my heating issue",
396
307
  maxExchanges: 5, // defaults to 1 (oneshot)
397
- expectedTools: { 'transferAgents': { gte: 1 } } // defaults to {}
308
+ expectedTools: { 'transferAgents': { gte: 1 } }
398
309
  });
399
310
 
400
- // Validate results
401
311
  console.log('Success:', result.success);
402
- console.log('Summary:', result.message);
403
312
  console.log('Exchanges:', result.exchangeCount);
313
+ console.log('Messages:', result.messages.length);
314
+ ```
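`expectedTools` also accepts `equal` and `lte` constraints; a hedged sketch (the `createTicket` tool name is purely illustrative):

```typescript
// Sketch: stricter tool-usage validation; `createTicket` is a hypothetical tool name.
const strictResult = await simulator.testCase(scenario, {
  query: "I need help with my heating issue",
  maxExchanges: 3,
  expectedTools: {
    transferAgents: { equal: 1 }, // exactly one transfer expected
    createTicket: { lte: 1 }      // at most one ticket creation
  }
});
console.log('Strict success:', strictResult.success);
```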
404
315
 
405
- if (!result.success) {
406
- console.error('Error:', result.error);
407
- }
316
+ **Built-in Personas:**
317
+ - `PERSONA_PATIENT` - Patient, polite user
318
+ - `PERSONA_PRESSE` - Rushed user wanting quick solutions
319
+ - `PERSONA_ENERVE` - Frustrated user, 3rd call for same problem
320
+
321
+ **Documentation:** [Agent Simulator](./docs/10.AGENT.SIMULATOR.md)
322
+
323
+ ---
324
+
325
+ ## MapLLM Document Processing
326
+
327
+ True map-reduce pattern for large documents with structured outputs and callback-driven accumulation:
328
+
329
+ ```typescript
330
+ import { MapLLM, FileNativeLoader, StringNativeLoader } from '@agentic-api';
331
+
332
+ // Chunking strategies: lines, pages, paragraphs, overlap
333
+ const loader = new FileNativeLoader('document.pdf', { type: 'pages', size: 1 });
334
+ const mapper = new MapLLM(loader);
335
+
336
+ const config = {
337
+ digestPrompt: "Analyze this chunk and extract key points.",
338
+ reducePrompt: "Merge this analysis with previous results.",
339
+ reduceModulo: 5 // Optional: reduce every N chunks
340
+ };
341
+
342
+ // Callback controls accumulation and termination
343
+ const result = await mapper.reduce(config, (result, currentValue) => {
344
+ // Structured output support
345
+ result.format = { name: "analysis", schema: myJsonSchema, strict: true };
346
+ result.acc = { ...result.acc, ...currentValue };
347
+
348
+ // Stop conditions
349
+ if (result.metadata.iterations > 20) result.maxIterations = true;
350
+ return result;
351
+ }, { acc: {}, model: 'LOW', verbose: true });
408
352
  ```
409
353
 
410
- ### Simulation Features
354
+ **Documentation:** [MapLLM](./docs/10.AGENTS.MAPLLM.md)
411
355
 
412
- - **Separated Concerns**: `scenario` for context, `testCase` for test parameters
413
- - **Sensible Defaults**: `maxExchanges=1`, `expectedTools={}`, default result format
414
- - **Persona Simulation**: Built-in personas (PERSONA_PATIENT, PERSONA_PRESSE, PERSONA_ENERVE)
415
- - **Tool Validation**: Automatic validation with `equal`, `gte`, `lte` constraints
416
- - **Execution Metadata**: Access to token usage, actions, and performance metrics
356
+ ---
417
357
 
418
- ## πŸ“‹ Rules Management System
358
+ ## JobRunner (Plan β†’ Execute β†’ Reduce)
419
359
 
420
- Enterprise-grade Git-based workflow for managing business rules, procedures, and documentation.
360
+ Sequential task execution engine for complex multi-objective requests:
421
361
 
422
- - **Git-Based Workflow**: Complete version control with branch-based status management
423
- - **Pull Request Validation**: Structured validation process with reviewer assignment
424
- - **Multi-File Operations**: Atomic operations across multiple related documents
425
- - **Content Management**: Rich metadata support with front-matter parsing
426
- - **Search Integration**: Full-text search with RAG embeddings support
362
+ ```typescript
363
+ import { JobRunner, jobPlannerPrompt } from '@agentic-api';
364
+
365
+ // JobRunner workflow:
366
+ // 1. Planner: userRequest β†’ JobPlan (list of TaskSpec)
367
+ // 2. Executor: TaskSpec + memory β†’ TaskResult
368
+ // 3. Reducer: (prevMemory, task, result) β†’ ReducedJobMemory (via MapLLM)
369
+
370
+ // Features:
371
+ // - Strict contracts (structured outputs / schema validation)
372
+ // - Context reduction after each task via MapLLM reducer
373
+ // - Error policy: 2 attempts max per task, then thrown β†’ synthesis
374
+ // - Sequential deterministic execution (V1)
375
+ ```
376
+
377
+ **Key Contracts:**
378
+ - `JobPlan`: jobId, goal, tasks[]
379
+ - `TaskSpec`: id, title, type, dependsOn[], acceptance[]
380
+ - `TaskResult`: taskId, ok, summary, data, artifacts[], error
381
+ - `ReducedJobMemory`: memory (short canonical), index (stable refs)
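A minimal TypeScript sketch of these contracts, using only the field names listed above (concrete types are assumptions):

```typescript
// Sketch of the JobRunner contracts; field types are assumed, not taken from the source.
interface TaskSpec   { id: string; title: string; type: string; dependsOn: string[]; acceptance: string[]; }
interface JobPlan    { jobId: string; goal: string; tasks: TaskSpec[]; }
interface TaskResult { taskId: string; ok: boolean; summary: string; data?: unknown; artifacts?: string[]; error?: string; }
interface ReducedJobMemory { memory: string; index: Record<string, string>; }
```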
382
+
383
+ **Documentation:** [JobRunner](./docs/10.AGENTS.JOB.md)
384
+
385
+ ---
427
386
 
428
- πŸ“– **[Complete Rules System Documentation β†’](./docs/README-RULES-SYSTEM.md)**
387
+ ## Rules Management System
388
+
389
+ Git-based workflow for business documents:
429
390
 
430
391
  ```typescript
431
392
  import { RulesWorkflow, gitLoad, RuleStatus } from '@agentic-api';
432
393
 
433
- // Initialize workflow
434
- const config = gitLoad(); // Loads from environment variables
435
- const workflow = new RulesWorkflow(config);
394
+ const workflow = new RulesWorkflow(gitLoad());
436
395
 
437
- // Create a new rule
438
- const newRule = await workflow.createRule('procedures/finance/budget.md', {
396
+ // Create rule with auto-generated ID
397
+ const rule = await workflow.createRule('procedures/budget.md', {
439
398
  title: 'Budget Validation Process',
440
- slugs: ['budget-validation'],
441
399
  tags: ['finance', 'validation'],
442
400
  role: 'rule'
443
- }, 'Initial budget validation procedure');
401
+ }, 'Initial content...');
444
402
 
445
- // Create validation request (PR)
403
+ // Create PR for validation
446
404
  const prBranch = await workflow.createPullRequest(
447
- ['procedure-a.md', 'procedure-b.md'],
448
- 'Update financial procedures'
405
+ ['procedures/budget.md'],
406
+ 'New budget procedure'
449
407
  );
450
408
 
451
- // Search rules by content
452
- const searchResults = await workflow.searchRules('budget validation', {
453
- branch: 'main',
454
- tags: ['finance'],
455
- limit: 10
456
- });
457
-
458
- // Get rules by status
459
- const draftRules = await workflow.getRulesByStatus(RuleStatus.EDITING);
409
+ // Search and filter
410
+ const results = await workflow.searchRules('budget', { tags: ['finance'] });
411
+ const drafts = await workflow.getRulesByStatus(RuleStatus.EDITING);
460
412
  ```
461
413
 
462
- ## πŸ§ͺ Testing
414
+ **Branch-Based Status:**
415
+ | Branch | Status |
416
+ |--------|--------|
417
+ | `main` | PUBLISHED |
418
+ | `rule-editor` | EDITING |
419
+ | `rule-validation-*` | PULLREQUEST |
420
+
421
+ **Documentation:** [Rules Overview](./docs/40.RULES-OVERVIEW.md)
422
+
423
+ ---
424
+
425
+ ## Documentation
426
+
427
+ Complete documentation available in `./docs/`:
428
+
429
+ - [Agent Configuration](./docs/10.AGENTS.md)
430
+ - [Agent Transfer](./docs/11.AGENT-TRANSFER.md)
431
+ - [StateGraph](./docs/13.STATEGRAPH.md)
432
+ - [RAG Overview](./docs/30.RAG-OVERVIEW.md)
433
+ - [Rules Overview](./docs/40.RULES-OVERVIEW.md)
434
+ - [Troubleshooting](./docs/90.TROUBLESHOOTING.md)
435
+
436
+ ---
437
+
438
+ ## Testing
463
439
 
464
440
  ```bash
465
441
  npm test
466
442
  ```
467
443
 
468
- ## πŸ“„ License
444
+ ---
469
445
 
470
- MIT License
446
+ ## License
471
447
 
472
- Copyright (c) 2024 Pilet & Renaud SA
448
+ MIT License - Copyright (c) 2024-2026 olivier@evaletolab.ch
473
449
 
474
- Permission is hereby granted, free of charge, to any person obtaining a copy
475
- of this software and associated documentation files (the "Software"), to deal
476
- in the Software without restriction, including without limitation the rights
477
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
478
- copies of the Software, and to permit persons to whom the Software is
479
- furnished to do so, subject to the following conditions:
450
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
480
451
 
481
- The above copyright notice and this permission notice shall be included in all
482
- copies or substantial portions of the Software.
452
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
483
453
 
484
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
485
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
486
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
487
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
488
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
489
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
490
- SOFTWARE.
454
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "agentic-api",
3
- "version": "2.0.636",
3
+ "version": "2.0.642",
4
4
  "description": "API pour l'orchestration d'agents intelligents avec sΓ©quences et escalades automatiques",
5
5
  "main": "dist/src/index.js",
6
6
  "types": "dist/src/index.d.ts",