network-ai 3.3.2 → 3.3.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,7 +4,7 @@
4
4
 
5
5
  [![CI](https://github.com/jovanSAPFIONEER/Network-AI/actions/workflows/ci.yml/badge.svg)](https://github.com/jovanSAPFIONEER/Network-AI/actions/workflows/ci.yml)
6
6
  [![CodeQL](https://github.com/jovanSAPFIONEER/Network-AI/actions/workflows/codeql.yml/badge.svg)](https://github.com/jovanSAPFIONEER/Network-AI/actions/workflows/codeql.yml)
7
- [![Release](https://img.shields.io/badge/release-v3.3.2-blue.svg)](https://github.com/jovanSAPFIONEER/Network-AI/releases)
7
+ [![Release](https://img.shields.io/badge/release-v3.3.4-blue.svg)](https://github.com/jovanSAPFIONEER/Network-AI/releases)
8
8
  [![npm](https://img.shields.io/npm/dw/network-ai.svg?label=npm%20downloads)](https://www.npmjs.com/package/network-ai)
9
9
  [![ClawHub](https://img.shields.io/badge/ClawHub-network--ai-orange.svg)](https://clawhub.ai/skills/network-ai)
10
10
  [![Node.js](https://img.shields.io/badge/node-%3E%3D18.0.0-brightgreen.svg)](https://nodejs.org)
@@ -19,12 +19,14 @@
19
19
 
20
20
  > **Legacy Users:** This skill works with **Clawdbot** and **Moltbot** (now OpenClaw). If you're searching for *Moltbot Security*, *Clawdbot Swarm*, or *Moltbot multi-agent* -- you're in the right place!
21
21
 
22
- Network-AI is a framework-agnostic multi-agent orchestrator and **behavioral control plane** that connects LLM agents across **12 frameworks** -- LangChain, AutoGen, CrewAI, OpenAI Assistants, LlamaIndex, Semantic Kernel, Haystack, DSPy, Agno, MCP, OpenClaw, and custom adapters. It provides shared blackboard coordination with atomic commits, built-in security (AES-256, HMAC tokens, rate limiting), content quality gates with hallucination detection, compliance enforcement, and agentic workflow patterns (parallel fan-out/fan-in, voting, chaining). Zero dependencies per adapter -- bring your own framework SDK and start building governed multi-agent systems in minutes.
22
+ If network-ai saves you time, a ⭐ on GitHub helps others find it.
23
+
24
+ Connect agents across **12 frameworks** through a shared blackboard with built-in security, compliance enforcement, and behavioral governance -- in a single `npm install`. No glue code, no lock-in.
23
25
 
24
26
  **Why Network-AI?**
25
- - **Framework-agnostic** -- Not locked to one LLM provider or agent SDK
26
- - **Governance layer** -- Permission gating, audit trails, budget ceilings, and compliance enforcement across all agents
27
- - **Shared state** -- Atomic blackboard with conflict resolution for safe parallel agent coordination (fan-out/fan-in)
27
+ - **Framework-agnostic** -- LangChain, AutoGen, CrewAI, MCP, OpenAI Assistants, and 7 more in one orchestrator
28
+ - **Governed coordination** -- FSM-controlled agent turns, permission gating, audit trails, budget ceilings
29
+ - **Shared state** -- Atomic blackboard with conflict resolution for safe parallel agent coordination
28
30
  - **Production security** -- AES-256 encryption, HMAC audit logs, rate limiting, input sanitization
29
31
  - **Zero config** -- Works out of the box with `createSwarmOrchestrator()`
30
32
 
@@ -540,6 +542,190 @@ class MyAdapter extends BaseAdapter {
540
542
 
541
543
  See [references/adapter-system.md](references/adapter-system.md) for the full adapter architecture guide.
542
544
 
545
+ ## API Architecture & Performance
546
+
547
+ **Your swarm is only as fast as the backend it calls into.**
548
+
549
+ Network-AI is backend-agnostic — every agent in a swarm can call its own backend: one cloud API, a different provider's API, or a local GPU model. That choice has a direct and significant impact on speed, parallelism, and reliability.
550
+
551
+ ### Why It Matters
552
+
553
+ When you run a 5-agent swarm, Network-AI can dispatch all 5 calls simultaneously. Whether those calls actually execute in parallel depends entirely on what's behind each agent:
554
+
555
+ | Backend | Parallelism | Typical 5-agent swarm | Notes |
556
+ |---|---|---|---|
557
+ | **Single cloud API key** (OpenAI, Anthropic, etc.) | Rate-limited | 40–70s sequential | RPM limits force sequential dispatch + retry waits |
558
+ | **Multiple API keys / providers** | True parallel | 8–15s | Each agent hits a different key or provider |
559
+ | **Local GPU** (Ollama, llama.cpp, vLLM) | True parallel | 5–20s depending on hardware | No RPM limit — all 5 agents fire simultaneously |
560
+ | **Mixed** (some cloud, some local) | Partial | Varies | Local agents never block; cloud agents rate-paced |
561
+
562
+ ### The Single-Key Rate Limit Problem
563
+
564
+ Cloud APIs enforce **Requests Per Minute (RPM)** limits per API key. When you run 5 agents sharing one key and hit the ceiling, the API silently returns empty responses — not a 429 error, just blank content. Network-AI's swarm demos handle this automatically with **sequential dispatch** (one agent at a time) and **adaptive header-based pacing** that reads the `x-ratelimit-reset-requests` header to wait exactly as long as needed before the next call.
565
+
566
+ ```
567
+ Single key (gpt-5.2, 6 RPM limit):
568
+ Agent 1 ──call──▶ response (7s)
569
+ wait 1s
570
+ Agent 2 ──call──▶ response (7s)
571
+ wait 1s
572
+ ... (sequential)
573
+ Total: ~60s for 5 agents + coordinator
574
+ ```
575
+
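The adaptive, header-based pacing described above can be sketched as a small helper. This is an illustrative reconstruction, not the demos' actual source: `parseResetMs` and `paced` are hypothetical names, and the parsing assumes the OpenAI-style duration format used by `x-ratelimit-reset-requests` (`250ms`, `1s`, `1m30s`).

```typescript
// Hypothetical helper (not network-ai's exported API): convert an
// OpenAI-style rate-limit reset header such as "250ms", "1s", or "1m30s"
// into the number of milliseconds to wait before the next agent call.
export function parseResetMs(header: string): number {
  const re = /(\d+(?:\.\d+)?)(ms|s|m|h)/g;
  let totalMs = 0;
  let match: RegExpExecArray | null;
  while ((match = re.exec(header)) !== null) {
    const value = parseFloat(match[1]);
    switch (match[2]) {
      case 'ms': totalMs += value; break;
      case 's':  totalMs += value * 1_000; break;
      case 'm':  totalMs += value * 60_000; break;
      case 'h':  totalMs += value * 3_600_000; break;
    }
  }
  return Math.ceil(totalMs);
}

// Sequential dispatch with header-based pacing (sketch): wait exactly as
// long as the API asks for between calls instead of sleeping a fixed time.
export async function paced<T>(
  calls: Array<() => Promise<{ result: T; resetHeader?: string }>>,
): Promise<T[]> {
  const results: T[] = [];
  for (const call of calls) {
    const { result, resetHeader } = await call();
    results.push(result);
    if (resetHeader) {
      await new Promise(r => setTimeout(r, parseResetMs(resetHeader)));
    }
  }
  return results;
}
```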
576
+ ### Multiple Keys or Providers = True Parallel
577
+
578
+ Register each reviewer agent against a different API key or provider, and the dispatcher fires all 5 simultaneously:
579
+
580
+ ```typescript
581
+ import { CustomAdapter, AdapterRegistry } from 'network-ai';
+ import OpenAI from 'openai';
582
+
583
+ // Each agent points to a different OpenAI key
584
+ const registry = new AdapterRegistry();
585
+
586
+ for (const reviewer of REVIEWERS) {
587
+ const adapter = new CustomAdapter();
588
+ const client = new OpenAI({ apiKey: process.env[`OPENAI_KEY_${reviewer.id.toUpperCase()}`] });
589
+
590
+ adapter.registerHandler(reviewer.id, async (payload) => {
591
+ const resp = await client.chat.completions.create({ ... });
592
+ return { findings: extractContent(resp) };
593
+ });
594
+
595
+ registry.register(reviewer.id, adapter);
596
+ }
597
+
598
+ // Now all 5 dispatch in parallel via Promise.all
599
+ // Total: ~8-12s instead of ~60s
600
+ ```
601
+
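The parallel dispatch mentioned in the final comment above amounts to a `Promise.all` over the registered handlers. A minimal sketch, assuming a hypothetical `dispatchAll` helper rather than network-ai's actual internals:

```typescript
// Illustrative sketch: fire every reviewer's handler at once and collect
// results in registration order. Wall-clock time is the slowest single
// call, not the sum of all calls.
export async function dispatchAll<T>(
  handlers: Array<(payload: unknown) => Promise<T>>,
  payload: unknown,
): Promise<T[]> {
  return Promise.all(handlers.map(h => h(payload)));
}
```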
602
+ ### Local GPU = Zero Rate Limits
603
+
604
+ Run Ollama or any OpenAI-compatible local server and drop it in as a backend. With no RPM ceiling, all agents fire at once — true parallelism for free:
605
+
606
+ ```typescript
607
+ // Point any agent at a local Ollama or vLLM server
608
+ const localClient = new OpenAI({
609
+ apiKey : 'not-needed',
610
+ baseURL : 'http://localhost:11434/v1',
611
+ });
612
+
613
+ adapter.registerHandler('sec_review', async (payload) => {
614
+ const resp = await localClient.chat.completions.create({
615
+ model : 'llama3.2', // or mistral, deepseek-r1, codellama, etc.
616
+ messages: [...],
617
+ });
618
+ return { findings: extractContent(resp) };
619
+ });
620
+ ```
621
+
622
+ ### Mixing Cloud and Local
623
+
624
+ The adapter system makes it trivial to give some agents a cloud backend and others a local one:
625
+
626
+ ```typescript
627
+ // Fast local model for lightweight reviewers
628
+ registry.register('test_review', localAdapter);
629
+ registry.register('arch_review', localAdapter);
630
+
631
+ // Cloud model for high-stakes reviewers
632
+ registry.register('sec_review', cloudAdapter); // GPT-4o / Claude
633
+ ```
634
+
635
+ Network-AI's orchestrator, blackboard, and trust model stay identical regardless of what's behind each adapter. The only thing that changes is speed.
636
+
637
+ ### Summary
638
+
639
+ | You have | What to expect |
640
+ |---|---|
641
+ | One cloud API key | Sequential dispatch, 40–70s per 5-agent swarm — fully handled automatically |
642
+ | Multiple cloud keys | Near-parallel, 10–15s — use one key per adapter instance |
643
+ | Local GPU (Ollama, vLLM) | True parallel, 5–20s depending on hardware |
644
+ | Home GPU + cloud mix | Local agents never block — cloud agents rate-paced independently |
645
+
646
+ The framework doesn't get in the way of any of these setups. Connect whatever backend you have and the orchestration layer handles the rest.
647
+
648
+ ### Cloud Provider Performance
649
+
650
+ Not all cloud APIs perform the same. Model size, inference infrastructure, and tier all affect how fast each agent gets a response — and that directly multiplies across every agent in your swarm.
651
+
652
+ | Provider / Model | Avg response (5-agent swarm) | RPM limit (free/tier-1) | Notes |
653
+ |---|---|---|---|
654
+ | **OpenAI gpt-5.2** | 6–10s per call | 3–6 RPM | Flagship model, high latency, strict RPM |
655
+ | **OpenAI gpt-4o-mini** | 2–4s per call | 500 RPM | Fast, cheap, good for reviewer agents |
656
+ | **OpenAI gpt-4o** | 4–7s per call | 60–500 RPM | Balanced quality/speed |
657
+ | **Anthropic Claude 3.5 Haiku** | 2–3s per call | 50 RPM | Fastest Claude, great for parallel agents |
658
+ | **Anthropic Claude 3.7 Sonnet** | 4–8s per call | 50 RPM | Stronger reasoning, higher latency |
659
+ | **Google Gemini 2.0 Flash** | 1–3s per call | 15 RPM (free) | Very fast inference, low RPM on free tier |
660
+ | **Groq (Llama 3.3 70B)** | 0.5–2s per call | 30 RPM | Fastest cloud inference available |
661
+ | **Together AI / Fireworks** | 1–3s per call | Varies by plan | Good for parallel workloads, competitive RPM |
662
+
663
+ **Key insight:** A 5-agent swarm using `gpt-4o-mini` at 500 RPM can fire all 5 agents truly in parallel and finish in ~4s total. The same swarm on `gpt-5.2` at 6 RPM must go sequential and takes 60s. **The model tier matters more than the orchestration framework.**
664
+
665
+ #### Choosing a Model for Swarm Agents
666
+
667
+ - **Speed over depth** (many agents, real-time feedback) → `gpt-4o-mini`, `gpt-5-mini`, `claude-3.5-haiku`, `gemini-2.0-flash`, `groq/llama-3.3-70b`
668
+ - **Depth over speed** (fewer agents, high-stakes output) → `gpt-4o`, `claude-3.7-sonnet`, `gpt-5.2`
669
+ - **Free / no-cost testing** → Groq free tier, Gemini free tier, or Ollama locally
670
+ - **Production swarms with budget** → Multiple keys across providers, route different agents to different models
671
+
672
+ All of these plug into Network-AI through the `CustomAdapter` by swapping the client's `baseURL` and `model` string — no other code changes needed.
673
+
674
+ ### `max_completion_tokens` — The Silent Truncation Trap
675
+
676
+ One of the most common failure modes in agentic output tasks is **silent truncation**. When a model hits the `max_completion_tokens` ceiling, it stops mid-output and returns whatever it has — no error, no warning. The API call succeeds with a 200 and `finish_reason: "length"` instead of `"stop"`.
677
+
678
+ **This is especially dangerous for code-rewrite agents** where the output is a full file. A fixed `max_completion_tokens: 3000` cap will silently drop everything after line ~150 of a 200-line fix.
679
+
680
+ ```
681
+ # What you set vs what you need
682
+
683
+ max_completion_tokens: 3000 → enough for a short blog post
684
+ → NOT enough for a 200-line code rewrite
685
+
686
+ # Real numbers (gpt-5-mini, order-service.ts rewrite):
687
+ Blockers section: ~120 tokens
688
+ Fixed code: ~2,800 tokens (213 lines with // FIX: comments)
689
+ Total needed: ~3,000 tokens ← hits the cap exactly, empty output
690
+ Fix: set to 16,000 → full rewrite delivered in one shot
691
+ ```
692
+
693
+ **Lessons learned from building the code-review swarm:**
694
+
695
+ | Issue | Root cause | Fix |
696
+ |---|---|---|
697
+ | Fixed code output was empty | `max_completion_tokens: 3000` too low for a full rewrite | Raise to `16000`+ for any code-output agent |
698
+ | `finish_reason: "length"` silently discards output | Model hits cap, returns partial response with no error | Always check `choices[0].finish_reason` and alert on `"length"` |
699
+ | `gpt-5.2` slow + expensive for reviewer agents | Flagship model = high latency + $14/1M output tokens | Use `gpt-5-mini` ($2/1M, 128k output, same RPM) for reviewer/fixer agents |
700
+ | Coordinator + fixer as two separate calls | Second call hits rate limit window, adds 60s wait | Merge into one combined call with a structured two-section response format |
701
+
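The `finish_reason` check from the table can be a one-line guard around every agent call. The response shape below mirrors the OpenAI chat.completions payload; `assertComplete` is a hypothetical helper name, not network-ai's API:

```typescript
// Minimal response shape (subset of the OpenAI chat.completions payload).
interface ChatResponse {
  choices: Array<{
    finish_reason: string;
    message: { content: string | null };
  }>;
}

// Hypothetical guard: fail loudly on truncation instead of silently
// accepting a partial (or empty) rewrite.
export function assertComplete(resp: ChatResponse): string {
  const choice = resp.choices[0];
  if (choice.finish_reason === 'length') {
    throw new Error(
      'Output truncated (finish_reason: "length") -- raise max_completion_tokens and retry',
    );
  }
  return choice.message.content ?? '';
}
```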
702
+ **Rule of thumb for `max_completion_tokens` by task:**
703
+
704
+ | Task | Recommended cap |
705
+ |---|---|
706
+ | Short classification / sentiment | 200–500 |
707
+ | Code review findings (one reviewer) | 400–800 |
708
+ | Blocker summary (coordinator) | 500–1,000 |
709
+ | Full file rewrite (≤300 lines) | 12,000–16,000 |
710
+ | Full file rewrite (≤1,000 lines) | 32,000–64,000 |
711
+ | Document / design revision | 16,000–32,000 |
712
+
713
+ All GPT-5 variants (`gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-5.2`) support **128,000 max output tokens** — the ceiling is never the model; it's the cap you set.
714
+
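The rule-of-thumb table maps directly to a small lookup. The task names below are illustrative, and the sketch picks the upper bound of each range so truncation is the rare case rather than the default:

```typescript
// Illustrative mapping of the rule-of-thumb table above (hypothetical
// task names, not a network-ai API).
const TOKEN_CAPS: Record<string, number> = {
  classification: 500,        // short classification / sentiment
  review_findings: 800,       // one reviewer's findings
  blocker_summary: 1_000,     // coordinator summary
  file_rewrite_small: 16_000, // full rewrite, <= ~300 lines
  file_rewrite_large: 64_000, // full rewrite, <= ~1,000 lines
  document_revision: 32_000,  // document / design revision
};

export function capFor(task: string): number {
  return TOKEN_CAPS[task] ?? 16_000; // default high rather than low
}
```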
715
+ #### Cloud GPU Instances (Self-Hosted on AWS / GCP / Azure)
716
+
717
+ Running your own model on a cloud GPU VM (e.g. AWS `p3.2xlarge` / A100, GCP `a2-highgpu`, Azure `NC` series) sits between managed APIs and local hardware:
718
+
719
+ | Setup | Parallelism | Speed vs managed API | RPM limit |
720
+ |---|---|---|---|
721
+ | A100 (80GB) + vLLM, Llama 3.3 70B | True parallel | **Faster** — 0.5–2s per call | None |
722
+ | H100 + vLLM, Mixtral 8x7B | True parallel | **Faster** — 0.3–1s per call | None |
723
+ | T4 / V100 + Ollama, Llama 3.2 8B | True parallel | Comparable | None |
724
+
725
+ Since you own the endpoint, there are no rate limits — all 5 agents fire at the same moment. At inference speeds on an A100, a 5-agent swarm can complete in **3–8 seconds** for a 70B model, comparable to Groq and faster than any managed flagship model.
726
+
727
+ The tradeoff is cost (GPU VMs are $1–$5/hr) and setup (vLLM install, model download). For high-volume production swarms or teams that want no external API dependency, it's the fastest architecture available. The connection is identical to local Ollama — just point `baseURL` at your VM's IP.
728
+
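Reusing the local Ollama pattern with a VM endpoint looks like this; the IP, port, and comment below are placeholders for your own server:

```typescript
// Same OpenAI-compatible client options as the local example; only the
// baseURL changes. 10.0.0.12:8000 is a placeholder for your VM's address
// (vLLM serves an OpenAI-compatible API under /v1).
const vmBackend = {
  apiKey: 'not-needed', // self-hosted servers ignore the key
  baseURL: 'http://10.0.0.12:8000/v1',
};
// Pass vmBackend to `new OpenAI(vmBackend)` exactly as in the local example.
```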
543
729
  ## Permission System
544
730
 
545
731
  The AuthGuardian evaluates requests using:
package/dist/run.d.ts ADDED
@@ -0,0 +1,3 @@
1
+ #!/usr/bin/env ts-node
2
+ export {};
3
+ //# sourceMappingURL=run.d.ts.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"run.d.ts","sourceRoot":"","sources":["../run.ts"],"names":[],"mappings":""}
package/dist/run.js ADDED
@@ -0,0 +1,143 @@
1
+ #!/usr/bin/env ts-node
2
+ "use strict";
3
+ var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
4
+ if (k2 === undefined) k2 = k;
5
+ var desc = Object.getOwnPropertyDescriptor(m, k);
6
+ if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
7
+ desc = { enumerable: true, get: function() { return m[k]; } };
8
+ }
9
+ Object.defineProperty(o, k2, desc);
10
+ }) : (function(o, m, k, k2) {
11
+ if (k2 === undefined) k2 = k;
12
+ o[k2] = m[k];
13
+ }));
14
+ var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
15
+ Object.defineProperty(o, "default", { enumerable: true, value: v });
16
+ }) : function(o, v) {
17
+ o["default"] = v;
18
+ });
19
+ var __importStar = (this && this.__importStar) || (function () {
20
+ var ownKeys = function(o) {
21
+ ownKeys = Object.getOwnPropertyNames || function (o) {
22
+ var ar = [];
23
+ for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
24
+ return ar;
25
+ };
26
+ return ownKeys(o);
27
+ };
28
+ return function (mod) {
29
+ if (mod && mod.__esModule) return mod;
30
+ var result = {};
31
+ if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
32
+ __setModuleDefault(result, mod);
33
+ return result;
34
+ };
35
+ })();
36
+ Object.defineProperty(exports, "__esModule", { value: true });
37
+ /**
38
+ * network-ai demo launcher
39
+ * Run: npx ts-node run.ts
40
+ */
41
+ const fs_1 = require("fs");
42
+ const path_1 = require("path");
43
+ const child_process_1 = require("child_process");
44
+ const readline = __importStar(require("readline"));
45
+ // ─── Demo registry ────────────────────────────────────────────────────────────
46
+ const DEMOS = [
47
+ { id: '01', file: 'examples/01-hello-swarm.ts', title: 'Hello Swarm', desc: '3-agent greeting pipeline' },
48
+ { id: '02', file: 'examples/02-fsm-pipeline.ts', title: 'FSM Pipeline', desc: 'Finite-state-machine task orchestration' },
49
+ { id: '03', file: 'examples/03-parallel-agents.ts', title: 'Parallel Agents', desc: 'Fan-out + merge pattern' },
50
+ { id: '04', file: 'examples/04-live-swarm.ts', title: 'AI Safety Swarm', desc: '9-agent live research swarm + executive summary' },
51
+ { id: '05', file: 'examples/05-code-review-swarm.ts', title: 'Code Review Swarm', desc: '5 specialist reviewers + coordinator verdict' },
52
+ ];
53
+ // ─── Colours ──────────────────────────────────────────────────────────────────
54
+ const c = {
55
+ reset: '\x1b[0m',
56
+ bold: '\x1b[1m',
57
+ dim: '\x1b[2m',
58
+ cyan: '\x1b[36m',
59
+ yellow: '\x1b[33m',
60
+ green: '\x1b[32m',
61
+ red: '\x1b[31m',
62
+ white: '\x1b[97m',
63
+ };
64
+ function banner() {
65
+ console.clear();
66
+ console.log();
67
+ console.log(` ${c.bold}${c.cyan}network-ai${c.reset} — demo launcher`);
68
+ console.log(` ${c.dim}──────────────────────────────────────${c.reset}`);
69
+ console.log();
70
+ }
71
+ function printMenu(available) {
72
+ available.forEach((d, i) => {
73
+ const num = `${c.bold}${c.yellow}[${i + 1}]${c.reset}`;
74
+ const title = `${c.bold}${c.white}${d.title}${c.reset}`;
75
+ const desc = `${c.dim}${d.desc}${c.reset}`;
76
+ console.log(` ${num} ${title}`);
77
+ console.log(` ${desc}`);
78
+ console.log();
79
+ });
80
+ console.log(` ${c.dim}[q] Quit${c.reset}`);
81
+ console.log();
82
+ }
83
+ function ask(prompt) {
84
+ const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
85
+ return new Promise(resolve => {
86
+ rl.question(prompt, answer => {
87
+ rl.close();
88
+ resolve(answer.trim());
89
+ });
90
+ });
91
+ }
92
+ function runDemo(file) {
93
+ return new Promise((resolve, reject) => {
94
+ console.log();
95
+ console.log(` ${c.dim}Launching: ${file}${c.reset}`);
96
+ console.log();
97
+ const proc = (0, child_process_1.spawn)('npx', ['ts-node', file], { stdio: 'inherit', shell: true, cwd: process.cwd() });
98
+ proc.on('exit', code => {
99
+ if (code === 0)
100
+ resolve();
101
+ else
102
+ reject(new Error(`Demo exited with code ${code}`));
103
+ });
104
+ proc.on('error', reject);
105
+ });
106
+ }
107
+ // ─── Main ─────────────────────────────────────────────────────────────────────
108
+ async function main() {
109
+ const available = DEMOS.filter(d => (0, fs_1.existsSync)((0, path_1.join)(process.cwd(), d.file)));
110
+ while (true) {
111
+ banner();
112
+ printMenu(available);
113
+ const answer = await ask(` ${c.bold}Choose a demo:${c.reset} `);
114
+ if (answer.toLowerCase() === 'q' || answer.toLowerCase() === 'quit') {
115
+ console.log(`\n ${c.dim}Bye.${c.reset}\n`);
116
+ process.exit(0);
117
+ }
118
+ const idx = parseInt(answer) - 1;
119
+ if (isNaN(idx) || idx < 0 || idx >= available.length) {
120
+ console.log(`\n ${c.red}Invalid choice — press Enter to try again.${c.reset}`);
121
+ await ask('');
122
+ continue;
123
+ }
124
+ const demo = available[idx];
125
+ try {
126
+ await runDemo(demo.file);
127
+ }
128
+ catch (err) {
129
+ console.log(`\n ${c.red}${err.message}${c.reset}`);
130
+ }
131
+ console.log();
132
+ const again = await ask(` ${c.dim}Back to menu? [Y/n]:${c.reset} `);
133
+ if (again.toLowerCase() === 'n') {
134
+ console.log(`\n ${c.dim}Bye.${c.reset}\n`);
135
+ process.exit(0);
136
+ }
137
+ }
138
+ }
139
+ main().catch(err => {
140
+ console.error(err);
141
+ process.exit(1);
142
+ });
143
+ //# sourceMappingURL=run.js.map
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "network-ai",
3
- "version": "3.3.2",
3
+ "version": "3.3.4",
4
4
  "description": "AI agent orchestration framework for TypeScript/Node.js - plug-and-play multi-agent coordination with 12 frameworks (LangChain, AutoGen, CrewAI, OpenAI Assistants, LlamaIndex, Semantic Kernel, Haystack, DSPy, Agno, MCP, OpenClaw). Built-in security, swarm intelligence, and agentic workflow patterns.",
5
5
  "main": "dist/index.js",
6
6
  "types": "dist/index.d.ts",
@@ -70,6 +70,7 @@
70
70
  },
71
71
  "devDependencies": {
72
72
  "@types/node": "^25.2.3",
73
+ "openai": "^6.22.0",
73
74
  "ts-node": "^10.9.2",
74
75
  "typescript": "^5.9.3"
75
76
  },