@lightining/general.ai 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. package/LICENSE +178 -0
  2. package/README.md +809 -0
  3. package/dist/defaults.d.ts +7 -0
  4. package/dist/defaults.js +58 -0
  5. package/dist/defaults.js.map +1 -0
  6. package/dist/endpoint-adapters.d.ts +12 -0
  7. package/dist/endpoint-adapters.js +225 -0
  8. package/dist/endpoint-adapters.js.map +1 -0
  9. package/dist/general-ai.d.ts +13 -0
  10. package/dist/general-ai.js +54 -0
  11. package/dist/general-ai.js.map +1 -0
  12. package/dist/index.d.ts +8 -0
  13. package/dist/index.js +8 -0
  14. package/dist/index.js.map +1 -0
  15. package/dist/memory.d.ts +6 -0
  16. package/dist/memory.js +10 -0
  17. package/dist/memory.js.map +1 -0
  18. package/dist/prompts/endpoint-chat-completions.txt +7 -0
  19. package/dist/prompts/endpoint-responses.txt +7 -0
  20. package/dist/prompts/identity.txt +17 -0
  21. package/dist/prompts/memory.txt +9 -0
  22. package/dist/prompts/personality.txt +9 -0
  23. package/dist/prompts/protocol.txt +54 -0
  24. package/dist/prompts/safety.txt +10 -0
  25. package/dist/prompts/task.txt +7 -0
  26. package/dist/prompts/thinking.txt +10 -0
  27. package/dist/prompts/tools-subagents.txt +18 -0
  28. package/dist/prompts.d.ts +11 -0
  29. package/dist/prompts.js +126 -0
  30. package/dist/prompts.js.map +1 -0
  31. package/dist/protocol.d.ts +15 -0
  32. package/dist/protocol.js +393 -0
  33. package/dist/protocol.js.map +1 -0
  34. package/dist/runtime.d.ts +20 -0
  35. package/dist/runtime.js +871 -0
  36. package/dist/runtime.js.map +1 -0
  37. package/dist/tools.d.ts +21 -0
  38. package/dist/tools.js +49 -0
  39. package/dist/tools.js.map +1 -0
  40. package/dist/types.d.ts +358 -0
  41. package/dist/types.js +2 -0
  42. package/dist/types.js.map +1 -0
  43. package/dist/utils.d.ts +14 -0
  44. package/dist/utils.js +115 -0
  45. package/dist/utils.js.map +1 -0
  46. package/package.json +63 -0
package/README.md ADDED
@@ -0,0 +1,809 @@
<div align="center">

# General.AI

**Production-ready, TypeScript-first OpenAI orchestration for Node and Bun**

Native OpenAI passthrough when you want exact SDK behavior.
An agent runtime when you want prompts, protocol parsing, tools, subagents, safety, memory, retries, and cleaned output.

[![npm version](https://img.shields.io/npm/v/@lightining/general.ai?color=cb3837&label=npm)](https://npmjs.com/package/@lightining/general.ai)
[![npm downloads](https://img.shields.io/npm/dm/@lightining/general.ai)](https://npmjs.com/package/@lightining/general.ai)
[![Node >=22](https://img.shields.io/badge/node-%3E%3D22-339933)](https://nodejs.org/)
[![Bun >=1.1](https://img.shields.io/badge/bun-%3E%3D1.1-000000)](https://bun.sh/)
[![License: Apache-2.0](https://img.shields.io/badge/license-Apache%202.0-blue)](./LICENSE)

[npm](https://npmjs.com/package/@lightining/general.ai) • [GitHub](https://github.com/nixaut-codelabs/general.ai)

</div>

---

## What General.AI Is

`@lightining/general.ai` exposes **two complementary surfaces**:

- `native`: exact OpenAI SDK access with no request, response, or stream-shape mutation
- `agent`: a structured orchestration runtime that layers prompt assembly, protocol parsing, retries, tools, subagents, safety, memory, streaming, and cleaned output on top of OpenAI models

This split is intentional:

- use **`native`** when you want raw provider behavior
- use **`agent`** when you want a consistent runtime with higher-level orchestration

> General.AI’s bundled prompts are written in English for consistency, but user-visible output still mirrors the user’s language unless the user explicitly asks for another one.

---

## Table Of Contents

- [Install](#install)
- [Why General.AI](#why-generalai)
- [Feature Matrix](#feature-matrix)
- [Quick Start](#quick-start)
- [Native Surface](#native-surface)
- [Agent Surface](#agent-surface)
- [Tools](#tools)
- [Subagents](#subagents)
- [Prompt Packs And Overrides](#prompt-packs-and-overrides)
- [Thinking, Safety, Personality, Memory](#thinking-safety-personality-memory)
- [Streaming](#streaming)
- [Compatibility Mode](#compatibility-mode)
- [Protocol](#protocol)
- [Advanced OpenAI Pass-Through](#advanced-openai-pass-through)
- [Examples](#examples)
- [Testing](#testing)
- [Publishing](#publishing)
- [Package Notes](#package-notes)
- [Links](#links)
- [License](#license)

---
## Install

```bash
npm install @lightining/general.ai openai
```

or:

```bash
bun add @lightining/general.ai openai
```

**Runtime targets**

- Node `>=22`
- Bun `>=1.1.0`

General.AI is **ESM-only**.

---

## Why General.AI

Most wrappers fail in one of two ways:

- they hide the provider too much and make advanced OpenAI features harder to reach
- they stay so thin that you still have to rebuild orchestration yourself

General.AI is designed to avoid both failures.

### Design goals

- **No lock-in at the transport layer**: `native` exposes the injected OpenAI client exactly
- **Strong orchestration defaults**: `agent` ships with an opinionated runtime and robust prompts
- **TypeScript-first**: public types are shipped from `dist/*.d.ts`
- **OpenAI-first but provider-friendly**: supports official OpenAI and OpenAI-compatible providers
- **Operationally pragmatic**: retries, parser tolerance, compatibility modes, tool gating, memory, and streaming are already built in

---
## Feature Matrix

| Capability | `native` | `agent` |
| --- | --- | --- |
| Exact OpenAI SDK shapes | Yes | No, returns General.AI runtime results |
| `responses` endpoint | Yes | Yes |
| `chat.completions` endpoint | Yes | Yes |
| Streaming | Yes, exact provider events | Yes, parsed runtime events + cleaned deltas |
| Prompt assembly | No | Yes |
| Protocol parsing | No | Yes |
| Cleaned user-visible output | No | Yes |
| Tool loop | Provider-native only | Yes, protocol-driven |
| Subagents | No | Yes |
| Safety markers | No | Yes |
| Thinking checkpoints | No | Yes |
| Memory adapter | No | Yes |
| Retry on malformed protocol / execution failures | No | Yes |
| Compatibility mode for classic providers | N/A | Yes |

---
## Quick Start

```ts
import OpenAI from "openai";
import { GeneralAI } from "@lightining/general.ai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const generalAI = new GeneralAI({ openai });

const result = await generalAI.agent.generate({
  endpoint: "responses",
  model: "gpt-5.4-mini",
  messages: [
    { role: "user", content: "Explain prompt caching briefly." },
  ],
});

console.log(result.cleaned);
console.log(result.events);
console.log(result.usage);
```

### Returned shape

```ts
type GeneralAIAgentResult = {
  output: string; // full raw protocol output
  cleaned: string; // only writing blocks
  events: ProtocolEvent[];
  meta: {
    warnings: string[];
    prompt: RenderedPrompts;
    strippedRequestKeys: string[];
    stepCount: number;
    toolCallCount: number;
    subagentCallCount: number;
    protocolErrorCount: number;
    memorySessionId?: string;
    endpointResults: unknown[];
  };
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
    cachedInputTokens: number;
    reasoningTokens: number;
  };
  endpointResult: unknown;
};
```

---
## Native Surface

Use the native surface when you want **exact OpenAI SDK behavior**.

```ts
import OpenAI from "openai";
import { GeneralAI } from "@lightining/general.ai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const generalAI = new GeneralAI({ openai });

const response = await generalAI.native.responses.create({
  model: "gpt-5.4-mini",
  input: "Give a one-sentence explanation of prompt caching.",
});

const completion = await generalAI.native.chat.completions.create({
  model: "gpt-5.4-mini",
  messages: [
    { role: "user", content: "Say hello in one sentence." },
  ],
});

console.log(response.output_text);
console.log(completion.choices[0]?.message?.content ?? "");
```

### Why this matters

- request bodies stay OpenAI-native
- response objects stay OpenAI-native
- stream events stay OpenAI-native
- advanced provider parameters stay available exactly where the SDK supports them

This is the right surface when you need:

- exact built-in OpenAI tool behavior
- exact stream event handling
- structured outputs or advanced endpoint fields without wrapper interpretation
- minimal abstraction

---
## Agent Surface

Use the agent surface when you want **runtime orchestration** rather than raw provider behavior.

```ts
const result = await generalAI.agent.generate({
  endpoint: "chat_completions",
  model: "gpt-5.4-mini",
  messages: [
    { role: "user", content: "Introduce yourself briefly." },
  ],
  compatibility: {
    chatRoleMode: "classic",
  },
});

console.log(result.cleaned);
```

### Agent responsibilities

- assemble a strong internal prompt stack
- drive a strict protocol
- parse runtime events from model output
- retry recoverable protocol/execution failures
- execute tools and subagents
- maintain optional memory
- return both raw protocol and cleaned output

### Core agent parameters

| Field | Required | Description |
| --- | --- | --- |
| `endpoint` | Yes | `"responses"` or `"chat_completions"` |
| `model` | Yes | Provider model name |
| `messages` | Yes | Normalized conversation array |
| `personality` | No | Persona, style, behavior, boundaries, prompt text |
| `safety` | No | Input/output safety behavior |
| `thinking` | No | Checkpointed thinking strategy |
| `tools` | No | Runtime tool registry |
| `subagents` | No | Delegated specialist registry |
| `memory` | No | Session memory adapter config |
| `prompts` | No | Prompt section overrides |
| `limits` | No | Step/tool/subagent/protocol error limits |
| `request` | No | Endpoint-native OpenAI pass-through values |
| `compatibility` | No | Provider compatibility knobs such as classic chat role mode |
| `metadata` | No | Extra metadata for prompt/task context |
| `debug` | No | Enable debug-oriented prompt/runtime behavior |

---
## Tools

General.AI tools are **runtime-defined JavaScript functions** triggered by protocol markers.

```ts
import { defineTool } from "@lightining/general.ai";

const echoTool = defineTool({
  name: "echo",
  description: "Echo a string back for runtime testing.",
  inputSchema: {
    type: "object",
    additionalProperties: false,
    properties: {
      text: { type: "string" },
    },
    required: ["text"],
  },
  async execute(args) {
    return { echoed: args.text };
  },
});
```

### Tool access policy

You can explicitly decide whether a tool is callable:

- from the root agent
- from all subagents
- from selected subagents only

```ts
const rootOnlyTool = defineTool({
  name: "root_only",
  description: "Only callable from the root agent.",
  access: {
    subagents: false,
  },
  async execute() {
    return { ok: true };
  },
});

const mathOnlyTool = defineTool({
  name: "math_only",
  description: "Only callable from the math_helper subagent.",
  access: {
    subagents: ["math_helper"],
  },
  async execute() {
    return { ok: true };
  },
});
```
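The gating rule is easy to reason about if you model it as a predicate over the caller. The sketch below is **not** the library’s implementation, only an illustration of the semantics described above, assuming `access.subagents` defaults to visible-everywhere, with `false` restricting a tool to the root agent and an array acting as an allowlist:

```ts
// Illustrative only: models the access semantics described above,
// not General.AI's actual internals.
type ToolAccess = { subagents?: boolean | string[] };

type Caller =
  | { kind: "root" }
  | { kind: "subagent"; name: string };

function isToolVisible(access: ToolAccess | undefined, caller: Caller): boolean {
  // The root agent can always call the tool.
  if (caller.kind === "root") return true;
  const rule = access?.subagents ?? true; // default: visible everywhere
  if (rule === true) return true;
  if (rule === false) return false;
  return rule.includes(caller.name); // allowlist of subagent names
}

// Mirrors root_only and math_only above:
isToolVisible({ subagents: false }, { kind: "root" }); // true
isToolVisible({ subagents: false }, { kind: "subagent", name: "math_helper" }); // false
isToolVisible({ subagents: ["math_helper"] }, { kind: "subagent", name: "math_helper" }); // true
```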

### Built-in helper

General.AI also ships a helper for OpenAI web search via Responses:

```ts
import OpenAI from "openai";
import { createOpenAIWebSearchTool } from "@lightining/general.ai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const webSearch = createOpenAIWebSearchTool({
  openai,
  model: "gpt-5.4-mini",
});
```

---
## Subagents

Subagents are **bounded delegated General.AI runs** with their own instructions, model, limits, safety, and tool access.

```ts
import { defineSubagent } from "@lightining/general.ai";

const mathHelper = defineSubagent({
  name: "math_helper",
  description: "A precise arithmetic specialist.",
  instructions: [
    "Solve delegated arithmetic carefully.",
    "Return a concise answer.",
    "Do not call nested subagents unless explicitly required.",
  ].join(" "),
});
```

Use them in a run:

```ts
const result = await generalAI.agent.generate({
  endpoint: "chat_completions",
  model: "gpt-5.4-mini",
  messages: [
    {
      role: "system",
      content: "Delegate arithmetic work to the available subagent when useful.",
    },
    {
      role: "user",
      content: "What is 17 multiplied by 23?",
    },
  ],
  subagents: {
    registry: [mathHelper],
  },
  compatibility: {
    chatRoleMode: "classic",
  },
});
```

### What the runtime already handles for you

- subagent instructions are automatically injected
- subagents inherit compatibility mode
- nested subagents can be disabled
- tool visibility can be filtered per subagent
- recoverable subagent execution failures can trigger retries

---
## Prompt Packs And Overrides

General.AI renders a layered prompt stack in this order:

1. identity
2. endpoint adapter rules
3. protocol
4. safety
5. personality
6. thinking
7. tools and subagents
8. memory
9. task context

Bundled prompts live in `prompts/*.txt`.

### Override a section

```ts
const prompt = await generalAI.agent.renderPrompts({
  endpoint: "responses",
  model: "gpt-5.4-mini",
  messages: [{ role: "user", content: "Hello" }],
  prompts: {
    sections: {
      task: "Task override.\n{block:task_context}",
    },
  },
});
```

### Placeholders

- `{data:key}` for scalar values
- `{block:key}` for multiline blocks
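To make the two placeholder forms concrete, here is a minimal substitution sketch. It approximates the semantics; it is not the shipped renderer, and the exact escaping and whitespace rules live in the package:

```ts
// Illustrative placeholder expansion: {data:key} for scalars,
// {block:key} for multiline blocks. Not the shipped renderer.
function renderSection(
  template: string,
  data: Record<string, string | number | boolean>,
  blocks: Record<string, string>,
): string {
  return template
    .replace(/\{data:([a-zA-Z0-9_]+)\}/g, (m, key) =>
      key in data ? String(data[key]) : m)
    .replace(/\{block:([a-zA-Z0-9_]+)\}/g, (m, key) =>
      key in blocks ? blocks[key] : m);
}

renderSection("Task override.\n{block:task_context}", {}, {
  task_context: "User asked: Hello",
});
// → "Task override.\nUser asked: Hello"
```

Unknown keys are left untouched here, which makes missing data easy to spot in rendered prompts.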

### Raw prompt overrides

```ts
prompts: {
  raw: {
    prepend: "Extra preamble",
    append: "Extra appendix",
    replace: "Replace the full rendered prompt entirely",
  },
}
```

---
## Thinking, Safety, Personality, Memory

These systems are separate on purpose.

### Thinking

Thinking defaults to a checkpointed strategy in agent mode.

```ts
thinking: {
  enabled: true,
  strategy: "checkpointed",
  effort: "high",
  checkpoints: [
    "Before the first writing block",
    "After each tool result",
    "Before final completion",
  ],
}
```

### Safety

Safety is configured independently for input and output.

```ts
safety: {
  enabled: true,
  mode: "balanced",
  input: {
    enabled: true,
    instructions: "Inspect the user request carefully.",
  },
  output: {
    enabled: true,
    instructions: "Inspect the final answer before completion.",
  },
}
```

### Personality

```ts
personality: {
  enabled: true,
  profile: "direct_technical",
  persona: { honesty: "high" },
  style: { verbosity: "medium", tone: "direct" },
  behavior: { avoid_sycophancy: true },
  boundaries: { insult_user: false },
  instructions: "Be clear, direct, and technically precise.",
}
```

### Memory

General.AI ships with `InMemoryMemoryAdapter`, and you can inject your own adapter.

```ts
import { GeneralAI, InMemoryMemoryAdapter } from "@lightining/general.ai";

const memoryAdapter = new InMemoryMemoryAdapter();
const generalAI = new GeneralAI({ openai, memoryAdapter });

await generalAI.agent.generate({
  endpoint: "chat_completions",
  model: "gpt-5.4-mini",
  messages: [{ role: "user", content: "Remember this preference." }],
  memory: {
    enabled: true,
    sessionId: "user-123",
  },
});
```
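If the in-memory adapter is not enough, back memory with your own store. The adapter below is a sketch of what a custom implementation might look like; the interface shown is **hypothetical**, so align the method names with the declarations the package actually ships in `dist/types.d.ts` before using it:

```ts
// Hypothetical adapter interface: check dist/types.d.ts for the real contract.
interface MemoryAdapterLike {
  load(sessionId: string): Promise<string[]>;
  append(sessionId: string, entry: string): Promise<void>;
}

// A trivial Map-backed adapter: the same idea as InMemoryMemoryAdapter,
// but easy to swap for Redis, SQLite, etc.
class MapMemoryAdapter implements MemoryAdapterLike {
  private store = new Map<string, string[]>();

  async load(sessionId: string): Promise<string[]> {
    return this.store.get(sessionId) ?? [];
  }

  async append(sessionId: string, entry: string): Promise<void> {
    const entries = this.store.get(sessionId) ?? [];
    entries.push(entry);
    this.store.set(sessionId, entries);
  }
}
```

Inject it the same way as the bundled adapter: `new GeneralAI({ openai, memoryAdapter })`.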

---
## Streaming

### Native streaming

Use the OpenAI SDK directly through `native` when you want exact provider stream events.

### Agent streaming

Use `agent.stream()` when you want parsed runtime events and cleaned writing deltas.

```ts
const stream = generalAI.agent.stream({
  endpoint: "responses",
  model: "gpt-5.4-mini",
  messages: [{ role: "user", content: "Say hello." }],
});

for await (const event of stream) {
  if (event.type === "writing_delta") {
    process.stdout.write(event.text);
  }
}
```

Typical stream events include:

- `run_started`
- `prompt_rendered`
- `step_started`
- `raw_text_delta`
- `writing_delta`
- `protocol_event`
- `tool_started`
- `tool_result`
- `subagent_started`
- `subagent_result`
- `warning`
- `run_completed`
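A common pattern is to forward `writing_delta` text to the client while logging everything else. The sketch below runs against a stubbed event stream so it stays self-contained; with a real run you would replace `fakeStream()` with `generalAI.agent.stream({ ... })`. The minimal event shape here (`type` plus an optional `text`) is an assumption; real events carry more fields:

```ts
// Assumed minimal event shape; real runtime events carry more fields.
type AgentStreamEvent = { type: string; text?: string };

// Stub stream standing in for generalAI.agent.stream({ ... }).
async function* fakeStream(): AsyncGenerator<AgentStreamEvent> {
  yield { type: "run_started" };
  yield { type: "writing_delta", text: "Hello" };
  yield { type: "writing_delta", text: ", world." };
  yield { type: "run_completed" };
}

// Accumulate only the user-visible writing deltas.
async function collectAnswer(stream: AsyncIterable<AgentStreamEvent>): Promise<string> {
  let answer = "";
  for await (const event of stream) {
    if (event.type === "writing_delta" && event.text) {
      answer += event.text; // forward to the client here
    }
  }
  return answer;
}

await collectAnswer(fakeStream()); // → "Hello, world."
```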

---
## Compatibility Mode

Some OpenAI-compatible providers do not fully support newer chat roles such as `developer`.

For those providers, use:

```ts
compatibility: {
  chatRoleMode: "classic",
}
```

This enables safer continuation behavior for providers that expect classic `system` / `user` / `assistant` flows.

This is especially useful with:

- older compatible gateways
- NVIDIA-style OpenAI-compatible endpoints
- providers that reject post-assistant `system` or `developer` messages
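Conceptually, classic mode means normalizing the conversation to roles every provider understands. The sketch below illustrates the kind of role mapping involved; it is not General.AI’s actual code, which also handles continuation ordering:

```ts
type ChatMessage = { role: string; content: string };

// Illustrative: fold the newer `developer` role down to classic `system`.
function toClassicRoles(messages: ChatMessage[]): ChatMessage[] {
  return messages.map((m) =>
    m.role === "developer" ? { ...m, role: "system" } : m,
  );
}

toClassicRoles([
  { role: "developer", content: "Be terse." },
  { role: "user", content: "Hi" },
]);
// → [{ role: "system", content: "Be terse." }, { role: "user", content: "Hi" }]
```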

---
## Protocol

General.AI’s agent runtime uses a text protocol based on triple-bracket markers.

### Common markers

- `[[[status:thinking]]]`
- `[[[status:writing]]]`
- `[[[status:input_safety:{...}]]]`
- `[[[status:output_safety:{...}]]]`
- `[[[status:call_tool:"name":{...}]]]`
- `[[[status:call_subagent:"name":{...}]]]`
- `[[[status:checkpoint]]]`
- `[[[status:revise]]]`
- `[[[status:error:{...}]]]`
- `[[[status:done]]]`

### Important runtime rule

Only `writing` blocks survive into `result.cleaned`.

That means:

- `thinking` is runtime-only
- safety markers are runtime-only
- tool and subagent markers are runtime-only
- `cleaned` is the user-facing answer
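As a mental model, cleaning means “keep only the text that follows a `writing` marker, up to the next marker.” The extractor below is a deliberately simplified sketch of that rule, not the shipped parser, which is far more tolerant:

```ts
// Simplified sketch of cleaning: keep text between [[[status:writing]]]
// and the next marker. The real parser also handles JSON payloads,
// near-misses, and inline marker runs.
function extractWriting(raw: string): string {
  const marker = /\[\[\[status:([a-z_]+)(?::[^\]]*)?\]\]\]/g;
  const parts: string[] = [];
  let mode: string | null = null;
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = marker.exec(raw)) !== null) {
    if (mode === "writing") parts.push(raw.slice(last, m.index));
    mode = m[1]; // the marker name, e.g. "thinking" or "writing"
    last = m.index + m[0].length;
  }
  if (mode === "writing") parts.push(raw.slice(last));
  return parts.join("").trim();
}

extractWriting(
  "[[[status:thinking]]]plan...[[[status:writing]]]Hello![[[status:done]]]",
);
// → "Hello!"
```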

### Parser behavior

The parser is intentionally tolerant of real-world model behavior:

- block-style JSON markers are supported
- markers missing a single bracket are still recognized
- inline marker runs can be normalized onto separate lines
- malformed protocol can trigger automatic retries up to `limits.maxProtocolErrors`

---
## Advanced OpenAI Pass-Through

The `agent` surface owns the orchestration keys, but endpoint-native extra parameters still pass through via:

- `request.responses`
- `request.chat_completions`

Example:

```ts
const result = await generalAI.agent.generate({
  endpoint: "responses",
  model: "gpt-5.4-mini",
  messages: [{ role: "user", content: "Summarize this." }],
  request: {
    responses: {
      prompt_cache_key: "summary:v1",
      reasoning: { effort: "medium" },
      service_tier: "auto",
      store: false,
      background: false,
    },
  },
});
```

Reserved keys that would break agent orchestration, such as `input`, `messages`, or native tool transport fields, are stripped and reported in `result.meta.strippedRequestKeys`.
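The stripping behavior can be pictured as a small filter over the pass-through object. This is an illustration of the contract (reserved keys removed and reported), with a sample key list; the package’s real reserved-key list is longer:

```ts
// Illustrative: remove reserved keys and report what was dropped.
// The real reserved-key list in General.AI is longer than this sample.
const RESERVED_KEYS = ["input", "messages", "tools"];

function stripReserved(request: Record<string, unknown>): {
  request: Record<string, unknown>;
  strippedRequestKeys: string[];
} {
  const out: Record<string, unknown> = {};
  const stripped: string[] = [];
  for (const [key, value] of Object.entries(request)) {
    if (RESERVED_KEYS.includes(key)) stripped.push(key);
    else out[key] = value;
  }
  return { request: out, strippedRequestKeys: stripped };
}

stripReserved({ store: false, input: "overridden!" });
// → { request: { store: false }, strippedRequestKeys: ["input"] }
```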

---
## Examples

Included examples:

- [examples/native-chat.mjs](./examples/native-chat.mjs)
- [examples/native-responses.mjs](./examples/native-responses.mjs)
- [examples/agent-basic.mjs](./examples/agent-basic.mjs)

Run an example:

```bash
npm run build
node examples/native-chat.mjs
```

---

## Testing

### Deterministic test suite

```bash
npm test
```

This runs:

- build
- unit and runtime integration tests in `test/**/*.test.js`

### Cross-runtime smoke tests

```bash
npm run smoke
```

### Full public-surface and live smoke script

```bash
bun run test.js
```

The root [test.js](./test.js) is a comprehensive manual verification script that covers:

- deterministic API surface checks with fake clients
- parser behavior
- prompt rendering
- memory
- tool gating
- subagent execution
- retry behavior
- streaming
- live provider smoke tests

#### Useful environment variables

```bash
GENERAL_AI_API_KEY=...
GENERAL_AI_BASE_URL=...
GENERAL_AI_MODEL=...
GENERAL_AI_SKIP_LIVE=1
```

If `GENERAL_AI_SKIP_LIVE=1` is set, `test.js` skips live provider checks.

---
## Publishing

The package is configured for production publishing with:

- repository metadata
- homepage and issue tracker links
- Apache-2.0 license file
- ESM entrypoints and declaration files
- `sideEffects: false`
- `prepublishOnly` checks
- `publishConfig.provenance`

### Publish pipeline

```bash
npm test
npm run smoke
npm run pack:check
npm publish
```

Or rely on:

```bash
npm publish
```

because `prepublishOnly` already runs:

- `npm test`
- `npm run smoke`
- `npm run pack:check`

### Inspect the tarball

```bash
npm pack --dry-run
```

---
## Package Notes

### Internal prompt language

Bundled prompts are written in English by default for consistency across providers and prompt packs.

### User-facing language

The assistant should still answer in the user’s language unless the user explicitly asks for another language.

### ESM-only package

Use `import`, not `require`.

### OpenAI SDK baseline

General.AI currently targets the OpenAI Node SDK line represented by `openai@^6.33.0`.

### Production scope

General.AI is built for:

- app backends
- internal LLM runtimes
- tool and subagent orchestration layers
- OpenAI and OpenAI-compatible provider integrations

It is **not** intended as a browser bundle.

---

## Links

- npm: [npmjs.com/package/@lightining/general.ai](https://npmjs.com/package/@lightining/general.ai)
- GitHub: [github.com/nixaut-codelabs/general.ai](https://github.com/nixaut-codelabs/general.ai)

---

## License

Apache-2.0. See [LICENSE](./LICENSE).