@diabolicallabs/llm-client 0.3.0 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +63 -2
- package/dist/index.d.ts +48 -0
- package/dist/index.js +611 -227
- package/dist/index.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -6,7 +6,7 @@ Unified LLM API across Anthropic, OpenAI, Google Gemini, DeepSeek, and Perplexity
 
 ## Status
 
-**Published — v0.
+**Published — v0.4.0.** All five providers implemented. This release adds native strict structured outputs (OpenAI json_schema, Anthropic tool-use, Gemini responseSchema), triggered automatically when a Zod 4 schema is passed.
 
 
 ## Install
@@ -42,13 +42,74 @@ for await (const chunk of client.stream([{ role: 'user', content: 'Hello' }])) {
   process.stdout.write(chunk.token);
 }
 
-// Structured output
+// Structured output — Zod 4 schema triggers strict native mode automatically
 import { z } from 'zod';
 const schema = z.object({ name: z.string(), score: z.number() });
 const result = await client.structured(messages, schema);
 // result.data is typed as { name: string; score: number }
+// result.model and result.id are populated (v0.4.0+)
 ```
 
+## Strict structured outputs (v0.4.0)
+
+Pass a **Zod 4** schema to `structured()` and the toolkit automatically routes to the strictest native path available for each provider. No opt-in flag required.
+
+```typescript
+import { z } from 'zod';
+const schema = z.object({
+  topic: z.string(),
+  bullets: z.array(z.string()),
+});
+
+const result = await client.structured(messages, schema);
+// result.data — typed and Zod-validated
+// result.model — model ID used (always present, v0.4.0+)
+// result.id — provider request ID for tracing (OpenAI + Anthropic)
+// result.citations — Perplexity citations, if any
+```
+
+### How detection works
+
+The toolkit checks for Zod 4's internal `_zod` marker at runtime. If the schema is a Zod 4 instance, it converts it to JSON Schema using Zod 4's built-in `z.toJSONSchema()` and routes to the native path. If the schema is anything else (a plain `{ parse }` object, Zod 3, etc.), it falls back to the v0.3.0 system-prompt path.
+
+### Schema-feature support matrix
+
+| Provider | Native mode | What's enforced | Known limits |
+|---|---|---|---|
+| OpenAI (`gpt-5.x`) | `response_format: { type: 'json_schema', strict: true }` | Schema structure guaranteed; the model cannot produce off-schema output | No `format`, `pattern`, or recursive schemas (`z.lazy()`); throws at conversion time with a clear message |
+| Anthropic | Tool use with forced `tool_choice: { type: 'tool', name: 'extract' }` | The model must call the tool; `input` is pre-parsed JSON | Defense-in-depth `schema.parse()` still runs |
+| Gemini | `responseSchema` (OpenAPI 3.0) + `responseMimeType: 'application/json'` | Schema communicated to the model; belt-and-braces fence stripping retained | Tested via mocks only — file an issue if Gemini's API rejects the schema shape |
+| DeepSeek | None (prompt-only; API limitation) | System-prompt nudge + `schema.parse()` | Same as v0.3.0 |
+| Perplexity | None (prompt-only; API limitation) | System-prompt nudge + `<think>` stripping + `schema.parse()` | Same as v0.3.0; `citations` propagated to the structured response |
+
+### Prompt-mode escape hatch
+
+If your schema uses a feature unsupported in strict mode (e.g. `z.function()`, `z.lazy()`) and you need to keep using it, pass the escape hatch:
+
+```typescript
+const result = await client.structured(messages, schema, {
+  providerOptions: { structuredMode: 'prompt' },
+});
+// Forces the v0.3.0 prompt-only path regardless of schema type
+```
+
+Alternatively, catch the `LlmError` thrown during schema conversion and inform the user:
+
+```typescript
+try {
+  const result = await client.structured(messages, schema);
+} catch (err) {
+  if (err instanceof LlmError && err.kind === 'unknown') {
+    // Schema contains an unrepresentable feature — the message names it
+    console.error(err.message);
+  }
+}
+```
+
+### Zod 3 schemas
+
+If a Zod 3 schema is passed, the toolkit throws an `LlmError` with a clear "upgrade to Zod 4" message rather than silently falling through to prompt mode. Pass `providerOptions.structuredMode = 'prompt'` if you cannot upgrade immediately.
+
 
 ## Provider universe
 
 | Provider | Status | Env var |
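
As a sketch of the detection logic described under "How detection works": the helper `isZod4Schema` and the stand-in schema objects below are hypothetical, not the package's source; only the `_zod` marker test itself comes from the README text above. A real Zod 4 instance carries an internal `_zod` property, which is faked here so the sketch runs without a zod dependency.

```typescript
// Hypothetical sketch of the `_zod` marker check; not the package's source.
type ParseLike<T> = { parse: (data: unknown) => T };

function isZod4Schema(schema: object): boolean {
  return '_zod' in schema; // Zod 4 runtime marker
}

const fakeZod4 = { _zod: {}, parse: (d: unknown) => d };     // stands in for z.object(...)
const plainSchema: ParseLike<unknown> = { parse: (d) => d }; // v0.3.0-style { parse } object

console.log(isZod4Schema(fakeZod4) ? 'native' : 'prompt');    // native
console.log(isZod4Schema(plainSchema) ? 'native' : 'prompt'); // prompt
```

In the real package, a positive check would be followed by `z.toJSONSchema(schema)` and the provider-native path; a negative one falls back to the prompt path.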
package/dist/index.d.ts
CHANGED

@@ -13,6 +13,12 @@
  * LlmCallOptions.streamStallTimeoutMs — per-stream stall detection (ms); default 30000.
  * LlmClientConfig.streamStallTimeoutMs — config-level stall default.
  * LlmError.kind — discriminator for error classification.
+ *
+ * v0.4.0 additions (strict structured outputs):
+ * LlmStructuredResponse.model — model ID actually used (always populated).
+ * LlmStructuredResponse.id — provider request ID where available (debugging).
+ * LlmStructuredResponse.citations — web citations from Perplexity structured responses.
+ * LlmClient.structured JSDoc — Zod 4 trigger and structuredMode escape hatch.
  */
 interface LlmMessage {
   role: 'system' | 'user' | 'assistant';
@@ -119,15 +125,57 @@ declare class LlmError extends Error {
     cause?: unknown;
   });
 }
+/**
+ * Structured output response.
+ *
+ * v0.4.0 — additive fields:
+ * model — model ID reported by the provider (always present).
+ * id — provider request / message ID for tracing and debugging.
+ *   Populated by OpenAI (response.id) and Anthropic (response.id).
+ *   Undefined for Gemini, DeepSeek, and Perplexity.
+ * citations — web citations propagated from Perplexity structured calls.
+ *   Undefined for all other providers.
+ */
 type LlmStructuredResponse<T> = {
   data: T;
+  model: string;
+  id?: string;
   usage: LlmUsage;
   latencyMs: number;
+  citations?: Array<{
+    url: string;
+    title?: string;
+  }>;
 };
 interface LlmClient {
   readonly config: Readonly<LlmClientConfig>;
   complete(messages: LlmMessage[], options?: LlmCallOptions): Promise<LlmResponse>;
   stream(messages: LlmMessage[], options?: LlmCallOptions): AsyncGenerator<LlmStreamChunk>;
+  /**
+   * Structured output — parses and validates the response against a schema.
+   *
+   * **Strict native mode (v0.4.0+):**
+   * Pass a Zod 4 schema to automatically opt into the provider's strictest native
+   * structured-output path:
+   * - OpenAI: `response_format: { type: 'json_schema', strict: true }` (gpt-5.x family)
+   * - Anthropic: forced tool use with `tool_choice: { type: 'tool', name: 'extract' }`
+   * - Gemini: `responseSchema` populated in GenerateContentConfig
+   *
+   * **Prompt-only fallback:**
+   * If the schema is not a Zod 4 instance, or if
+   * `options.providerOptions.structuredMode === 'prompt'` is set, the v0.3.0
+   * system-prompt + parse path is used instead. This is the escape hatch for:
+   * - Zod 4 schemas that use unrepresentable features (z.function(), z.lazy(), etc.)
+   * - Non-Zod schema objects that satisfy the narrow `{ parse }` interface
+   * - DeepSeek and Perplexity (no native schema mode — always prompt-only)
+   *
+   * **Defense-in-depth:** schema.parse() is called on the parsed result even
+   * after a native strict-mode call, to catch truncation or partial outputs.
+   *
+   * @param schema - A Zod 4 schema (triggers strict mode) or any `{ parse }` interface.
+   * Using an interface narrower than ZodType avoids a hard zod dependency at
+   * the type level.
+   */
   structured<T>(messages: LlmMessage[], schema: {
     parse: (data: unknown) => T;
   }, options?: LlmCallOptions): Promise<LlmStructuredResponse<T>>;
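
For reference, the two native request shapes named in the `structured()` JSDoc above (OpenAI's strict `json_schema` response format and Anthropic's forced tool use) look roughly like this for a `{ name, score }` schema. The fragments follow the public OpenAI and Anthropic HTTP APIs; the variable names are illustrative, and this is a sketch, not the package's source.

```typescript
// Assumed JSON Schema output for z.object({ name: z.string(), score: z.number() })
const jsonSchema = {
  type: 'object',
  properties: { name: { type: 'string' }, score: { type: 'number' } },
  required: ['name', 'score'],
  additionalProperties: false, // OpenAI strict mode requires this
};

// OpenAI: response_format with strict json_schema
const openaiFragment = {
  response_format: {
    type: 'json_schema',
    json_schema: { name: 'extract', strict: true, schema: jsonSchema },
  },
};

// Anthropic: forced tool use; the model's tool `input` is the structured data
const anthropicFragment = {
  tools: [{ name: 'extract', description: 'Return the structured result.', input_schema: jsonSchema }],
  tool_choice: { type: 'tool', name: 'extract' },
};

console.log(openaiFragment.response_format.json_schema.strict); // true
console.log(anthropicFragment.tool_choice.name);                // extract
```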