@adia-ai/llm 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +51 -0
- package/README.md +67 -0
- package/adapters/anthropic.js +106 -0
- package/adapters/gemini.js +99 -0
- package/adapters/index.js +170 -0
- package/adapters/openai.js +85 -0
- package/adapters/sse.js +50 -0
- package/index.js +17 -0
- package/llm-bridge.js +214 -0
- package/llm-stub.js +69 -0
- package/package.json +32 -0
package/CHANGELOG.md
ADDED
@@ -0,0 +1,51 @@
# @adia-ai/llm

All notable changes to this package are documented here.
The format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

_No pending changes._

## [0.3.0] - 2026-05-05

**Initial release as the 9th `@adia-ai/*` lockstep package.** Joins the lockstep at the cut version. All 9 published `@adia-ai/*` packages now share one version, governed by [`docs/specs/package-architecture.md` § 15](../../docs/specs/package-architecture.md#15-versioning-policy).

### Added

- Provider-agnostic LLM client. Three adapters (anthropic / openai / gemini) behind a single `chat()` + `streamChat()` facade. Works in browser (with `proxyUrl`) and Node.
- `chat()`, `streamChat()`, `createClient()` facade in `@adia-ai/llm` (default export).
- `createAdapter()` bridge for the A2UI generation pipeline at `@adia-ai/llm/bridge`.
- `StubLLMAdapter` for deterministic tests at `@adia-ai/llm/stub`.
- Direct adapter access at `@adia-ai/llm/adapters/{anthropic,openai,gemini}` for callers that need the raw adapter object.
- **Browser proxy mode.** Pass `proxyUrl` to `streamChat`/`chat` and the client speaks a provider-neutral protocol to your proxy: `{ provider, model, messages, system?, maxTokens?, temperature?, thinking?, stream }`. The proxy holds the real API key and reformats per upstream; the adapter still parses the SSE response stream verbatim. A reference proxy implementation ships at `packages/llm/server.js` (run via `npm run proxy` from the chat-ui repo root).

### Why this is its own package

The LLM adapters previously lived under `@adia-ai/a2ui-compose/llm` but were consumed by `chat-shell` (web-modules) outside any A2UI generation concern — leaking the boundary. `chat-shell` shouldn't depend on `@adia-ai/a2ui-compose` to talk to OpenAI. As a sibling foundational primitive, `@adia-ai/llm` lets compose, chat, MCP synthesis, and any other surface depend on the LLM client without pulling in the generator graph.

### Migration from `@adia-ai/a2ui-compose/llm`

Consumers should rewrite imports:

```diff
- import { streamChat } from '@adia-ai/a2ui-compose/llm/adapters/index.js';
+ import { streamChat } from '@adia-ai/llm';

- import { createAdapter } from '@adia-ai/a2ui-compose/llm/llm-bridge.js';
+ import { createAdapter } from '@adia-ai/llm/bridge';

- import { StubLLMAdapter } from '@adia-ai/a2ui-compose/llm/llm-stub.js';
+ import { StubLLMAdapter } from '@adia-ai/llm/stub';
```

`@adia-ai/a2ui-compose@0.3.0` no longer exports `./llm` — see its CHANGELOG.

### Proxy-mode protocol fix

Proxy-mode requests now include `provider` in the body and no longer send the `Authorization: Bearer ${apiKey}` header (previously emitted with `apiKey: undefined` when `proxyUrl` was set). Two stacked bugs affected anyone using `proxyUrl` in v0.2.x:

1. The proxy received bodies without `provider`, defaulted to `anthropic`, and routed `gpt-4o-mini` requests to Anthropic → 404 model-not-found.
2. `Authorization: Bearer undefined` appeared in the headers when the client didn't carry an API key (proxy-only deployments).

Both are fixed in v0.3.0. The chat playground at `apps/chat/app/chat.html` exercises this path.
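For concreteness, a sketch (not code from this package) of the request proxy mode emits before and after this fix; the field names mirror the provider-neutral body documented above, and `/api/chat` stands in for whatever route your proxy exposes.

```js
// Sketch only; not code from this package. '/api/chat' is an example endpoint.
const messages = [{ role: 'user', content: 'Hello' }];

// v0.2.x (buggy): no `provider` in the body, plus a stray Authorization header
// ("Bearer undefined") when no apiKey was configured in proxy-only deployments.
await fetch('/api/chat', {
  method: 'POST',
  headers: { 'content-type': 'application/json', authorization: 'Bearer undefined' },
  body: JSON.stringify({ model: 'gpt-4o-mini', messages, stream: true }),
});

// v0.3.0 (fixed): provider-neutral body carries `provider`; no Authorization header.
await fetch('/api/chat', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ provider: 'openai', model: 'gpt-4o-mini', messages, stream: true }),
});
```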
package/README.md
ADDED
@@ -0,0 +1,67 @@
# `@adia-ai/llm`

Provider-agnostic LLM client. Three adapters (anthropic / openai / gemini)
behind a single `chat()` + `streamChat()` facade. Works in browser and Node.

```js
import { chat, streamChat } from '@adia-ai/llm';

// Direct API call (apiKey owned by the caller)
const reply = await chat({
  apiKey: 'sk-...',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
});

// Streaming
for await (const chunk of streamChat({
  apiKey: 'sk-...',
  model: 'claude-haiku-4-5-20251001',
  messages: [{ role: 'user', content: 'Hello' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
}
```

## Browser proxy mode

Pass `proxyUrl` to route through your server-side proxy (which holds the
API key). The client speaks a provider-neutral protocol to the proxy:

```js
for await (const chunk of streamChat({
  proxyUrl: '/api/chat',
  provider: 'openai', // optional — auto-detected from model
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello' }],
})) { /* ... */ }
```

The body sent to the proxy:

```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "messages": [{ "role": "user", "content": "Hello" }],
  "system": "...optional...",
  "maxTokens": 4096,
  "temperature": 0.7,
  "stream": true
}
```

The proxy reformats per upstream provider and pipes the SSE bytes
verbatim. The reference proxy implementation is at `server.js` in the
chat-ui repo root.

## Subpath exports

| Subpath | Purpose |
|---------|---------|
| `@adia-ai/llm` | Default: `chat`, `streamChat`, `createClient` |
| `@adia-ai/llm/bridge` | `createAdapter` — wraps the facade in the A2UI pipeline's adapter interface |
| `@adia-ai/llm/stub` | `StubLLMAdapter` — deterministic adapter for tests |
| `@adia-ai/llm/adapters/anthropic` | Direct adapter object |
| `@adia-ai/llm/adapters/openai` | Direct adapter object |
| `@adia-ai/llm/adapters/gemini` | Direct adapter object |
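The packaged reference proxy itself is not part of this diff. The following is an illustrative stand-in only, assuming Node 18+ (global `fetch`) and `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` in the environment: it accepts the provider-neutral body shown above, reshapes it per upstream (header and body shapes taken from the adapter files later in this diff), and pipes the SSE bytes back verbatim.

```js
// Illustrative stand-in for the reference proxy; not the packaged server.js.
import http from 'node:http';

const UPSTREAMS = {
  anthropic: {
    url: 'https://api.anthropic.com/v1/messages',
    headers: () => ({ 'x-api-key': process.env.ANTHROPIC_API_KEY, 'anthropic-version': '2023-06-01' }),
    reshape: (b) => ({ model: b.model, max_tokens: b.maxTokens ?? 4096, system: b.system, messages: b.messages, stream: b.stream }),
  },
  openai: {
    url: 'https://api.openai.com/v1/chat/completions',
    headers: () => ({ authorization: `Bearer ${process.env.OPENAI_API_KEY}` }),
    reshape: (b) => ({
      model: b.model,
      messages: b.system ? [{ role: 'system', content: b.system }, ...b.messages] : b.messages,
      stream: b.stream,
    }),
  },
};

http.createServer(async (req, res) => {
  let raw = '';
  for await (const chunk of req) raw += chunk;
  const body = JSON.parse(raw);

  const upstream = UPSTREAMS[body.provider];
  if (!upstream) {
    res.writeHead(400, { 'content-type': 'application/json' });
    return res.end(JSON.stringify({ error: { message: `Unknown provider "${body.provider}"` } }));
  }

  const r = await fetch(upstream.url, {
    method: 'POST',
    headers: { 'content-type': 'application/json', ...upstream.headers() },
    body: JSON.stringify(upstream.reshape(body)),
  });

  // Pipe the upstream bytes back verbatim; the client adapter parses the SSE itself.
  res.writeHead(r.status, { 'content-type': r.headers.get('content-type') ?? 'text/event-stream' });
  const reader = r.body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    res.write(value);
  }
  res.end();
}).listen(8787);
```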
package/adapters/anthropic.js
ADDED
@@ -0,0 +1,106 @@
/**
 * Anthropic Messages API adapter.
 * Endpoint: https://api.anthropic.com/v1/messages
 */

import { readSSE } from './sse.js';

const API_URL = 'https://api.anthropic.com/v1/messages';
const API_VERSION = '2023-06-01';
const DEFAULT_MAX_TOKENS = 4096;

export const anthropic = {
  name: 'anthropic',

  buildRequest(opts) {
    const body = {
      model: opts.model,
      max_tokens: opts.maxTokens || DEFAULT_MAX_TOKENS,
      messages: opts.messages,
      stream: !!opts.stream,
    };
    if (opts.system) {
      // Prompt caching: the AdiaUI system prompt is ~23KB and constant across
      // a session. Emitting it as a cached block marks it as a cache breakpoint
      // (ephemeral, ~5 min TTL). First call = cache write (+25% cost), every
      // subsequent call in the window = cache read (−90% cost). No-op below
      // the model's minimum cacheable size (1024 tok Sonnet/Opus, 2048 Haiku).
      body.system = opts.cache
        ? [{ type: 'text', text: opts.system, cache_control: { type: 'ephemeral' } }]
        : opts.system;
    }
    if (opts.temperature != null) body.temperature = opts.temperature;
    if (opts.thinking) {
      body.thinking = { type: 'enabled', budget_tokens: opts.thinkingBudget || 10000 };
    }

    return {
      url: API_URL,
      headers: {
        'content-type': 'application/json',
        'x-api-key': opts.apiKey,
        'anthropic-version': API_VERSION,
      },
      body,
    };
  },

  parseResponse(data) {
    const text = data.content?.find(b => b.type === 'text')?.text ?? '';
    return {
      text,
      usage: {
        input: data.usage?.input_tokens ?? 0,
        output: data.usage?.output_tokens ?? 0,
        // Cache telemetry: non-zero cacheRead on turn 2+ is the signal that
        // caching is actually kicking in. Recorded per-turn for hit-rate analysis.
        cacheCreation: data.usage?.cache_creation_input_tokens ?? 0,
        cacheRead: data.usage?.cache_read_input_tokens ?? 0,
      },
      stopReason: data.stop_reason ?? 'end',
    };
  },

  async *parseStream(response) {
    let snapshot = '';
    let usage = { input: 0, output: 0, cacheCreation: 0, cacheRead: 0 };
    let stopReason = 'end';

    for await (const event of readSSE(response.body)) {
      if (event.done) break;
      let data;
      try { data = JSON.parse(event.data); } catch { continue; }
      const eventType = event.event ?? data.type;

      switch (eventType) {
        case 'message_start':
          if (data.message?.usage) {
            usage.input = data.message.usage.input_tokens ?? 0;
            usage.cacheCreation = data.message.usage.cache_creation_input_tokens ?? 0;
            usage.cacheRead = data.message.usage.cache_read_input_tokens ?? 0;
          }
          break;
        case 'content_block_delta': {
          const delta = data.delta;
          if (delta?.type === 'text_delta') {
            snapshot += delta.text;
            yield { type: 'text', text: delta.text, snapshot };
          } else if (delta?.type === 'thinking_delta') {
            yield { type: 'thinking', text: delta.thinking };
          }
          break;
        }
        case 'message_delta':
          if (data.delta?.stop_reason) stopReason = data.delta.stop_reason;
          if (data.usage) usage.output = data.usage.output_tokens ?? 0;
          break;
        case 'message_stop':
          yield { type: 'done', text: snapshot, usage, stopReason };
          break;
        case 'error':
          yield { type: 'error', error: new Error(data.error?.message ?? 'Stream error') };
          break;
      }
    }
  },
};
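For callers importing the raw adapter object via `@adia-ai/llm/adapters/anthropic`, a hedged usage sketch; the facade in `adapters/index.js` performs the same build/fetch/parse sequence. The model name and prompt are placeholders.

```js
// Hedged sketch: drive the adapter object directly. Assumes ANTHROPIC_API_KEY is set.
import { anthropic } from '@adia-ai/llm/adapters/anthropic';

const { url, headers, body } = anthropic.buildRequest({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-haiku-4-5-20251001',
  system: 'Answer tersely.',
  cache: true, // mark the system prompt as a cache breakpoint (see buildRequest above)
  messages: [{ role: 'user', content: 'Hello' }],
  stream: true,
});

const res = await fetch(url, { method: 'POST', headers, body: JSON.stringify(body) });
for await (const chunk of anthropic.parseStream(res)) {
  if (chunk.type === 'text') process.stdout.write(chunk.text);
  if (chunk.type === 'done') console.log('\nusage:', chunk.usage, 'stop:', chunk.stopReason);
}
```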
package/adapters/gemini.js
ADDED
@@ -0,0 +1,99 @@
/**
 * Google Gemini generateContent API adapter.
 * Endpoint: https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent
 * Streaming: .../{model}:streamGenerateContent?alt=sse
 */

import { readSSE } from './sse.js';

const API_URL = 'https://generativelanguage.googleapis.com/v1beta/models';
const DEFAULT_MAX_TOKENS = 4096;

export const gemini = {
  name: 'gemini',

  buildRequest(opts) {
    const model = opts.model;
    const contents = [];
    for (const msg of opts.messages) {
      contents.push({
        role: msg.role === 'assistant' ? 'model' : 'user',
        parts: [{ text: msg.content }],
      });
    }

    const body = { contents };

    if (opts.system) {
      body.systemInstruction = { parts: [{ text: opts.system }] };
    }

    const generationConfig = {
      maxOutputTokens: opts.maxTokens || DEFAULT_MAX_TOKENS,
    };
    if (opts.temperature != null) generationConfig.temperature = opts.temperature;
    body.generationConfig = generationConfig;

    const action = opts.stream
      ? 'streamGenerateContent?alt=sse'
      : 'generateContent';

    return {
      url: `${API_URL}/${model}:${action}`,
      headers: {
        'content-type': 'application/json',
        'x-goog-api-key': opts.apiKey,
      },
      body,
    };
  },

  parseResponse(data) {
    const parts = data.candidates?.[0]?.content?.parts ?? [];
    const text = parts.map(p => p.text ?? '').join('');
    return {
      text,
      usage: {
        input: data.usageMetadata?.promptTokenCount ?? 0,
        output: data.usageMetadata?.candidatesTokenCount ?? 0,
      },
      // Propagate non-STOP finish reasons (e.g. MAX_TOKENS) so truncation is
      // detectable, matching parseStream below.
      stopReason: data.candidates?.[0]?.finishReason === 'STOP'
        ? 'end'
        : (data.candidates?.[0]?.finishReason ?? 'end'),
    };
  },

  async *parseStream(response) {
    let snapshot = '';
    let usage = { input: 0, output: 0 };
    let stopReason = 'end';

    for await (const event of readSSE(response.body)) {
      if (event.done) break;
      let data;
      try { data = JSON.parse(event.data); } catch { continue; }

      if (data.usageMetadata) {
        usage.input = data.usageMetadata.promptTokenCount ?? 0;
        usage.output = data.usageMetadata.candidatesTokenCount ?? 0;
      }

      const candidate = data.candidates?.[0];
      if (!candidate) continue;

      if (candidate.finishReason && candidate.finishReason !== 'STOP') {
        stopReason = candidate.finishReason;
      }

      const parts = candidate.content?.parts;
      if (!parts?.length) continue;

      for (const part of parts) {
        if (part.text != null) {
          snapshot += part.text;
          yield { type: 'text', text: part.text, snapshot };
        }
      }
    }

    yield { type: 'done', text: snapshot, usage, stopReason };
  },
};
package/adapters/index.js
ADDED
@@ -0,0 +1,170 @@
/**
 * LLM Client — Provider-agnostic chat interface.
 *
 * Usage:
 *   import { createClient, chat, streamChat } from '@adia-ai/llm';
 *
 *   // Quick use (provider auto-detected from model name)
 *   const reply = await chat({
 *     apiKey: 'sk-ant-...',
 *     model: 'claude-sonnet-4-20250514',
 *     messages: [{ role: 'user', content: 'Hello' }],
 *   });
 *
 *   for await (const chunk of streamChat({
 *     apiKey: 'sk-...',
 *     model: 'gpt-4o',
 *     messages: [{ role: 'user', content: 'Hello' }],
 *   })) {
 *     if (chunk.type === 'text') process.stdout.write(chunk.text);
 *   }
 *
 *   // Explicit provider
 *   const reply = await chat({ provider: 'gemini', apiKey: '...', model: 'gemini-2.5-flash', ... });
 *
 *   // Reusable client instance
 *   const client = createClient({ provider: 'anthropic', apiKey: '...' });
 *   const reply = await client.chat({ model: 'claude-sonnet-4-20250514', messages: [...] });
 *   for await (const chunk of client.stream({ model: '...', messages: [...] })) { ... }
 *
 * Chunk types (streaming):
 *   { type: 'text', text: 'delta', snapshot: 'full text so far' }
 *   { type: 'thinking', text: 'thinking delta' }
 *   { type: 'done', text: 'full response', usage: { input, output }, stopReason }
 *   { type: 'error', error: Error }
 */

import { anthropic } from './anthropic.js';
import { openai } from './openai.js';
import { gemini } from './gemini.js';

// ── Provider registry ──

const providers = { anthropic, openai, gemini };

/** Detect provider from model name. */
function detectProvider(model) {
  if (!model) return null;
  const m = model.toLowerCase();
  if (m.includes('claude') || m.startsWith('anthropic/')) return 'anthropic';
  if (m.includes('gpt') || m.includes('o1') || m.includes('o3') || m.includes('o4') || m.startsWith('openai/')) return 'openai';
  if (m.includes('gemini') || m.startsWith('google/')) return 'gemini';
  return null;
}

function resolveAdapter(opts) {
  const name = opts.provider || detectProvider(opts.model);
  if (!name) throw new Error(`Cannot detect provider for model "${opts.model}". Set provider explicitly.`);
  const adapter = providers[name];
  if (!adapter) throw new Error(`Unknown provider "${name}". Available: ${Object.keys(providers).join(', ')}`);
  return adapter;
}

// ── Proxy mode ──
//
// When `proxyUrl` is set, the client speaks a provider-neutral protocol
// to the proxy: { provider, model, messages, system?, maxTokens?,
// temperature?, thinking?, stream }. The proxy holds the real API key
// and reformats per upstream provider. Each adapter still parses the
// upstream's streamed body via its own parseStream — the proxy pipes
// the SSE bytes verbatim.

function proxyRequest(opts, stream) {
  const provider = opts.provider || detectProvider(opts.model);
  const body = {
    provider,
    model: opts.model,
    messages: opts.messages,
    stream,
  };
  if (opts.system != null) body.system = opts.system;
  if (opts.maxTokens != null) body.maxTokens = opts.maxTokens;
  if (opts.temperature != null) body.temperature = opts.temperature;
  if (opts.thinking != null) body.thinking = opts.thinking;
  return {
    url: opts.proxyUrl,
    headers: { 'content-type': 'application/json' },
    body,
  };
}

// ── Standalone functions ──

/**
 * Non-streaming chat completion.
 * @returns {Promise<{text: string, usage: {input: number, output: number}, stopReason: string}>}
 */
export async function chat(opts) {
  const adapter = resolveAdapter(opts);
  const { url, headers, body } = opts.proxyUrl
    ? proxyRequest(opts, false)
    : adapter.buildRequest({ ...opts, stream: false });

  const res = await fetch(url, {
    method: 'POST',
    headers,
    body: JSON.stringify(body),
    signal: opts.signal,
  });

  if (!res.ok) {
    const err = await res.json().catch(() => ({}));
    throw new Error(err?.error?.message || `${adapter.name} API error ${res.status}`);
  }

  return adapter.parseResponse(await res.json());
}

/**
 * Streaming chat — yields chunks as they arrive.
 * @returns {AsyncGenerator<{type: string, text?: string, snapshot?: string, usage?: object, error?: Error}>}
 */
export async function* streamChat(opts) {
  const adapter = resolveAdapter(opts);
  const { url, headers, body } = opts.proxyUrl
    ? proxyRequest(opts, true)
    : adapter.buildRequest({ ...opts, stream: true });

  let res;
  try {
    res = await fetch(url, {
      method: 'POST',
      headers,
      body: JSON.stringify(body),
      signal: opts.signal,
    });
  } catch (err) {
    yield { type: 'error', error: err };
    return;
  }

  if (!res.ok) {
    const err = await res.json().catch(() => ({}));
    yield { type: 'error', error: new Error(err?.error?.message || `${adapter.name} API error ${res.status}`) };
    return;
  }

  yield* adapter.parseStream(res);
}

// ── Client factory ──

/**
 * Create a reusable client instance with defaults baked in.
 *
 * @param {object} defaults
 * @param {string} defaults.provider — 'anthropic' | 'openai' | 'gemini'
 * @param {string} defaults.apiKey
 * @param {string} [defaults.model] — default model
 * @param {string} [defaults.proxyUrl] — proxy URL (for CORS)
 * @param {string} [defaults.system] — default system prompt
 */
export function createClient(defaults = {}) {
  return {
    chat: (opts) => chat({ ...defaults, ...opts }),
    stream: (opts) => streamChat({ ...defaults, ...opts }),
  };
}

// Re-export adapters for direct use
export { anthropic, openai, gemini };
package/adapters/openai.js
ADDED
@@ -0,0 +1,85 @@
/**
 * OpenAI Chat Completions API adapter.
 * Endpoint: https://api.openai.com/v1/chat/completions
 * Also compatible with: Groq, Together, Mistral, any OpenAI-compatible API.
 */

import { readSSE } from './sse.js';

const API_URL = 'https://api.openai.com/v1/chat/completions';
const DEFAULT_MAX_TOKENS = 4096;

export const openai = {
  name: 'openai',

  buildRequest(opts) {
    const messages = [];
    if (opts.system) messages.push({ role: 'system', content: opts.system });
    for (const msg of opts.messages) {
      messages.push({ role: msg.role, content: msg.content });
    }

    const body = {
      model: opts.model,
      messages,
      stream: !!opts.stream,
    };
    if (opts.maxTokens) body.max_tokens = opts.maxTokens;
    if (opts.temperature != null) body.temperature = opts.temperature;
    if (opts.stream) body.stream_options = { include_usage: true };

    return {
      url: API_URL,
      headers: {
        'content-type': 'application/json',
        'authorization': `Bearer ${opts.apiKey}`,
      },
      body,
    };
  },

  parseResponse(data) {
    const choice = data.choices?.[0];
    const text = choice?.message?.content ?? '';
    return {
      text,
      usage: { input: data.usage?.prompt_tokens ?? 0, output: data.usage?.completion_tokens ?? 0 },
      stopReason: choice?.finish_reason === 'stop' ? 'end' : (choice?.finish_reason ?? 'end'),
    };
  },

  async *parseStream(response) {
    let snapshot = '';
    let usage = { input: 0, output: 0 };
    let stopReason = 'end';

    for await (const event of readSSE(response.body)) {
      if (event.done) break;
      let data;
      try { data = JSON.parse(event.data); } catch { continue; }

      if (data.usage) {
        usage.input = data.usage.prompt_tokens ?? 0;
        usage.output = data.usage.completion_tokens ?? 0;
      }

      const choice = data.choices?.[0];
      if (!choice) continue;

      if (choice.finish_reason) {
        stopReason = choice.finish_reason === 'stop' ? 'end' : choice.finish_reason;
      }

      const delta = choice.delta;
      if (delta?.content) {
        snapshot += delta.content;
        yield { type: 'text', text: delta.content, snapshot };
      }
      if (delta?.reasoning_content) {
        yield { type: 'thinking', text: delta.reasoning_content };
      }
    }

    yield { type: 'done', text: snapshot, usage, stopReason };
  },
};
package/adapters/sse.js
ADDED
@@ -0,0 +1,50 @@
/**
 * SSE Parser — shared by Anthropic, OpenAI, and Gemini adapters.
 * Handles partial line buffering, double-newline splitting, and [DONE] detection.
 */

export async function* readSSE(body) {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      const { events, remainder } = parse(buffer);
      buffer = remainder;
      for (const event of events) yield event;
    }
    if (buffer.trim()) {
      const { events } = parse(buffer + '\n\n');
      for (const event of events) yield event;
    }
  } finally {
    reader.releaseLock();
  }
}

function parse(text) {
  const events = [];
  const parts = text.split(/\n\n|\r\n\r\n/);
  const remainder = parts.pop() ?? '';
  for (const part of parts) {
    const trimmed = part.trim();
    if (!trimmed) continue;
    let eventType;
    const dataLines = [];
    for (const line of trimmed.split(/\r?\n/)) {
      if (line.startsWith(':')) continue;
      if (line.startsWith('event:')) eventType = line.slice(6).trim();
      else if (line.startsWith('data:')) {
        const v = line.slice(5);
        dataLines.push(v.startsWith(' ') ? v.slice(1) : v);
      }
    }
    if (!dataLines.length) continue;
    const data = dataLines.join('\n');
    events.push({ event: eventType, data, done: data === '[DONE]' });
  }
  return { events, remainder };
}
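A small self-contained sketch of what `readSSE` yields, feeding it a hand-built stream; it uses only web-standard APIs (`ReadableStream`, `TextEncoder`) that exist in Node 18+ and modern browsers. The event payloads are made up for illustration.

```js
// Self-check sketch: feed readSSE a hand-built SSE body and print the parsed events.
import { readSSE } from '@adia-ai/llm/adapters/sse';

const raw =
  'event: message_start\ndata: {"type":"message_start"}\n\n' +
  ': heartbeat comment, skipped (no data lines)\n\n' +
  'data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"Hi"}}\n\n' +
  'data: [DONE]\n\n';

const body = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode(raw));
    controller.close();
  },
});

for await (const event of readSSE(body)) {
  // Each event: { event: string | undefined, data: string, done: boolean }
  console.log(event.done ? '[DONE]' : `${event.event ?? 'data'} -> ${event.data}`);
}
```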
package/index.js
ADDED
@@ -0,0 +1,17 @@
/**
 * @adia-ai/llm — provider-agnostic LLM client.
 *
 * Re-exports the adapters facade so `@adia-ai/llm` is the single entry
 * point for chat-shell, the a2ui generation pipeline, and any other
 * consumer that needs to talk to anthropic / openai / gemini.
 *
 *   import { chat, streamChat, createClient } from '@adia-ai/llm';
 *   import { createAdapter } from '@adia-ai/llm/bridge';
 *   import { StubLLMAdapter } from '@adia-ai/llm/stub';
 */

export {
  chat,
  streamChat,
  createClient,
} from './adapters/index.js';
package/llm-bridge.js
ADDED
@@ -0,0 +1,214 @@
/**
 * LLM Bridge — Wraps AdiaUI's llm module into the AdiaUI createAdapter() API.
 *
 * This is the single integration point between the AdiaUI pipeline and the
 * LLM module. It handles:
 *   - Env var reading (VITE_* in browser, process.env in Node)
 *   - CORS proxy routing in browser (Vite dev server at /api/llm/*)
 *   - API translation (AdiaUI's simple { messages, systemPrompt } → llm module's interface)
 *
 * Consumers call createAdapter() and get an object with .complete() and .stream()
 * matching the AdiaUI pipeline interface.
 */

import { StubLLMAdapter } from './llm-stub.js';

// Lazy-loaded: ./adapters/index.js is imported on demand so the bridge can
// fall back to the stub when the adapters module fails to resolve.
let _createClient = null;
async function getCreateClient() {
  if (!_createClient) {
    try {
      const mod = await import('./adapters/index.js');
      _createClient = mod.createClient;
    } catch {
      _createClient = null;
    }
  }
  return _createClient;
}

// ── Environment ──────────────────────────────────────────────────────────

function getEnv(key) {
  try {
    const env = import.meta.env;
    if (env) {
      const val = env[`VITE_${key}`] || env[key];
      if (val) return val;
    }
  } catch {}
  if (typeof process !== 'undefined' && process.env) {
    return process.env[key] || '';
  }
  return '';
}

const IS_BROWSER = typeof window !== 'undefined';

function resolveBaseUrl(provider) {
  if (!IS_BROWSER) return undefined; // Let the module use its defaults
  const proxyMap = {
    anthropic: '/api/llm/anthropic/v1/messages',
    openai: '/api/llm/openai/v1/chat/completions',
    google: '/api/llm/google',
  };
  return proxyMap[provider];
}

// ── Factory ──────────────────────────────────────────────────────────────

/**
 * Create an LLM adapter for the AdiaUI pipeline.
 *
 * Auto-detects provider from env vars. Returns an object with .complete()
 * and .stream() that match the AdiaUI interface (simple messages + systemPrompt).
 *
 * @param {object} [opts]
 * @param {string} [opts.provider] — 'anthropic' | 'openai' | 'google' | 'stub'
 * @param {string} [opts.apiKey] — explicit API key (overrides env)
 * @param {string} [opts.model] — model override
 * @returns {StubLLMAdapter | AdiaUILLMBridge}
 */
export async function createAdapter(opts = {}) {
  const provider = opts.provider || getEnv('LLM_PROVIDER') || detectProvider();
  const model = opts.model || getEnv('LLM_MODEL') || undefined;

  if (provider === 'stub') return new StubLLMAdapter();

  // Resolve API key for the detected provider
  const apiKey = opts.apiKey || getEnv(`${provider.toUpperCase()}_API_KEY`) || getEnv('ANTHROPIC_API_KEY') || getEnv('OPENAI_API_KEY') || getEnv('GOOGLE_API_KEY');

  // No key found → fall back to stub
  if (!apiKey) {
    console.warn('LLM Bridge: No API keys found. Using stub adapter.');
    return new StubLLMAdapter();
  }

  const createClient = await getCreateClient();
  if (!createClient) {
    console.warn('LLM Bridge: LLM module not available. Using stub adapter.');
    return new StubLLMAdapter();
  }

  const proxyUrl = resolveBaseUrl(provider);
  const client = createClient({
    provider,
    apiKey,
    model: model || DEFAULT_MODELS[provider] || 'claude-sonnet-4-20250514',
    ...(proxyUrl ? { proxyUrl } : {}),
  });

  return new AdiaUILLMBridge(client, model || DEFAULT_MODELS[provider] || 'claude-sonnet-4-20250514', provider);
}

function detectProvider() {
  if (getEnv('ANTHROPIC_API_KEY')) return 'anthropic';
  if (getEnv('OPENAI_API_KEY')) return 'openai';
  if (getEnv('GOOGLE_API_KEY')) return 'google';
  return 'stub';
}

// ── Bridge class ─────────────────────────────────────────────────────────

/** Default models per provider */
const DEFAULT_MODELS = {
  anthropic: 'claude-sonnet-4-20250514',
  openai: 'gpt-4o',
  google: 'gemini-2.0-flash',
};

/**
 * Wraps the AdiaUI llm client to match the AdiaUI pipeline's simpler interface.
 *
 * AdiaUI calls:        adapter.complete({ messages, systemPrompt })
 * LLM module expects:  client.chat({ model, messages, system, ... })
 */
class AdiaUILLMBridge {
  #client;
  #model;
  #provider;

  constructor(client, model, provider) {
    this.#client = client;
    this.#model = model;
    this.#provider = provider;
  }

  /**
   * Non-streaming completion. Matches AdiaUI interface.
   *
   * 32k max_tokens: A2UI JSON for moderately complex UIs (kanban, dashboard,
   * pricing table) routinely exceeds 8k. Truncation produced silent fallbacks
   * that the validator rubber-stamped at ~89/100 — see diagnosis report
   * 2026-04-19. Modern Claude/GPT/Gemini all support ≥32k output cleanly.
   *
   * @param {{ messages: { role: string, content: string }[], systemPrompt?: string }} opts
   * @returns {Promise<{ content: string, stopReason: string, usage: { inputTokens: number, outputTokens: number } }>}
   */
  async complete({ messages, systemPrompt }) {
    const response = await this.#client.chat({
      model: this.#model,
      messages,
      system: systemPrompt,
      maxTokens: 32768,
      // Anthropic-only: mark the system prompt as a cache breakpoint. No-op
      // on other providers (unknown opt silently ignored) and no-op below the
      // model's minimum cacheable size.
      cache: this.#provider === 'anthropic',
    });
    return {
      content: response.text,
      // 'max_tokens' / 'length' / 'MAX_TOKENS' (Gemini) signal truncation;
      // downstream parser uses this to refuse silent fallback rendering.
      stopReason: response.stopReason ?? 'end',
      usage: {
        inputTokens: response.usage?.input ?? 0,
        outputTokens: response.usage?.output ?? 0,
        cacheCreationTokens: response.usage?.cacheCreation ?? 0,
        cacheReadTokens: response.usage?.cacheRead ?? 0,
      },
    };
  }

  /**
   * Streaming completion. Matches AdiaUI interface.
   *
   * @param {{ messages: { role: string, content: string }[], systemPrompt?: string }} opts
   * @yields {{ type: 'text', content: string } | { type: 'done', stopReason: string, usage: { inputTokens: number, outputTokens: number, cacheCreationTokens: number, cacheReadTokens: number } }}
   */
  async *stream({ messages, systemPrompt }) {
    for await (const chunk of this.#client.stream({
      model: this.#model,
      messages,
      system: systemPrompt,
      maxTokens: 32768,
      cache: this.#provider === 'anthropic',
    })) {
      if (chunk.type === 'text') {
        yield { type: 'text', content: chunk.text };
      } else if (chunk.type === 'done') {
        // Surface the terminal stopReason + cache telemetry so the consumer
        // can detect max_tokens truncation and the dialog recorder can log
        // cache hit-rate per turn.
        yield {
          type: 'done',
          stopReason: chunk.stopReason ?? 'end',
          usage: {
            inputTokens: chunk.usage?.input ?? 0,
            outputTokens: chunk.usage?.output ?? 0,
            cacheCreationTokens: chunk.usage?.cacheCreation ?? 0,
            cacheReadTokens: chunk.usage?.cacheRead ?? 0,
          },
        };
      }
      // Other chunk types (thinking, error) are still available on the
      // underlying adapter but the AdiaUI pipeline doesn't consume them yet.
    }
  }

  /** Expose the underlying client for advanced use. */
  get adapter() { return this.#client; }

  /** Expose provider name for detection. */
  get provider() { return this.#provider; }
}
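A hedged sketch of consuming the bridge the way a pipeline caller might; the env-var fallback behavior is as implemented above and the prompts are placeholders.

```js
// Hedged usage sketch; prompts are placeholders, env resolution is as implemented above.
import { createAdapter } from '@adia-ai/llm/bridge';

const adapter = await createAdapter(); // falls back to the stub when no API key env var is set

const { content, stopReason, usage } = await adapter.complete({
  systemPrompt: 'Respond with A2UI JSON only.',
  messages: [{ role: 'user', content: 'Build a pricing table' }],
});
// 'max_tokens' / 'length' / 'MAX_TOKENS' signal truncation (see complete() above).
if (stopReason && stopReason !== 'end') console.warn('Generation did not finish cleanly:', stopReason);
console.log(content.length, usage);

for await (const chunk of adapter.stream({
  messages: [{ role: 'user', content: 'Build a kanban board' }],
})) {
  if (chunk.type === 'text') process.stdout.write(chunk.content);
}
```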
package/llm-stub.js
ADDED
@@ -0,0 +1,69 @@
/**
 * StubLLMAdapter — Deterministic LLM adapter for testing.
 *
 * Returns canned A2UI responses for known prompts. Implements the same
 * interface that a real LLM adapter would (complete, stream) so pipeline
 * code can develop against it without API keys.
 */

export class StubLLMAdapter {
  /**
   * Complete a prompt and return a full A2UI response.
   *
   * @param {object} opts
   * @param {object[]} opts.messages — Chat messages (system + user turns)
   * @param {string} [opts.systemPrompt] — System prompt override
   * @returns {Promise<{ content: string, usage: { inputTokens: number, outputTokens: number } }>}
   */
  async complete({ messages, systemPrompt }) {
    const lastMessage = messages?.[messages.length - 1]?.content || '';
    const components = this.#buildResponse(lastMessage);

    return {
      content: JSON.stringify([
        {
          type: 'updateComponents',
          surfaceId: 'default',
          components,
        },
      ]),
      usage: {
        inputTokens: estimateTokens(JSON.stringify(messages)),
        outputTokens: estimateTokens(JSON.stringify(components)),
      },
    };
  }

  /**
   * Stream a response as an async iterable of chunks.
   *
   * @param {object} request — Same shape as complete()
   * @yields {{ type: 'text', content: string }}
   */
  async *stream(request) {
    const result = await this.complete(request);
    // Simulate progressive streaming by yielding the full response
    yield { type: 'text', content: result.content };
  }

  /**
   * Build a canned component tree from the intent text.
   * @param {string} intent
   * @returns {object[]}
   */
  #buildResponse(intent) {
    return [
      { id: 'root', component: 'Card', children: ['hdr', 'sec'] },
      { id: 'hdr', component: 'Header', children: ['title'] },
      { id: 'title', component: 'Text', variant: 'h3', textContent: 'Generated UI' },
      { id: 'sec', component: 'Section', children: ['col'] },
      { id: 'col', component: 'Column', children: ['desc'] },
      { id: 'desc', component: 'Text', variant: 'body', textContent: intent || 'No intent provided' },
    ];
  }
}

/** Rough token estimate (~4 chars per token) */
function estimateTokens(text) {
  return Math.ceil((text?.length || 0) / 4);
}
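A minimal deterministic test sketch against the stub, shown with `node:test` (any runner works); the asserted shape follows `complete()` above.

```js
// Minimal deterministic test sketch (node:test assumed; any runner works).
import test from 'node:test';
import assert from 'node:assert/strict';
import { StubLLMAdapter } from '@adia-ai/llm/stub';

test('stub returns a parseable updateComponents payload', async () => {
  const adapter = new StubLLMAdapter();
  const { content, usage } = await adapter.complete({
    messages: [{ role: 'user', content: 'Build a card' }],
  });

  const parsed = JSON.parse(content);
  assert.equal(parsed[0].type, 'updateComponents');
  assert.equal(parsed[0].surfaceId, 'default');
  assert.ok(parsed[0].components.length > 0);
  assert.ok(usage.inputTokens > 0 && usage.outputTokens > 0);
});
```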
package/package.json
ADDED
@@ -0,0 +1,32 @@
{
  "name": "@adia-ai/llm",
  "version": "0.3.0",
  "description": "Provider-agnostic LLM client — anthropic / openai / gemini adapters with a unified chat() + streamChat() facade. Used by AdiaUI's chat-shell and the A2UI generation pipeline; works in browser (with proxyUrl) and Node.",
  "type": "module",
  "exports": {
    ".": "./index.js",
    "./adapters/*": "./adapters/*.js",
    "./bridge": "./llm-bridge.js",
    "./stub": "./llm-stub.js",
    "./package.json": "./package.json"
  },
  "files": [
    "adapters/",
    "llm-bridge.js",
    "llm-stub.js",
    "index.js",
    "README.md",
    "CHANGELOG.md"
  ],
  "sideEffects": false,
  "publishConfig": {
    "access": "public",
    "registry": "https://registry.npmjs.org"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/adiahealth/gen-ui-kit.git",
    "directory": "packages/llm"
  },
  "license": "MIT"
}
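For reference, how the `exports` map above resolves the documented subpaths (illustrative import lines only).

```js
// Illustrative only: resolution of the documented subpaths via the "exports" map above.
import { chat, streamChat, createClient } from '@adia-ai/llm';   // "."            -> ./index.js
import { createAdapter } from '@adia-ai/llm/bridge';             // "./bridge"     -> ./llm-bridge.js
import { StubLLMAdapter } from '@adia-ai/llm/stub';              // "./stub"       -> ./llm-stub.js
import { anthropic } from '@adia-ai/llm/adapters/anthropic';     // "./adapters/*" -> ./adapters/anthropic.js
```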