prompt-api-polyfill 0.1.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,8 +1,12 @@
1
- # Prompt API Polyfill (Firebase AI Logic backend)
1
+ # Prompt API Polyfill
2
2
 
3
3
  This package provides a browser polyfill for the
4
- [Prompt API `LanguageModel`](https://github.com/webmachinelearning/prompt-api)
5
- backed by **Firebase AI Logic**.
4
+ [Prompt API `LanguageModel`](https://github.com/webmachinelearning/prompt-api),
5
+ supporting dynamic backends:
6
+
7
+ - **Firebase AI Logic**
8
+ - **Google Gemini API**
9
+ - **OpenAI API**
6
10
 
7
11
  When loaded in the browser, it defines a global:
8
12
 
@@ -13,8 +17,28 @@ window.LanguageModel;
13
17
  so you can use the Prompt API shape even in environments where it is not yet
14
18
  natively available.
15
19
 
16
- - Back end: Firebase AI Logic
17
- - Default model: `gemini-2.5-flash-lite` (configurable via `modelName`)
20
+ ## Supported Backends
21
+
22
+ ### Firebase AI Logic
23
+
24
+ - **Uses**: `firebase/ai` SDK.
25
+ - **Select by setting**: `window.FIREBASE_CONFIG`.
26
+ - **Model**: Uses default if not specified (see
27
+ [`backends/defaults.js`](backends/defaults.js)).
28
+
29
+ ### Google Gemini API
30
+
31
+ - **Uses**: `@google/generative-ai` SDK.
32
+ - **Select by setting**: `window.GEMINI_CONFIG`.
33
+ - **Model**: Uses default if not specified (see
34
+ [`backends/defaults.js`](backends/defaults.js)).
35
+
36
+ ### OpenAI API
37
+
38
+ - **Uses**: `openai` SDK.
39
+ - **Select by setting**: `window.OPENAI_CONFIG`.
40
+ - **Model**: Uses default if not specified (see
41
+ [`backends/defaults.js`](backends/defaults.js)).
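+
+ The quick-start sections below show each backend end to end. As a compact,
+ illustrative sketch (not package documentation), backend selection comes down
+ to defining one of the config globals before the polyfill is imported:
+
+ ```js
+ // Sketch: define ONE of these globals before importing the polyfill.
+ // window.FIREBASE_CONFIG = { apiKey: '...', projectId: '...', appId: '...' };
+ // window.GEMINI_CONFIG   = { apiKey: '...' };
+ window.OPENAI_CONFIG = { apiKey: 'YOUR_OPENAI_API_KEY' };
+
+ if (!('LanguageModel' in window)) {
+   await import('prompt-api-polyfill');
+ }
+ ```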
18
42
 
19
43
  ---
20
44
 
@@ -28,37 +52,78 @@ npm install prompt-api-polyfill
28
52
 
29
53
  ## Quick start
30
54
 
31
- 1. **Create a Firebase project with Generative AI enabled** (see Configuration
32
- below).
33
- 2. **Provide your Firebase config** on `window.FIREBASE_CONFIG`.
34
- 3. **Import the polyfill** so it can attach `window.LanguageModel`.
55
+ ### Backed by Firebase
35
56
 
36
- ### Example (using a JSON config file)
37
-
38
- Create a `.env.json` file (see
39
- [Configuring `dot_env.json` / `.env.json`](#configuring-dot_envjson--envjson))
40
- and then use it from a browser entry point:
57
+ 1. **Create a Firebase project with Generative AI enabled**.
58
+ 2. **Provide your Firebase config** on `window.FIREBASE_CONFIG`.
59
+ 3. **Import the polyfill**.
41
60
 
42
61
  ```html
43
62
  <script type="module">
44
63
  import firebaseConfig from './.env.json' with { type: 'json' };
45
64
 
46
- // Make the config available to the polyfill
65
+ // Set FIREBASE_CONFIG to select the Firebase backend
47
66
  window.FIREBASE_CONFIG = firebaseConfig;
48
67
 
49
- // Only load the polyfill if LanguageModel is not available natively
50
68
  if (!('LanguageModel' in window)) {
51
69
  await import('prompt-api-polyfill');
52
70
  }
53
71
 
54
72
  const session = await LanguageModel.create();
55
- const text = await session.prompt('Say hello from the polyfill!');
56
- console.log(text);
57
73
  </script>
58
74
  ```
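+
+ Once the session exists, the usual Prompt API calls apply. A minimal usage
+ sketch, mirroring the example from the previous version of this README:
+
+ ```js
+ const session = await LanguageModel.create();
+
+ // Single-shot prompt, as in the Prompt API `LanguageModel` shape.
+ const text = await session.prompt('Say hello from the polyfill!');
+ console.log(text);
+ ```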
59
75
 
60
- > **Note**: The polyfill attaches `LanguageModel` to `window` as a side effect.
61
- > There are no named exports.
76
+ ### Backed by Gemini API
77
+
78
+ 1. **Get a Gemini API Key** from
79
+ [Google AI Studio](https://aistudio.google.com/).
80
+ 2. **Provide your API Key** on `window.GEMINI_CONFIG`.
81
+ 3. **Import the polyfill**.
82
+
83
+ ```html
84
+ <script type="module">
85
+ // NOTE: Do not expose real keys in production source code!
86
+ // Set GEMINI_CONFIG to select the Gemini backend
87
+ window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };
88
+
89
+ if (!('LanguageModel' in window)) {
90
+ await import('prompt-api-polyfill');
91
+ }
92
+
93
+ const session = await LanguageModel.create();
94
+ </script>
95
+ ```
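+
+ Streaming works the same way across backends. A sketch, assuming the polyfill
+ implements the Prompt API's `promptStreaming()`, which yields text chunks:
+
+ ```js
+ const session = await LanguageModel.create();
+
+ for await (const chunk of session.promptStreaming('Write a short haiku.')) {
+   console.log(chunk);
+ }
+ ```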
96
+
97
+ ### Backed by OpenAI API
98
+
99
+ 1. **Get an OpenAI API Key** from the
100
+ [OpenAI Platform](https://platform.openai.com/).
101
+ 2. **Provide your API Key** on `window.OPENAI_CONFIG`.
102
+ 3. **Import the polyfill**.
103
+
104
+ ```html
105
+ <script type="module">
106
+ // NOTE: Do not expose real keys in production source code!
107
+ // Set OPENAI_CONFIG to select the OpenAI backend
108
+ window.OPENAI_CONFIG = { apiKey: 'YOUR_OPENAI_API_KEY' };
109
+
110
+ if (!('LanguageModel' in window)) {
111
+ await import('prompt-api-polyfill');
112
+ }
113
+
114
+ const session = await LanguageModel.create();
115
+ </script>
116
+ ```
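+
+ Regardless of backend, you can gate usage on availability first. A sketch,
+ assuming the polyfill exposes the Prompt API's `LanguageModel.availability()`
+ (its backends report `'available'`, `'unavailable'`, `'downloadable'`, or
+ `'downloading'`):
+
+ ```js
+ if (!('LanguageModel' in window)) {
+   await import('prompt-api-polyfill');
+ }
+
+ if ((await LanguageModel.availability()) === 'available') {
+   const session = await LanguageModel.create();
+   // ... use session.prompt() / session.promptStreaming() ...
+ }
+ ```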
117
+
118
+ ---
119
+
120
+ ## Configuration
121
+
122
+ ### Example (using a JSON config file)
123
+
124
+ Create a `.env.json` file (see
125
+ [Configuring `dot_env.json` / `.env.json`](#configuring-dot_envjson--envjson))
126
+ and then use it from a browser entry point.
62
127
 
63
128
  ### Example based on `index.html` in this repo
64
129
 
@@ -76,8 +141,8 @@ A simplified version of how it is wired up:
76
141
 
77
142
  ```html
78
143
  <script type="module">
79
- import firebaseConfig from './.env.json' with { type: 'json' };
80
- window.FIREBASE_CONFIG = firebaseConfig;
144
+ // Set GEMINI_CONFIG to select the Gemini backend
145
+ window.GEMINI_CONFIG = { apiKey: 'YOUR_GEMINI_API_KEY' };
81
146
 
82
147
  // Load the polyfill only when necessary
83
148
  if (!('LanguageModel' in window)) {
@@ -110,17 +175,20 @@ This repo ships with a template file:
110
175
  ```jsonc
111
176
  // dot_env.json
112
177
  {
113
- "apiKey": "",
178
+ // For Firebase:
114
179
  "projectId": "",
115
180
  "appId": "",
116
181
  "modelName": "",
182
+
183
+ // For Firebase OR Gemini OR OpenAI:
184
+ "apiKey": "",
117
185
  }
118
186
  ```
119
187
 
120
188
  You should treat `dot_env.json` as a **template** and create a real `.env.json`
121
189
  that is **not committed** with your secrets.
122
190
 
123
- ### 1. Create `.env.json`
191
+ ### Create `.env.json`
124
192
 
125
193
  Copy the template:
126
194
 
@@ -128,63 +196,56 @@ Copy the template:
128
196
  cp dot_env.json .env.json
129
197
  ```
130
198
 
131
- Then open `.env.json` and fill in the values from your Firebase project:
199
+ Then open `.env.json` and fill in the values.
200
+
201
+ **For Firebase:**
132
202
 
133
203
  ```json
134
204
  {
135
205
  "apiKey": "YOUR_FIREBASE_WEB_API_KEY",
136
206
  "projectId": "your-gcp-project-id",
137
207
  "appId": "YOUR_FIREBASE_APP_ID",
138
- "modelName": "gemini-2.5-flash-lite"
208
+ "modelName": "choose-model-for-firebase"
139
209
  }
140
210
  ```
141
211
 
142
- ### 2. Field-by-field explanation
143
-
144
- - `apiKey` Your **Firebase Web API key**. You can find this in the Firebase
145
- Console under: _Project settings → General → Your apps → Web app_.
212
+ **For Gemini:**
146
213
 
147
- - `projectId` The **GCP / Firebase project ID**, e.g. `my-ai-project`.
148
-
149
- - `appId` The **Firebase Web app ID**, e.g. `1:1234567890:web:abcdef123456`.
150
-
151
- - `modelName` (optional) The Gemini model ID to use. If omitted, the polyfill
152
- defaults to:
214
+ ```json
215
+ {
216
+ "apiKey": "YOUR_GEMINI_CONFIG",
217
+ "modelName": "choose-model-for-gemini"
218
+ }
219
+ ```
153
220
 
154
- ```json
155
- "modelName": "gemini-2.5-flash-lite"
156
- ```
221
+ **For OpenAI:**
157
222
 
158
- You can substitute another supported Gemini model here if desired.
223
+ ```json
224
+ {
225
+ "apiKey": "YOUR_OPENAI_API_KEY",
226
+ "modelName": "choose-model-for-openai"
227
+ }
228
+ ```
159
229
 
160
- These fields are passed directly to:
230
+ ### Field-by-field explanation
161
231
 
162
- - `initializeApp(firebaseConfig)` from Firebase
163
- - `getAI(app, { backend: new GoogleAIBackend() })` from the Firebase AI SDK
232
+ - `apiKey`:
233
+ - **Firebase**: Your Firebase Web API key.
234
+ - **Gemini**: Your Gemini API Key.
235
+ - **OpenAI**: Your OpenAI API Key.
236
+ - `projectId` / `appId`: **Firebase only**.
164
237
 
165
- and `modelName` is used to select which Gemini model to call.
238
+ - `modelName` (optional): The model ID to use. If not provided, the polyfill
239
+ uses the defaults defined in [`backends/defaults.js`](backends/defaults.js).
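+
+ This fallback mirrors what the backend constructors do internally; a small
+ sketch (the relative import path is assumed for illustration):
+
+ ```js
+ import { DEFAULT_MODELS } from './backends/defaults.js';
+
+ const config = { apiKey: 'YOUR_GEMINI_API_KEY' }; // no modelName provided
+ const modelName = config.modelName || DEFAULT_MODELS.gemini;
+ ```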
166
240
 
167
241
  > **Important:** Do **not** commit a real `.env.json` with production
168
242
  > credentials to source control. Use `dot_env.json` as the committed template
169
243
  > and keep `.env.json` local.
170
244
 
171
- ### 3. Wiring the config into the polyfill
245
+ ### Wiring the config into the polyfill
172
246
 
173
- Once `.env.json` is filled out, you can import it and expose it to the polyfill
174
- exactly like in `index.html`:
175
-
176
- ```js
177
- import firebaseConfig from './.env.json' with { type: 'json' };
178
-
179
- window.FIREBASE_CONFIG = firebaseConfig;
180
-
181
- if (!('LanguageModel' in window)) {
182
- await import('prompt-api-polyfill');
183
- }
184
- ```
185
-
186
- From this point on, `LanguageModel.create()` will use your Firebase
187
- configuration.
247
+ Once `.env.json` is filled out, you can import it and expose it to the polyfill.
248
+ See the [Quick start](#quick-start) examples above.
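+
+ For example, with a Firebase-style `.env.json` (the other backends work the
+ same way with their own config global):
+
+ ```js
+ import firebaseConfig from './.env.json' with { type: 'json' };
+
+ window.FIREBASE_CONFIG = firebaseConfig;
+
+ if (!('LanguageModel' in window)) {
+   await import('prompt-api-polyfill');
+ }
+ ```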
188
249
 
189
250
  ---
190
251
 
@@ -200,8 +261,7 @@ For a complete, end-to-end example, see the `index.html` file in this directory.
200
261
 
201
262
  ## Running the demo locally
202
263
 
203
- 1. Install dependencies and this package (if using the npm-installed version in
204
- another project):
264
+ 1. Install dependencies:
205
265
 
206
266
  ```bash
207
267
  npm install
@@ -211,17 +271,34 @@ For a complete, end-to-end example, see the `index.html` file in this directory.
211
271
 
212
272
  ```bash
213
273
  cp dot_env.json .env.json
214
- # then edit .env.json with your Firebase and model settings
215
274
  ```
216
275
 
217
276
  3. Serve `index.html`:
218
-
219
277
  ```bash
220
278
  npm start
221
279
  ```
222
280
 
223
- You should see network requests to the Vertex AI / Firebase AI backend and
224
- streaming responses logged in the console.
281
+ You should see network requests to the selected backend and streaming
+ responses logged in the console.
282
+
283
+ ---
284
+
285
+ ## Testing
286
+
287
+ The project includes a comprehensive test suite that runs in a headless browser.
288
+
289
+ ### Running Browser Tests
290
+
291
+ The browser tests use `playwright` to run in a real Chromium instance. This is
292
+ the recommended way to verify real browser behavior and multimodal support.
293
+
294
+ ```bash
295
+ npm run test:browser
296
+ ```
297
+
298
+ To see the browser and DevTools while testing, you can modify
299
+ `vitest.browser.config.js` to set `headless: false`.
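+
+ That file is not shown here; as a sketch of where the option typically lives
+ in a Vitest browser-mode config (layout assumed, the real file may differ):
+
+ ```js
+ // vitest.browser.config.js (assumed layout)
+ import { defineConfig } from 'vitest/config';
+
+ export default defineConfig({
+   test: {
+     browser: {
+       enabled: true,
+       provider: 'playwright',
+       headless: false, // show the Chromium window and DevTools while testing
+       instances: [{ browser: 'chromium' }], // `name: 'chromium'` on older Vitest
+     },
+   },
+ });
+ ```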
300
+
301
+ ---
225
302
 
226
303
  ## License
227
304
 
@@ -0,0 +1,59 @@
1
+ /**
2
+ * Abstract class representing a backend for the LanguageModel polyfill.
3
+ */
4
+ export default class PolyfillBackend {
5
+ #model;
6
+
7
+ /**
8
+ * @param {string} modelName - The name of the model.
9
+ */
10
+ constructor(modelName) {
11
+ this.modelName = modelName;
12
+ }
13
+
14
+ /**
15
+ * Checks if the backend is available given the options.
16
+ * @param {Object} options - LanguageModel options.
17
+ * @returns {string} 'available', 'unavailable', 'downloadable', or 'downloading'.
18
+ */
19
+ static availability(options) {
20
+ return 'available';
21
+ }
22
+
23
+ /**
24
+ * Creates a model session and stores it.
25
+ * @param {Object} options - LanguageModel options.
26
+ * @param {Object} inCloudParams - Parameters for the cloud model.
27
+ * @returns {any} The created session object.
28
+ */
29
+ createSession(options, inCloudParams) {
30
+ throw new Error('Not implemented');
31
+ }
32
+
33
+ /**
34
+ * Generates content (non-streaming).
35
+ * @param {Array} content - The history + new message content.
36
+ * @returns {Promise<{text: string, usage: number}>}
37
+ */
38
+ async generateContent(content) {
39
+ throw new Error('Not implemented');
40
+ }
41
+
42
+ /**
43
+ * Generates content stream.
44
+ * @param {Array} content - The history + new content.
45
+ * @returns {Promise<AsyncIterable>} Stream of chunks.
46
+ */
47
+ async generateContentStream(content) {
48
+ throw new Error('Not implemented');
49
+ }
50
+
51
+ /**
52
+ * Counts tokens.
53
+ * @param {Array} content - The content to count.
54
+ * @returns {Promise<number>} Total tokens.
55
+ */
56
+ async countTokens(content) {
57
+ throw new Error('Not implemented');
58
+ }
59
+ }
@@ -0,0 +1,9 @@
1
+ /**
2
+ * Default model versions for each backend.
3
+ */
4
+ export const DEFAULT_MODELS = {
5
+ firebase: 'gemini-2.5-flash-lite',
6
+ gemini: 'gemini-2.0-flash-lite-preview-02-05',
7
+ openai: 'gpt-4o',
8
+ transformers: 'onnx-community/Qwen3-4B-ONNX',
9
+ };
@@ -0,0 +1,45 @@
1
+ import { initializeApp } from 'https://esm.run/firebase/app';
2
+ import {
3
+ getAI,
4
+ getGenerativeModel,
5
+ GoogleAIBackend,
6
+ InferenceMode,
7
+ } from 'https://esm.run/firebase/ai';
8
+ import PolyfillBackend from './base.js';
9
+ import { DEFAULT_MODELS } from './defaults.js';
10
+
11
+ /**
12
+ * Firebase AI Logic Backend
13
+ */
14
+ export default class FirebaseBackend extends PolyfillBackend {
15
+ #model;
16
+
17
+ constructor(config) {
18
+ super(config.modelName || DEFAULT_MODELS.firebase);
19
+ this.ai = getAI(initializeApp(config), { backend: new GoogleAIBackend() });
20
+ }
21
+
22
+ createSession(_options, inCloudParams) {
23
+ this.#model = getGenerativeModel(this.ai, {
24
+ mode: InferenceMode.ONLY_IN_CLOUD,
25
+ inCloudParams,
26
+ });
27
+ return this.#model;
28
+ }
29
+
30
+ async generateContent(contents) {
31
+ const result = await this.#model.generateContent({ contents });
32
+ const usage = result.response.usageMetadata?.promptTokenCount || 0;
33
+ return { text: result.response.text(), usage };
34
+ }
35
+
36
+ async generateContentStream(contents) {
37
+ const result = await this.#model.generateContentStream({ contents });
38
+ return result.stream;
39
+ }
40
+
41
+ async countTokens(contents) {
42
+ const { totalTokens } = await this.#model.countTokens({ contents });
43
+ return totalTokens;
44
+ }
45
+ }
@@ -0,0 +1,48 @@
1
+ import { GoogleGenerativeAI } from 'https://esm.run/@google/generative-ai';
2
+ import PolyfillBackend from './base.js';
3
+ import { DEFAULT_MODELS } from './defaults.js';
4
+
5
+ /**
6
+ * Google Gemini API Backend
7
+ */
8
+ export default class GeminiBackend extends PolyfillBackend {
9
+ #model;
10
+
11
+ constructor(config) {
12
+ super(config.modelName || DEFAULT_MODELS.gemini);
13
+ this.genAI = new GoogleGenerativeAI(config.apiKey);
14
+ }
15
+
16
+ createSession(options, inCloudParams) {
17
+ const modelParams = {
18
+ model: options.modelName || this.modelName,
19
+ generationConfig: inCloudParams.generationConfig,
20
+ systemInstruction: inCloudParams.systemInstruction,
21
+ };
22
+ // Clean undefined systemInstruction
23
+ if (!modelParams.systemInstruction) {
24
+ delete modelParams.systemInstruction;
25
+ }
26
+
27
+ this.#model = this.genAI.getGenerativeModel(modelParams);
28
+ return this.#model;
29
+ }
30
+
31
+ async generateContent(contents) {
32
+ // Gemini SDK expects { role, parts: [...] } which matches our internal structure
33
+ const result = await this.#model.generateContent({ contents });
34
+ const response = await result.response;
35
+ const usage = response.usageMetadata?.promptTokenCount || 0;
36
+ return { text: response.text(), usage };
37
+ }
38
+
39
+ async generateContentStream(contents) {
40
+ const result = await this.#model.generateContentStream({ contents });
41
+ return result.stream;
42
+ }
43
+
44
+ async countTokens(contents) {
45
+ const { totalTokens } = await this.#model.countTokens({ contents });
46
+ return totalTokens;
47
+ }
48
+ }