@vibes.diy/prompts 0.1.1

# CallAI Helper Function

The `callAI` helper function provides an easy way to make AI requests to OpenAI-compatible model providers.

## Installation

```bash
npm install call-ai
```

## API Key

You can set the API key on the `window` object:

```javascript
window.CALLAI_API_KEY = "your-api-key";
```

Or pass it directly to the `callAI` function:

```javascript
const response = await callAI("Write a haiku", { apiKey: "your-api-key" });
```

## Basic Usage

By default the function returns a Promise that resolves to the complete response:

```javascript
import { callAI } from 'call-ai';

// Default behavior - returns a Promise<string>
const response = await callAI("Write a haiku");

// Use the complete response directly
console.log(response); // Complete response text
```

## Streaming Mode

If you prefer to receive the response incrementally as it's generated, set `stream: true`. This returns an AsyncGenerator which must be awaited:

```javascript
import { callAI } from 'call-ai';

// Enable streaming mode explicitly - returns an AsyncGenerator
const generator = await callAI("Write an epic poem", { stream: true });

// Process the streaming response
for await (const partialResponse of generator) {
  console.log(partialResponse); // Updates incrementally
}
```

## JSON Schema Responses

To get structured JSON responses, provide a schema in the options:

```javascript
import { callAI } from 'call-ai';

const todoResponse = await callAI("Give me a todo list for learning React", {
  schema: {
    name: "todo", // Optional - defaults to "result" if not provided
    properties: {
      todos: {
        type: "array",
        items: { type: "string" }
      }
    }
  }
});

const todoData = JSON.parse(todoResponse);
console.log(todoData.todos); // Array of todo items
```
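
Since `callAI` returns the JSON as a string, a small guard can keep malformed model output from throwing at parse time. This is a generic sketch, not part of the call-ai API; `tryParseJson` is a hypothetical helper name:

```javascript
// Hypothetical helper: parse a model response as JSON, returning null on malformed output
function tryParseJson(text) {
  try {
    return JSON.parse(text);
  } catch {
    return null;
  }
}

const todoData = tryParseJson('{"todos": ["Learn JSX", "Learn hooks"]}');
console.log(todoData ? todoData.todos : "model returned malformed JSON");
```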

## JSON with Streaming

In this example, we're using the `callAI` helper function to get weather data in a structured format with a streaming preview:

```javascript
import { callAI } from 'call-ai';

// Get weather data with streaming updates
const generator = await callAI("What's the weather like in Paris today?", {
  stream: true,
  schema: {
    properties: {
      location: {
        type: "string",
        description: "City or location name"
      },
      temperature: {
        type: "number",
        description: "Temperature in Celsius"
      },
      conditions: {
        type: "string",
        description: "Weather conditions description"
      }
    }
  }
});

// Preview streaming updates as they arrive; don't parse until the end
const resultElement = document.getElementById('result');
let finalResponse;

for await (const partialResponse of generator) {
  resultElement.textContent = partialResponse;
  finalResponse = partialResponse;
}

// Parse the final result
try {
  const weatherData = JSON.parse(finalResponse);

  // Access individual fields
  const { location, temperature, conditions } = weatherData;

  // Update UI with formatted data
  document.getElementById('location').textContent = location;
  document.getElementById('temperature').textContent = `${temperature}°C`;
  document.getElementById('conditions').textContent = conditions;
} catch (error) {
  console.error("Failed to parse response:", error);
}
```

### Schema Structure Recommendations

1. **Flat schemas perform better across all models.** If you need maximum compatibility, avoid deeply nested structures.

2. **Field names matter.** Some models have preferences for certain property naming patterns:
   - Use simple, common naming patterns like `name`, `type`, `items`, `price`
   - Avoid deeply nested object hierarchies (more than 2 levels deep)
   - Keep array items simple (strings or flat objects)

3. **Model-specific considerations**:
   - **OpenAI models**: Best overall schema adherence; handle complex nesting well
   - **Claude models**: Great for simple schemas; occasional JSON formatting issues with complex structures
   - **Gemini models**: Good general performance; handle array properties well
   - **Llama/Mistral/Deepseek**: Strong with flat schemas, but often ignore nesting structure and provide their own organization

4. **For mission-critical applications** requiring schema adherence, use OpenAI models or implement fallback mechanisms.
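
As an illustration of the first two points, the two schemas below describe the same data; the flat variant is the safer choice across models. This is a sketch, not taken from the library docs:

```javascript
// Flat schema: all fields at the top level, simple common names (widely compatible)
const flatSchema = {
  properties: {
    name: { type: "string" },
    price: { type: "number" },
    tags: { type: "array", items: { type: "string" } }
  }
};

// Nested schema: three levels deep; models like Llama/Mistral/Deepseek
// may ignore this structure and impose their own organization
const nestedSchema = {
  properties: {
    product: {
      type: "object",
      properties: {
        details: {
          type: "object",
          properties: {
            name: { type: "string" },
            price: { type: "number" },
            tags: { type: "array", items: { type: "string" } }
          }
        }
      }
    }
  }
};
```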

## Specifying a Model

By default, the function uses `openrouter/auto` (automatic model selection). You can specify a different model:

```javascript
import { callAI } from 'call-ai';

// Use a specific model via options
const response = await callAI(
  "Explain quantum computing in simple terms",
  { model: "openai/gpt-4o" }
);

console.log(response);
```

## Additional Options

You can pass extra parameters to customize the request:

```javascript
import { callAI } from 'call-ai';

const response = await callAI(
  "Write a creative story",
  {
    model: "anthropic/claude-3-opus",
    temperature: 0.8, // Higher for more creativity (0-1)
    max_tokens: 1000, // Limit response length
    top_p: 0.95       // Control randomness
  }
);

console.log(response);
```

## Message History

For multi-turn conversations, you can pass an array of messages:

```javascript
import { callAI } from 'call-ai';

// Create a conversation
const messages = [
  { role: "system", content: "You are a helpful coding assistant." },
  { role: "user", content: "How do I use React hooks?" },
  { role: "assistant", content: "React hooks are functions that let you use state and other React features in functional components..." },
  { role: "user", content: "Can you show me an example of useState?" }
];

// Pass the entire conversation history
const response = await callAI(messages);
console.log(response);

// To continue the conversation, add the new response and send again
messages.push({ role: "assistant", content: response });
messages.push({ role: "user", content: "What about useEffect?" });

// Call again with the updated history
const nextResponse = await callAI(messages);
console.log(nextResponse);
```

## Using with OpenAI API

You can use `callAI` with OpenAI's API directly by providing the appropriate endpoint and API key:

```javascript
import { callAI } from 'call-ai';

// Use with OpenAI's API
const response = await callAI(
  "Explain the theory of relativity",
  {
    model: "gpt-4",
    apiKey: "sk-...", // Your OpenAI API key
    endpoint: "https://api.openai.com/v1/chat/completions"
  }
);

console.log(response);

// Or with streaming (the AsyncGenerator must be awaited)
const generator = await callAI(
  "Explain the theory of relativity",
  {
    model: "gpt-4",
    apiKey: "sk-...", // Your OpenAI API key
    endpoint: "https://api.openai.com/v1/chat/completions",
    stream: true
  }
);

for await (const chunk of generator) {
  console.log(chunk);
}
```

## Custom Endpoints

You can specify a custom endpoint for any OpenAI-compatible API:

```javascript
import { callAI } from 'call-ai';

// Use with any OpenAI-compatible API
const response = await callAI(
  "Generate ideas for a mobile app",
  {
    model: "your-model-name",
    apiKey: "your-api-key",
    endpoint: "https://your-custom-endpoint.com/v1/chat/completions"
  }
);

console.log(response);
```

## Recommended Models

| Model | Best For | Speed vs Quality |
|-------|----------|------------------|
| `openrouter/auto` | Default; automatically selects a model | Adaptive |
| `openai/gpt-4o-mini` | Data generation | Fast, good quality |
| `anthropic/claude-3-haiku` | Cost-effective tasks | Fast, good quality |
| `openai/gpt-4o` | Best overall quality | Medium speed, highest quality |
| `anthropic/claude-3-opus` | Complex reasoning | Slower, highest quality |
| `mistralai/mistral-large` | Open weights alternative | Good balance |
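
If your app switches models per task, a small lookup map keeps the choice in one place. This is a sketch using the model IDs from the table above; the task names are made up for illustration:

```javascript
// Hypothetical task-to-model map built from the table above
const MODEL_FOR_TASK = {
  dataGeneration: "openai/gpt-4o-mini",
  costEffective: "anthropic/claude-3-haiku",
  bestQuality: "openai/gpt-4o",
  complexReasoning: "anthropic/claude-3-opus"
};

// Fall back to automatic selection for unknown tasks
function modelFor(task) {
  return MODEL_FOR_TASK[task] ?? "openrouter/auto";
}

// Usage: await callAI("Summarize this report", { model: modelFor("bestQuality") });
```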

## Automatic Retry Mechanism

Call-AI has a built-in fallback mechanism that automatically retries with `openrouter/auto` if the requested model is invalid or unavailable. This ensures your application remains functional even when specific models experience issues.

If you need to disable this behavior (for example, in test environments), you can use the `skipRetry` option:

```javascript
const response = await callAI("Your prompt", {
  model: "your-model-name",
  skipRetry: true // Disable automatic fallback
});
```

## Items with lists

Schemas can describe arrays of objects whose fields are themselves arrays. Track the latest chunk while streaming, then parse the final response:

```javascript
import { callAI } from 'call-ai';

const generator = await callAI([
  {
    role: "user",
    content: "Generate 3 JSON records with name, description, tags, and priority (0 is highest, 5 is lowest)."
  }
], {
  stream: true,
  schema: {
    properties: {
      records: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            description: { type: "string" },
            tags: {
              type: "array",
              items: { type: "string" }
            },
            priority: { type: "integer" }
          }
        }
      }
    }
  }
});

let finalResponse;
for await (const partialResponse of generator) {
  console.log(partialResponse);
  finalResponse = partialResponse;
}

const recordData = JSON.parse(finalResponse);
console.log(recordData.records); // Array of records
```

## Items with properties

Enum-like string fields (such as a fixed set of priorities) can be requested in the prompt and typed as strings in the schema:

```javascript
import { callAI } from 'call-ai';

const demoData = await callAI("Generate 4 items with label, status, priority (low, medium, high, critical), and notes. Return as structured JSON with these fields.", {
  schema: {
    properties: {
      items: {
        type: "array",
        items: {
          type: "object",
          properties: {
            label: { type: "string" },
            status: { type: "string" },
            priority: { type: "string" },
            notes: { type: "string" }
          }
        }
      }
    }
  }
});

const parsed = JSON.parse(demoData);
console.log(parsed.items); // Array of items
```

## Error Handling

Errors are handled through standard JavaScript try/catch blocks:

```javascript
import { callAI } from 'call-ai';

try {
  const response = await callAI("Generate some content", {
    apiKey: "invalid-key" // Invalid or missing API key
  });

  // If no error was thrown, process the normal response
  console.log(response);
} catch (error) {
  // API errors are standard Error objects with useful properties
  console.error("API error:", error.message);
  console.error("Status code:", error.status);
  console.error("Error type:", error.errorType);
  console.error("Error details:", error.details);
}
```

For streaming mode, error handling works the same way:

```javascript
import { callAI } from 'call-ai';

try {
  const generator = await callAI("Generate some content", {
    apiKey: "invalid-key", // Invalid or missing API key
    stream: true
  });

  // Any error during streaming will throw an exception
  let finalResponse = '';
  for await (const chunk of generator) {
    finalResponse = chunk;
    console.log("Chunk:", chunk);
  }

  // Process the final response
  console.log("Final response:", finalResponse);
} catch (error) {
  // Handle errors with standard try/catch
  console.error("API error:", error.message);
  console.error("Error properties:", {
    status: error.status,
    type: error.errorType,
    details: error.details
  });
}
```

This approach is idiomatic and consistent with standard JavaScript practices. Errors provide rich information for better debugging and error handling in your applications.
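
For transient failures (timeouts, rate limits) you might additionally wrap calls in a retry helper. This is a generic sketch, not part of the call-ai API; `fn` stands in for something like `() => callAI(prompt, options)`:

```javascript
// Hypothetical helper: retry an async call with exponential backoff
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

For example: `const response = await withRetry(() => callAI("Generate some content"));`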

## Image Recognition Example

Call-AI supports image recognition using multimodal models like GPT-4o. You can pass both text and image content to analyze images in the browser:

```javascript
import { callAI } from 'call-ai';

// Analyze an image using GPT-4o
async function analyzeImage(imageFile, prompt = 'Describe this image in detail') {
  // Convert the image file to a data URL
  const dataUrl = await fileToDataUrl(imageFile);

  const content = [
    { type: 'text', text: prompt },
    { type: 'image_url', image_url: { url: dataUrl } }
  ];

  // Call the model with the multimodal content
  const result = await callAI(
    [{ role: 'user', content }],
    {
      model: 'openai/gpt-4o-2024-08-06', // Or 'openai/gpt-4o-latest'
      apiKey: window.CALLAI_API_KEY,
    }
  );

  return result;
}

// Helper function to convert a File to a data URL
async function fileToDataUrl(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}
```
package/llms/d3.json ADDED
```json
{
  "name": "d3",
  "label": "D3.js",
  "module": "d3",
  "description": "D3.js data visualization library for creating interactive charts, graphs, maps, and data-driven documents using SVG, HTML, CSS. Includes scales, selections, transitions, animations, force simulations, geographic projections, data binding, DOM manipulation, data viz, dataviz",
  "importModule": "d3",
  "importName": "d3",
  "importType": "namespace"
}
```