@vibes.diy/prompts 0.20.4-dev-push → 0.20.5-dev-cli-stable-entry

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/llms/callai.txt DELETED
@@ -1,352 +0,0 @@
# CallAI Helper Function

The `callAI` helper function provides an easy way to make AI requests to OpenAI-compatible model providers.

## Basic Usage

By default the function returns a Promise that resolves to the complete response:

```javascript
import { callAI } from 'call-ai';

// Default behavior - returns a Promise<string>
const response = await callAI("Write a haiku");

// Use the complete response directly
console.log(response); // Complete response text
```

## Streaming Mode

If you prefer to receive the response incrementally as it's generated, set `stream: true`. This returns an AsyncGenerator, which must itself be awaited:

```javascript
import { callAI } from 'call-ai';

// Enable streaming mode explicitly - returns an AsyncGenerator
const generator = await callAI("Write an epic poem", { stream: true });

// Process the streaming response
for await (const partialResponse of generator) {
  console.log(partialResponse); // Updates incrementally
}
```

## JSON Schema Responses

To get structured JSON responses, provide a schema in the options:

```javascript
import { callAI } from 'call-ai';

const todoResponse = await callAI("Give me a todo list for learning React", {
  schema: {
    name: "todo", // Optional - defaults to "result" if not provided
    properties: {
      todos: {
        type: "array",
        items: { type: "string" }
      }
    }
  }
});

const todoData = JSON.parse(todoResponse);
console.log(todoData.todos); // Array of todo items
```

## JSON with Streaming

In this example, we use the `callAI` helper to get weather data in a structured format, previewing the stream as it arrives:

```javascript
import { callAI } from 'call-ai';

// Get weather data with streaming updates
const generator = await callAI("What's the weather like in Paris today?", {
  stream: true,
  schema: {
    properties: {
      location: {
        type: "string",
        description: "City or location name"
      },
      temperature: {
        type: "number",
        description: "Temperature in Celsius"
      },
      conditions: {
        type: "string",
        description: "Weather conditions description"
      }
    }
  }
});

// Preview streaming updates as they arrive; don't parse until the end
const resultElement = document.getElementById('result');
let finalResponse;

for await (const partialResponse of generator) {
  resultElement.textContent = partialResponse;
  finalResponse = partialResponse;
}

// Parse the final result
try {
  const weatherData = JSON.parse(finalResponse);

  // Access individual fields
  const { location, temperature, conditions } = weatherData;

  // Update UI with formatted data
  document.getElementById('location').textContent = location;
  document.getElementById('temperature').textContent = `${temperature}°C`;
  document.getElementById('conditions').textContent = conditions;
} catch (error) {
  console.error("Failed to parse response:", error);
}
```

### Schema Structure Recommendations

1. **Flat schemas perform better across all models.** If you need maximum compatibility, avoid deeply nested structures.

2. **Field names matter.** Some models have preferences for certain property naming patterns:
   - Use simple, common naming patterns like `name`, `type`, `items`, `price`
   - Avoid deeply nested object hierarchies (more than 2 levels deep)
   - Keep array items simple (strings or flat objects)
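
As a sketch of these guidelines (the field names are invented for illustration), compare a deeply nested schema with a flatter equivalent that encodes the hierarchy in prefixed field names:

```javascript
// Harder for some models: objects nested more than two levels deep
// (hypothetical fields, for illustration only)
const nestedSchema = {
  properties: {
    order: {
      type: "object",
      properties: {
        customer: {
          type: "object",
          properties: {
            name: { type: "string" },
            address: {
              type: "object",
              properties: { city: { type: "string" } }
            }
          }
        }
      }
    }
  }
};

// Friendlier flat equivalent: prefix field names instead of nesting
const flatSchema = {
  properties: {
    customer_name: { type: "string" },
    customer_city: { type: "string" }
  }
};
```

Both describe the same data; the flat version keeps every field one level deep, which tends to work across more models.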

## Specifying a Model

By default, the function uses `openrouter/auto` (automatic model selection). You can specify a different model:

```javascript
import { callAI } from 'call-ai';

// Use a specific model via options
const response = await callAI(
  "Explain quantum computing in simple terms",
  { model: "openai/gpt-4o" }
);

console.log(response);
```

## Additional Options

You can pass extra parameters to customize the request:

```javascript
import { callAI } from 'call-ai';

const response = await callAI(
  "Write a creative story",
  {
    model: "anthropic/claude-3-opus",
    temperature: 0.8, // Higher for more creativity (0-1)
    max_tokens: 1000, // Limit response length
    top_p: 0.95       // Control randomness
  }
);

console.log(response);
```

## Message History

For multi-turn conversations, you can pass an array of messages:

```javascript
import { callAI } from 'call-ai';

// Create a conversation
const messages = [
  { role: "system", content: "You are a helpful coding assistant." },
  { role: "user", content: "How do I use React hooks?" },
  { role: "assistant", content: "React hooks are functions that let you use state and other React features in functional components..." },
  { role: "user", content: "Can you show me an example of useState?" }
];

// Pass the entire conversation history
const response = await callAI(messages);
console.log(response);

// To continue the conversation, add the new response and send again
messages.push({ role: "assistant", content: response });
messages.push({ role: "user", content: "What about useEffect?" });

// Call again with updated history
const nextResponse = await callAI(messages);
console.log(nextResponse);
```

## Recommended Models

| Model | Best For | Speed vs Quality |
|-------|----------|------------------|
| `openrouter/auto` | Default, automatic selection | Adaptive |
| `openai/gpt-4o-mini` | Data generation | Fast, good quality |
| `anthropic/claude-3-haiku` | Cost-effective tasks | Fast, good quality |
| `openai/gpt-4o` | Best overall quality | Medium speed, highest quality |
| `anthropic/claude-3-opus` | Complex reasoning | Slower, highest quality |
| `mistralai/mistral-large` | Open-weights alternative | Good balance |
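
The table above can be reduced to a small lookup when you want to pick a model per task. `pickModel` is a hypothetical helper, not part of the call-ai API, and the task labels are invented for this sketch:

```javascript
// Hypothetical helper mapping a task profile to a model id from the table.
function pickModel(task) {
  const models = {
    default: "openrouter/auto",           // adaptive automatic selection
    data: "openai/gpt-4o-mini",           // fast structured-data generation
    cheap: "anthropic/claude-3-haiku",    // cost-effective
    best: "openai/gpt-4o",                // best overall quality
    reasoning: "anthropic/claude-3-opus"  // complex reasoning
  };
  return models[task] ?? models.default;
}

// Example: const response = await callAI(prompt, { model: pickModel("data") });
```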

## Items with lists

```javascript
import { callAI } from 'call-ai';

const generator = await callAI([
  {
    role: "user",
    content: "Generate 3 JSON records with name, description, tags, and priority (0 is highest, 5 is lowest)."
  }
], {
  stream: true,
  schema: {
    properties: {
      records: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            description: { type: "string" },
            tags: {
              type: "array",
              items: { type: "string" }
            },
            priority: { type: "integer" }
          }
        }
      }
    }
  }
});

// Keep the latest chunk; the last one holds the complete JSON
let finalResponse;
for await (const partialResponse of generator) {
  console.log(partialResponse);
  finalResponse = partialResponse;
}

const recordData = JSON.parse(finalResponse);
console.log(recordData.records); // Array of records
```

## Items with properties

```javascript
import { callAI } from 'call-ai';

const demoData = await callAI("Generate 4 items with label, status, priority (low, medium, high, critical), and notes. Return as structured JSON with these fields.", {
  schema: {
    properties: {
      items: {
        type: "array",
        items: {
          type: "object",
          properties: {
            label: { type: "string" },
            status: { type: "string" },
            priority: { type: "string" },
            notes: { type: "string" }
          }
        }
      }
    }
  }
});
```

## Error Handling

Errors are handled through standard JavaScript try/catch blocks:

```javascript
import { callAI } from 'call-ai';

try {
  const response = await callAI("Generate some content", {
    apiKey: "invalid-key" // Invalid or missing API key
  });

  // If no error was thrown, process the normal response
  console.log(response);
} catch (error) {
  // API errors are standard Error objects with useful properties
  console.error("API error:", error.message);
  console.error("Status code:", error.status);
  console.error("Error type:", error.errorType);
  console.error("Error details:", error.details);
}
```

For streaming mode, error handling works the same way:

```javascript
import { callAI } from 'call-ai';

try {
  const generator = await callAI("Generate some content", {
    apiKey: "invalid-key", // Invalid or missing API key
    stream: true
  });

  // Any error during streaming will throw an exception
  let finalResponse = '';
  for await (const chunk of generator) {
    finalResponse = chunk;
    console.log("Chunk:", chunk);
  }

  // Process the final response
  console.log("Final response:", finalResponse);
} catch (error) {
  // Handle errors with standard try/catch
  console.error("API error:", error.message);
  console.error("Error properties:", {
    status: error.status,
    type: error.errorType,
    details: error.details
  });
}
```

This approach is idiomatic and consistent with standard JavaScript practices. Errors provide rich information for better debugging and error handling in your applications.
315
- ## Image Recognition Example
316
-
317
- Call-AI supports image recognition using multimodal models like GPT-4o. You can pass both text and image content to analyze images in the browser:
318
-
319
- ```javascript
320
- import { callAI } from 'call-ai';
321
-
322
- // Function to analyze an image using GPT-4o
323
- async function analyzeImage(imageFile, prompt = 'Describe this image in detail') {
324
- // Convert the image file to a data URL
325
- const dataUrl = await fileToDataUrl(imageFile);
326
-
327
- const content = [
328
- { type: 'text', text: prompt },
329
- { type: 'image_url', image_url: { url: dataUrl } }
330
- ];
331
-
332
- // Call the model with the multimodal content
333
- const result = await callAI(
334
- [{ role: 'user', content }],
335
- {
336
- model: 'openai/gpt-4o-2024-08-06', // Or 'openai/gpt-4o-latest'
337
- apiKey: window.CALLAI_API_KEY,
338
- }
339
- );
340
-
341
- return result;
342
- }
343
-
344
- // Helper function to convert File to data URL
345
- async function fileToDataUrl(file) {
346
- return new Promise((resolve) => {
347
- const reader = new FileReader();
348
- reader.onloadend = () => resolve(reader.result);
349
- reader.readAsDataURL(file);
350
- });
351
- }
352
- ```