@nextsparkjs/plugin-ai 0.1.0-beta.1

Files changed (34)
  1. package/.env.example +79 -0
  2. package/README.md +529 -0
  3. package/api/README.md +65 -0
  4. package/api/ai-history/[id]/route.ts +112 -0
  5. package/api/embeddings/route.ts +129 -0
  6. package/api/generate/route.ts +160 -0
  7. package/docs/01-getting-started/01-introduction.md +237 -0
  8. package/docs/01-getting-started/02-installation.md +447 -0
  9. package/docs/01-getting-started/03-configuration.md +416 -0
  10. package/docs/02-features/01-text-generation.md +523 -0
  11. package/docs/02-features/02-embeddings.md +241 -0
  12. package/docs/02-features/03-ai-history.md +549 -0
  13. package/docs/03-advanced-usage/01-core-utilities.md +500 -0
  14. package/docs/04-use-cases/01-content-generation.md +453 -0
  15. package/entities/ai-history/ai-history.config.ts +123 -0
  16. package/entities/ai-history/ai-history.fields.ts +330 -0
  17. package/entities/ai-history/messages/en.json +56 -0
  18. package/entities/ai-history/messages/es.json +56 -0
  19. package/entities/ai-history/migrations/001_ai_history_table.sql +167 -0
  20. package/entities/ai-history/migrations/002_ai_history_metas.sql +103 -0
  21. package/lib/ai-history-meta-service.ts +379 -0
  22. package/lib/ai-history-service.ts +391 -0
  23. package/lib/ai-sdk.ts +7 -0
  24. package/lib/core-utils.ts +217 -0
  25. package/lib/plugin-env.ts +252 -0
  26. package/lib/sanitize.ts +122 -0
  27. package/lib/save-example.ts +237 -0
  28. package/lib/server-env.ts +104 -0
  29. package/package.json +23 -0
  30. package/plugin.config.ts +55 -0
  31. package/public/docs/login-404-error.png +0 -0
  32. package/tsconfig.json +47 -0
  33. package/tsconfig.tsbuildinfo +1 -0
  34. package/types/ai.types.ts +51 -0
package/docs/02-features/01-text-generation.md
@@ -0,0 +1,523 @@

# Text Generation

## Overview

The **Generate Endpoint** (`/api/plugin/ai/generate`) provides flexible text generation capabilities using any configured AI provider. It's a general-purpose endpoint for creating content, answering questions, analyzing text, and more.

**Key Features:**
- Multi-provider support (OpenAI, Anthropic, Ollama)
- Automatic model selection
- Cost tracking
- Token usage monitoring
- History tracking (optional)
- Error handling
- Flexible parameters

## Endpoint

```
POST /api/plugin/ai/generate
```

**Authentication:** Required (session or API key)

## Request Schema

### Basic Request

```typescript
{
  "prompt": string // Required: Your text prompt
}
```

### Full Request Options

```typescript
{
  "prompt": string,      // Required: Text prompt (1-10,000 chars)
  "model": string,       // Optional: AI model to use
  "maxTokens": number,   // Optional: Max response length (1-10,000)
  "temperature": number, // Optional: Creativity (0-1)
  "saveExample": boolean // Optional: Save to history (default: false)
}
```

### Parameter Details

**prompt** (required)
- Type: `string`
- Min length: 1 character
- Max length: 10,000 characters
- Description: Your instruction or question for the AI

**model** (optional)
- Type: `string`
- Default: From `DEFAULT_MODEL` in `.env`
- Options:
  - **Ollama:** `llama3.2:3b`, `llama3.2`, `llama3.1`, `qwen2.5`, `mistral`
  - **OpenAI:** `gpt-4o`, `gpt-4o-mini`, `gpt-3.5-turbo`
  - **Anthropic:** `claude-3-5-sonnet-20241022`, `claude-3-5-haiku-20241022`
- Description: AI model to use for generation

**maxTokens** (optional)
- Type: `number`
- Min: 1
- Max: 10,000
- Default: From `MAX_TOKENS` in `.env` (typically 2000)
- Description: Maximum tokens in response (affects cost and length)

**temperature** (optional)
- Type: `number`
- Min: 0 (deterministic, focused)
- Max: 1 (creative, varied)
- Default: From `DEFAULT_TEMPERATURE` in `.env` (typically 0.7)
- Description: Controls response randomness

**saveExample** (optional)
- Type: `boolean`
- Default: `false`
- Description: Save interaction to AI History (opt-in for examples)
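
Taken together, the request body can be described with a small TypeScript type. This is a sketch derived from the parameter list above; the interface name is illustrative (the plugin ships a `types/ai.types.ts`, which should be treated as canonical if it exports its own shapes):

```typescript
// Request body for POST /api/plugin/ai/generate.
// Sketch only: field names follow the docs above, but this interface
// is illustrative, not an export of the plugin.
interface GenerateRequest {
  prompt: string        // Required: 1-10,000 characters
  model?: string        // Defaults to DEFAULT_MODEL from .env
  maxTokens?: number    // 1-10,000; defaults to MAX_TOKENS
  temperature?: number  // 0-1; defaults to DEFAULT_TEMPERATURE
  saveExample?: boolean // Defaults to false
}
```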

## Response Format

### Success Response

```json
{
  "success": true,
  "response": "AI-generated text here...",
  "model": "llama3.2:3b",
  "provider": "ollama",
  "isLocal": true,
  "cost": 0,
  "tokens": {
    "input": 15,
    "output": 42,
    "total": 57
  },
  "userId": "user-id-here",
  "exampleSaved": false
}
```

### Response Fields

**success** - `boolean` - Always `true` on success

**response** - `string` - Generated text from AI

**model** - `string` - Model used for generation

**provider** - `string` - Provider used (`openai`, `anthropic`, `ollama`)

**isLocal** - `boolean` - `true` if using Ollama (local), `false` for cloud

**cost** - `number` - Estimated cost in USD (0 for local models)

**tokens** - `object` - Token usage breakdown
- `input` - Tokens in prompt
- `output` - Tokens in response
- `total` - Total tokens used

**userId** - `string` - ID of authenticated user

**exampleSaved** - `boolean` - Whether interaction was saved to history
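
For typed client code, the success payload can likewise be sketched as a TypeScript interface. Again, this mirrors the field list above and is illustrative rather than an official export of the plugin:

```typescript
// Success payload of the generate endpoint, mirroring the fields above.
// Illustrative sketch; check the plugin's own types for the canonical shape.
interface GenerateResponse {
  success: true
  response: string                            // generated text
  model: string                               // model actually used
  provider: 'openai' | 'anthropic' | 'ollama' // serving provider
  isLocal: boolean                            // true for Ollama
  cost: number                                // estimated USD; 0 for local models
  tokens: { input: number; output: number; total: number }
  userId: string
  exampleSaved: boolean
}
```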

### Error Response

```json
{
  "error": "Error type",
  "message": "Detailed error message"
}
```

**HTTP Status Codes:**
- `400` - Bad request (validation failed)
- `401` - Unauthorized (no session or API key)
- `404` - Model not found
- `429` - Rate limit exceeded
- `500` - Internal server error
- `503` - Service unavailable (provider down or plugin disabled)
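
In client code it is often convenient to map these status codes to user-facing messages. A minimal sketch (the wording is only an example):

```typescript
// Map generate-endpoint HTTP status codes to user-facing messages.
// The messages here are placeholders; adapt them to your app's tone.
function messageForStatus(status: number): string {
  switch (status) {
    case 400: return 'Invalid request. Check your prompt and parameters.'
    case 401: return 'Please sign in (or provide an API key) to use AI features.'
    case 404: return 'The requested model is not available.'
    case 429: return 'Rate limit reached. Please try again in a moment.'
    case 503: return 'The AI service is temporarily unavailable.'
    default:  return 'AI generation failed. Please try again.'
  }
}
```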

## Usage Examples

### Example 1: Basic Text Generation

```bash
curl -X POST http://localhost:5173/api/plugin/ai/generate \
  -H "Content-Type: application/json" \
  -H "Cookie: your-session-cookie" \
  -d '{
    "prompt": "Explain quantum computing in simple terms"
  }'
```

**Response:**
```json
{
  "success": true,
  "response": "Quantum computing is a type of computing that uses quantum-mechanical phenomena...",
  "model": "llama3.2:3b",
  "provider": "ollama",
  "isLocal": true,
  "cost": 0,
  "tokens": {
    "input": 8,
    "output": 156,
    "total": 164
  }
}
```

### Example 2: With Specific Model

```bash
curl -X POST http://localhost:5173/api/plugin/ai/generate \
  -H "Content-Type: application/json" \
  -H "Cookie: your-session-cookie" \
  -d '{
    "prompt": "Write a professional email requesting a meeting",
    "model": "gpt-4o-mini",
    "maxTokens": 300,
    "temperature": 0.7
  }'
```

### Example 3: Creative Writing

```bash
curl -X POST http://localhost:5173/api/plugin/ai/generate \
  -H "Content-Type: application/json" \
  -H "Cookie: your-session-cookie" \
  -d '{
    "prompt": "Write a short story about a robot learning to paint",
    "temperature": 0.9,
    "maxTokens": 500
  }'
```

### Example 4: Deterministic Analysis

```bash
curl -X POST http://localhost:5173/api/plugin/ai/generate \
  -H "Content-Type: application/json" \
  -H "Cookie: your-session-cookie" \
  -d '{
    "prompt": "List the pros and cons of remote work",
    "temperature": 0.2,
    "maxTokens": 400
  }'
```

### Example 5: Save to History

```bash
curl -X POST http://localhost:5173/api/plugin/ai/generate \
  -H "Content-Type: application/json" \
  -H "Cookie: your-session-cookie" \
  -d '{
    "prompt": "Generate a product description for wireless headphones",
    "saveExample": true
  }'
```

## JavaScript/TypeScript Examples

### Using Fetch API

```typescript
async function generateText(prompt: string, model?: string) {
  const response = await fetch('/api/plugin/ai/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      prompt,
      model,
      maxTokens: 500,
      temperature: 0.7
    })
  })

  if (!response.ok) {
    const error = await response.json()
    throw new Error(error.message)
  }

  const data = await response.json()
  return data.response
}

// Usage
const text = await generateText('Explain machine learning')
console.log(text)
```

### React Hook

```typescript
import { useState } from 'react'

export function useAIGenerate() {
  const [loading, setLoading] = useState(false)
  const [error, setError] = useState<string | null>(null)

  const generate = async (prompt: string, options?: {
    model?: string
    maxTokens?: number
    temperature?: number
  }) => {
    setLoading(true)
    setError(null)

    try {
      const response = await fetch('/api/plugin/ai/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt, ...options })
      })

      if (!response.ok) {
        const err = await response.json()
        throw new Error(err.message)
      }

      const data = await response.json()
      return data.response
    } catch (err) {
      const message = err instanceof Error ? err.message : 'Unknown error'
      setError(message)
      throw err
    } finally {
      setLoading(false)
    }
  }

  return { generate, loading, error }
}

// Usage in a component
function MyComponent() {
  const { generate, loading, error } = useAIGenerate()
  const [result, setResult] = useState('')

  const handleGenerate = async () => {
    try {
      const text = await generate('Write a tagline for a SaaS product')
      setResult(text)
    } catch (err) {
      console.error('Generation failed:', err)
    }
  }

  return (
    <div>
      <button onClick={handleGenerate} disabled={loading}>
        {loading ? 'Generating...' : 'Generate'}
      </button>
      {error && <p>Error: {error}</p>}
      {result && <p>{result}</p>}
    </div>
  )
}
```

## Supported Models

### Ollama (Local, Free)

| Model | Size | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| `llama3.2:3b` | 3B | ⚡⚡⚡ | ⭐⭐⭐ | Development, testing |
| `llama3.2` | 11B | ⚡⚡ | ⭐⭐⭐⭐ | General purpose |
| `llama3.1` | 8B/70B | ⚡⚡ | ⭐⭐⭐⭐ | Production quality |
| `qwen2.5` | 7B | ⚡⚡ | ⭐⭐⭐⭐ | Multilingual |
| `mistral` | 7B | ⚡⚡ | ⭐⭐⭐⭐ | European model |

**Setup:** `ollama pull llama3.2:3b`

### OpenAI (Cloud, Paid)

| Model | Context | Speed | Quality | Cost (per 1K tokens) |
|-------|---------|-------|---------|----------------------|
| `gpt-4o` | 128K | ⚡⚡ | ⭐⭐⭐⭐⭐ | $0.0025 in / $0.01 out |
| `gpt-4o-mini` | 128K | ⚡⚡⚡ | ⭐⭐⭐⭐ | $0.00015 in / $0.0006 out |
| `gpt-3.5-turbo` | 16K | ⚡⚡⚡ | ⭐⭐⭐ | $0.0005 in / $0.0015 out |

**Setup:** Set `OPENAI_API_KEY` in `.env`

### Anthropic (Cloud, Paid)

| Model | Context | Speed | Quality | Cost (per 1K tokens) |
|-------|---------|-------|---------|----------------------|
| `claude-3-5-sonnet-20241022` | 200K | ⚡⚡ | ⭐⭐⭐⭐⭐ | $0.003 in / $0.015 out |
| `claude-3-5-haiku-20241022` | 200K | ⚡⚡⚡ | ⭐⭐⭐⭐ | $0.00025 in / $0.00125 out |

**Setup:** Set `ANTHROPIC_API_KEY` in `.env`

## Cost Tracking

The endpoint automatically calculates cost from token usage:

```json
{
  "cost": 0.000135,  // $0.000135 USD
  "tokens": {
    "input": 100,    // 100 input tokens
    "output": 200    // 200 output tokens
  },
  "model": "gpt-4o-mini"
}
```

**Cost Formula:**
```
cost = (input_tokens / 1000 * input_price) + (output_tokens / 1000 * output_price)
```

**Example (GPT-4o Mini):**
```
Input:  100 tokens / 1000 × $0.00015 = $0.000015
Output: 200 tokens / 1000 × $0.0006  = $0.00012
Total:  $0.000135
```
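
The same formula in TypeScript, using the per-1K prices from the tables above. This is a sketch for rough client-side estimates only; prices change, and the `cost` field returned by the endpoint is authoritative:

```typescript
// Per-1K-token prices (USD) copied from the model tables above.
// Local Ollama models cost 0. Verify prices against your provider.
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4o':                     { input: 0.0025,  output: 0.01 },
  'gpt-4o-mini':                { input: 0.00015, output: 0.0006 },
  'gpt-3.5-turbo':              { input: 0.0005,  output: 0.0015 },
  'claude-3-5-sonnet-20241022': { input: 0.003,   output: 0.015 },
  'claude-3-5-haiku-20241022':  { input: 0.00025, output: 0.00125 },
}

// cost = (input_tokens / 1000 * input_price) + (output_tokens / 1000 * output_price)
function estimateCost(model: string, inputTokens: number, outputTokens: number): number {
  const price = PRICES[model]
  if (!price) return 0 // unknown or local model
  return (inputTokens / 1000) * price.input + (outputTokens / 1000) * price.output
}

// estimateCost('gpt-4o-mini', 100, 200) → ~0.000135
```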

## Error Handling

### Common Errors

**1. Authentication Error**
```json
{
  "error": "Authentication required"
}
```
Solution: Include a session cookie or API key.

**2. Provider Not Configured**
```json
{
  "error": "OpenAI authentication failed",
  "message": "Check your OPENAI_API_KEY in contents/plugins/ai/.env"
}
```
Solution: Add the API key to your `.env` file.

**3. Ollama Connection Failed**
```json
{
  "error": "Ollama connection failed",
  "message": "Make sure Ollama is running (ollama serve)"
}
```
Solution: Start the Ollama service.

**4. Model Not Found**
```json
{
  "error": "Model not found",
  "message": "The specified model is not available or not installed"
}
```
Solution: For Ollama, run `ollama pull model-name`.

**5. Rate Limit**
```json
{
  "error": "Rate limit exceeded",
  "message": "API rate limit reached. Try again later."
}
```
Solution: Wait and retry, or upgrade your provider tier.
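
For transient failures such as `429` (and often `503`), a small retry-with-backoff wrapper is a common client-side pattern. A sketch, not part of the plugin:

```typescript
// Retry the generate call on 429/503 with exponential backoff (2s, 4s, ...).
// Sketch only: tune the attempt count and delays to your rate limits.
async function generateWithRetry(prompt: string, maxAttempts = 3): Promise<string> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch('/api/plugin/ai/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt })
    })

    if (res.ok) {
      const data = await res.json()
      return data.response
    }

    // Retry only transient errors; fail fast on everything else.
    if ((res.status === 429 || res.status === 503) && attempt < maxAttempts) {
      await new Promise(resolve => setTimeout(resolve, 2 ** attempt * 1000))
      continue
    }

    const err = await res.json()
    throw new Error(err.message ?? 'AI generation failed')
  }
  throw new Error('unreachable') // satisfies TypeScript's control-flow analysis
}
```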

## Best Practices

### 1. Choose an Appropriate Model

```typescript
// Development: use free local models
const devPrompt = {
  prompt: "test",
  model: "llama3.2:3b"
}

// Production: use quality cloud models for customer-facing features
const prodPrompt = {
  prompt: userInput,
  model: "gpt-4o-mini" // or claude-3-5-haiku-20241022
}
```

### 2. Optimize Token Usage

```typescript
// Set an appropriate max token count
{
  prompt: "Write a short tagline",
  maxTokens: 50 // don't use 2000 for short outputs
}

// Be concise in prompts
{
  prompt: "List 3 benefits of X", // clear, specific
  // Not: "Can you please help me understand what the benefits might be..."
}
```

### 3. Use Temperature Wisely

```typescript
// Deterministic tasks (analysis, extraction)
{
  prompt: "Extract key points from this text",
  temperature: 0.2
}

// Creative tasks (writing, brainstorming)
{
  prompt: "Write a creative story",
  temperature: 0.9
}

// Balanced (most use cases)
{
  prompt: "Generate a product description",
  temperature: 0.7
}
```

### 4. Handle Errors Gracefully

```typescript
try {
  const result = await fetch('/api/plugin/ai/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt })
  })

  if (!result.ok) {
    const error = await result.json()
    // Show a user-friendly message (showError is your app's notification helper)
    showError('AI generation failed. Please try again.')
    // Log details for debugging
    console.error('AI Error:', error)
    return
  }

  const data = await result.json()
  return data.response
} catch (err) {
  // Network error
  showError('Connection failed. Check your internet connection.')
}
```

## Next Steps

- **[Embeddings](./02-embeddings.md)** - Generate semantic embeddings
- **[AI History](./03-ai-history.md)** - Track AI operations
- **[API Reference](../03-api-reference/02-generate-endpoint.md)** - Detailed API docs
- **[Custom Endpoints](../04-advanced-usage/02-custom-endpoints.md)** - Build your own