opencode-smart-voice-notify 1.1.2 → 1.2.0

package/README.md CHANGED
@@ -35,6 +35,13 @@ The plugin automatically tries multiple TTS engines in order, falling back if on
  - **Permission Batching**: Multiple simultaneous permission requests are batched into a single notification (e.g., "5 permission requests require your attention")
  - **Question Tool Support** (SDK v1.1.7+): Notifies when the agent asks questions and needs user input

+ ### AI-Generated Messages (Experimental)
+ - **Dynamic notifications**: Use a local AI to generate unique, contextual messages instead of preset static ones
+ - **OpenAI-compatible**: Works with Ollama, LM Studio, LocalAI, vLLM, llama.cpp, Jan.ai, or any OpenAI-compatible endpoint
+ - **User-hosted**: You provide your own AI endpoint - no cloud API keys required
+ - **Custom prompts**: Configure prompts per notification type for full control over AI personality
+ - **Smart fallback**: Automatically falls back to static messages if AI is unavailable
+
  ### System Integration
  - **Native Edge TTS**: No external dependencies (Python/pip) required
  - Wake monitor from sleep before notifying
@@ -383,8 +390,40 @@ If you prefer to create the config manually, add a `smart-voice-notify.jsonc` fi
  }
  ```

- See `example.config.jsonc` for more details.
-
+ See `example.config.jsonc` for more details.
+
+ ### AI Message Generation (Optional)
+
+ If you want dynamic, AI-generated notification messages instead of preset ones, you can connect to a local AI server:
+
+ 1. **Install a local AI server** (e.g., [Ollama](https://ollama.ai)):
+    ```bash
+    # Install Ollama and pull a model
+    ollama pull llama3
+    ```
+
+ 2. **Enable AI messages in your config**:
+    ```jsonc
+    {
+      "enableAIMessages": true,
+      "aiEndpoint": "http://localhost:11434/v1",
+      "aiModel": "llama3",
+      "aiApiKey": "",
+      "aiFallbackToStatic": true
+    }
+    ```
+
+ 3. **The AI will generate unique messages** for each notification, which are then spoken by your TTS engine.
+
+ **Supported AI Servers:**
+ | Server | Default Endpoint | API Key |
+ |--------|-----------------|---------|
+ | Ollama | `http://localhost:11434/v1` | Not needed |
+ | LM Studio | `http://localhost:1234/v1` | Not needed |
+ | LocalAI | `http://localhost:8080/v1` | Not needed |
+ | vLLM | `http://localhost:8000/v1` | Use "EMPTY" |
+ | Jan.ai | `http://localhost:1337/v1` | Required |
+
  ## Requirements

  ### For ElevenLabs TTS
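
Before flipping `enableAIMessages` on, it is worth confirming that the endpoint actually speaks the OpenAI-compatible API. Below is a minimal standalone sketch, not part of the package: it assumes an Ollama server on its default port and Node 18+ for built-in `fetch`, and mirrors the `/models` probe that the new `util/ai-messages.js` module (shown later in this diff) performs:

```js
// check-endpoint.mjs - hypothetical one-off script, not shipped with the plugin.
// Queries the OpenAI-compatible /models route, the same probe testAIConnection() uses.
const endpoint = 'http://localhost:11434/v1';

const response = await fetch(`${endpoint.replace(/\/$/, '')}/models`);
if (!response.ok) {
  throw new Error(`Endpoint responded with HTTP ${response.status}`);
}
const { data } = await response.json();
console.log('Available models:', data.map((m) => m.id).join(', '));
```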
package/example.config.jsonc CHANGED
@@ -250,6 +250,71 @@
  // Question batch window (ms) - how long to wait for more questions before notifying
  "questionBatchWindowMs": 800,

+ // ============================================================
+ // AI MESSAGE GENERATION (OpenAI-Compatible Endpoints)
+ // ============================================================
+ // Use a local/self-hosted AI to generate dynamic notification messages
+ // instead of using preset static messages. The AI generates the text,
+ // which is then spoken by your configured TTS engine (ElevenLabs, Edge, etc.)
+ //
+ // Supports: Ollama, LM Studio, LocalAI, vLLM, llama.cpp, Jan.ai, and any
+ // OpenAI-compatible endpoint. You provide your own endpoint URL and API key.
+ //
+ // HOW IT WORKS:
+ // 1. When a notification is triggered (task complete, permission needed, etc.)
+ // 2. If AI is enabled, the plugin sends a prompt to your AI server
+ // 3. The AI generates a unique, contextual notification message
+ // 4. That message is spoken by your TTS engine (ElevenLabs, Edge, SAPI)
+ // 5. If AI fails, it falls back to the static messages defined above
+
+ // Enable AI-generated messages (experimental feature)
+ // Default: false (uses static messages defined above)
+ "enableAIMessages": false,
+
+ // Your AI server endpoint URL
+ // Common local AI servers and their default endpoints:
+ //   Ollama:          http://localhost:11434/v1
+ //   LM Studio:       http://localhost:1234/v1
+ //   LocalAI:         http://localhost:8080/v1
+ //   vLLM:            http://localhost:8000/v1
+ //   llama.cpp:       http://localhost:8080/v1
+ //   Jan.ai:          http://localhost:1337/v1
+ //   text-gen-webui:  http://localhost:5000/v1
+ "aiEndpoint": "http://localhost:11434/v1",
+
+ // Model name to use (must match a model loaded in your AI server)
+ // Examples for Ollama: "llama3", "llama3.2", "mistral", "phi3", "gemma2", "qwen2"
+ // For LM Studio: Use the model name shown in the UI
+ "aiModel": "llama3",
+
+ // API key for your AI server
+ // Most local servers (Ollama, LM Studio, LocalAI) don't require a key - leave empty
+ // Only set this if your server requires authentication
+ // For vLLM with auth disabled, use "EMPTY"
+ "aiApiKey": "",
+
+ // Request timeout in milliseconds
+ // Local AI can be slow on first request (model loading), so 15 seconds is recommended
+ // Increase if you have a slower machine or larger models
+ "aiTimeout": 15000,
+
+ // Fall back to static messages (defined above) if AI generation fails
+ // Recommended: true - ensures notifications always work even if AI is down
+ "aiFallbackToStatic": true,
+
+ // Custom prompts for each notification type
+ // You can customize these to change the AI's personality/style
+ // The AI will generate a short message based on these prompts
+ // TIP: Keep prompts concise - they're sent with each notification
+ "aiPrompts": {
+   "idle": "Generate a single brief, friendly notification sentence (max 15 words) saying a coding task is complete. Be encouraging and warm. Output only the message, no quotes.",
+   "permission": "Generate a single brief, urgent but friendly notification sentence (max 15 words) asking the user to approve a permission request. Output only the message, no quotes.",
+   "question": "Generate a single brief, polite notification sentence (max 15 words) saying the assistant has a question and needs user input. Output only the message, no quotes.",
+   "idleReminder": "Generate a single brief, gentle reminder sentence (max 15 words) that a completed task is waiting for review. Be slightly more insistent. Output only the message, no quotes.",
+   "permissionReminder": "Generate a single brief, urgent reminder sentence (max 15 words) that permission approval is still needed. Convey importance. Output only the message, no quotes.",
+   "questionReminder": "Generate a single brief, polite but persistent reminder sentence (max 15 words) that a question is still waiting for an answer. Output only the message, no quotes."
+ },
+
  // ============================================================
  // SOUND FILES (For immediate notifications)
  // These are played first before TTS reminder kicks in
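
For reference, the HOW IT WORKS steps above amount to one OpenAI-style chat-completions call per notification. A minimal sketch of that request for the `idle` prompt, using this config's defaults; the system prompt and sampling parameters are copied verbatim from `util/ai-messages.js` (shown later in this diff), and the rest is a plain Node 18+ script, not part of the plugin:

```js
// Hypothetical standalone reproduction of the plugin's request.
const response = await fetch('http://localhost:11434/v1/chat/completions', {
  method: 'POST',
  // No Authorization header: aiApiKey is empty in the defaults above.
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3',
    messages: [
      { role: 'system', content: 'You are a helpful assistant that generates short notification messages. Output only the message text, nothing else. No quotes, no explanations.' },
      { role: 'user', content: 'Generate a single brief, friendly notification sentence (max 15 words) saying a coding task is complete. Be encouraging and warm. Output only the message, no quotes.' }
    ],
    max_tokens: 1000,
    temperature: 0.7
  })
});
const data = await response.json();
console.log(data.choices?.[0]?.message?.content?.trim());
```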
package/index.js CHANGED
@@ -2,6 +2,7 @@ import fs from 'fs';
  import os from 'os';
  import path from 'path';
  import { createTTS, getTTSConfig } from './util/tts.js';
+ import { getSmartMessage } from './util/ai-messages.js';

  /**
   * OpenCode Smart Voice Notify Plugin
@@ -251,11 +252,11 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  const storedCount = reminder?.itemCount || 1;
  let reminderMessage;
  if (type === 'permission') {
-   reminderMessage = getPermissionMessage(storedCount, true);
+   reminderMessage = await getPermissionMessage(storedCount, true);
  } else if (type === 'question') {
-   reminderMessage = getQuestionMessage(storedCount, true);
+   reminderMessage = await getQuestionMessage(storedCount, true);
  } else {
-   reminderMessage = getRandomMessage(config.idleReminderTTSMessages);
+   reminderMessage = await getSmartMessage('idle', true, config.idleReminderTTSMessages);
  }

  // Check for ElevenLabs API key configuration issues
@@ -305,11 +306,11 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  const followUpStoredCount = followUpReminder?.itemCount || 1;
  let followUpMessage;
  if (type === 'permission') {
-   followUpMessage = getPermissionMessage(followUpStoredCount, true);
+   followUpMessage = await getPermissionMessage(followUpStoredCount, true);
  } else if (type === 'question') {
-   followUpMessage = getQuestionMessage(followUpStoredCount, true);
+   followUpMessage = await getQuestionMessage(followUpStoredCount, true);
  } else {
-   followUpMessage = getRandomMessage(config.idleReminderTTSMessages);
+   followUpMessage = await getSmartMessage('idle', true, config.idleReminderTTSMessages);
  }

  await tts.wakeMonitor();
@@ -391,11 +392,11 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  if (config.notificationMode === 'tts-first' || config.notificationMode === 'both') {
    let immediateMessage;
    if (type === 'permission') {
-     immediateMessage = getRandomMessage(config.permissionTTSMessages);
+     immediateMessage = await getSmartMessage('permission', false, config.permissionTTSMessages);
    } else if (type === 'question') {
-     immediateMessage = getRandomMessage(config.questionTTSMessages);
+     immediateMessage = await getSmartMessage('question', false, config.questionTTSMessages);
    } else {
-     immediateMessage = getRandomMessage(config.idleTTSMessages);
+     immediateMessage = await getSmartMessage('idle', false, config.idleTTSMessages);
    }

    await tts.speak(immediateMessage, {
@@ -407,18 +408,19 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc

  /**
   * Get a count-aware TTS message for permission requests
+  * Uses AI generation when enabled, falls back to static messages
   * @param {number} count - Number of permission requests
   * @param {boolean} isReminder - Whether this is a reminder message
-  * @returns {string} The formatted message
+  * @returns {Promise<string>} The formatted message
   */
- const getPermissionMessage = (count, isReminder = false) => {
+ const getPermissionMessage = async (count, isReminder = false) => {
    const messages = isReminder
      ? config.permissionReminderTTSMessages
      : config.permissionTTSMessages;

    if (count === 1) {
-     // Single permission - use regular message
-     return getRandomMessage(messages);
+     // Single permission - use smart message (AI or static fallback)
+     return await getSmartMessage('permission', isReminder, messages, { count });
    } else {
      // Multiple permissions - use count-aware messages if available, or format dynamically
      const countMessages = isReminder
@@ -430,7 +432,11 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
      const template = getRandomMessage(countMessages);
      return template.replace('{count}', count.toString());
    } else {
-     // Fallback: generate a dynamic message
+     // Try AI message with count context, fallback to dynamic message
+     const aiMessage = await getSmartMessage('permission', isReminder, [], { count });
+     if (aiMessage !== 'Notification') {
+       return aiMessage;
+     }
      return `Attention! There are ${count} permission requests waiting for your approval.`;
    }
  }
@@ -438,18 +444,19 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc

  /**
   * Get a count-aware TTS message for question requests (SDK v1.1.7+)
+  * Uses AI generation when enabled, falls back to static messages
   * @param {number} count - Number of question requests
   * @param {boolean} isReminder - Whether this is a reminder message
-  * @returns {string} The formatted message
+  * @returns {Promise<string>} The formatted message
   */
- const getQuestionMessage = (count, isReminder = false) => {
+ const getQuestionMessage = async (count, isReminder = false) => {
    const messages = isReminder
      ? config.questionReminderTTSMessages
      : config.questionTTSMessages;

    if (count === 1) {
-     // Single question - use regular message
-     return getRandomMessage(messages);
+     // Single question - use smart message (AI or static fallback)
+     return await getSmartMessage('question', isReminder, messages, { count });
    } else {
      // Multiple questions - use count-aware messages if available, or format dynamically
      const countMessages = isReminder
@@ -461,7 +468,11 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
      const template = getRandomMessage(countMessages);
      return template.replace('{count}', count.toString());
    } else {
-     // Fallback: generate a dynamic message
+     // Try AI message with count context, fallback to dynamic message
+     const aiMessage = await getSmartMessage('question', isReminder, [], { count });
+     if (aiMessage !== 'Notification') {
+       return aiMessage;
+     }
      return `Hey! I have ${count} questions for you. Please check your screen.`;
    }
  }
@@ -508,8 +519,8 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  }

  // Get count-aware TTS message
- const ttsMessage = getPermissionMessage(batchCount, false);
- const reminderMessage = getPermissionMessage(batchCount, true);
+ const ttsMessage = await getPermissionMessage(batchCount, false);
+ const reminderMessage = await getPermissionMessage(batchCount, true);

  // Smart notification: sound first, TTS reminder later
  await smartNotify('permission', {
@@ -582,8 +593,8 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  }

  // Get count-aware TTS message (uses total question count, not request count)
- const ttsMessage = getQuestionMessage(totalQuestionCount, false);
- const reminderMessage = getQuestionMessage(totalQuestionCount, true);
+ const ttsMessage = await getQuestionMessage(totalQuestionCount, false);
+ const reminderMessage = await getQuestionMessage(totalQuestionCount, true);

  // Smart notification: sound first, TTS reminder later
  // Sound plays 2 times by default (matching permission behavior)
@@ -755,11 +766,14 @@ export default async function SmartVoiceNotifyPlugin({ project, client, $, direc
  debugLog(`session.idle: notifying for session ${sessionID} (idleTime=${lastSessionIdleTime})`);
  await showToast("✅ Agent has finished working", "success", 5000);

+ // Get smart message for idle notification (AI or static fallback)
+ const idleTtsMessage = await getSmartMessage('idle', false, config.idleTTSMessages);
+
  // Smart notification: sound first, TTS reminder later
  await smartNotify('idle', {
    soundFile: config.idleSound,
    soundLoops: 1,
-   ttsMessage: getRandomMessage(config.idleTTSMessages),
+   ttsMessage: idleTtsMessage,
    fallbackSound: config.idleSound
  });
  }
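
Two details in the count-aware helpers above are easy to miss. Batched notifications fill a `{count}` placeholder in user-supplied templates by plain string replacement, and `getSmartMessage` returns the literal string `'Notification'` when AI is unavailable and no static list was passed, which the callers treat as a sentinel before emitting their own hard-coded fallback. A small sketch of the template path follows; the template text is a hypothetical config value:

```js
// Mirrors the template branch of getPermissionMessage above.
const countMessages = ['Attention! {count} permission requests need your approval.']; // hypothetical
const count = 5;

// getRandomMessage picks one template at random; replicated inline here.
const template = countMessages[Math.floor(Math.random() * countMessages.length)];
console.log(template.replace('{count}', count.toString()));
// -> "Attention! 5 permission requests need your approval."
```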
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "opencode-smart-voice-notify",
-   "version": "1.1.2",
+   "version": "1.2.0",
    "description": "Smart voice notification plugin for OpenCode with multiple TTS engines (ElevenLabs, Edge TTS, Windows SAPI) and intelligent reminder system",
    "main": "index.js",
    "type": "module",
package/util/ai-messages.js ADDED
@@ -0,0 +1,207 @@
+ /**
+  * AI Message Generation Module
+  *
+  * Generates dynamic notification messages using OpenAI-compatible AI endpoints.
+  * Supports: Ollama, LM Studio, LocalAI, vLLM, llama.cpp, Jan.ai, etc.
+  *
+  * Uses native fetch() - no external dependencies required.
+  */
+
+ import { getTTSConfig } from './tts.js';
+
+ /**
+  * Generate a message using an OpenAI-compatible AI endpoint
+  * @param {string} promptType - The type of prompt ('idle', 'permission', 'question', 'idleReminder', 'permissionReminder', 'questionReminder')
+  * @param {object} context - Optional context about the notification (for future use)
+  * @returns {Promise<string|null>} Generated message or null if failed
+  */
+ export async function generateAIMessage(promptType, context = {}) {
+   const config = getTTSConfig();
+
+   // Check if AI messages are enabled
+   if (!config.enableAIMessages) {
+     return null;
+   }
+
+   // Get the prompt for this type
+   const prompt = config.aiPrompts?.[promptType];
+   if (!prompt) {
+     console.error(`[AI Messages] No prompt configured for type: ${promptType}`);
+     return null;
+   }
+
+   try {
+     // Build headers
+     const headers = { 'Content-Type': 'application/json' };
+     if (config.aiApiKey) {
+       headers['Authorization'] = `Bearer ${config.aiApiKey}`;
+     }
+
+     // Build endpoint URL (ensure it ends with /chat/completions)
+     let endpoint = config.aiEndpoint || 'http://localhost:11434/v1';
+     if (!endpoint.endsWith('/chat/completions')) {
+       endpoint = endpoint.replace(/\/$/, '') + '/chat/completions';
+     }
+
+     // Create abort controller for timeout
+     const controller = new AbortController();
+     const timeout = setTimeout(() => controller.abort(), config.aiTimeout || 15000);
+
+     // Make the request
+     const response = await fetch(endpoint, {
+       method: 'POST',
+       headers,
+       signal: controller.signal,
+       body: JSON.stringify({
+         model: config.aiModel || 'llama3',
+         messages: [
+           {
+             role: 'system',
+             content: 'You are a helpful assistant that generates short notification messages. Output only the message text, nothing else. No quotes, no explanations.'
+           },
+           {
+             role: 'user',
+             content: prompt
+           }
+         ],
+         max_tokens: 1000, // High value to accommodate thinking models (e.g., Gemini 2.5) that use internal reasoning tokens
+         temperature: 0.7
+       })
+     });
+
+     clearTimeout(timeout);
+
+     if (!response.ok) {
+       const errorText = await response.text().catch(() => 'Unknown error');
+       console.error(`[AI Messages] API error ${response.status}: ${errorText}`);
+       return null;
+     }
+
+     const data = await response.json();
+
+     // Extract the message content
+     const message = data.choices?.[0]?.message?.content?.trim();
+
+     if (!message) {
+       console.error('[AI Messages] Empty response from AI');
+       return null;
+     }
+
+     // Clean up the message (remove quotes if AI added them)
+     let cleanMessage = message.replace(/^["']|["']$/g, '').trim();
+
+     // Validate message length (sanity check)
+     if (cleanMessage.length < 5 || cleanMessage.length > 200) {
+       console.error(`[AI Messages] Message length invalid: ${cleanMessage.length} chars`);
+       return null;
+     }
+
+     return cleanMessage;
+
+   } catch (error) {
+     if (error.name === 'AbortError') {
+       console.error(`[AI Messages] Request timed out after ${config.aiTimeout || 15000}ms`);
+     } else {
+       console.error(`[AI Messages] Error: ${error.message}`);
+     }
+     return null;
+   }
+ }
+
+ /**
+  * Get a smart message - tries AI first, falls back to static messages
+  * @param {string} eventType - 'idle', 'permission', 'question'
+  * @param {boolean} isReminder - Whether this is a reminder message
+  * @param {string[]} staticMessages - Array of static fallback messages
+  * @param {object} context - Optional context (e.g., { count: 3 } for batched notifications)
+  * @returns {Promise<string>} The message to speak
+  */
+ export async function getSmartMessage(eventType, isReminder, staticMessages, context = {}) {
+   const config = getTTSConfig();
+
+   // Determine the prompt type
+   const promptType = isReminder ? `${eventType}Reminder` : eventType;
+
+   // Try AI generation if enabled
+   if (config.enableAIMessages) {
+     try {
+       const aiMessage = await generateAIMessage(promptType, context);
+       if (aiMessage) {
+         // Log success for debugging
+         if (config.debugLog) {
+           console.log(`[AI Messages] Generated: ${aiMessage}`);
+         }
+         return aiMessage;
+       }
+     } catch (error) {
+       console.error(`[AI Messages] Generation failed: ${error.message}`);
+     }
+
+     // Check if fallback is disabled
+     if (!config.aiFallbackToStatic) {
+       // Return a generic message if fallback disabled and AI failed
+       return 'Notification: Please check your screen.';
+     }
+   }
+
+   // Fallback to static messages
+   if (!Array.isArray(staticMessages) || staticMessages.length === 0) {
+     return 'Notification';
+   }
+
+   return staticMessages[Math.floor(Math.random() * staticMessages.length)];
+ }
+
+ /**
+  * Test connectivity to the AI endpoint
+  * @returns {Promise<{success: boolean, message: string, models?: string[]}>}
+  */
+ export async function testAIConnection() {
+   const config = getTTSConfig();
+
+   if (!config.enableAIMessages) {
+     return { success: false, message: 'AI messages not enabled' };
+   }
+
+   try {
+     const headers = { 'Content-Type': 'application/json' };
+     if (config.aiApiKey) {
+       headers['Authorization'] = `Bearer ${config.aiApiKey}`;
+     }
+
+     // Try to list models (simpler endpoint to test connectivity)
+     let endpoint = config.aiEndpoint || 'http://localhost:11434/v1';
+     endpoint = endpoint.replace(/\/$/, '') + '/models';
+
+     const controller = new AbortController();
+     const timeout = setTimeout(() => controller.abort(), 5000);
+
+     const response = await fetch(endpoint, {
+       method: 'GET',
+       headers,
+       signal: controller.signal
+     });
+
+     clearTimeout(timeout);
+
+     if (response.ok) {
+       const data = await response.json();
+       const models = data.data?.map(m => m.id) || [];
+       return {
+         success: true,
+         message: `Connected! Available models: ${models.slice(0, 3).join(', ')}${models.length > 3 ? '...' : ''}`,
+         models
+       };
+     } else {
+       return { success: false, message: `HTTP ${response.status}: ${response.statusText}` };
+     }
+
+   } catch (error) {
+     if (error.name === 'AbortError') {
+       return { success: false, message: 'Connection timed out' };
+     }
+     return { success: false, message: error.message };
+   }
+ }
+
+ export default { generateAIMessage, getSmartMessage, testAIConnection };
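
The module also exports `testAIConnection`, which nothing in `index.js` calls yet. A minimal sketch of invoking it by hand from the package root (it assumes `enableAIMessages` is already true in your config, and Node 18+ for built-in `fetch`):

```js
// probe.mjs - hypothetical manual check, not shipped with the plugin.
import { testAIConnection } from './util/ai-messages.js';

const result = await testAIConnection();
console.log(result.success
  ? result.message // e.g. "Connected! Available models: ..."
  : `AI endpoint unreachable: ${result.message}`);
```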
package/util/config.js CHANGED
@@ -297,6 +297,54 @@ const generateDefaultConfig = (overrides = {}, version = '1.0.0') => {
  // Question batch window (ms) - how long to wait for more questions before notifying
  "questionBatchWindowMs": ${overrides.questionBatchWindowMs !== undefined ? overrides.questionBatchWindowMs : 800},

+ // ============================================================
+ // AI MESSAGE GENERATION (OpenAI-Compatible Endpoints)
+ // ============================================================
+ // Use a local/self-hosted AI to generate dynamic notification messages
+ // instead of using preset static messages. The AI generates the text,
+ // which is then spoken by your configured TTS engine (ElevenLabs, Edge, etc.)
+ //
+ // Supports: Ollama, LM Studio, LocalAI, vLLM, llama.cpp, Jan.ai, and any
+ // OpenAI-compatible endpoint. You provide your own endpoint URL and API key.
+
+ // Enable AI-generated messages (experimental feature)
+ "enableAIMessages": ${overrides.enableAIMessages !== undefined ? overrides.enableAIMessages : false},
+
+ // Your AI server endpoint URL (e.g., Ollama: http://localhost:11434/v1)
+ // Common endpoints:
+ //   Ollama:    http://localhost:11434/v1
+ //   LM Studio: http://localhost:1234/v1
+ //   LocalAI:   http://localhost:8080/v1
+ //   vLLM:      http://localhost:8000/v1
+ //   Jan.ai:    http://localhost:1337/v1
+ "aiEndpoint": "${overrides.aiEndpoint || 'http://localhost:11434/v1'}",
+
+ // Model name to use (depends on what's loaded in your AI server)
+ // Examples: "llama3", "mistral", "phi3", "gemma2", "qwen2"
+ "aiModel": "${overrides.aiModel || 'llama3'}",
+
+ // API key for your AI server (leave empty for Ollama/LM Studio/LocalAI)
+ // Only needed if your server requires authentication
+ "aiApiKey": "${overrides.aiApiKey || ''}",
+
+ // Request timeout in milliseconds (local AI can be slow on first request)
+ "aiTimeout": ${overrides.aiTimeout !== undefined ? overrides.aiTimeout : 15000},
+
+ // Fallback to static preset messages if AI generation fails
+ "aiFallbackToStatic": ${overrides.aiFallbackToStatic !== undefined ? overrides.aiFallbackToStatic : true},
+
+ // Custom prompts for each notification type
+ // The AI will generate a short message based on these prompts
+ // Keep prompts concise - they're sent with each notification
+ "aiPrompts": ${formatJSON(overrides.aiPrompts || {
+   "idle": "Generate a single brief, friendly notification sentence (max 15 words) saying a coding task is complete. Be encouraging and warm. Output only the message, no quotes.",
+   "permission": "Generate a single brief, urgent but friendly notification sentence (max 15 words) asking the user to approve a permission request. Output only the message, no quotes.",
+   "question": "Generate a single brief, polite notification sentence (max 15 words) saying the assistant has a question and needs user input. Output only the message, no quotes.",
+   "idleReminder": "Generate a single brief, gentle reminder sentence (max 15 words) that a completed task is waiting for review. Be slightly more insistent. Output only the message, no quotes.",
+   "permissionReminder": "Generate a single brief, urgent reminder sentence (max 15 words) that permission approval is still needed. Convey importance. Output only the message, no quotes.",
+   "questionReminder": "Generate a single brief, polite but persistent reminder sentence (max 15 words) that a question is still waiting for an answer. Output only the message, no quotes."
+ }, 4)},
+
  // ============================================================
  // SOUND FILES (For immediate notifications)
  // These are played first before TTS reminder kicks in
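
One note on the generator above: every new setting follows the same "explicit override wins, otherwise the shipped default is emitted" interpolation. A tiny isolated sketch of that pattern (the override value is hypothetical):

```js
// Mirrors the interpolation style generateDefaultConfig uses for aiTimeout.
const overrides = { aiTimeout: 30000 };

const line = `"aiTimeout": ${overrides.aiTimeout !== undefined ? overrides.aiTimeout : 15000},`;
console.log(line); // -> "aiTimeout": 30000,
```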