remote-opencode 1.0.8 → 1.0.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -12,9 +12,11 @@
  - 💻 **Access from any device** — Use your powerful dev machine from a laptop or tablet
  - 🌍 **Work remotely** — Control your home/office workstation from anywhere
  - 👥 **Collaborate** — Share AI coding sessions with team members in Discord
+ - 🤖 **Automated Workflows** — Queue up multiple tasks and let the bot process them sequentially
 
  ## How It Works
 
+
  ```
  ┌─────────────────┐ Discord API ┌─────────────────┐
  │ Your Phone / │ ◄──────────────► │ Discord Bot │
@@ -48,6 +50,7 @@ The bot runs on your development machine alongside OpenCode. When you send a com
  - [Configuration](#configuration)
  - [Troubleshooting](#troubleshooting)
  - [Development](#development)
+ - [Changelog](#changelog)
  - [License](#license)
 
  ---
@@ -257,20 +260,6 @@ Enable automatic worktree creation for a project. When enabled, new `/opencode`
  2. The setting toggles on/off for that project
  3. When enabled, new sessions automatically create worktrees with branch names like `auto/abc12345-1738600000000`
 
- **Example:**
- ```
- You: /autowork
- Bot: ✅ Auto-worktree enabled for project myapp.
- New sessions will automatically create isolated worktrees.
-
- You: /opencode prompt:Add user authentication
- Bot: [Creates thread + auto-worktree]
- 🌳 Auto-Worktree: auto/abc12345-1738600000000
- [Delete] [Create PR]
- 📌 Prompt: Add user authentication
- [streaming response...]
- ```
-
  **Features:**
  - 🌳 **Automatic isolation** — each session gets its own branch and worktree
  - 📱 **Mobile-friendly** — no need to type `/work` with branch names
@@ -278,8 +267,30 @@ Bot: [Creates thread + auto-worktree]
  - 🚀 **Create PR button** — easily create pull requests from worktree
  - ⚡ **Per-project setting** — enable/disable independently for each project
 
+ ### `/queue` — Manage Message Queue
+
+ Control the automated job queue for the current thread.
+
+ ```
+ /queue list
+ /queue clear
+ /queue pause
+ /queue resume
+ /queue settings continue_on_failure:True fresh_context:True
+ ```
+
+ **How it works:**
+ 1. Send multiple messages to a thread (or use `/opencode` multiple times)
+ 2. If the bot is busy, it reacts with `📥` and adds the task to the queue
+ 3. Once the current job is done, the bot automatically picks up the next one
+
+ **Settings:**
+ - `continue_on_failure`: If `True`, the bot moves on to the next task even if the current one fails.
+ - `fresh_context`: If `True` (the default), each queued task starts with a fresh chat history to keep context small, while the code state carries over.
+
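The queue behavior described above can be sketched in a few lines. This is a hedged illustration only, not the package's actual `queueManager` implementation; the `enqueue`/`processNext` names, the in-memory `Map`, and the `settings` object are assumptions made for the sketch.

```javascript
// Sketch of the described queue flow (hypothetical names, not the real module):
// messages sent while the bot is busy are queued, then drained in order.
const queues = new Map(); // threadId -> pending prompts
const settings = { paused: false, continueOnFailure: false };

function enqueue(threadId, prompt) {
  if (!queues.has(threadId)) queues.set(threadId, []);
  queues.get(threadId).push(prompt); // this is where the bot would react with 📥
}

async function processNext(threadId, runPrompt) {
  if (settings.paused) return null;           // /queue pause stops draining
  const next = (queues.get(threadId) ?? []).shift();
  if (next === undefined) return null;        // queue empty, nothing to do
  try {
    await runPrompt(next);
  } catch (err) {
    // continue_on_failure:True would swallow the error and keep draining
    if (!settings.continueOnFailure) throw err;
  }
  return next;
}
```

The real bot wires `runPrompt` to its execution service and triggers `processNext` whenever the current session goes idle.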
  ---
 
+
  ## Usage Workflow
 
  ### Basic Workflow
@@ -325,28 +336,27 @@ Share AI coding sessions with your team:
  3. Team members can watch sessions in real-time
  4. Discuss in threads while AI works
 
- ### Worktree Workflow (Parallel Features)
-
- Work on multiple features without conflicts:
+ ### Automated Iteration Workflow
 
- 1. **Start a new feature:**
- ```
- /work branch:feature/auth description:Implement OAuth2 login
- ```
+ Ideal for "set it and forget it" batches of tasks:
 
- 2. **Work in the created thread:**
+ 1. **Send multiple instructions:**
  ```
- /opencode prompt:Add Google OAuth provider
+ You: Refactor the API
+ Bot: [Starts working]
+ You: Add documentation to the new methods
+ Bot: 📥 [Queued]
+ You: Run tests and fix any issues
+ Bot: 📥 [Queued]
  ```
 
- 3. **When done, create a PR:**
- Click the **Create PR** button
+ 2. **The bot finishes the API refactor, then automatically starts the documentation task, then runs the tests.**
 
- 4. **Clean up:**
- Click **Delete** to remove the worktree
+ 3. **Monitor progress:** Use `/queue list` to see pending tasks.
 
  ---
 
+
  ## Configuration
 
  All configuration is stored in `~/.remote-opencode/`:
@@ -494,7 +504,10 @@ src/
  ├── services/ # Core business logic
  │ ├── serveManager.ts # OpenCode process management
  │ ├── sessionManager.ts # Session state management
+ │ ├── queueManager.ts # Automated job queuing
+ │ ├── executionService.ts # Core prompt execution logic
  │ ├── sseClient.ts # Real-time event streaming
  │ ├── dataStore.ts # Persistent storage
  │ ├── configStore.ts # Bot configuration
  │ └── worktreeManager.ts # Git worktree operations
@@ -508,6 +521,38 @@ src/
 
  ---
 
+ ## Changelog
+
+ See [CHANGELOG.md](CHANGELOG.md) for a full history of changes.
+
+ ### [1.1.0] - 2026-02-05
+
+ #### Added
+ - **Automated Message Queuing**: Added a new system to queue multiple prompts in a thread. If the bot is busy, new messages are automatically queued and processed sequentially.
+ - **Queue Management**: New `/queue` slash command suite to list, clear, pause, resume, and configure queue settings.
+
+ ### [1.0.10] - 2026-02-04
+
+ #### Added
+ - New `/setports` slash command to configure the port range for OpenCode server instances.
+
+ #### Fixed
+ - Fixed Windows-specific spawning issue (targeting `opencode.cmd`).
+ - Resolved `spawn EINVAL` errors on Windows.
+ - Improved server reliability and suppressed `DEP0190` security warnings.
+
+ ### [1.0.9] - 2026-02-04
+
+ #### Added
+ - New `/model` slash command to set AI models per channel.
+ - Support for `--model` flag in OpenCode server instances.
+
+ #### Fixed
+ - Fixed connection timeout issues.
+ - Standardized internal communication to use `127.0.0.1`.
+
+ ---
+
  ## License
 
  MIT
@@ -0,0 +1,72 @@
+ import { describe, it, expect, vi, beforeEach } from 'vitest';
+ import { processNextInQueue, isBusy } from '../services/queueManager.js';
+ import * as dataStore from '../services/dataStore.js';
+ import * as executionService from '../services/executionService.js';
+ import * as sessionManager from '../services/sessionManager.js';
+ vi.mock('../services/dataStore.js');
+ vi.mock('../services/executionService.js');
+ vi.mock('../services/sessionManager.js');
+ describe('queueManager', () => {
+ const threadId = 'thread-1';
+ const parentId = 'channel-1';
+ const mockChannel = {
+ send: vi.fn().mockResolvedValue({})
+ };
+ beforeEach(() => {
+ vi.clearAllMocks();
+ });
+ describe('isBusy', () => {
+ it('should return true if sseClient is connected', () => {
+ vi.mocked(sessionManager.getSseClient).mockReturnValue({
+ isConnected: () => true
+ });
+ expect(isBusy(threadId)).toBe(true);
+ });
+ it('should return false if sseClient is not connected', () => {
+ vi.mocked(sessionManager.getSseClient).mockReturnValue({
+ isConnected: () => false
+ });
+ expect(isBusy(threadId)).toBe(false);
+ });
+ it('should return false if sseClient is missing', () => {
+ vi.mocked(sessionManager.getSseClient).mockReturnValue(undefined);
+ expect(isBusy(threadId)).toBe(false);
+ });
+ });
+ describe('processNextInQueue', () => {
+ it('should do nothing if queue is paused', async () => {
+ vi.mocked(dataStore.getQueueSettings).mockReturnValue({
+ paused: true,
+ continueOnFailure: false,
+ freshContext: true
+ });
+ await processNextInQueue(mockChannel, threadId, parentId);
+ expect(dataStore.popFromQueue).not.toHaveBeenCalled();
+ });
+ it('should pop and run next prompt if not paused', async () => {
+ vi.mocked(dataStore.getQueueSettings).mockReturnValue({
+ paused: false,
+ continueOnFailure: false,
+ freshContext: true
+ });
+ vi.mocked(dataStore.popFromQueue).mockReturnValue({
+ prompt: 'test prompt',
+ userId: 'user-1',
+ timestamp: Date.now()
+ });
+ await processNextInQueue(mockChannel, threadId, parentId);
+ expect(dataStore.popFromQueue).toHaveBeenCalledWith(threadId);
+ expect(executionService.runPrompt).toHaveBeenCalledWith(mockChannel, threadId, 'test prompt', parentId);
+ });
+ it('should do nothing if queue is empty', async () => {
+ vi.mocked(dataStore.getQueueSettings).mockReturnValue({
+ paused: false,
+ continueOnFailure: false,
+ freshContext: true
+ });
+ vi.mocked(dataStore.popFromQueue).mockReturnValue(undefined);
+ await processNextInQueue(mockChannel, threadId, parentId);
+ expect(executionService.runPrompt).not.toHaveBeenCalled();
+ });
+ });
+ });
@@ -17,7 +17,11 @@ vi.mock('node:net', () => ({
  }
  },
  }));
+ vi.mock('../services/configStore.js', () => ({
+ getPortConfig: vi.fn(),
+ }));
  import * as serveManager from '../services/serveManager.js';
+ import { getPortConfig } from '../services/configStore.js';
  import { spawn } from 'node:child_process';
  const createMockProcess = () => {
  const proc = new EventEmitter();
@@ -32,7 +36,7 @@ const createMockProcess = () => {
  };
  describe('serveManager', () => {
  beforeEach(() => {
- vi.clearAllMocks();
+ vi.resetAllMocks();
  });
  afterEach(() => {
  serveManager.stopAll();
@@ -47,6 +51,7 @@ describe('serveManager', () => {
  expect(port).toBeLessThanOrEqual(14200);
  expect(spawn).toHaveBeenCalledWith('opencode', ['serve', '--port', port.toString()], expect.objectContaining({
  cwd: projectPath,
+ shell: true,
  }));
  });
  it('should return existing port if serve already running for project', async () => {
@@ -65,6 +70,13 @@ describe('serveManager', () => {
  expect(port1).not.toBe(port2);
  expect(spawn).toHaveBeenCalledTimes(2);
  });
+ it('should respect custom port range from config', async () => {
+ vi.mocked(spawn).mockImplementation(() => createMockProcess());
+ vi.mocked(getPortConfig).mockReturnValue({ min: 20000, max: 20010 });
+ const port = await serveManager.spawnServe('/test/custom-port');
+ expect(port).toBe(20000);
+ expect(spawn).toHaveBeenCalledWith('opencode', ['serve', '--port', '20000'], expect.anything());
+ });
  it('should clean up when process exits', async () => {
  const mockProc = createMockProcess();
  vi.mocked(spawn).mockReturnValue(mockProc);
@@ -72,7 +84,51 @@ describe('serveManager', () => {
  await serveManager.spawnServe(projectPath);
  expect(serveManager.getPort(projectPath)).toBeDefined();
  mockProc.emit('exit', 0, null);
- expect(serveManager.getPort(projectPath)).toBeUndefined();
+ // Wait for async exit handler
+ await new Promise(resolve => setTimeout(resolve, 10));
+ // Instance should still exist but be marked as exited
+ const state = serveManager.getInstanceState(projectPath);
+ expect(state?.exited).toBe(true);
+ expect(state?.exitCode).toBe(0);
+ });
+ it('should track error message when process exits with non-zero code', async () => {
+ const mockProc = createMockProcess();
+ vi.mocked(spawn).mockReturnValue(mockProc);
+ const projectPath = '/test/project';
+ await serveManager.spawnServe(projectPath);
+ // Simulate stderr output before exit
+ mockProc.stderr?.emit('data', Buffer.from('Error: opencode command not found'));
+ mockProc.emit('exit', 1, null);
+ await new Promise(resolve => setTimeout(resolve, 10));
+ const state = serveManager.getInstanceState(projectPath);
+ expect(state?.exited).toBe(true);
+ expect(state?.exitCode).toBe(1);
+ expect(state?.exitError).toContain('opencode command not found');
+ });
+ it('should track error message when process fails to spawn', async () => {
+ const mockProc = createMockProcess();
+ vi.mocked(spawn).mockReturnValue(mockProc);
+ const projectPath = '/test/project';
+ await serveManager.spawnServe(projectPath);
+ mockProc.emit('error', new Error('spawn opencode ENOENT'));
+ await new Promise(resolve => setTimeout(resolve, 10));
+ const state = serveManager.getInstanceState(projectPath);
+ expect(state?.exited).toBe(true);
+ expect(state?.exitError).toContain('spawn opencode ENOENT');
+ });
+ it('should allow respawning after process exits', async () => {
+ vi.mocked(spawn).mockImplementation(() => createMockProcess());
+ const projectPath = '/test/project';
+ const port1 = await serveManager.spawnServe(projectPath);
+ // Get the mock process and mark it as exited
+ const mockProc1 = vi.mocked(spawn).mock.results[0].value;
+ mockProc1.emit('exit', 1, null);
+ await new Promise(resolve => setTimeout(resolve, 10));
+ // Should spawn a new process
+ const port2 = await serveManager.spawnServe(projectPath);
+ expect(spawn).toHaveBeenCalledTimes(2);
+ // Port might be the same or different depending on cleanup timing
+ expect(port2).toBeGreaterThanOrEqual(14097);
  });
  });
  describe('getPort', () => {
@@ -138,7 +194,7 @@ describe('serveManager', () => {
  const promise = serveManager.waitForReady(14097);
  await vi.runAllTimersAsync();
  await expect(promise).resolves.toBeUndefined();
- expect(fetch).toHaveBeenCalledWith('http://localhost:14097/session');
+ expect(fetch).toHaveBeenCalledWith('http://127.0.0.1:14097/session');
  });
  it('should retry if fetch fails or returns not ok', async () => {
  vi.mocked(fetch)
@@ -147,14 +203,38 @@ describe('serveManager', () => {
  .mockResolvedValueOnce({ ok: true });
  const promise = serveManager.waitForReady(14097);
  await vi.advanceTimersByTimeAsync(0);
- await vi.advanceTimersByTimeAsync(500);
- await vi.advanceTimersByTimeAsync(500);
+ await vi.advanceTimersByTimeAsync(1000);
+ await vi.advanceTimersByTimeAsync(1000);
  await expect(promise).resolves.toBeUndefined();
  expect(fetch).toHaveBeenCalledTimes(3);
  });
  it('should throw error on timeout', async () => {
  vi.mocked(fetch).mockRejectedValue(new Error('Connection refused'));
  const promise = serveManager.waitForReady(14097, 1000);
+ const wrappedPromise = expect(promise).rejects.toThrow('Service at port 14097 failed to become ready within 1000ms. Check if \'opencode serve\' is working correctly.');
+ await vi.advanceTimersByTimeAsync(1500);
+ await wrappedPromise;
+ });
+ it('should fail fast when process exits early with error', async () => {
+ vi.useRealTimers();
+ vi.mocked(fetch).mockRejectedValue(new Error('Connection refused'));
+ const mockProc = createMockProcess();
+ vi.mocked(spawn).mockReturnValue(mockProc);
+ const projectPath = '/test/fast-fail';
+ const port = await serveManager.spawnServe(projectPath);
+ // Simulate stderr output and immediate exit
+ mockProc.stderr?.emit('data', Buffer.from('Error: Failed to bind to port'));
+ mockProc.emit('exit', 1, null);
+ // Wait for exit handler to process
+ await new Promise(resolve => setTimeout(resolve, 10));
+ // Now waitForReady should fail fast with the error message
+ await expect(serveManager.waitForReady(port, 30000, projectPath)).rejects.toThrow('opencode serve failed to start: Error: Failed to bind to port');
+ vi.useFakeTimers();
+ });
+ it('should still timeout if no projectPath provided and process exits', async () => {
+ vi.mocked(fetch).mockRejectedValue(new Error('Connection refused'));
+ // Without projectPath, can't detect early exit
+ const promise = serveManager.waitForReady(14097, 1000);
  const wrappedPromise = expect(promise).rejects.toThrow('Service at port 14097 failed to become ready within 1000ms');
  await vi.advanceTimersByTimeAsync(1500);
  await wrappedPromise;
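The custom port-range test above expects the first port of the configured `{ min, max }` range when nothing else is running. A minimal sketch of that selection logic, assuming a simple linear scan over an in-use set (the package's real allocator may differ, e.g. by probing sockets):

```javascript
// Hypothetical sketch of port selection within a configured range:
// return the lowest port in [min, max] not already taken.
function pickPort({ min, max }, inUse) {
  for (let p = min; p <= max; p++) {
    if (!inUse.has(p)) return p; // first free port wins
  }
  throw new Error(`No free port in range ${min}-${max}`);
}
```

With the test's `{ min: 20000, max: 20010 }` config and no running instances, this yields 20000, matching the expectation.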
@@ -18,7 +18,7 @@ describe('SessionManager', () => {
  json: async () => ({ id: mockSessionId, slug: 'test-session' }),
  });
  const sessionId = await createSession(3000);
- expect(mockFetch).toHaveBeenCalledWith('http://localhost:3000/session', {
+ expect(mockFetch).toHaveBeenCalledWith('http://127.0.0.1:3000/session', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: '{}',
@@ -48,7 +48,7 @@ describe('SessionManager', () => {
  status: 204,
  });
  await sendPrompt(3000, 'ses_abc123', 'Hello OpenCode');
- expect(mockFetch).toHaveBeenCalledWith('http://localhost:3000/session/ses_abc123/prompt_async', {
+ expect(mockFetch).toHaveBeenCalledWith('http://127.0.0.1:3000/session/ses_abc123/prompt_async', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
@@ -56,6 +56,21 @@ describe('SessionManager', () => {
  }),
  });
  });
+ it('should include model in payload when provided', async () => {
+ mockFetch.mockResolvedValueOnce({
+ ok: true,
+ status: 204,
+ });
+ await sendPrompt(3000, 'ses_abc123', 'Hello OpenCode', 'llm-proxy/ant_gemini-3-flash');
+ expect(mockFetch).toHaveBeenCalledWith('http://127.0.0.1:3000/session/ses_abc123/prompt_async', {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify({
+ parts: [{ type: 'text', text: 'Hello OpenCode' }],
+ model: { providerID: 'llm-proxy', modelID: 'ant_gemini-3-flash' },
+ }),
+ });
+ });
  it('should throw error if HTTP request fails', async () => {
  mockFetch.mockResolvedValueOnce({
  ok: false,
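The new `sendPrompt` test above shows a `provider/model` string (`llm-proxy/ant_gemini-3-flash`) becoming a `{ providerID, modelID }` payload. A small sketch of that mapping, splitting on the first `/`; the helper name is hypothetical and the real code may differ:

```javascript
// Hypothetical helper: map "provider/model" to the payload shape
// asserted in the sendPrompt test above.
function toModelPayload(modelString) {
  const slash = modelString.indexOf('/');
  if (slash === -1) {
    throw new Error(`Expected "provider/model", got "${modelString}"`);
  }
  return {
    providerID: modelString.slice(0, slash),
    // Any further slashes stay in the model ID itself
    modelID: modelString.slice(slash + 1),
  };
}
```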
@@ -32,22 +32,22 @@ describe('SSEClient', () => {
  });
  describe('connect', () => {
  it('should connect to SSE endpoint', () => {
- client.connect('http://localhost:3000');
- expect(MockEventSource).toHaveBeenCalledWith('http://localhost:3000/event');
+ client.connect('http://127.0.0.1:3000');
+ expect(MockEventSource).toHaveBeenCalledWith('http://127.0.0.1:3000/event');
  });
  it('should set up message event listener', () => {
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  expect(mockEventSourceInstance.addEventListener).toHaveBeenCalledWith('message', expect.any(Function));
  });
  it('should set up error event listener', () => {
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  expect(mockEventSourceInstance.addEventListener).toHaveBeenCalledWith('error', expect.any(Function));
  });
  });
  describe('onPartUpdated', () => {
  it('should trigger callback for text part updates', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onPartUpdated(callback);
  const messageHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'message')?.[1];
  const event = {
@@ -74,7 +74,7 @@ describe('SSEClient', () => {
  });
  it('should not trigger callback for non-text parts', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onPartUpdated(callback);
  const messageHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'message')?.[1];
  const event = {
@@ -95,7 +95,7 @@ describe('SSEClient', () => {
  });
  it('should not trigger callback for non-part-updated events', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onPartUpdated(callback);
  const messageHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'message')?.[1];
  const event = {
@@ -111,7 +111,7 @@ describe('SSEClient', () => {
  describe('onSessionIdle', () => {
  it('should trigger callback for session.idle events', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onSessionIdle(callback);
  const messageHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'message')?.[1];
  const event = {
@@ -127,7 +127,7 @@ describe('SSEClient', () => {
  });
  it('should not trigger callback for non-idle events', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onSessionIdle(callback);
  const messageHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'message')?.[1];
  const event = {
@@ -143,7 +143,7 @@ describe('SSEClient', () => {
  describe('onError', () => {
  it('should trigger callback on error', () => {
  const callback = vi.fn();
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.onError(callback);
  const errorHandler = mockEventSourceInstance.addEventListener.mock.calls.find((call) => call[0] === 'error')?.[1];
  const error = new Error('Connection failed');
@@ -153,7 +153,7 @@ describe('SSEClient', () => {
  });
  describe('disconnect', () => {
  it('should close the connection', () => {
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  client.disconnect();
  expect(mockEventSourceInstance.close).toHaveBeenCalled();
  });
@@ -163,12 +163,12 @@ describe('SSEClient', () => {
  });
  describe('isConnected', () => {
  it('should return true when connected', () => {
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  mockEventSourceInstance.readyState = 1;
  expect(client.isConnected()).toBe(true);
  });
  it('should return false when disconnected', () => {
- client.connect('http://localhost:3000');
+ client.connect('http://127.0.0.1:3000');
  mockEventSourceInstance.readyState = 2;
  expect(client.isConnected()).toBe(false);
  });
package/dist/src/cli.js CHANGED
@@ -1,4 +1,5 @@
  #!/usr/bin/env node
+ process.removeAllListeners('warning');
  import { Command } from 'commander';
  import pc from 'picocolors';
  import { createRequire } from 'module';
@@ -8,7 +9,16 @@ import { deployCommands } from './setup/deploy.js';
  import { startBot } from './bot.js';
  import { hasBotConfig, getConfigDir } from './services/configStore.js';
  const require = createRequire(import.meta.url);
- const pkg = require('../../package.json');
+ // In dev mode (src/cli.ts), package.json is one level up
+ // In production (dist/src/cli.js), package.json is two levels up
+ const pkg = (() => {
+ try {
+ return require('../../package.json');
+ }
+ catch {
+ return require('../package.json');
+ }
+ })();
  updateNotifier({ pkg }).notify({ isGlobal: true });
  const program = new Command();
  program
@@ -6,6 +6,9 @@ import { opencode } from './opencode.js';
  import { work } from './work.js';
  import { code } from './code.js';
  import { autowork } from './autowork.js';
+ import { model } from './model.js';
+ import { setports } from './setports.js';
+ import { queue } from './queue.js';
  export const commands = new Collection();
  commands.set(setpath.data.name, setpath);
  commands.set(projects.data.name, projects);
@@ -14,3 +17,6 @@ commands.set(opencode.data.name, opencode);
  commands.set(work.data.name, work);
  commands.set(code.data.name, code);
  commands.set(autowork.data.name, autowork);
+ commands.set(model.data.name, model);
+ commands.set(setports.data.name, setports);
+ commands.set(queue.data.name, queue);
@@ -0,0 +1,85 @@
+ import { SlashCommandBuilder, MessageFlags } from 'discord.js';
+ import { execSync } from 'node:child_process';
+ import * as dataStore from '../services/dataStore.js';
+ function getEffectiveChannelId(interaction) {
+ const channel = interaction.channel;
+ if (channel?.isThread()) {
+ return channel.parentId ?? interaction.channelId;
+ }
+ return interaction.channelId;
+ }
+ export const model = {
+ data: new SlashCommandBuilder()
+ .setName('model')
+ .setDescription('Manage AI models for the current channel')
+ .addSubcommand(subcommand => subcommand
+ .setName('list')
+ .setDescription('List all available models'))
+ .addSubcommand(subcommand => subcommand
+ .setName('set')
+ .setDescription('Set the model to use in this channel')
+ .addStringOption(option => option.setName('name')
+ .setDescription('The model name (e.g., google/gemini-2.0-flash)')
+ .setRequired(true))),
+ async execute(interaction) {
+ const subcommand = interaction.options.getSubcommand();
+ if (subcommand === 'list') {
+ await interaction.deferReply({ flags: MessageFlags.Ephemeral });
+ try {
+ const output = execSync('opencode models', { encoding: 'utf-8' });
+ const models = output.split('\n').filter(m => m.trim());
+ if (models.length === 0) {
+ await interaction.editReply('No models found.');
+ return;
+ }
+ // Group models by provider
+ const groups = {};
+ for (const m of models) {
+ const [provider] = m.split('/');
+ if (!groups[provider])
+ groups[provider] = [];
+ groups[provider].push(m);
+ }
+ let response = '### 🤖 Available Models\n\n';
+ for (const [provider, providerModels] of Object.entries(groups)) {
+ response += `**${provider}**\n`;
+ // Limit to 10 models per provider to avoid hitting discord message limit
+ const displayModels = providerModels.slice(0, 10);
+ response += displayModels.map(m => `• \`${m}\``).join('\n') + '\n';
+ if (providerModels.length > 10) {
+ response += `*...and ${providerModels.length - 10} more*\n`;
+ }
+ response += '\n';
+ if (response.length > 1800) {
+ await interaction.followUp({ content: response, flags: MessageFlags.Ephemeral });
+ response = '';
+ }
+ }
+ if (response) {
+ await interaction.editReply(response);
+ }
+ }
+ catch (error) {
+ console.error('Failed to list models:', error);
+ await interaction.editReply('❌ Failed to retrieve models from OpenCode CLI.');
+ }
+ }
+ else if (subcommand === 'set') {
+ const modelName = interaction.options.getString('name', true);
+ const channelId = getEffectiveChannelId(interaction);
+ const projectAlias = dataStore.getChannelBinding(channelId);
+ if (!projectAlias) {
+ await interaction.reply({
+ content: '❌ No project bound to this channel. Use `/use <alias>` first.',
+ flags: MessageFlags.Ephemeral
+ });
+ return;
+ }
+ dataStore.setChannelModel(channelId, modelName);
+ await interaction.reply({
+ content: `✅ Model for this channel set to \`${modelName}\`.\nSubsequent commands will use this model.`,
+ flags: MessageFlags.Ephemeral
+ });
+ }
+ }
+ };
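The `/model list` handler above flushes its `response` buffer via `followUp` once it approaches Discord's message length limit. The same chunking idea as a standalone, hypothetical helper (not part of the package):

```javascript
// Hypothetical line-chunking helper mirroring the flush-at-1800-chars logic
// in the /model list handler: start a new chunk whenever adding the next
// line would exceed the cap.
function chunkLines(lines, cap = 1800) {
  const chunks = [];
  let buf = '';
  for (const line of lines) {
    if (buf && buf.length + line.length + 1 > cap) {
      chunks.push(buf); // flush current chunk before it overflows
      buf = '';
    }
    buf += (buf ? '\n' : '') + line;
  }
  if (buf) chunks.push(buf); // final partial chunk
  return chunks;
}
```

Keeping the cap below Discord's hard 2,000-character limit leaves headroom for markdown decoration added per provider group.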