apexbot 1.0.1 → 1.0.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
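For independent verification, the same comparison can be reproduced locally with npm's built-in diff support (npm 7 or later), e.g. `npm diff --diff=apexbot@1.0.1 --diff=apexbot@1.0.4`, which prints a unified diff of the two published tarballs.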
package/README.md CHANGED
@@ -1,6 +1,6 @@
1
- # 🦊 ApexBot — Your Free, Private AI Assistant
1
+ # ApexBot
2
2
 
3
- <div align="center">
3
+ Personal AI assistant you run on your own devices. Free with Ollama (local AI).
4
4
 
5
5
  ```
6
6
  ___ ____ _______ ______ ____ ______
@@ -9,112 +9,119 @@
9
9
  / ___ |/ ____/ /___ / / /_/ / /_/ / / /
10
10
  /_/ |_/_/ /_____//_/|_/_____/\____/ /_/
11
11
 
12
- 🦊 Your Free AI Assistant 🦊
12
+ Your Free AI Assistant
13
13
  ```
14
14
 
15
15
  [![CI](https://github.com/YOUR_USERNAME/apexbot/actions/workflows/ci.yml/badge.svg)](https://github.com/YOUR_USERNAME/apexbot/actions)
16
- [![npm version](https://badge.fury.io/js/apexbot.svg)](https://www.npmjs.com/package/apexbot)
17
- [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
16
+ [![npm package](https://badge.fury.io/js/apexbot.svg)](https://www.npmjs.com/package/apexbot)
17
+ [![license](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
18
18
  [![TypeScript](https://img.shields.io/badge/TypeScript-5.0-blue.svg)](https://www.typescriptlang.org/)
19
19
 
20
- **100% Free Open Source Private Self-Hosted**
20
+ 100% Free | Open Source | Private | Self-Hosted
21
21
 
22
- [Installation](#-installation) • [Features](#-features) • [Quick Start](#-quick-start) • [Commands](#-commands) • [Contributing](#-contributing)
23
-
24
- </div>
22
+ [Installation](#installation) |
23
+ [Features](#features) |
24
+ [Quick Start](#quick-start) |
25
+ [Commands](#commands) |
26
+ [Contributing](#contributing)
25
27
 
26
28
  ---
27
29
 
28
- ## Installation
30
+ ## Installation
31
+
32
+ Runtime: Node 18+
33
+
34
+ ### npm (recommended)
35
+
36
+ ```bash
37
+ npm install -g apexbot
38
+ apexbot onboard
39
+ ```
40
+
41
+ ### One-line install
29
42
 
30
- ### One-Line Install
43
+ Windows (PowerShell):
31
44
 
32
- **Windows (PowerShell):**
33
45
  ```powershell
34
46
  iwr -useb https://raw.githubusercontent.com/YOUR_USERNAME/apexbot/main/scripts/install.ps1 | iex
35
47
  ```
36
48
 
37
- **macOS/Linux:**
38
- ```bash
39
- curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/apexbot/main/scripts/install.sh | bash
40
- ```
49
+ macOS/Linux:
41
50
 
42
- **npm (global):**
43
51
  ```bash
44
- npm install -g apexbot
45
- apexbot onboard
52
+ curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/apexbot/main/scripts/install.sh | bash
46
53
  ```
47
54
 
48
- ### Manual Install
55
+ ### From source
49
56
 
50
57
  ```bash
51
- # Clone repository
52
58
  git clone https://github.com/YOUR_USERNAME/apexbot.git
53
59
  cd apexbot
54
60
  npm install
55
61
  npm run build
56
-
57
- # Run setup wizard
58
62
  npm run onboard
59
63
  ```
60
64
 
61
65
  ---
62
66
 
63
- ## Features
64
-
65
- ### 🤖 100% Free AI with Ollama
66
- - **No API keys needed** — runs completely locally
67
- - **No cloud costs** your hardware, your AI
68
- - **Private** — conversations never leave your machine
69
- - **Multiple models** llama3.2, mistral, codellama, qwen2.5
70
-
71
- ### 📱 Multi-Channel Support
72
- | Channel | Status | Description |
73
- |---------|--------|-------------|
74
- | **Telegram** | ✅ Ready | Full bot integration |
75
- | **Discord** | ✅ Ready | Guilds and DMs |
76
- | **WebChat** | ✅ Ready | Built-in web dashboard |
77
- | **WhatsApp** | 🚧 Soon | Via Baileys |
78
-
79
- ### 🧠 AI Providers
80
- | Provider | Cost | Description |
81
- |----------|------|-------------|
82
- | **Ollama** | **FREE** | Local AI (recommended!) |
83
- | Google Gemini | Free tier | Cloud AI |
84
- | Anthropic Claude | Paid | Cloud AI |
85
- | OpenAI GPT | Paid | Cloud AI |
86
-
87
- ### 🛡️ Self-Hosted & Private
88
- - Your data stays with you
89
- - No telemetry or tracking
90
- - Full control over your bot
67
+ ## Features
68
+
69
+ ### Local AI with Ollama
70
+
71
+ ApexBot runs AI models locally on your machine using Ollama. No API keys, no cloud costs, complete privacy.
72
+
73
+ - **Free**: No subscriptions or usage fees
74
+ - **Private**: Conversations never leave your computer
75
+ - **Offline**: Works without internet after model download
76
+ - **Models**: llama3.2, mistral, codellama, qwen2.5, and more
77
+
78
+ ### Multi-channel support
79
+
80
+ | Channel | Status | Description |
81
+ |------------|----------|------------------------|
82
+ | Telegram | Ready | Full bot integration |
83
+ | Discord | Ready | Guilds and DMs |
84
+ | WebChat | Ready | Built-in web dashboard |
85
+ | WhatsApp | Planned | Via Baileys |
86
+
87
+ ### AI providers
88
+
89
+ | Provider | Cost | Notes |
90
+ |------------------|---------|----------------------------|
91
+ | **Ollama** | Free | Local AI (recommended) |
92
+ | Google Gemini | Free tier | Cloud API |
93
+ | Anthropic Claude | Paid | Cloud API |
94
+ | OpenAI GPT | Paid | Cloud API |
91
95
 
92
96
  ---
93
97
 
94
- ## 🚀 Quick Start
98
+ ## Quick Start
95
99
 
96
- ### 1. Install Ollama (for free AI)
100
+ ### 1. Install Ollama
101
+
102
+ Windows:
97
103
 
98
- **Windows:**
99
104
  ```powershell
100
105
  winget install Ollama.Ollama
101
106
  ```
102
107
 
103
- **macOS:**
108
+ macOS:
109
+
104
110
  ```bash
105
111
  brew install ollama
106
112
  ```
107
113
 
108
- **Linux:**
114
+ Linux:
115
+
109
116
  ```bash
110
117
  curl -fsSL https://ollama.com/install.sh | sh
111
118
  ```
112
119
 
113
- ### 2. Pull a model
120
+ ### 2. Download a model
114
121
 
115
122
  ```bash
116
123
  ollama pull llama3.2
117
- ollama serve # Keep running in background
124
+ ollama serve
118
125
  ```
119
126
 
120
127
  ### 3. Install ApexBot
@@ -123,38 +130,35 @@ ollama serve # Keep running in background
123
130
  npm install -g apexbot
124
131
  ```
125
132
 
126
- ### 4. Run setup wizard
133
+ ### 4. Run setup
127
134
 
128
135
  ```bash
129
136
  apexbot onboard
130
137
  ```
131
138
 
132
- The wizard will guide you through:
133
- - Choosing AI provider (Ollama recommended!)
134
- - Setting up Telegram/Discord
135
- - Configuring your bot
139
+ The wizard walks you through AI provider selection, channel setup (Telegram/Discord), and configuration.
136
140
 
137
- ### 5. Start your bot
141
+ ### 5. Start the bot
138
142
 
139
143
  ```bash
140
144
  apexbot daemon start
141
145
  ```
142
146
 
143
- Done! 🎉 Your bot is now running.
147
+ Your assistant is now running in the background.
144
148
 
145
149
  ---
146
150
 
147
- ## 🎮 Commands
151
+ ## Commands
148
152
 
149
- ### CLI Commands
153
+ ### CLI
150
154
 
151
155
  ```bash
152
- # Setup & Configuration
156
+ # Setup
153
157
  apexbot onboard # Interactive setup wizard
154
158
  apexbot config # Show configuration
155
159
  apexbot config --reset # Reset configuration
156
160
 
157
- # Daemon (background service)
161
+ # Daemon (background)
158
162
  apexbot daemon start # Start in background
159
163
  apexbot daemon stop # Stop daemon
160
164
  apexbot daemon restart # Restart daemon
@@ -162,85 +166,73 @@ apexbot daemon status # Check status
162
166
 
163
167
  # Gateway (foreground)
164
168
  apexbot gateway # Start gateway server
165
- apexbot gateway --verbose # With debug logs
169
+ apexbot gateway --verbose # With debug output
166
170
 
167
- # Other
171
+ # Utilities
168
172
  apexbot status # Show status
169
173
  apexbot models # Manage Ollama models
170
174
  ```
171
175
 
172
- ### Chat Commands
176
+ ### Chat commands
173
177
 
174
- | Command | Description |
175
- |---------|-------------|
176
- | `/help` | Show available commands |
177
- | `/status` | Show session info |
178
- | `/new` | Start new conversation |
179
- | `/model <name>` | Switch AI model |
178
+ | Command | Description |
179
+ |----------------|--------------------------|
180
+ | `/help` | Show available commands |
181
+ | `/status` | Show session info |
182
+ | `/new` | Start new conversation |
183
+ | `/model <name>`| Switch AI model |
180
184
 
181
185
  ---
182
186
 
183
- ## 🏗️ Architecture
187
+ ## Architecture
184
188
 
185
189
  ```
186
- ┌─────────────────────────────────────────────────────────────┐
187
- │ Messaging Channels │
188
- Telegram │ Discord │ WebChat │ WhatsApp │
189
- └────────────┬─────────┬─────────┬─────────┬──────────────────┘
190
- │ │ │ │
191
- ▼ ▼ ▼ ▼
192
- ┌─────────────────────────────────────────────────────────────┐
193
- │ Gateway Server │
194
- │ (HTTP + WebSocket on port 18789) │
195
- ├─────────────────────────────────────────────────────────────┤
196
- │ Sessions Manager │ Event Bus │ Rate Limiter │
197
- └────────────────────┼─────────────┼──────────────────────────┘
198
- │ │
199
-
200
- ┌─────────────────────────────────────────────────────────────┐
201
- │ AI Agent Manager │
202
- │ Ollama │ Gemini │ Claude │ OpenAI │
203
- └─────────────────────────────────────────────────────────────┘
190
+           Messaging Channels
191
+  Telegram | Discord | WebChat | WhatsApp
192
+      |         |         |
193
+      v         v         v
194
+  +-----------------------------------+
195
+  |          Gateway Server           |
196
+  |     (HTTP + WebSocket :18789)     |
197
+  +-----------------------------------+
198
+  | Sessions | Event Bus | Rate Limit |
199
+  +-----------------------------------+
200
+                    |
201
+                    v
202
+  +-----------------------------------+
203
+  |         AI Agent Manager          |
204
+  | Ollama | Gemini | Claude | OpenAI |
205
+  +-----------------------------------+
204
206
  ```
205
207
 
206
208
  ---
207
209
 
208
- ## 🆚 ApexBot vs Clawdbot
209
-
210
- | Feature | Clawdbot | ApexBot |
211
- |---------|----------|---------|
212
- | **Cost** | Cloud APIs (paid) | **100% FREE with Ollama** |
213
- | **Privacy** | Cloud-dependent | **Fully local & private** |
214
- | **Channels** | WhatsApp, Telegram, Discord, iMessage | Telegram, Discord, WebChat |
215
- | **AI** | Claude, GPT | **Ollama (free!)**, Gemini, Claude, GPT |
216
- | **Setup** | Complex | **Simple wizard** |
217
- | **License** | Proprietary | **MIT (Open Source)** |
218
-
219
- ---
220
-
221
- ## 📁 Project Structure
210
+ ## Project Structure
222
211
 
223
212
  ```
224
213
  apexbot/
225
- ├── src/
226
- │ ├── cli/ # Command-line interface
227
- │ ├── gateway/ # HTTP/WebSocket server & dashboard
228
- │ ├── channels/ # Telegram, Discord, WebChat
229
- │ ├── agent/ # AI provider integrations
230
- │ ├── sessions/ # Conversation management
231
- │ └── core/ # Event bus, utilities
232
- ├── scripts/
233
- │ ├── install.ps1 # Windows installer
234
- │ └── install.sh # macOS/Linux installer
235
- ├── package.json
236
- └── README.md
214
+ src/
215
+   adapters/    Provider integrations (Manifold, etc.)
216
+   agent/       AI agent manager and message handling
217
+   backtest/    Backtesting framework
218
+   channels/    Channel adapters (Telegram, Discord)
219
+   cli/         Command-line interface
220
+   core/        Event bus and shared utilities
221
+   gateway/     HTTP/WebSocket server and dashboard
222
+   math/        Expected value and Kelly criterion
223
+   safety/      Rate limiting and content filtering
224
+   sessions/    Conversation state management
225
+   strategy/    Trading strategies (arbitrage)
226
+ scripts/
227
+   install.ps1  Windows installer
228
+   install.sh   Unix installer
237
229
  ```
238
230
 
239
231
  ---
240
232
 
241
- ## 🔧 Configuration
233
+ ## Configuration
242
234
 
243
- Configuration is stored in `~/.apexbot/config.json`:
235
+ Configuration lives in `~/.apexbot/config.json`:
244
236
 
245
237
  ```json
246
238
  {
@@ -260,7 +252,7 @@ Configuration is stored in `~/.apexbot/config.json`:
260
252
  }
261
253
  ```
262
254
 
263
- Or use environment variables:
255
+ Environment variables also work:
264
256
 
265
257
  ```bash
266
258
  export OLLAMA_URL=http://localhost:11434
@@ -270,38 +262,27 @@ export TELEGRAM_BOT_TOKEN=your-token
270
262
 
271
263
  ---
272
264
 
273
- ## 🤝 Contributing
265
+ ## Contributing
274
266
 
275
- Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
267
+ See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
276
268
 
277
269
  ```bash
278
- # Development
279
270
  git clone https://github.com/YOUR_USERNAME/apexbot.git
280
271
  cd apexbot
281
272
  npm install
282
- npm run dev # Start with hot-reload
273
+ npm run dev
283
274
  ```
284
275
 
285
276
  ---
286
277
 
287
- ## 📄 License
278
+ ## License
288
279
 
289
- MIT License see [LICENSE](LICENSE) for details.
280
+ MIT License. See [LICENSE](LICENSE).
290
281
 
291
282
  ---
292
283
 
293
- ## 🙏 Acknowledgments
294
-
295
- - Inspired by [Clawdbot](https://github.com/clawdbot/clawdbot)
296
- - Powered by [Ollama](https://ollama.com) for free local AI
297
- - Built with TypeScript, grammY, discord.js
298
-
299
- ---
300
-
301
- <div align="center">
302
-
303
- **Made with ❤️ by the ApexBot community**
304
-
305
- ⭐ Star this repo if you find it useful!
284
+ ## Acknowledgments
306
285
 
307
- </div>
286
+ - Inspired by [Clawdbot](https://github.com/moltbot/moltbot)
287
+ - Powered by [Ollama](https://ollama.com) for local AI
288
+ - Built with TypeScript, grammy, discord.js
@@ -7,6 +7,8 @@
7
7
  Object.defineProperty(exports, "__esModule", { value: true });
8
8
  exports.AgentManager = void 0;
9
9
  const generative_ai_1 = require("@google/generative-ai");
10
+ const tools_1 = require("../tools");
11
+ const toolExecutor_1 = require("./toolExecutor");
10
12
  class AgentManager {
11
13
  config = null;
12
14
  googleClient = null;
@@ -42,6 +44,7 @@ You are running locally on the user's machine. No data leaves their computer. Yo
42
44
  maxTokens: 4096,
43
45
  temperature: 0.7,
44
46
  systemPrompt: this.defaultSystemPrompt,
47
+ enableTools: true, // Tools enabled by default
45
48
  ...config,
46
49
  };
47
50
  // Initialize client based on provider
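For reference, the agent settings touched across these hunks can be collected into one object. This is only a sketch assembled from field names visible in this diff (provider, model, apiUrl, apiKey, maxTokens, temperature, systemPrompt, enableTools); the exact shape AgentManager expects may differ.

```js
// Hypothetical settings object built only from fields that appear in this diff.
const agentConfig = {
  provider: 'ollama',               // 'ollama', 'google', and 'kimi' cases are visible in process()
  model: 'llama3.2',                // default model used by processWithOllama
  apiUrl: 'http://localhost:11434', // default Ollama endpoint used by processWithOllama
  maxTokens: 4096,                  // default shown in this hunk
  temperature: 0.7,                 // default shown in this hunk
  systemPrompt: 'You are a helpful assistant.', // placeholder; the real default prompt is longer
  enableTools: true,                // new in 1.0.4: tools enabled by default
};
```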
@@ -65,9 +68,10 @@ You are running locally on the user's machine. No data leaves their computer. Yo
65
68
  async process(session, message) {
66
69
  if (!this.config) {
67
70
  console.warn('[Agent] Not configured');
68
- return { text: '⚠️ Agent not configured. Please set up an AI provider.' };
71
+ return { text: 'Agent not configured. Please set up an AI provider.' };
69
72
  }
70
73
  const userText = message.text || '';
74
+ console.log(`[Agent] Processing message: ${userText.slice(0, 50)}...`);
71
75
  // Handle slash commands
72
76
  if (userText.startsWith('/')) {
73
77
  return this.handleCommand(session, userText);
@@ -93,8 +97,45 @@ You are running locally on the user's machine. No data leaves their computer. Yo
93
97
  response = await this.processWithKimi(history);
94
98
  break;
95
99
  default:
96
- response = { text: 'Unknown AI provider' };
100
+ response = { text: 'Unknown AI provider' };
97
101
  }
102
+ // Check for tool calls in response
103
+ if (this.config.enableTools) {
104
+ const toolCalls = (0, toolExecutor_1.parseToolCalls)(response.text);
105
+ if (toolCalls.length > 0) {
106
+ console.log(`[Agent] Found ${toolCalls.length} tool calls`);
107
+ // Build tool context
108
+ const context = (0, toolExecutor_1.buildToolContext)(session.id, message.userId || 'unknown', message.channel, { workspaceDir: process.cwd() });
109
+ // Execute tools
110
+ const toolResults = await (0, toolExecutor_1.executeToolCalls)(toolCalls, context);
111
+ response.toolCalls = toolResults;
112
+ // Format results and continue conversation
113
+ const resultsText = (0, toolExecutor_1.formatToolResults)(toolResults);
114
+ if (resultsText) {
115
+ // Add tool results to history and get follow-up response
116
+ const followUpHistory = [
117
+ ...history,
118
+ { role: 'assistant', content: response.text, timestamp: Date.now() },
119
+ { role: 'user', content: `Tool results:\n${resultsText}\n\nPlease continue based on these results.`, timestamp: Date.now() },
120
+ ];
121
+ // Get follow-up response
122
+ let followUp;
123
+ switch (this.config.provider) {
124
+ case 'ollama':
125
+ followUp = await this.processWithOllama(followUpHistory);
126
+ break;
127
+ case 'google':
128
+ followUp = await this.processWithGemini(followUpHistory);
129
+ break;
130
+ default:
131
+ followUp = { text: resultsText };
132
+ }
133
+ // Combine responses
134
+ response.text = followUp.text;
135
+ }
136
+ }
137
+ }
138
+ console.log(`[Agent] Generated response: ${response.text.slice(0, 50)}...`);
98
139
  // Save to session
99
140
  session.messages.push({ role: 'user', content: userText, timestamp: Date.now() });
100
141
  session.messages.push({ role: 'assistant', content: response.text, timestamp: Date.now() });
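The added block above implements a single tool round trip: parse tool calls out of the model's text, execute them, then feed the formatted results back as a follow-up turn. The syntax that `parseToolCalls` actually accepts is not shown in this diff; the sketch below illustrates one common convention (fenced JSON blocks) purely as an assumption, not the package's real format.

```js
// Hypothetical parser: extracts ```tool_call ... ``` JSON blocks from model output.
// It mirrors the role of toolExecutor's parseToolCalls, but the real syntax may differ.
function parseToolCallsSketch(text) {
  const calls = [];
  const fence = /```tool_call\s*([\s\S]*?)```/g;
  let match;
  while ((match = fence.exec(text)) !== null) {
    try {
      const parsed = JSON.parse(match[1]); // assumed shape: { name, arguments }
      if (parsed && parsed.name) calls.push(parsed);
    } catch {
      // Ignore malformed blocks; the agent would fall back to plain text.
    }
  }
  return calls;
}

// Example with a made-up tool name:
const reply = 'Let me check.\n```tool_call\n{"name":"web_search","arguments":{"query":"ollama models"}}\n```';
console.log(parseToolCallsSketch(reply));
```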
@@ -103,9 +144,8 @@ You are running locally on the user's machine. No data leaves their computer. Yo
103
144
  return response;
104
145
  }
105
146
  catch (e) {
106
- console.error('[Agent] Error:', e);
107
- // Escape special characters for Telegram
108
- const errorMsg = (e.message || 'Unknown error').slice(0, 200).replace(/[_*\[\]()~`>#+=|{}.!-]/g, '\\$&');
147
+ console.error('[Agent] Error processing:', e);
148
+ const errorMsg = (e.message || 'Unknown error').slice(0, 200);
109
149
  return { text: `Error: ${errorMsg}` };
110
150
  }
111
151
  }
@@ -120,7 +160,7 @@ You are running locally on the user's machine. No data leaves their computer. Yo
120
160
  const apiKey = cfg.apiKey || process.env.KIMI_API_KEY;
121
161
  const apiUrl = cfg.apiUrl || process.env.KIMI_API_URL;
122
162
  if (!apiKey || !apiUrl) {
123
- return { text: '⚠️ Kimi provider not configured. Set config.apiUrl and apiKey (or KIMI_API_URL/KIMI_API_KEY).' };
163
+ return { text: 'Kimi provider not configured. Set config.apiUrl and apiKey (or KIMI_API_URL/KIMI_API_KEY).' };
124
164
  }
125
165
  // Build prompt from history (system prompt + conversation)
126
166
  const systemPrompt = history.find(m => m.role === 'system')?.content || '';
@@ -150,7 +190,7 @@ You are running locally on the user's machine. No data leaves their computer. Yo
150
190
  const data = await res.json().catch(() => ({}));
151
191
  const text = data.text || data.output || (data.choices && data.choices[0]?.text) || '';
152
192
  return {
153
- text: String(text || '').trim() || '⚠️ Kimi returned an empty response.',
193
+ text: String(text || '').trim() || 'Kimi returned an empty response.',
154
194
  };
155
195
  }
156
196
  async processWithOllama(history) {
@@ -160,6 +200,7 @@ You are running locally on the user's machine. No data leaves their computer. Yo
160
200
  const apiUrl = cfg.apiUrl || 'http://localhost:11434';
161
201
  const model = cfg.model || 'llama3.2';
162
202
  const temperature = cfg.temperature ?? 0.7;
203
+ console.log(`[Agent] Calling Ollama at ${apiUrl} with model ${model}...`);
163
204
  // Build messages array for Ollama chat API
164
205
  const messages = history.map(m => ({
165
206
  role: m.role,
@@ -183,12 +224,14 @@ You are running locally on the user's machine. No data leaves their computer. Yo
183
224
  });
184
225
  if (!res.ok) {
185
226
  const err = await res.text().catch(() => res.statusText);
227
+ console.error(`[Agent] Ollama error: ${err}`);
186
228
  throw new Error(`Ollama error: ${err}`);
187
229
  }
188
230
  const data = await res.json();
189
231
  const text = data.message?.content || '';
232
+ console.log(`[Agent] Ollama response received: ${text.slice(0, 50)}...`);
190
233
  return {
191
- text: String(text).trim() || '🤔 No response from model',
234
+ text: String(text).trim() || 'No response from model',
192
235
  usage: {
193
236
  inputTokens: data.prompt_eval_count || 0,
194
237
  outputTokens: data.eval_count || 0,
@@ -196,9 +239,10 @@ You are running locally on the user's machine. No data leaves their computer. Yo
196
239
  };
197
240
  }
198
241
  catch (error) {
242
+ console.error('[Agent] Ollama fetch error:', error);
199
243
  if (error.code === 'ECONNREFUSED' || error.message?.includes('fetch failed')) {
200
244
  return {
201
- text: `⚠️ Ollama not running.\n\nPlease:\n1. Install Ollama: https://ollama.com\n2. Pull a model: ollama pull llama3.2\n3. Start Ollama: ollama serve`,
245
+ text: `Ollama not running.\n\nPlease:\n1. Install Ollama: https://ollama.com\n2. Pull a model: ollama pull llama3.2\n3. Start Ollama: ollama serve`,
202
246
  };
203
247
  }
204
248
  throw error;
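The Ollama path in these hunks targets the local `/api/chat` endpoint and reads `message.content`, `prompt_eval_count`, and `eval_count` from the reply. A minimal standalone call against that endpoint looks roughly like this (Node 18+ global `fetch`, `ollama serve` running on its default port, `llama3.2` already pulled):

```js
// Minimal sketch of the Ollama chat call the diffed code performs.
async function chatOnce(prompt) {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2',
      messages: [{ role: 'user', content: prompt }],
      stream: false,                 // single JSON response instead of a stream
      options: { temperature: 0.7 }, // matches the default in this diff
    }),
  });
  if (!res.ok) throw new Error(`Ollama error: ${await res.text()}`);
  const data = await res.json();
  // Same fields the AgentManager reads:
  return {
    text: data.message?.content || '',
    inputTokens: data.prompt_eval_count || 0,
    outputTokens: data.eval_count || 0,
  };
}

chatOnce('Say hello in one sentence.').then(console.log).catch(console.error);
```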
@@ -206,8 +250,11 @@ You are running locally on the user's machine. No data leaves their computer. Yo
206
250
  }
207
251
  buildHistory(session, currentMessage) {
208
252
  const history = [];
209
- // System prompt
210
- const systemPrompt = session.systemPrompt || this.config?.systemPrompt || this.defaultSystemPrompt;
253
+ // System prompt with tools if enabled
254
+ let systemPrompt = session.systemPrompt || this.config?.systemPrompt || this.defaultSystemPrompt;
255
+ if (this.config?.enableTools && tools_1.toolRegistry.list().length > 0) {
256
+ systemPrompt += '\n\n' + (0, toolExecutor_1.getToolsSystemPrompt)();
257
+ }
211
258
  history.push({ role: 'system', content: systemPrompt, timestamp: 0 });
212
259
  // Previous messages
213
260
  for (const msg of session.messages) {
@@ -257,19 +304,19 @@ You are running locally on the user's machine. No data leaves their computer. Yo
257
304
  async processWithClaude(history) {
258
305
  // TODO: Implement Anthropic Claude API
259
306
  // Requires @anthropic-ai/sdk
260
- return { text: '⚠️ Claude integration not yet implemented. Use Google/Gemini for now.' };
307
+ return { text: 'Claude integration not yet implemented. Use Google/Gemini for now.' };
261
308
  }
262
309
  async processWithOpenAI(history) {
263
310
  // TODO: Implement OpenAI API
264
311
  // Requires openai package
265
- return { text: '⚠️ OpenAI integration not yet implemented. Use Google/Gemini for now.' };
312
+ return { text: 'OpenAI integration not yet implemented. Use Google/Gemini for now.' };
266
313
  }
267
314
  handleCommand(session, command) {
268
315
  const [cmd, ...args] = command.slice(1).split(' ');
269
316
  switch (cmd.toLowerCase()) {
270
317
  case 'status':
271
318
  return {
272
- text: `📊 *Session Status*\n` +
319
+ text: `*Session Status*\n` +
273
320
  `Session ID: \`${session.id}\`\n` +
274
321
  `Messages: ${session.messageCount}\n` +
275
322
  `Model: ${session.model || this.config?.model || 'default'}\n` +
@@ -283,10 +330,10 @@ You are running locally on the user's machine. No data leaves their computer. Yo
283
330
  const systemMsgs = session.messages.filter(m => m.role === 'system');
284
331
  session.messages = systemMsgs;
285
332
  session.messageCount = 0;
286
- return { text: '🔄 Conversation reset. Let\'s start fresh!' };
333
+ return { text: 'Conversation reset. Let\'s start fresh!' };
287
334
  case 'help':
288
335
  return {
289
- text: `📖 *ApexBot Commands*\n\n` +
336
+ text: `*ApexBot Commands*\n\n` +
290
337
  `/status - Show session info\n` +
291
338
  `/new - Reset conversation\n` +
292
339
  `/model <name> - Change AI model\n` +
@@ -296,11 +343,11 @@ You are running locally on the user's machine. No data leaves their computer. Yo
296
343
  case 'model':
297
344
  if (args.length > 0) {
298
345
  session.model = args.join(' ');
299
- return { text: `✅ Model changed to: ${session.model}` };
346
+ return { text: `Model changed to: ${session.model}` };
300
347
  }
301
348
  return { text: `Current model: ${session.model || this.config?.model || 'default'}` };
302
349
  default:
303
- return { text: `❓ Unknown command: /${cmd}\nType /help for available commands.` };
350
+ return { text: `Unknown command: /${cmd}\nType /help for available commands.` };
304
351
  }
305
352
  }
306
353
  getStatus() {
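Across these hunks, conversation state is kept as plain objects: `buildHistory` prepends a system entry, `process` appends user and assistant turns shaped as `{ role, content, timestamp }`, and the Ollama path maps them down to `{ role, content }`. A small sketch of that flow, with the session reduced to only the fields this diff shows (everything else about the real Session type is an assumption):

```js
const session = {
  id: 'demo-session',
  model: 'llama3.2',
  systemPrompt: undefined, // falls back to the configured/default prompt
  messages: [],            // { role, content, timestamp } entries
  messageCount: 0,         // shown by /status; incrementing it here is part of the sketch
};

function appendTurn(role, content) {
  session.messages.push({ role, content, timestamp: Date.now() });
  session.messageCount += 1;
}

appendTurn('user', 'What model are you running?');
appendTurn('assistant', 'llama3.2, served locally by Ollama.');

// What the Ollama chat call receives (timestamps dropped), mirroring history.map(...):
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' }, // placeholder system prompt
  ...session.messages.map(m => ({ role: m.role, content: m.content || '' })),
];
console.log(messages);
```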