@banaxi/banana-code 1.1.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -41,14 +41,22 @@ Banana Code is a high-performance, terminal-based AI pair programmer. It combine
  #S#SSSSSS%%%?%%%%S
  ```
 
+ ## 🤔 Why Banana Code?
+ While tools like Cursor provide great GUI experiences, Banana Code is built for developers who live in the terminal and want maximum flexibility.
+ - **No Vendor Lock-in**: Switch instantly between the best proprietary models (Gemini, Claude, OpenAI) and high-performance open-source models (Ollama Local, Ollama Cloud) mid-conversation.
+ - **True Autonomy**: With Plan & Execute mode and Self-Healing Error Loops, Banana Code doesn't just suggest code; it tries, fails, reads the errors, and fixes its own mistakes automatically.
+ - **Terminal Native**: It brings the power of full workspace awareness, web search, and surgical file patching directly to your CLI without forcing you to change your IDE.
+
  ## ✨ Key Features
 
- - **Multi-Provider Support**: Switch between **Google Gemini**, **Anthropic Claude**, **OpenAI**, and **Ollama (Local)** effortlessly.
- - **Interactive TUI**: A beautiful, minimal terminal interface with real-time feedback and progress indicators.
+ - **Multi-Provider Support**: Switch between **Google Gemini**, **Anthropic Claude**, **OpenAI**, **Ollama Cloud**, and **Ollama (Local)** effortlessly.
+ - **Plan & Agent Modes**: Use `/agent` for instant execution, or `/plan` to make the AI draft a step-by-step implementation plan for your approval before it touches any code.
+ - **Self-Healing Loop**: If the AI runs a command (like running tests) and it fails, Banana Code automatically feeds the error trace back to the AI so it can fix its own code.
+ - **Agent Skills**: Teach your AI specialized workflows. Drop a `SKILL.md` file in your config folder, and the AI will automatically activate it when relevant.
+ - **Smart Context**: Use `@file/path.js` to instantly inject file contents into your prompt, or use `/settings` to auto-feed your entire workspace structure (respecting `.gitignore`).
+ - **Web Research**: Deep integration with DuckDuckGo APIs and Scrapers to give the AI real-time access to the internet.
  - **Persistent Sessions**: All chats are saved to `~/.config/banana-code/chats/`. Resume any session with a single command.
- - **Robust Tool System**: Banana Code can execute shell commands, read/write files, fetch URLs, and search your workspace.
- - **Security First**: A dedicated permission model ensures no tool is executed without your explicit approval.
- - **Keyless Playground**: Integration with OpenAI Codex for seamless, keyless access to GPT-4o and beyond.
+ - **Syntax Highlighting**: Beautiful, readable markdown output with syntax coloring directly in your terminal.
 
  ## 🚀 Installation
 
@@ -96,6 +104,28 @@ Banana Code can assist you by:
  - **`search_files`**: Performing regex searches across your project.
  - **`list_directory`**: Exploring folder structures.
 
+ ### 🧠 Agent Skills
+ Banana Code supports custom Agent Skills. Skills are like "onboarding guides" that teach the AI how to do specific tasks, use certain APIs, or follow your company's coding standards.
+
+ When the AI detects a task that matches a skill's description, it automatically activates the skill and loads its specialized instructions.
+
+ **How to create a Skill:**
+ 1. Create a folder in your config directory: `~/.config/banana-code/skills/my-react-skill/`
+ 2. Create a `SKILL.md` file inside that folder using this exact format:
+
+ ```markdown
+ ---
+ name: my-react-skill
+ description: Use this skill whenever you are asked to build or edit a React component.
+ ---
+
+ # React Guidelines
+ - Always use functional components.
+ - Always use Tailwind CSS for styling.
+ - Do not use default exports.
+ ```
+ 3. Type `/skills` in Banana Code to verify it loaded. The AI will now follow these rules automatically!
+
  ## 🔐 Privacy & Security
 
  Banana Code is built with transparency in mind:
package/package.json CHANGED
@@ -1,7 +1,19 @@
  {
  "name": "@banaxi/banana-code",
- "version": "1.1.0",
+ "version": "1.2.0",
  "description": "🍌 BananaCode",
+ "keywords": [
+ "banana",
+ "ai",
+ "cli",
+ "agent",
+ "coding-assistant",
+ "terminal",
+ "gemini",
+ "claude",
+ "openai",
+ "ollama"
+ ],
  "type": "module",
  "license": "GPL-3.0-or-later",
  "bin": {
package/src/config.js CHANGED
@@ -41,10 +41,10 @@ export async function setupProvider(provider, config = {}) {
  default: config.apiKey
  });
  config.model = await select({
- message: 'Select a model:',
- choices: OPENAI_MODELS
+ message: 'Select a Gemini model:',
+ choices: GEMINI_MODELS
  });
- } else if (provider === 'ollama_cloud') {
+ } else if (provider === 'ollama_cloud') {
  config.apiKey = await input({
  message: 'Enter your OLLAMA_API_KEY (from ollama.com):',
  default: config.apiKey
@@ -65,7 +65,7 @@ export async function setupProvider(provider, config = {}) {
  });
  }
  config.model = selectedModel;
- } else if (provider === 'ollama') {
+ } else if (provider === 'claude') {
  config.apiKey = await input({
  message: 'Enter your ANTHROPIC_API_KEY:',
  default: config.apiKey
package/src/index.js CHANGED
@@ -1,5 +1,6 @@
  import readline from 'readline';
  import chalk from 'chalk';
+ import ora from 'ora';
  import { loadConfig, saveConfig, setupProvider } from './config.js';
  import { runStartup } from './startup.js';
  import { getSessionPermissions } from './permissions.js';
@@ -12,6 +13,7 @@ import { OllamaCloudProvider } from './providers/ollamaCloud.js';
 
  import { loadSession, saveSession, generateSessionId, getLatestSessionId, listSessions } from './sessions.js';
  import { printMarkdown } from './utils/markdown.js';
+ import { estimateConversationTokens } from './utils/tokens.js';
 
  let config;
  let providerInstance;
@@ -62,7 +64,7 @@ async function handleSlashCommand(command) {
  providerInstance = createProvider();
  console.log(chalk.green(`Switched provider to ${newProv} (${config.model}).`));
  } else {
- console.log(chalk.yellow(`Usage: /provider <gemini|claude|openai|ollama>`));
+ console.log(chalk.yellow(`Usage: /provider <gemini|claude|openai|ollama_cloud|ollama>`));
  }
  break;
  case '/model':
@@ -130,11 +132,86 @@ async function handleSlashCommand(command) {
  providerInstance = createProvider(); // fresh instance = clear history
  console.log(chalk.green('Chat history cleared.'));
  break;
+ case '/clean':
+ if (!config.betaTools || !config.betaTools.includes('clean_command')) {
+ console.log(chalk.yellow("The /clean command is a beta feature. You need to enable it in the /beta menu first."));
+ break;
+ }
+ const msgCount = providerInstance.messages ? providerInstance.messages.length : 0;
+ if (msgCount <= 2) {
+ console.log(chalk.yellow("Not enough history to summarize."));
+ break;
+ }
+
+ console.log(chalk.cyan("Summarizing context to save tokens..."));
+ const summarySpinner = ora({ text: 'Compressing history...', color: 'yellow', stream: process.stdout }).start();
+
+ try {
+ // Temporarily disable terminal formatting for the summary request
+ const originalUseMarked = config.useMarkedTerminal;
+ config.useMarkedTerminal = false;
+
+ // Create a temporary prompt asking for a summary
+ const summaryPrompt = "SYSTEM INSTRUCTION: Please provide a highly concise summary of our entire conversation so far. Focus ONLY on the overall goal, the current state of the project, any important decisions made, and what we were about to do next. Do not include pleasantries. This summary will be used as your memory going forward.";
+
+ // Ask the AI to summarize
+ const summary = await providerInstance.sendMessage(summaryPrompt);
+
+ // Restore settings
+ config.useMarkedTerminal = originalUseMarked;
+ summarySpinner.stop();
+
+ // Re-initialize the provider to wipe old history
+ providerInstance = createProvider();
+
+ // Inject the summary as the first message after the system prompt
+ const summaryMemory = `[PREVIOUS CONVERSATION SUMMARY]\n${summary}`;
+
+ if (config.provider === 'gemini') {
+ providerInstance.messages.push({ role: 'user', parts: [{ text: summaryMemory }] });
+ providerInstance.messages.push({ role: 'model', parts: [{ text: "I have stored the summary of our previous conversation in my memory." }] });
+ } else if (config.provider === 'claude') {
+ providerInstance.messages.push({ role: 'user', content: summaryMemory });
+ providerInstance.messages.push({ role: 'assistant', content: "I have stored the summary of our previous conversation in my memory." });
+ } else {
+ providerInstance.messages.push({ role: 'user', content: summaryMemory });
+ providerInstance.messages.push({ role: 'assistant', content: "I have stored the summary of our previous conversation in my memory." });
+ }
+
+ console.log(chalk.green(`\nContext successfully compressed!`));
+ if (config.debug) {
+ console.log(chalk.gray(`\n[Saved Summary]:\n${summary}\n`));
+ }
+
+ await saveSession(currentSessionId, {
+ provider: config.provider,
+ model: config.model || providerInstance.modelName,
+ messages: providerInstance.messages
+ });
+
+ } catch (err) {
+ summarySpinner.stop();
+ console.log(chalk.red(`Failed to compress context: ${err.message}`));
+ }
+ break;
  case '/context':
  let length = 0;
- if (providerInstance.messages) length = providerInstance.messages.length;
- else if (providerInstance.chat) length = (await providerInstance.chat.getHistory()).length;
- console.log(chalk.cyan(`Current context contains approximately ${length} messages.`));
+ let messagesForEstimation = [];
+
+ if (providerInstance.messages) {
+ length = providerInstance.messages.length;
+ messagesForEstimation = providerInstance.messages;
+ } else if (providerInstance.chat) {
+ messagesForEstimation = await providerInstance.chat.getHistory();
+ length = messagesForEstimation.length;
+ }
+
+ const { estimateConversationTokens } = await import('./utils/tokens.js');
+ const estimatedTokens = estimateConversationTokens(messagesForEstimation);
+
+ console.log(chalk.cyan(`Current context:`));
+ console.log(chalk.cyan(`- Messages: ${length}`));
+ console.log(chalk.cyan(`- Estimated Tokens: ~${estimatedTokens.toLocaleString()}`));
  break;
  case '/permissions':
  const perms = getSessionPermissions();
@@ -149,18 +226,27 @@ async function handleSlashCommand(command) {
  const { TOOLS } = await import('./tools/registry.js');
  const betaTools = TOOLS.filter(t => t.beta);
 
- if (betaTools.length === 0) {
- console.log(chalk.yellow("No beta tools available."));
+ let choices = betaTools.map(t => ({
+ name: t.label || t.name,
+ value: t.name,
+ checked: (config.betaTools || []).includes(t.name)
+ }));
+
+ // Add beta commands that aren't tools
+ choices.push({
+ name: '/clean command (Context Compression)',
+ value: 'clean_command',
+ checked: (config.betaTools || []).includes('clean_command')
+ });
+
+ if (choices.length === 0) {
+ console.log(chalk.yellow("No beta features available."));
  break;
  }
 
  const enabledBetaTools = await checkbox({
- message: 'Select beta tools to activate (Space to toggle, Enter to confirm):',
- choices: betaTools.map(t => ({
- name: t.label || t.name,
- value: t.name,
- checked: (config.betaTools || []).includes(t.name)
- }))
+ message: 'Select beta features to activate (Space to toggle, Enter to confirm):',
+ choices: choices
  });
 
  if (enabledBetaTools.includes('duck_duck_go_scrape') && !(config.betaTools || []).includes('duck_duck_go_scrape')) {
@@ -182,7 +268,13 @@ async function handleSlashCommand(command) {
 
  config.betaTools = enabledBetaTools;
  await saveConfig(config);
- providerInstance = createProvider(); // Re-init to update tools
+ if (providerInstance) {
+ const savedMessages = providerInstance.messages;
+ providerInstance = createProvider(); // Re-init to update tools
+ providerInstance.messages = savedMessages;
+ } else {
+ providerInstance = createProvider();
+ }
  console.log(chalk.green(`Beta tools updated: ${enabledBetaTools.join(', ') || 'none'}`));
  break;
  case '/settings':
@@ -199,22 +291,80 @@ async function handleSlashCommand(command) {
  name: 'Use syntax highlighting for AI output (requires waiting for full response)',
  value: 'useMarkedTerminal',
  checked: config.useMarkedTerminal || false
+ },
+ {
+ name: 'Always show current token count in status bar',
+ value: 'showTokenCount',
+ checked: config.showTokenCount || false
  }
  ]
  });
 
  config.autoFeedWorkspace = enabledSettings.includes('autoFeedWorkspace');
  config.useMarkedTerminal = enabledSettings.includes('useMarkedTerminal');
+ config.showTokenCount = enabledSettings.includes('showTokenCount');
  await saveConfig(config);
- providerInstance = createProvider(); // Re-init to update tools/config
+ if (providerInstance) {
+ const savedMessages = providerInstance.messages;
+ providerInstance = createProvider(); // Re-init to update tools/config
+ providerInstance.messages = savedMessages;
+ } else {
+ providerInstance = createProvider();
+ }
  console.log(chalk.green(`Settings updated.`));
  break;
  case '/debug':
  config.debug = !config.debug;
  await saveConfig(config);
- providerInstance = createProvider(); // Re-init to pass debug flag
+ if (providerInstance) {
+ const savedMessages = providerInstance.messages;
+ providerInstance = createProvider(); // Re-init to pass debug flag
+ providerInstance.messages = savedMessages;
+ } else {
+ providerInstance = createProvider();
+ }
  console.log(chalk.magenta(`Debug mode ${config.debug ? 'enabled' : 'disabled'}.`));
  break;
+ case '/skills':
+ const { getAvailableSkills } = await import('./utils/skills.js');
+ const skills = getAvailableSkills();
+ if (skills.length === 0) {
+ console.log(chalk.yellow("No skills found."));
+ const os = await import('os');
+ const path = await import('path');
+ const skillsDir = path.join(os.homedir(), '.config', 'banana-code', 'skills');
+ console.log(chalk.gray(`Create skill directories with a SKILL.md file in ${skillsDir}`));
+ } else {
+ console.log(chalk.cyan.bold("\nLoaded Skills:"));
+ skills.forEach(skill => {
+ console.log(chalk.green(`- ${skill.id}`) + `: ${skill.description}`);
+ });
+ }
+ break;
+ case '/plan':
+ config.planMode = true;
+ await saveConfig(config);
+ if (providerInstance) {
+ const savedMessages = providerInstance.messages;
+ providerInstance = createProvider();
+ providerInstance.messages = savedMessages;
+ } else {
+ providerInstance = createProvider();
+ }
+ console.log(chalk.magenta(`Plan mode enabled. For significant changes, the AI will now propose an implementation plan before writing code.`));
+ break;
+ case '/agent':
+ config.planMode = false;
+ await saveConfig(config);
+ if (providerInstance) {
+ const savedMessages = providerInstance.messages;
+ providerInstance = createProvider();
+ providerInstance.messages = savedMessages;
+ } else {
+ providerInstance = createProvider();
+ }
+ console.log(chalk.green(`Agent mode enabled. The AI will make changes directly.`));
+ break;
  case '/chats':
  const sessions = await listSessions();
  if (sessions.length === 0) {
@@ -231,14 +381,18 @@ async function handleSlashCommand(command) {
  case '/help':
  console.log(chalk.yellow(`
  Available commands:
- /provider <name> - Switch AI provider (gemini, claude, openai, ollama)
+ /provider <name> - Switch AI provider (gemini, claude, openai, ollama_cloud, ollama)
  /model [name] - Switch model within current provider (opens menu if name omitted)
  /chats - List persistent chat sessions
  /clear - Clear chat history
+ /clean - Compress chat history into a summary to save tokens
  /context - Show current context window size
  /permissions - List session-approved permissions
  /beta - Manage beta features and tools
  /settings - Manage app settings (workspace auto-feed, etc)
+ /skills - List loaded agent skills
+ /plan - Enable Plan Mode (AI proposes a plan for big changes)
+ /agent - Enable Agent Mode (default, AI edits directly)
  /debug - Toggle debug mode (show tool results)
  /help - Show all commands
  /exit - Quit Banana Code
@@ -278,7 +432,7 @@ function drawPromptBox(inputText, cursorPos) {
  const placeholder = 'Type your message or @path/to/file';
  const prefix = ' > ';
 
- const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + inputText);
+ const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + chalk.white(inputText));
  const totalChars = (prefix.length + Math.max(inputText.length, placeholder.length));
  const rows = Math.ceil(totalChars / width) || 1;
 
@@ -298,8 +452,26 @@ function drawPromptBox(inputText, cursorPos) {
  // Redraw status bar and separator (they are always below the prompt)
  const modelDisplay = providerInstance ? providerInstance.modelName : (config.model || 'unknown');
  const providerDisplay = config.provider.toUpperCase();
- const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)}`;
- const rightText = '? for shortcuts ';
+ const modeDisplay = config.planMode ? chalk.magenta('PLAN MODE') : chalk.green('AGENT MODE');
+
+ let tokenDisplay = '';
+ if (config.showTokenCount && providerInstance) {
+ let msgs = providerInstance.messages || [];
+ // Support for Ollama chat history format if different
+ if (!providerInstance.messages && typeof providerInstance.chat?.getHistory === 'function') {
+ msgs = providerInstance.chat.getHistory(); // getHistory() is typically async; this is a best-effort approximation, and providerInstance.messages is assumed to be the standard path.
+ }
+ const tokens = estimateConversationTokens(msgs);
+ let color = chalk.green;
+ if (tokens >= 128000) color = chalk.red;
+ else if (tokens >= 86000) color = chalk.hex('#FFA500'); // Orange
+ else if (tokens >= 64000) color = chalk.yellow;
+
+ tokenDisplay = ` / Tokens: ${color(tokens.toLocaleString())}`;
+ }
+
+ const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)} / ${modeDisplay}${tokenDisplay}`;
+ const rightText = '/help for shortcuts ';
  const leftStripped = leftText.replace(/\x1b\[[0-9;]*m/g, '');
  const midPad = Math.max(0, width - leftStripped.length - rightText.length);
  const statusLine = chalk.gray(leftText + ' '.repeat(midPad) + rightText);
@@ -325,7 +497,7 @@ function drawPromptBoxInitial(inputText) {
  const placeholder = 'Type your message or @path/to/file';
  const prefix = ' > ';
 
- const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + inputText);
+ const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + chalk.white(inputText));
  const totalChars = (prefix.length + Math.max(inputText.length, placeholder.length));
  const rows = Math.ceil(totalChars / width) || 1;
 
@@ -338,11 +510,29 @@ function drawPromptBoxInitial(inputText) {
  process.stdout.write(userBg(padLine(lineText, width)) + '\n');
  }
 
- // Status bar: Current Provider / Model + right-aligned "? for shortcuts"
+ // Status bar: Current Provider / Model + right-aligned "/help for shortcuts"
  const modelDisplay = providerInstance ? providerInstance.modelName : (config.model || 'unknown');
  const providerDisplay = config.provider.toUpperCase();
- const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)}`;
- const rightText = '? for shortcuts ';
+ const modeDisplay = config.planMode ? chalk.magenta('PLAN MODE') : chalk.green('AGENT MODE');
+
+ let tokenDisplay = '';
+ if (config.showTokenCount && providerInstance) {
+ let msgs = providerInstance.messages || [];
+ // Support for Ollama chat history format if different
+ if (!providerInstance.messages && typeof providerInstance.chat?.getHistory === 'function') {
+ msgs = providerInstance.chat.getHistory(); // getHistory() is typically async; this is a best-effort approximation, and providerInstance.messages is assumed to be the standard path.
+ }
+ const tokens = estimateConversationTokens(msgs);
+ let color = chalk.green;
+ if (tokens >= 128000) color = chalk.red;
+ else if (tokens >= 86000) color = chalk.hex('#FFA500'); // Orange
+ else if (tokens >= 64000) color = chalk.yellow;
+
+ tokenDisplay = ` / Tokens: ${color(tokens.toLocaleString())}`;
+ }
+
+ const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)} / ${modeDisplay}${tokenDisplay}`;
+ const rightText = '/help for shortcuts ';
 
  const leftStripped = leftText.replace(/\x1b\[[0-9;]*m/g, '');
  const midPad = Math.max(0, width - leftStripped.length - rightText.length);
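The status-bar logic above maps the estimated token count to a colour band at fixed thresholds. A minimal standalone sketch of that mapping (the thresholds are taken from the diff; the function name `tokenColorBand` is hypothetical, and plain strings stand in for the `chalk` colour functions):

```javascript
// Maps an estimated token count to the status-bar colour band used above.
// Bands: green below 64k, yellow from 64k, orange from 86k, red from 128k.
function tokenColorBand(tokens) {
  if (tokens >= 128000) return 'red';
  if (tokens >= 86000) return 'orange';
  if (tokens >= 64000) return 'yellow';
  return 'green';
}

console.log(tokenColorBand(1000));   // green
console.log(tokenColorBand(90000));  // orange
```

Checking thresholds in this order (highest first) matters: a 130k context satisfies all three comparisons, and the early return picks the most severe band.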
package/src/prompt.js CHANGED
@@ -1,5 +1,6 @@
  import os from 'os';
  import { getAvailableTools } from './tools/registry.js';
+ import { getAvailableSkills } from './utils/skills.js';
 
  export function getSystemPrompt(config = {}) {
  const platform = os.platform();
@@ -16,6 +17,7 @@ export function getSystemPrompt(config = {}) {
  const availableToolsList = getAvailableTools(config);
  const availableToolsNames = availableToolsList.map(t => t.name).join(', ');
  const hasPatchTool = availableToolsList.some(t => t.name === 'patch_file');
+ const skills = getAvailableSkills();
 
  let prompt = `You are Banana Code, a terminal-based AI coding assistant running on ${osDescription}. You help users write, debug, and understand code. You have access to tools: ${availableToolsNames}.
 
@@ -27,6 +29,26 @@ SAFETY RULES:
 
  Always use tools when they would help. Be concise but thorough. `;
 
+ if (skills && skills.length > 0) {
+ prompt += `\n\n# Available Agent Skills\n\nYou have access to the following specialized skills. To activate a skill and receive its detailed instructions, call the \`activate_skill\` tool with the skill's name.\n\n<available_skills>\n`;
+ for (const skill of skills) {
+ prompt += ` <skill>\n <name>${skill.id}</name>\n <description>${skill.description}</description>\n </skill>\n`;
+ }
+ prompt += `</available_skills>\n\nOnce a skill is activated, its instructions and resources are returned wrapped in <activated_skill> tags. You MUST treat the content within <instructions> as expert procedural guidance for the duration of the task.\n`;
+ }
+
+ if (config.planMode) {
+ prompt += `
+ [PLAN MODE ENABLED]
+ The user is operating in "Plan Mode".
+ - For very small, trivial changes (like fixing a typo or a one-line bug), you may execute the change directly using your tools.
+ - For ANY change that has a significant impact, modifies multiple areas, or adds a new feature, you MUST NOT write or patch code immediately.
+ - Instead, you MUST output a detailed "Implementation Plan" outlining the files you will change and the specific steps you will take.
+ - Stop and ask the user: "Does this plan look good, or would you like to make any changes?"
+ - ONLY proceed to use the 'write_file' or 'patch_file' tools AFTER the user has explicitly approved the plan.
+ `;
+ }
+
  if (hasPatchTool) {
  prompt += `
  When editing existing files, PREFER using the 'patch_file' tool for surgical, targeted changes instead of 'write_file', especially for large files. This prevents accidental truncation and is much more efficient. Only use 'write_file' when creating new files or when making very extensive changes to a small file.`;
package/src/tools/activateSkill.js ADDED
@@ -0,0 +1,22 @@
+ import { getAvailableSkills } from '../utils/skills.js';
+
+ export async function activateSkill({ skillName }) {
+ const skills = getAvailableSkills();
+ // Match by ID or Name
+ const skill = skills.find(s => s.id === skillName || s.name === skillName);
+
+ if (!skill) {
+ return `Error: Skill '${skillName}' not found. Available skills: ${skills.map(s => s.name).join(', ')}`;
+ }
+
+ // The format expected by the AI agent
+ let output = `<activated_skill>\n`;
+ output += `<instructions>\n${skill.instructions}\n</instructions>\n`;
+ output += `<available_resources>\n`;
+ output += `- location: ${skill.path}\n`;
+ output += ` (Use list_directory and read_file to access bundled scripts, references, or assets)\n`;
+ output += `</available_resources>\n`;
+ output += `</activated_skill>`;
+
+ return output;
+ }
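The tool above returns the skill's instructions wrapped in XML-style tags for the model to consume. A self-contained sketch of just the payload construction (the helper name `formatActivatedSkill` and the sample skill object are hypothetical; the tag layout mirrors the diff):

```javascript
// Builds the <activated_skill> payload the same way activateSkill() does above.
function formatActivatedSkill(skill) {
  let output = `<activated_skill>\n`;
  output += `<instructions>\n${skill.instructions}\n</instructions>\n`;
  output += `<available_resources>\n`;
  output += `- location: ${skill.path}\n`;
  output += ` (Use list_directory and read_file to access bundled scripts, references, or assets)\n`;
  output += `</available_resources>\n`;
  output += `</activated_skill>`;
  return output;
}

const payload = formatActivatedSkill({
  instructions: 'Always use functional components.',
  path: '/tmp/skills/react'
});
console.log(payload);
```

Returning the payload as a plain string (rather than structured JSON) keeps it provider-agnostic: it is injected verbatim into the tool-result message, where the `<instructions>` block is what the system prompt tells the model to treat as procedural guidance.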
package/src/tools/registry.js CHANGED
@@ -7,6 +7,7 @@ import { listDirectory } from './listDirectory.js';
  import { duckDuckGo } from './duckDuckGo.js';
  import { duckDuckGoScrape } from './duckDuckGoScrape.js';
  import { patchFile } from './patchFile.js';
+ import { activateSkill } from './activateSkill.js';
 
  export const TOOLS = [
  {
@@ -133,6 +134,17 @@ export const TOOLS = [
  },
  required: ['filepath', 'edits']
  }
+ },
+ {
+ name: 'activate_skill',
+ description: 'Activates a specialized agent skill by name. Returns the skill\'s instructions wrapped in <activated_skill> tags. These provide specialized guidance for the current task.',
+ parameters: {
+ type: 'object',
+ properties: {
+ skillName: { type: 'string', description: 'The name or ID of the skill to activate.' }
+ },
+ required: ['skillName']
+ }
  }
  ];
 
@@ -156,6 +168,7 @@ export async function executeTool(name, args) {
  case 'duck_duck_go': return await duckDuckGo(args);
  case 'duck_duck_go_scrape': return await duckDuckGoScrape(args);
  case 'patch_file': return await patchFile(args);
+ case 'activate_skill': return await activateSkill(args);
  default: return `Unknown tool: ${name}`;
  }
  }
package/src/utils/skills.js ADDED
@@ -0,0 +1,61 @@
+ import fs from 'fs';
+ import path from 'path';
+ import os from 'os';
+
+ const SKILLS_DIR = path.join(os.homedir(), '.config', 'banana-code', 'skills');
+
+ /**
+ * Scans the skills directory and parses SKILL.md files.
+ * @returns {Array} List of discovered skills.
+ */
+ export function getAvailableSkills() {
+ try {
+ if (!fs.existsSync(SKILLS_DIR)) {
+ fs.mkdirSync(SKILLS_DIR, { recursive: true });
+ }
+ } catch (e) {
+ return [];
+ }
+
+ let skills = [];
+ try {
+ const entries = fs.readdirSync(SKILLS_DIR, { withFileTypes: true });
+ for (const entry of entries) {
+ if (entry.isDirectory()) {
+ const skillPath = path.join(SKILLS_DIR, entry.name);
+ const mdPath = path.join(skillPath, 'SKILL.md');
+
+ if (fs.existsSync(mdPath)) {
+ try {
+ const content = fs.readFileSync(mdPath, 'utf8');
+ // Match YAML frontmatter between --- and ---
+ const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
+
+ if (match) {
+ const frontmatter = match[1];
+ const body = content.slice(match[0].length).trim();
+
+ // Simple YAML parsing via regex
+ const nameMatch = frontmatter.match(/name:\s*['"]?([^'"\n]+)['"]?/);
+ const descMatch = frontmatter.match(/description:\s*['"]?([^'"\n]+)['"]?/);
+
+ if (nameMatch && descMatch) {
+ skills.push({
+ id: entry.name,
+ name: nameMatch[1].trim(),
+ description: descMatch[1].trim(),
+ instructions: body,
+ path: skillPath
+ });
+ }
+ }
+ } catch (err) {
+ // Skip corrupted or unreadable skills
+ }
+ }
+ }
+ }
+ } catch (e) {}
+
+ return skills;
+ }
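The frontmatter handling above is regex-based rather than a full YAML parser. The core of it can be reproduced in isolation (the regexes are copied from the diff; `parseSkill` is a hypothetical helper name, and the sample document is illustrative):

```javascript
// Minimal reproduction of the SKILL.md frontmatter parsing used by getAvailableSkills().
// Returns { name, description, instructions } or null when the file doesn't match.
function parseSkill(content) {
  // Capture everything between the opening and closing --- fences.
  const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
  if (!match) return null;

  const frontmatter = match[1];
  const body = content.slice(match[0].length).trim();

  // Pull single-line scalar values out of the frontmatter.
  const nameMatch = frontmatter.match(/name:\s*['"]?([^'"\n]+)['"]?/);
  const descMatch = frontmatter.match(/description:\s*['"]?([^'"\n]+)['"]?/);
  if (!nameMatch || !descMatch) return null;

  return {
    name: nameMatch[1].trim(),
    description: descMatch[1].trim(),
    instructions: body
  };
}

const md = `---
name: my-react-skill
description: Use this skill for React components.
---

# React Guidelines
- Always use functional components.`;

const skill = parseSkill(md);
console.log(skill.name); // my-react-skill
```

Note the limits this inherits from the diff: the `^---` anchor means the frontmatter must start at the very first byte of the file, and multi-line or quoted-with-apostrophe descriptions will not survive the `[^'"\n]+` character class. That is presumably why the README asks for "this exact format".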
package/src/utils/tokens.js ADDED
@@ -0,0 +1,44 @@
+ /**
+ * Estimates the number of tokens in a given string.
+ * This is a rough approximation (1 token ≈ 4 characters or ~0.75 words)
+ * used to provide a quick estimate without needing heavy, provider-specific tokenizer libraries.
+ *
+ * @param {string} text - The input text to estimate tokens for.
+ * @returns {number} The estimated token count.
+ */
+ export function estimateTokens(text) {
+ if (!text || typeof text !== 'string') return 0;
+
+ // A common heuristic: 1 token is roughly 4 English characters.
+ // For code, it can be denser, but this provides a reasonable ballpark.
+ return Math.ceil(text.length / 4);
+ }
+
+ /**
+ * Calculates the estimated token count for the entire conversation history.
+ *
+ * @param {Array} messages - The array of message objects.
+ * @returns {number} The estimated total tokens.
+ */
+ export function estimateConversationTokens(messages) {
+ if (!Array.isArray(messages)) return 0;
+
+ let totalString = '';
+
+ // Stringify the entire message array to get a representation of its "weight"
+ // This includes system prompts, tool calls, and results.
+ try {
+ totalString = JSON.stringify(messages);
+ } catch (e) {
+ // Fallback if there are circular references (unlikely in simple message arrays)
+ messages.forEach(msg => {
+ if (typeof msg === 'string') totalString += msg;
+ else if (msg && typeof msg === 'object') {
+ if (msg.content) totalString += typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content);
+ if (msg.parts) totalString += JSON.stringify(msg.parts);
+ }
+ });
+ }
+
+ return estimateTokens(totalString);
+ }
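Stripped of the module plumbing, the whole estimator reduces to a few lines, which makes its behaviour easy to verify. A standalone sketch of the same heuristic (same 4-characters-per-token ratio as the diff; the fallback branch is omitted here for brevity):

```javascript
// ~4 characters per token: a common rough heuristic when no real tokenizer is available.
function estimateTokens(text) {
  if (!text || typeof text !== 'string') return 0;
  return Math.ceil(text.length / 4);
}

// JSON-stringifying the whole history also counts role labels, tool calls, and
// structural punctuation, so the estimate errs slightly high; for a context-budget
// display that is the safer direction to err in.
function estimateConversationTokens(messages) {
  if (!Array.isArray(messages)) return 0;
  return estimateTokens(JSON.stringify(messages));
}

console.log(estimateTokens('abcdefgh')); // 2
```

Because the heuristic ignores the tokenizer actually used by the active provider, the number shown in the status bar should be read as a ballpark, not a hard limit; the 64k/86k/128k colour thresholds in `index.js` are likewise approximate.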