@banaxi/banana-code 1.1.0 → 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +35 -5
- package/package.json +13 -1
- package/src/config.js +27 -5
- package/src/constants.js +9 -0
- package/src/index.js +221 -26
- package/src/prompt.js +35 -0
- package/src/providers/claude.js +1 -1
- package/src/providers/gemini.js +1 -1
- package/src/providers/mistral.js +141 -0
- package/src/providers/ollamaCloud.js +1 -1
- package/src/providers/openai.js +2 -2
- package/src/tools/activateSkill.js +22 -0
- package/src/tools/delegateTask.js +100 -0
- package/src/tools/registry.js +39 -1
- package/src/utils/skills.js +61 -0
- package/src/utils/tokens.js +44 -0
package/README.md
CHANGED
@@ -41,14 +41,22 @@ Banana Code is a high-performance, terminal-based AI pair programmer. It combine
 #S#SSSSSS%%%?%%%%S
 ```
 
+## 🤔 Why Banana Code?
+While tools like Cursor provide great GUI experiences, Banana Code is built for developers who live in the terminal and want maximum flexibility.
+- **No Vendor Lock-in**: Switch instantly between the best proprietary models (Gemini, Claude, OpenAI) and high-performance open-source models (Ollama Local, Ollama Cloud) mid-conversation.
+- **True Autonomy**: With Plan & Execute mode and Self-Healing Error Loops, Banana Code doesn't just suggest code; it tries, fails, reads the errors, and fixes its own mistakes automatically.
+- **Terminal Native**: It brings the power of full workspace awareness, web search, and surgical file patching directly to your CLI without forcing you to change your IDE.
+
 ## ✨ Key Features
 
-- **Multi-Provider Support**: Switch between **Google Gemini**, **Anthropic Claude**, **OpenAI**, and **Ollama (Local)** effortlessly.
-- **
+- **Multi-Provider Support**: Switch between **Google Gemini**, **Anthropic Claude**, **OpenAI**, **Ollama Cloud**, and **Ollama (Local)** effortlessly.
+- **Plan & Agent Modes**: Use `/agent` for instant execution, or `/plan` to make the AI draft a step-by-step implementation plan for your approval before it touches any code.
+- **Self-Healing Loop**: If the AI runs a command (like running tests) and it fails, Banana Code automatically feeds the error trace back to the AI so it can fix its own code.
+- **Agent Skills**: Teach your AI specialized workflows. Drop a `SKILL.md` file in your config folder, and the AI will automatically activate it when relevant.
+- **Smart Context**: Use `@file/path.js` to instantly inject file contents into your prompt, or use `/settings` to auto-feed your entire workspace structure (respecting `.gitignore`).
+- **Web Research**: Deep integration with DuckDuckGo APIs and Scrapers to give the AI real-time access to the internet.
 - **Persistent Sessions**: All chats are saved to `~/.config/banana-code/chats/`. Resume any session with a single command.
-- **
-- **Security First**: A dedicated permission model ensures no tool is executed without your explicit approval.
-- **Keyless Playground**: Integration with OpenAI Codex for seamless, keyless access to GPT-4o and beyond.
+- **Syntax Highlighting**: Beautiful, readable markdown output with syntax coloring directly in your terminal.
 
 ## 🚀 Installation
 
@@ -96,6 +104,28 @@ Banana Code can assist you by:
 - **`search_files`**: Performing regex searches across your project.
 - **`list_directory`**: Exploring folder structures.
 
+### 🧠 Agent Skills
+Banana Code supports custom Agent Skills. Skills are like "onboarding guides" that teach the AI how to do specific tasks, use certain APIs, or follow your company's coding standards.
+
+When the AI detects a task that matches a skill's description, it automatically activates the skill and loads its specialized instructions.
+
+**How to create a Skill:**
+1. Create a folder in your config directory: `~/.config/banana-code/skills/my-react-skill/`
+2. Create a `SKILL.md` file inside that folder using this exact format:
+
+```markdown
+---
+name: my-react-skill
+description: Use this skill whenever you are asked to build or edit a React component.
+---
+
+# React Guidelines
+- Always use functional components.
+- Always use Tailwind CSS for styling.
+- Do not use default exports.
+```
+3. Type `/skills` in Banana Code to verify it loaded. The AI will now follow these rules automatically!
+
 ## 🔐 Privacy & Security
 
 Banana Code is built with transparency in mind:
package/package.json
CHANGED
@@ -1,7 +1,19 @@
 {
   "name": "@banaxi/banana-code",
-  "version": "1.
+  "version": "1.3.0",
   "description": "🍌 BananaCode",
+  "keywords": [
+    "banana",
+    "ai",
+    "cli",
+    "agent",
+    "coding-assistant",
+    "terminal",
+    "gemini",
+    "claude",
+    "openai",
+    "ollama"
+  ],
   "type": "module",
   "license": "GPL-3.0-or-later",
   "bin": {
package/src/config.js
CHANGED
@@ -6,7 +6,7 @@ import { execSync } from 'child_process';
 import fsSync from 'fs';
 import chalk from 'chalk';
 
-import { GEMINI_MODELS, CLAUDE_MODELS, OPENAI_MODELS, CODEX_MODELS, OLLAMA_CLOUD_MODELS } from './constants.js';
+import { GEMINI_MODELS, CLAUDE_MODELS, OPENAI_MODELS, CODEX_MODELS, OLLAMA_CLOUD_MODELS, MISTRAL_MODELS } from './constants.js';
 
 const CONFIG_DIR = path.join(os.homedir(), '.config', 'banana-code');
 const CONFIG_FILE = path.join(CONFIG_DIR, 'config.json');
@@ -41,10 +41,10 @@ export async function setupProvider(provider, config = {}) {
       default: config.apiKey
     });
     config.model = await select({
-      message: 'Select a model:',
-      choices:
+      message: 'Select a Gemini model:',
+      choices: GEMINI_MODELS
     });
-
+  } else if (provider === 'ollama_cloud') {
     config.apiKey = await input({
       message: 'Enter your OLLAMA_API_KEY (from ollama.com):',
       default: config.apiKey
@@ -65,7 +65,7 @@ export async function setupProvider(provider, config = {}) {
      });
     }
     config.model = selectedModel;
-  } else if (provider === '
+  } else if (provider === 'claude') {
     config.apiKey = await input({
       message: 'Enter your ANTHROPIC_API_KEY:',
       default: config.apiKey
@@ -74,6 +74,27 @@ export async function setupProvider(provider, config = {}) {
       message: 'Select a Claude model:',
       choices: CLAUDE_MODELS
     });
+  } else if (provider === 'mistral') {
+    config.apiKey = await input({
+      message: 'Enter your MISTRAL_API_KEY (from console.mistral.ai):',
+      default: config.apiKey
+    });
+
+    const choices = [...MISTRAL_MODELS, { name: chalk.magenta('✎ Enter custom model ID...'), value: 'CUSTOM_ID' }];
+    let selectedModel = await select({
+      message: 'Select a Mistral model:',
+      choices,
+      loop: false,
+      pageSize: Math.max(choices.length, 15)
+    });
+
+    if (selectedModel === 'CUSTOM_ID') {
+      selectedModel = await input({
+        message: 'Enter the exact Mistral model ID (e.g., mistral-large-latest):',
+        validate: (v) => v.trim().length > 0 || 'Model ID cannot be empty'
+      });
+    }
+    config.model = selectedModel;
   } else if (provider === 'openai') {
     const authMethod = await select({
       message: 'How would you like to authenticate with OpenAI?',
@@ -158,6 +179,7 @@ async function runSetupWizard() {
       { name: 'Google Gemini', value: 'gemini' },
       { name: 'Anthropic Claude', value: 'claude' },
       { name: 'OpenAI', value: 'openai' },
+      { name: 'Mistral AI', value: 'mistral' },
       { name: 'Ollama Cloud', value: 'ollama_cloud' },
       { name: 'Ollama (Local)', value: 'ollama' }
     ]
package/src/constants.js
CHANGED
@@ -30,6 +30,15 @@ export const OLLAMA_CLOUD_MODELS = [
   { name: 'Llama 3.1 405B (Cloud)', value: 'llama3.1:405b-cloud' }
 ];
 
+export const MISTRAL_MODELS = [
+  { name: 'Mistral Large (Latest)', value: 'mistral-large-latest' },
+  { name: 'Mistral Medium (Latest)', value: 'mistral-medium-latest' },
+  { name: 'Mistral Small (Latest)', value: 'mistral-small-latest' },
+  { name: 'Codestral (Latest)', value: 'codestral-latest' },
+  { name: 'Mistral Nemo', value: 'open-mistral-nemo' },
+  { name: 'Pixtral 12B', value: 'pixtral-12b-2409' }
+];
+
 export const CODEX_MODELS = [
   { name: 'GPT-5.4 (Newest)', value: 'gpt-5.4' },
   { name: 'GPT-5.3 Codex', value: 'gpt-5.3-codex' },
package/src/index.js
CHANGED
@@ -1,5 +1,6 @@
 import readline from 'readline';
 import chalk from 'chalk';
+import ora from 'ora';
 import { loadConfig, saveConfig, setupProvider } from './config.js';
 import { runStartup } from './startup.js';
 import { getSessionPermissions } from './permissions.js';
@@ -9,9 +10,11 @@ import { ClaudeProvider } from './providers/claude.js';
 import { OpenAIProvider } from './providers/openai.js';
 import { OllamaProvider } from './providers/ollama.js';
 import { OllamaCloudProvider } from './providers/ollamaCloud.js';
+import { MistralProvider } from './providers/mistral.js';
 
 import { loadSession, saveSession, generateSessionId, getLatestSessionId, listSessions } from './sessions.js';
 import { printMarkdown } from './utils/markdown.js';
+import { estimateConversationTokens } from './utils/tokens.js';
 
 let config;
 let providerInstance;
@@ -26,6 +29,7 @@ function createProvider(overrideConfig = null) {
     case 'gemini': return new GeminiProvider(activeConfig);
     case 'claude': return new ClaudeProvider(activeConfig);
     case 'openai': return new OpenAIProvider(activeConfig);
+    case 'mistral': return new MistralProvider(activeConfig);
     case 'ollama_cloud': return new OllamaCloudProvider(activeConfig);
     case 'ollama': return new OllamaProvider(activeConfig);
     default:
@@ -49,20 +53,21 @@ async function handleSlashCommand(command) {
           { name: 'Google Gemini', value: 'gemini' },
           { name: 'Anthropic Claude', value: 'claude' },
           { name: 'OpenAI', value: 'openai' },
+          { name: 'Mistral AI', value: 'mistral' },
           { name: 'Ollama Cloud', value: 'ollama_cloud' },
           { name: 'Ollama (Local)', value: 'ollama' }
         ]
       });
     }
 
-    if (['gemini', 'claude', 'openai', 'ollama_cloud', 'ollama'].includes(newProv)) {
+    if (['gemini', 'claude', 'openai', 'mistral', 'ollama_cloud', 'ollama'].includes(newProv)) {
      // Use the shared setup logic to get keys/models
      config = await setupProvider(newProv, config);
      await saveConfig(config);
      providerInstance = createProvider();
      console.log(chalk.green(`Switched provider to ${newProv} (${config.model}).`));
    } else {
-      console.log(chalk.yellow(`Usage: /provider <gemini|claude|openai|ollama>`));
+      console.log(chalk.yellow(`Usage: /provider <gemini|claude|openai|ollama_cloud|ollama>`));
    }
    break;
  case '/model':
@@ -70,13 +75,15 @@ async function handleSlashCommand(command) {
    if (!newModel) {
      // Interactive selection
      const { select } = await import('@inquirer/prompts');
-      const { GEMINI_MODELS, CLAUDE_MODELS, OPENAI_MODELS, CODEX_MODELS, OLLAMA_CLOUD_MODELS } = await import('./constants.js');
+      const { GEMINI_MODELS, CLAUDE_MODELS, OPENAI_MODELS, CODEX_MODELS, OLLAMA_CLOUD_MODELS, MISTRAL_MODELS } = await import('./constants.js');
 
      let choices = [];
      if (config.provider === 'gemini') choices = GEMINI_MODELS;
      else if (config.provider === 'claude') choices = CLAUDE_MODELS;
      else if (config.provider === 'openai') {
        choices = config.authType === 'oauth' ? CODEX_MODELS : OPENAI_MODELS;
+      } else if (config.provider === 'mistral') {
+        choices = MISTRAL_MODELS;
      } else if (config.provider === 'ollama_cloud') {
        choices = OLLAMA_CLOUD_MODELS;
      } else if (config.provider === 'ollama') {
@@ -92,7 +99,7 @@ async function handleSlashCommand(command) {
 
      if (choices.length > 0) {
        const finalChoices = [...choices];
-        if (config.provider === 'ollama_cloud') {
+        if (config.provider === 'ollama_cloud' || config.provider === 'mistral') {
          finalChoices.push({ name: chalk.magenta('✎ Enter custom model ID...'), value: 'CUSTOM_ID' });
        }
 
@@ -130,11 +137,86 @@ async function handleSlashCommand(command) {
      providerInstance = createProvider(); // fresh instance = clear history
      console.log(chalk.green('Chat history cleared.'));
      break;
+    case '/clean':
+      if (!config.betaTools || !config.betaTools.includes('clean_command')) {
+        console.log(chalk.yellow("The /clean command is a beta feature. You need to enable it in the /beta menu first."));
+        break;
+      }
+      const msgCount = providerInstance.messages ? providerInstance.messages.length : 0;
+      if (msgCount <= 2) {
+        console.log(chalk.yellow("Not enough history to summarize."));
+        break;
+      }
+
+      console.log(chalk.cyan("Summarizing context to save tokens..."));
+      const summarySpinner = ora({ text: 'Compressing history...', color: 'yellow', stream: process.stdout }).start();
+
+      try {
+        // Temporarily disable terminal formatting for the summary request
+        const originalUseMarked = config.useMarkedTerminal;
+        config.useMarkedTerminal = false;
+
+        // Create a temporary prompt asking for a summary
+        const summaryPrompt = "SYSTEM INSTRUCTION: Please provide a highly concise summary of our entire conversation so far. Focus ONLY on the overall goal, the current state of the project, any important decisions made, and what we were about to do next. Do not include pleasantries. This summary will be used as your memory going forward.";
+
+        // Ask the AI to summarize
+        const summary = await providerInstance.sendMessage(summaryPrompt);
+
+        // Restore settings
+        config.useMarkedTerminal = originalUseMarked;
+        summarySpinner.stop();
+
+        // Re-initialize the provider to wipe old history
+        providerInstance = createProvider();
+
+        // Inject the summary as the first message after the system prompt
+        const summaryMemory = `[PREVIOUS CONVERSATION SUMMARY]\n${summary}`;
+
+        if (config.provider === 'gemini') {
+          providerInstance.messages.push({ role: 'user', parts: [{ text: summaryMemory }] });
+          providerInstance.messages.push({ role: 'model', parts: [{ text: "I have stored the summary of our previous conversation in my memory." }] });
+        } else if (config.provider === 'claude') {
+          providerInstance.messages.push({ role: 'user', content: summaryMemory });
+          providerInstance.messages.push({ role: 'assistant', content: "I have stored the summary of our previous conversation in my memory." });
+        } else {
+          providerInstance.messages.push({ role: 'user', content: summaryMemory });
+          providerInstance.messages.push({ role: 'assistant', content: "I have stored the summary of our previous conversation in my memory." });
+        }
+
+        console.log(chalk.green(`\nContext successfully compressed!`));
+        if (config.debug) {
+          console.log(chalk.gray(`\n[Saved Summary]:\n${summary}\n`));
+        }
+
+        await saveSession(currentSessionId, {
+          provider: config.provider,
+          model: config.model || providerInstance.modelName,
+          messages: providerInstance.messages
+        });
+
+      } catch (err) {
+        summarySpinner.stop();
+        console.log(chalk.red(`Failed to compress context: ${err.message}`));
+      }
+      break;
    case '/context':
      let length = 0;
-
-
-
+      let messagesForEstimation = [];
+
+      if (providerInstance.messages) {
+        length = providerInstance.messages.length;
+        messagesForEstimation = providerInstance.messages;
+      } else if (providerInstance.chat) {
+        messagesForEstimation = await providerInstance.chat.getHistory();
+        length = messagesForEstimation.length;
+      }
+
+      const { estimateConversationTokens } = await import('./utils/tokens.js');
+      const estimatedTokens = estimateConversationTokens(messagesForEstimation);
+
+      console.log(chalk.cyan(`Current context:`));
+      console.log(chalk.cyan(`- Messages: ${length}`));
+      console.log(chalk.cyan(`- Estimated Tokens: ~${estimatedTokens.toLocaleString()}`));
      break;
    case '/permissions':
      const perms = getSessionPermissions();
@@ -149,18 +231,27 @@ async function handleSlashCommand(command) {
      const { TOOLS } = await import('./tools/registry.js');
      const betaTools = TOOLS.filter(t => t.beta);
 
-
-
+      let choices = betaTools.map(t => ({
+        name: t.label || t.name,
+        value: t.name,
+        checked: (config.betaTools || []).includes(t.name)
+      }));
+
+      // Add beta commands that aren't tools
+      choices.push({
+        name: '/clean command (Context Compression)',
+        value: 'clean_command',
+        checked: (config.betaTools || []).includes('clean_command')
+      });
+
+      if (choices.length === 0) {
+        console.log(chalk.yellow("No beta features available."));
        break;
      }
 
      const enabledBetaTools = await checkbox({
-        message: 'Select beta
-        choices:
-          name: t.label || t.name,
-          value: t.name,
-          checked: (config.betaTools || []).includes(t.name)
-        }))
+        message: 'Select beta features to activate (Space to toggle, Enter to confirm):',
+        choices: choices
      });
 
      if (enabledBetaTools.includes('duck_duck_go_scrape') && !(config.betaTools || []).includes('duck_duck_go_scrape')) {
@@ -182,7 +273,13 @@ async function handleSlashCommand(command) {
 
      config.betaTools = enabledBetaTools;
      await saveConfig(config);
-
+      if (providerInstance) {
+        const savedMessages = providerInstance.messages;
+        providerInstance = createProvider(); // Re-init to update tools
+        providerInstance.messages = savedMessages;
+      } else {
+        providerInstance = createProvider();
+      }
      console.log(chalk.green(`Beta tools updated: ${enabledBetaTools.join(', ') || 'none'}`));
      break;
    case '/settings':
@@ -199,22 +296,80 @@ async function handleSlashCommand(command) {
          name: 'Use syntax highlighting for AI output (requires waiting for full response)',
          value: 'useMarkedTerminal',
          checked: config.useMarkedTerminal || false
+        },
+        {
+          name: 'Always show current token count in status bar',
+          value: 'showTokenCount',
+          checked: config.showTokenCount || false
        }
      ]
      });
 
      config.autoFeedWorkspace = enabledSettings.includes('autoFeedWorkspace');
      config.useMarkedTerminal = enabledSettings.includes('useMarkedTerminal');
+      config.showTokenCount = enabledSettings.includes('showTokenCount');
      await saveConfig(config);
-
+      if (providerInstance) {
+        const savedMessages = providerInstance.messages;
+        providerInstance = createProvider(); // Re-init to update tools/config
+        providerInstance.messages = savedMessages;
+      } else {
+        providerInstance = createProvider();
+      }
      console.log(chalk.green(`Settings updated.`));
      break;
    case '/debug':
      config.debug = !config.debug;
      await saveConfig(config);
-
+      if (providerInstance) {
+        const savedMessages = providerInstance.messages;
+        providerInstance = createProvider(); // Re-init to pass debug flag
+        providerInstance.messages = savedMessages;
+      } else {
+        providerInstance = createProvider();
+      }
      console.log(chalk.magenta(`Debug mode ${config.debug ? 'enabled' : 'disabled'}.`));
      break;
+    case '/skills':
+      const { getAvailableSkills } = await import('./utils/skills.js');
+      const skills = getAvailableSkills();
+      if (skills.length === 0) {
+        console.log(chalk.yellow("No skills found."));
+        const os = await import('os');
+        const path = await import('path');
+        const skillsDir = path.join(os.homedir(), '.config', 'banana-code', 'skills');
+        console.log(chalk.gray(`Create skill directories with a SKILL.md file in ${skillsDir}`));
+      } else {
+        console.log(chalk.cyan.bold("\nLoaded Skills:"));
+        skills.forEach(skill => {
+          console.log(chalk.green(`- ${skill.id}`) + `: ${skill.description}`);
+        });
+      }
+      break;
+    case '/plan':
+      config.planMode = true;
+      await saveConfig(config);
+      if (providerInstance) {
+        const savedMessages = providerInstance.messages;
+        providerInstance = createProvider();
+        providerInstance.messages = savedMessages;
+      } else {
+        providerInstance = createProvider();
+      }
+      console.log(chalk.magenta(`Plan mode enabled. For significant changes, the AI will now propose an implementation plan before writing code.`));
+      break;
+    case '/agent':
+      config.planMode = false;
+      await saveConfig(config);
+      if (providerInstance) {
+        const savedMessages = providerInstance.messages;
+        providerInstance = createProvider();
+        providerInstance.messages = savedMessages;
+      } else {
+        providerInstance = createProvider();
+      }
+      console.log(chalk.green(`Agent mode enabled. The AI will make changes directly.`));
+      break;
    case '/chats':
      const sessions = await listSessions();
      if (sessions.length === 0) {
@@ -231,14 +386,18 @@ async function handleSlashCommand(command) {
    case '/help':
      console.log(chalk.yellow(`
 Available commands:
-  /provider <name> - Switch AI provider (gemini, claude, openai, ollama)
+  /provider <name> - Switch AI provider (gemini, claude, openai, ollama_cloud, ollama)
   /model [name] - Switch model within current provider (opens menu if name omitted)
   /chats - List persistent chat sessions
   /clear - Clear chat history
+  /clean - Compress chat history into a summary to save tokens
   /context - Show current context window size
   /permissions - List session-approved permissions
   /beta - Manage beta features and tools
   /settings - Manage app settings (workspace auto-feed, etc)
+  /skills - List loaded agent skills
+  /plan - Enable Plan Mode (AI proposes a plan for big changes)
+  /agent - Enable Agent Mode (default, AI edits directly)
   /debug - Toggle debug mode (show tool results)
   /help - Show all commands
   /exit - Quit Banana Code
@@ -278,7 +437,7 @@ function drawPromptBox(inputText, cursorPos) {
  const placeholder = 'Type your message or @path/to/file';
  const prefix = ' > ';
 
-  const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + inputText);
+  const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + chalk.white(inputText));
  const totalChars = (prefix.length + Math.max(inputText.length, placeholder.length));
  const rows = Math.ceil(totalChars / width) || 1;
 
@@ -298,8 +457,26 @@ function drawPromptBox(inputText, cursorPos) {
  // Redraw status bar and separator (they are always below the prompt)
  const modelDisplay = providerInstance ? providerInstance.modelName : (config.model || 'unknown');
  const providerDisplay = config.provider.toUpperCase();
-  const
-
+  const modeDisplay = config.planMode ? chalk.magenta('PLAN MODE') : chalk.green('AGENT MODE');
+
+  let tokenDisplay = '';
+  if (config.showTokenCount && providerInstance) {
+    let msgs = providerInstance.messages || [];
+    // Support for Ollama chat history format if different
+    if (!providerInstance.messages && typeof providerInstance.chat?.getHistory === 'function') {
+      msgs = providerInstance.chat.getHistory(); // Note: this is async normally, but we use an approximation here or just skip it if it's strictly async. For now, assume providerInstance.messages is the standard.
+    }
+    const tokens = estimateConversationTokens(msgs);
+    let color = chalk.green;
+    if (tokens >= 128000) color = chalk.red;
+    else if (tokens >= 86000) color = chalk.hex('#FFA500'); // Orange
+    else if (tokens >= 64000) color = chalk.yellow;
+
+    tokenDisplay = ` / Tokens: ${color(tokens.toLocaleString())}`;
+  }
+
+  const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)} / ${modeDisplay}${tokenDisplay}`;
+  const rightText = '/help for shortcuts ';
  const leftStripped = leftText.replace(/\x1b\[[0-9;]*m/g, '');
  const midPad = Math.max(0, width - leftStripped.length - rightText.length);
  const statusLine = chalk.gray(leftText + ' '.repeat(midPad) + rightText);
@@ -325,7 +502,7 @@ function drawPromptBoxInitial(inputText) {
  const placeholder = 'Type your message or @path/to/file';
  const prefix = ' > ';
 
-  const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + inputText);
+  const visibleText = (inputText.length === 0) ? (prefix + chalk.gray(placeholder)) : (prefix + chalk.white(inputText));
  const totalChars = (prefix.length + Math.max(inputText.length, placeholder.length));
  const rows = Math.ceil(totalChars / width) || 1;
 
@@ -338,11 +515,29 @@ function drawPromptBoxInitial(inputText) {
    process.stdout.write(userBg(padLine(lineText, width)) + '\n');
  }
 
-  // Status bar: Current Provider / Model + right-aligned "
+  // Status bar: Current Provider / Model + right-aligned "/help for shortcuts"
  const modelDisplay = providerInstance ? providerInstance.modelName : (config.model || 'unknown');
  const providerDisplay = config.provider.toUpperCase();
-  const
-
+  const modeDisplay = config.planMode ? chalk.magenta('PLAN MODE') : chalk.green('AGENT MODE');
+
+  let tokenDisplay = '';
+  if (config.showTokenCount && providerInstance) {
+    let msgs = providerInstance.messages || [];
+    // Support for Ollama chat history format if different
+    if (!providerInstance.messages && typeof providerInstance.chat?.getHistory === 'function') {
+      msgs = providerInstance.chat.getHistory(); // Note: this is async normally, but we use an approximation here or just skip it if it's strictly async. For now, assume providerInstance.messages is the standard.
+    }
+    const tokens = estimateConversationTokens(msgs);
+    let color = chalk.green;
+    if (tokens >= 128000) color = chalk.red;
+    else if (tokens >= 86000) color = chalk.hex('#FFA500'); // Orange
+    else if (tokens >= 64000) color = chalk.yellow;
+
+    tokenDisplay = ` / Tokens: ${color(tokens.toLocaleString())}`;
+  }
+
+  const leftText = ` Provider: ${chalk.cyan(providerDisplay)} / Model: ${chalk.yellow(modelDisplay)} / ${modeDisplay}${tokenDisplay}`;
+  const rightText = '/help for shortcuts ';
 
  const leftStripped = leftText.replace(/\x1b\[[0-9;]*m/g, '');
  const midPad = Math.max(0, width - leftStripped.length - rightText.length);
package/src/prompt.js
CHANGED

@@ -1,5 +1,6 @@
 import os from 'os';
 import { getAvailableTools } from './tools/registry.js';
+import { getAvailableSkills } from './utils/skills.js';
 
 export function getSystemPrompt(config = {}) {
   const platform = os.platform();
@@ -16,6 +17,7 @@ export function getSystemPrompt(config = {}) {
   const availableToolsList = getAvailableTools(config);
   const availableToolsNames = availableToolsList.map(t => t.name).join(', ');
   const hasPatchTool = availableToolsList.some(t => t.name === 'patch_file');
+  const skills = getAvailableSkills();
 
   let prompt = `You are Banana Code, a terminal-based AI coding assistant running on ${osDescription}. You help users write, debug, and understand code. You have access to tools: ${availableToolsNames}.
 
@@ -27,6 +29,39 @@ SAFETY RULES:
 
 Always use tools when they would help. Be concise but thorough. `;
 
+  if (skills && skills.length > 0) {
+    prompt += `\n\n# Available Agent Skills\n\nYou have access to the following specialized skills. To activate a skill and receive its detailed instructions, call the \`activate_skill\` tool with the skill's name.\n\n<available_skills>\n`;
+    for (const skill of skills) {
+      prompt += `  <skill>\n    <name>${skill.id}</name>\n    <description>${skill.description}</description>\n  </skill>\n`;
+    }
+    prompt += `</available_skills>\n\nOnce a skill is activated, its instructions and resources are returned wrapped in <activated_skill> tags. You MUST treat the content within <instructions> as expert procedural guidance for the duration of the task.\n`;
+  }
+
+  const hasDelegateTool = availableToolsList.some(t => t.name === 'delegate_task');
+  if (hasDelegateTool) {
+    prompt += `
+\n# Sub-Agent Delegation
+You have the ability to spawn specialized sub-agents to handle complex sub-tasks using the \`delegate_task\` tool.
+- Use **researcher** for deep codebase exploration or fact-finding.
+- Use **coder** for implementing specific features or complex bug fixes.
+- Use **reviewer** for analyzing code quality or security.
+- Use **generalist** for any other multi-step sub-task.
+Delegation is highly recommended for tasks that would otherwise bloat your current conversation context. The results of the sub-agent will be returned to you as a summary.
+`;
+  }
+
+  if (config.planMode) {
+    prompt += `
+[PLAN MODE ENABLED]
+The user is operating in "Plan Mode".
+- For very small, trivial changes (like fixing a typo or a one-line bug), you may execute the change directly using your tools.
+- For ANY change that has a significant impact, modifies multiple areas, or adds a new feature, you MUST NOT write or patch code immediately.
+- Instead, you MUST output a detailed "Implementation Plan" outlining the files you will change and the specific steps you will take.
+- Stop and ask the user: "Does this plan look good, or would you like to make any changes?"
+- ONLY proceed to use the 'write_file' or 'patch_file' tools AFTER the user has explicitly approved the plan.
+`;
+  }
+
   if (hasPatchTool) {
     prompt += `
 When editing existing files, PREFER using the 'patch_file' tool for surgical, targeted changes instead of 'write_file', especially for large files. This prevents accidental truncation and is much more efficient. Only use 'write_file' when creating new files or when making very extensive changes to a small file.`;
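The skills loop above assembles an XML-ish `<available_skills>` listing into the system prompt. A self-contained sketch of that rendering step (the hypothetical `renderSkillsBlock` takes plain objects shaped like `getAvailableSkills()` results; the sample skill is invented for illustration):

```javascript
// Builds the <available_skills> fragment injected into the system prompt.
function renderSkillsBlock(skills) {
  let out = '<available_skills>\n';
  for (const skill of skills) {
    out += `  <skill>\n    <name>${skill.id}</name>\n    <description>${skill.description}</description>\n  </skill>\n`;
  }
  return out + '</available_skills>';
}

const xml = renderSkillsBlock([
  { id: 'pdf-tools', description: 'Fill and merge PDF forms' }  // sample entry
]);
console.log(xml);
```

Each skill contributes one `<skill>` element, so the model can reference skills by the `<name>` it later passes to `activate_skill`.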
package/src/providers/claude.js
CHANGED

@@ -114,7 +114,7 @@ export class ClaudeProvider {
     const toolResultContent = [];
     for (const call of toolCalls) {
       console.log(chalk.yellow(`\n[Banana Calling Tool: ${call.name}]`));
-      const res = await executeTool(call.name, call.input);
+      const res = await executeTool(call.name, call.input, this.config);
       if (this.config.debug) {
         console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
       }
package/src/providers/gemini.js
CHANGED

@@ -135,7 +135,7 @@ export class GeminiProvider {
         hasToolCalls = true;
         const call = part.functionCall;
         console.log(chalk.yellow(`\n[Banana Calling Tool: ${call.name}]`));
-        const res = await executeTool(call.name, call.args);
+        const res = await executeTool(call.name, call.args, this.config);
         if (this.config.debug) {
           console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
         }
package/src/providers/mistral.js
ADDED

@@ -0,0 +1,141 @@
+import OpenAI from 'openai';
+import { getAvailableTools, executeTool } from '../tools/registry.js';
+import chalk from 'chalk';
+import ora from 'ora';
+import { getSystemPrompt } from '../prompt.js';
+import { printMarkdown } from '../utils/markdown.js';
+
+export class MistralProvider {
+  constructor(config) {
+    this.config = config;
+    this.openai = new OpenAI({
+      apiKey: config.apiKey,
+      baseURL: 'https://api.mistral.ai/v1'
+    });
+    this.modelName = config.model || 'mistral-large-latest';
+    this.systemPrompt = getSystemPrompt(config);
+    this.messages = [{ role: 'system', content: this.systemPrompt }];
+    this.tools = getAvailableTools(config).map(t => ({
+      type: 'function',
+      function: {
+        name: t.name,
+        description: t.description,
+        parameters: t.parameters
+      }
+    }));
+  }
+
+  updateSystemPrompt(newPrompt) {
+    this.systemPrompt = newPrompt;
+    if (this.messages.length > 0 && this.messages[0].role === 'system') {
+      this.messages[0].content = newPrompt;
+    }
+  }
+
+  async sendMessage(message) {
+    this.messages.push({ role: 'user', content: message });
+
+    let spinner = ora({ text: 'Thinking...', color: 'yellow', stream: process.stdout }).start();
+    let finalResponse = '';
+
+    try {
+      while (true) {
+        let stream = null;
+        try {
+          stream = await this.openai.chat.completions.create({
+            model: this.modelName,
+            messages: this.messages,
+            tools: this.tools.length > 0 ? this.tools : undefined,
+            stream: true
+          });
+        } catch (e) {
+          spinner.stop();
+          console.error(chalk.red(`Mistral Request Error: ${e.message}`));
+          return `Error: ${e.message}`;
+        }
+
+        let chunkResponse = '';
+        let toolCalls = [];
+
+        for await (const chunk of stream) {
+          const delta = chunk.choices[0]?.delta;
+
+          if (delta?.content) {
+            if (spinner.isSpinning && !this.config.useMarkedTerminal) spinner.stop();
+            if (!this.config.useMarkedTerminal) {
+              process.stdout.write(chalk.cyan(delta.content));
+            }
+            chunkResponse += delta.content;
+            finalResponse += delta.content;
+          }
+
+          if (delta?.tool_calls) {
+            if (spinner.isSpinning) spinner.stop();
+            for (const tc of delta.tool_calls) {
+              if (tc.index === undefined) continue;
+              if (!toolCalls[tc.index]) {
+                toolCalls[tc.index] = { id: tc.id, type: 'function', function: { name: tc.function.name, arguments: '' } };
+              }
+              if (tc.function?.arguments) {
+                toolCalls[tc.index].function.arguments += tc.function.arguments;
+                // Visual feedback for streaming tool arguments
+                if (!spinner.isSpinning) {
+                  spinner = ora({ text: `Generating ${chalk.yellow(toolCalls[tc.index].function.name)} (${toolCalls[tc.index].function.arguments.length} bytes)...`, color: 'yellow', stream: process.stdout }).start();
+                } else {
+                  spinner.text = `Generating ${chalk.yellow(toolCalls[tc.index].function.name)} (${toolCalls[tc.index].function.arguments.length} bytes)...`;
+                }
+              }
+            }
+          }
+        }
+        if (spinner.isSpinning) spinner.stop();
+
+        if (chunkResponse) {
+          if (this.config.useMarkedTerminal) printMarkdown(chunkResponse);
+          this.messages.push({ role: 'assistant', content: chunkResponse });
+        }
+
+        toolCalls = toolCalls.filter(Boolean);
+
+        if (toolCalls.length === 0) {
+          console.log();
+          break;
+        }
+
+        this.messages.push({
+          role: 'assistant',
+          tool_calls: toolCalls,
+          content: chunkResponse || null
+        });
+
+        for (const call of toolCalls) {
+          console.log(chalk.yellow(`\n[Banana Calling Tool: ${call.function.name}]`));
+          let args = {};
+          try {
+            args = JSON.parse(call.function.arguments);
+          } catch (e) { }
+
+          const res = await executeTool(call.function.name, args, this.config);
+          if (this.config.debug) {
+            console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
+          }
+          console.log(chalk.yellow(`[Tool Result Received]\n`));
+
+          this.messages.push({
+            role: 'tool',
+            tool_call_id: call.id,
+            content: typeof res === 'string' ? res : JSON.stringify(res)
+          });
+        }
+
+        spinner = ora({ text: 'Processing tool results...', color: 'yellow', stream: process.stdout }).start();
+      }
+
+      return finalResponse;
+    } catch (err) {
+      if (spinner && spinner.isSpinning) spinner.stop();
+      console.error(chalk.red(`Mistral Runtime Error: ${err.message}`));
+      return `Error: ${err.message}`;
+    }
+  }
+}
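The trickiest part of the provider above is accumulating streamed tool calls: OpenAI-compatible APIs send each call in fragments keyed by `index`, and the JSON `arguments` string must be concatenated across chunks. A self-contained sketch of that accumulation pattern (the `accumulateToolCalls` helper and the sample deltas are illustrative, not part of the package):

```javascript
// Merge streaming tool_call deltas into complete call objects, keyed by index.
function accumulateToolCalls(deltas) {
  const calls = [];
  for (const tc of deltas) {
    if (tc.index === undefined) continue;
    if (!calls[tc.index]) {
      // First fragment carries the id and function name.
      calls[tc.index] = { id: tc.id, type: 'function', function: { name: tc.function.name, arguments: '' } };
    }
    if (tc.function?.arguments) {
      // Later fragments carry slices of the JSON arguments string.
      calls[tc.index].function.arguments += tc.function.arguments;
    }
  }
  return calls.filter(Boolean);
}

const merged = accumulateToolCalls([
  { index: 0, id: 'call_1', function: { name: 'read_file', arguments: '{"filep' } },
  { index: 0, function: { arguments: 'ath":"a.js"}' } }
]);
console.log(merged[0].function.arguments);
```

Only after the stream ends is the concatenated string safe to `JSON.parse`, which is why the real loop parses arguments in the tool-execution phase.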
package/src/providers/ollamaCloud.js
CHANGED

@@ -82,7 +82,7 @@ export class OllamaCloudProvider {
       const fn = call.function;
       console.log(chalk.yellow(`\n[Banana Calling Tool: ${fn.name}]`));
 
-      let res = await executeTool(fn.name, fn.arguments);
+      let res = await executeTool(fn.name, fn.arguments, this.config);
       if (this.config.debug) {
         console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
       }
package/src/providers/openai.js
CHANGED

@@ -121,7 +121,7 @@ export class OpenAIProvider {
           args = JSON.parse(call.function.arguments);
         } catch (e) { }
 
-        const res = await executeTool(call.function.name, args);
+        const res = await executeTool(call.function.name, args, this.config);
         if (this.config.debug) {
          console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
         }
@@ -371,7 +371,7 @@ export class OpenAIProvider {
           args = JSON.parse(call.function.arguments);
         } catch (e) { }
 
-        const res = await executeTool(call.function.name, args);
+        const res = await executeTool(call.function.name, args, this.config);
         if (this.config.debug) {
           console.log(chalk.gray(`[DEBUG] Tool Result: ${typeof res === 'string' ? res : JSON.stringify(res, null, 2)}`));
         }
package/src/tools/activateSkill.js
ADDED

@@ -0,0 +1,22 @@
+import { getAvailableSkills } from '../utils/skills.js';
+
+export async function activateSkill({ skillName }) {
+  const skills = getAvailableSkills();
+  // Match by ID or Name
+  const skill = skills.find(s => s.id === skillName || s.name === skillName);
+
+  if (!skill) {
+    return `Error: Skill '${skillName}' not found. Available skills: ${skills.map(s => s.name).join(', ')}`;
+  }
+
+  // The format expected by the AI agent
+  let output = `<activated_skill>\n`;
+  output += `<instructions>\n${skill.instructions}\n</instructions>\n`;
+  output += `<available_resources>\n`;
+  output += `- location: ${skill.path}\n`;
+  output += `  (Use list_directory and read_file to access bundled scripts, references, or assets)\n`;
+  output += `</available_resources>\n`;
+  output += `</activated_skill>`;
+
+  return output;
+}
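The tool above wraps a skill in `<activated_skill>` tags before handing it back to the model. A standalone sketch of just the payload-rendering step, with a fabricated skill object in place of a real `getAvailableSkills()` entry (the path is hypothetical):

```javascript
// Render the <activated_skill> payload the agent receives after activation.
function renderActivatedSkill(skill) {
  let output = '<activated_skill>\n';
  output += `<instructions>\n${skill.instructions}\n</instructions>\n`;
  output += '<available_resources>\n';
  output += `- location: ${skill.path}\n`;
  output += '  (Use list_directory and read_file to access bundled scripts, references, or assets)\n';
  output += '</available_resources>\n';
  output += '</activated_skill>';
  return output;
}

const payload = renderActivatedSkill({
  instructions: 'Run the linter before committing.',  // sample skill body
  path: '/tmp/skills/lint'                             // hypothetical location
});
console.log(payload);
```

The system prompt tells the model to treat `<instructions>` content as procedural guidance, so this wrapper is the contract between the tool and the prompt.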
package/src/tools/delegateTask.js
ADDED

@@ -0,0 +1,100 @@
+import chalk from 'chalk';
+import ora from 'ora';
+import { getAvailableTools, executeTool } from './registry.js';
+import { getSystemPrompt } from '../prompt.js';
+import { requestPermission } from '../permissions.js';
+
+// Specialist prompts to guide sub-agents
+const SPECIALIST_PROMPTS = {
+  researcher: "You are a Research Specialist. Your goal is to explore the codebase, find information, and answer specific questions. Do not make any file changes. Use tools like search_files, list_directory, and read_file to gather facts.",
+  coder: "You are a Coding Specialist. Your goal is to implement specific logic or fix bugs as requested. Focus on writing high-quality, idiomatic code using patch_file and write_file.",
+  reviewer: "You are a Code Reviewer. Your goal is to analyze provided code for bugs, security vulnerabilities, or style issues. Provide a detailed report of your findings.",
+  generalist: "You are a Generalist Sub-Agent. Complete the assigned task as efficiently as possible using all available tools."
+};
+
+/**
+ * Tool that allows the main agent to delegate a sub-task to a specialized agent.
+ */
+export async function delegateTask({ task, agentType = 'generalist', contextFiles = [] }, mainConfig) {
+  const perm = await requestPermission('Delegate Task', `${agentType} specialist: ${task}`);
+  if (!perm.allowed) {
+    return `User denied permission to delegate task to ${agentType} specialist.`;
+  }
+
+  const spinner = ora({
+    text: `Delegating to ${chalk.magenta(agentType)} specialist...`,
+    color: 'magenta',
+    stream: process.stdout
+  }).start();
+
+  try {
+    // 1. Setup the sub-agent config (inherit from main, but could be customized)
+    const subConfig = { ...mainConfig };
+
+    // Use a dynamic import to avoid circular dependency with index.js if needed,
+    // but here we can just manually create the provider based on current config.
+    const { GeminiProvider } = await import('../providers/gemini.js');
+    const { ClaudeProvider } = await import('../providers/claude.js');
+    const { OpenAIProvider } = await import('../providers/openai.js');
+    const { MistralProvider } = await import('../providers/mistral.js');
+    const { OllamaProvider } = await import('../providers/ollama.js');
+    const { OllamaCloudProvider } = await import('../providers/ollamaCloud.js');
+
+    const createSubProvider = (cfg) => {
+      switch (cfg.provider) {
+        case 'gemini': return new GeminiProvider(cfg);
+        case 'claude': return new ClaudeProvider(cfg);
+        case 'openai': return new OpenAIProvider(cfg);
+        case 'mistral': return new MistralProvider(cfg);
+        case 'ollama_cloud': return new OllamaCloudProvider(cfg);
+        case 'ollama': return new OllamaProvider(cfg);
+        default: return new OllamaProvider(cfg);
+      }
+    };
+
+    const subProvider = createSubProvider(subConfig);
+
+    // 2. Customize the system prompt for the specialist
+    const basePrompt = getSystemPrompt(subConfig);
+    const specialistInstruction = SPECIALIST_PROMPTS[agentType] || SPECIALIST_PROMPTS.generalist;
+
+    // Inject specialist instructions at the start
+    subProvider.updateSystemPrompt(`${specialistInstruction}\n\n${basePrompt}`);
+
+    // 3. Prepare initial message with context
+    let initialMessage = `TASK: ${task}`;
+    if (contextFiles.length > 0) {
+      const fs = await import('fs');
+      initialMessage += "\n\nCONTEXT FILES:";
+      for (const file of contextFiles) {
+        try {
+          const content = fs.readFileSync(file, 'utf8');
+          initialMessage += `\n\n--- ${file} ---\n${content}`;
+        } catch (e) {
+          initialMessage += `\n\n(Error reading context file ${file}: ${e.message})`;
+        }
+      }
+    }
+
+    // 4. Run the sub-agent message loop
+    // We'll give it a limit of 5 turns to prevent infinite loops between agents
+    let turns = 0;
+    let finalResponse = '';
+
+    spinner.text = `Sub-agent (${agentType}) is working on the task...`;
+
+    // For the sub-agent, we want to capture its sendMessage result
+    // Note: Sub-agents run silently (their output isn't printed unless debug is on)
+    // to prevent terminal clutter.
+    finalResponse = await subProvider.sendMessage(initialMessage);
+
+    spinner.stop();
+    console.log(chalk.magenta(`[Sub-Agent ${agentType} task complete]`));
+
+    return `SUB-AGENT RESULT:\n${finalResponse}`;
+
+  } catch (err) {
+    if (spinner.isSpinning) spinner.stop();
+    return `Error in delegation: ${err.message}`;
+  }
+}
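Step 3 above inlines each context file into the sub-agent's first message. A pure-function sketch of that assembly, with a name-to-content map standing in for the real `fs.readFileSync` calls (the helper name and sample file are hypothetical):

```javascript
// Assemble the sub-agent's opening message: task line plus inlined context files.
function buildInitialMessage(task, files = {}) {
  let msg = `TASK: ${task}`;
  const names = Object.keys(files);
  if (names.length > 0) {
    msg += '\n\nCONTEXT FILES:';
    for (const name of names) {
      // Each file is delimited so the sub-agent can tell sources apart.
      msg += `\n\n--- ${name} ---\n${files[name]}`;
    }
  }
  return msg;
}

const msg = buildInitialMessage('Review the auth flow', { 'auth.js': 'export const token = null;' });
console.log(msg);
```

Inlining whole files trades tokens for reliability: the sub-agent starts with everything it needs instead of spending its first turns on `read_file` calls.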
package/src/tools/registry.js
CHANGED

@@ -7,6 +7,8 @@ import { listDirectory } from './listDirectory.js';
 import { duckDuckGo } from './duckDuckGo.js';
 import { duckDuckGoScrape } from './duckDuckGoScrape.js';
 import { patchFile } from './patchFile.js';
+import { activateSkill } from './activateSkill.js';
+import { delegateTask } from './delegateTask.js';
 
 export const TOOLS = [
   {
@@ -133,6 +135,40 @@ export const TOOLS = [
       },
       required: ['filepath', 'edits']
     }
+  },
+  {
+    name: 'activate_skill',
+    description: 'Activates a specialized agent skill by name. Returns the skill\'s instructions wrapped in <activated_skill> tags. These provide specialized guidance for the current task.',
+    parameters: {
+      type: 'object',
+      properties: {
+        skillName: { type: 'string', description: 'The name or ID of the skill to activate.' }
+      },
+      required: ['skillName']
+    }
+  },
+  {
+    name: 'delegate_task',
+    label: 'Sub-Agent Delegation (Beta)',
+    description: 'Spawns a specialized sub-agent to handle a specific sub-task. Use this for complex research, big code changes, or detailed reviews to keep the main context clean.',
+    beta: true,
+    parameters: {
+      type: 'object',
+      properties: {
+        task: { type: 'string', description: 'The specific, detailed instruction for the sub-agent.' },
+        agentType: {
+          type: 'string',
+          description: 'The type of specialist to spawn.',
+          enum: ['researcher', 'coder', 'reviewer', 'generalist']
+        },
+        contextFiles: {
+          type: 'array',
+          description: 'Optional list of file paths to provide as initial context to the sub-agent.',
+          items: { type: 'string' }
+        }
+      },
+      required: ['task']
+    }
   }
 ];
 
@@ -145,7 +181,7 @@ export function getAvailableTools(config = {}) {
   });
 }
 
-export async function executeTool(name, args) {
+export async function executeTool(name, args, config) {
   switch (name) {
     case 'execute_command': return await execCommand(args);
     case 'read_file': return await readFile(args);
@@ -156,6 +192,8 @@ export async function executeTool(name, args) {
     case 'duck_duck_go': return await duckDuckGo(args);
     case 'duck_duck_go_scrape': return await duckDuckGoScrape(args);
     case 'patch_file': return await patchFile(args);
+    case 'activate_skill': return await activateSkill(args);
+    case 'delegate_task': return await delegateTask(args, config);
     default: return `Unknown tool: ${name}`;
   }
 }
package/src/utils/skills.js
ADDED

@@ -0,0 +1,61 @@
+import fs from 'fs';
+import path from 'path';
+import os from 'os';
+
+const SKILLS_DIR = path.join(os.homedir(), '.config', 'banana-code', 'skills');
+
+/**
+ * Scans the skills directory and parses SKILL.md files.
+ * @returns {Array} List of discovered skills.
+ */
+export function getAvailableSkills() {
+  try {
+    if (!fs.existsSync(SKILLS_DIR)) {
+      fs.mkdirSync(SKILLS_DIR, { recursive: true });
+    }
+  } catch (e) {
+    return [];
+  }
+
+  let skills = [];
+  try {
+    const entries = fs.readdirSync(SKILLS_DIR, { withFileTypes: true });
+    for (const entry of entries) {
+      if (entry.isDirectory()) {
+        const skillPath = path.join(SKILLS_DIR, entry.name);
+        const mdPath = path.join(skillPath, 'SKILL.md');
+
+        if (fs.existsSync(mdPath)) {
+          try {
+            const content = fs.readFileSync(mdPath, 'utf8');
+            // Match YAML frontmatter between --- and ---
+            const match = content.match(/^---\r?\n([\s\S]*?)\r?\n---/);
+
+            if (match) {
+              const frontmatter = match[1];
+              const body = content.slice(match[0].length).trim();
+
+              // Simple YAML parsing via regex
+              const nameMatch = frontmatter.match(/name:\s*['"]?([^'"\n]+)['"]?/);
+              const descMatch = frontmatter.match(/description:\s*['"]?([^'"\n]+)['"]?/);
+
+              if (nameMatch && descMatch) {
+                skills.push({
+                  id: entry.name,
+                  name: nameMatch[1].trim(),
+                  description: descMatch[1].trim(),
+                  instructions: body,
+                  path: skillPath
+                });
+              }
+            }
+          } catch (err) {
+            // Skip corrupted or unreadable skills
+          }
+        }
+      }
+    }
+  } catch (e) {}
+
+  return skills;
+}
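The scanner above expects each skill's SKILL.md to start with a YAML frontmatter block and extracts `name` and `description` with the same regexes. A runnable sketch of that parse against a sample document (the `pdf-tools` skill is invented for illustration):

```javascript
// A minimal SKILL.md document: frontmatter between --- fences, then the body.
const sample = `---
name: pdf-tools
description: Fill and merge PDF forms
---
Use qpdf for merging.`;

// Same extraction steps as getAvailableSkills():
const match = sample.match(/^---\r?\n([\s\S]*?)\r?\n---/);
const frontmatter = match[1];
const body = sample.slice(match[0].length).trim();
const name = frontmatter.match(/name:\s*['"]?([^'"\n]+)['"]?/)[1].trim();
const desc = frontmatter.match(/description:\s*['"]?([^'"\n]+)['"]?/)[1].trim();

console.log(name, '-', desc);
```

Because this is regex parsing rather than a real YAML loader, it only handles single-line `name:` and `description:` values; multi-line YAML would be silently skipped.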
package/src/utils/tokens.js
ADDED

@@ -0,0 +1,44 @@
+/**
+ * Estimates the number of tokens in a given string.
+ * This is a rough approximation (1 token ≈ 4 characters or ~0.75 words)
+ * used to provide a quick estimate without needing heavy, provider-specific tokenizer libraries.
+ *
+ * @param {string} text - The input text to estimate tokens for.
+ * @returns {number} The estimated token count.
+ */
+export function estimateTokens(text) {
+  if (!text || typeof text !== 'string') return 0;
+
+  // A common heuristic: 1 token is roughly 4 English characters.
+  // For code, it can be denser, but this provides a reasonable ballpark.
+  return Math.ceil(text.length / 4);
+}
+
+/**
+ * Calculates the estimated token count for the entire conversation history.
+ *
+ * @param {Array} messages - The array of message objects.
+ * @returns {number} The estimated total tokens.
+ */
+export function estimateConversationTokens(messages) {
+  if (!Array.isArray(messages)) return 0;
+
+  let totalString = '';
+
+  // Stringify the entire message array to get a representation of its "weight"
+  // This includes system prompts, tool calls, and results.
+  try {
+    totalString = JSON.stringify(messages);
+  } catch (e) {
+    // Fallback if there are circular references (unlikely in simple message arrays)
+    messages.forEach(msg => {
+      if (typeof msg === 'string') totalString += msg;
+      else if (msg && typeof msg === 'object') {
+        if (msg.content) totalString += typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content);
+        if (msg.parts) totalString += JSON.stringify(msg.parts);
+      }
+    });
+  }
+
+  return estimateTokens(totalString);
+}
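The ~4-characters-per-token heuristic above can be exercised standalone (same logic reproduced here so it runs without the package):

```javascript
// Rough token estimate: ceil(length / 4), zero for non-strings.
function estimateTokens(text) {
  if (!text || typeof text !== 'string') return 0;
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('const x = 42;')); // 13 chars -> 4 estimated tokens
```

The estimate is deliberately cheap and conservative; since the status bar only needs coarse green/yellow/orange/red buckets, a few percent of error either way does not change the display.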