@jupyterlite/ai 0.9.1 → 0.11.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +5 -214
- package/lib/agent.d.ts +58 -66
- package/lib/agent.js +291 -310
- package/lib/approval-buttons.d.ts +19 -82
- package/lib/approval-buttons.js +36 -289
- package/lib/chat-model-registry.d.ts +6 -0
- package/lib/chat-model-registry.js +4 -1
- package/lib/chat-model.d.ts +26 -54
- package/lib/chat-model.js +277 -303
- package/lib/components/clear-button.d.ts +6 -1
- package/lib/components/clear-button.js +10 -6
- package/lib/components/completion-status.d.ts +5 -0
- package/lib/components/completion-status.js +5 -4
- package/lib/components/model-select.d.ts +6 -1
- package/lib/components/model-select.js +13 -16
- package/lib/components/stop-button.d.ts +6 -1
- package/lib/components/stop-button.js +12 -8
- package/lib/components/token-usage-display.d.ts +5 -0
- package/lib/components/token-usage-display.js +2 -2
- package/lib/components/tool-select.d.ts +6 -1
- package/lib/components/tool-select.js +10 -9
- package/lib/index.d.ts +1 -0
- package/lib/index.js +61 -81
- package/lib/models/settings-model.d.ts +1 -1
- package/lib/models/settings-model.js +40 -26
- package/lib/providers/built-in-providers.js +38 -19
- package/lib/providers/models.d.ts +3 -3
- package/lib/providers/provider-registry.d.ts +3 -4
- package/lib/providers/provider-registry.js +1 -4
- package/lib/tokens.d.ts +5 -6
- package/lib/tools/commands.d.ts +2 -1
- package/lib/tools/commands.js +36 -49
- package/lib/widgets/ai-settings.d.ts +6 -0
- package/lib/widgets/ai-settings.js +72 -71
- package/lib/widgets/main-area-chat.d.ts +2 -0
- package/lib/widgets/main-area-chat.js +5 -2
- package/lib/widgets/provider-config-dialog.d.ts +2 -0
- package/lib/widgets/provider-config-dialog.js +34 -34
- package/package.json +13 -13
- package/schema/settings-model.json +3 -2
- package/src/agent.ts +360 -372
- package/src/approval-buttons.ts +43 -389
- package/src/chat-model-registry.ts +9 -1
- package/src/chat-model.ts +399 -370
- package/src/completion/completion-provider.ts +2 -3
- package/src/components/clear-button.tsx +18 -6
- package/src/components/completion-status.tsx +18 -4
- package/src/components/model-select.tsx +25 -16
- package/src/components/stop-button.tsx +22 -9
- package/src/components/token-usage-display.tsx +14 -2
- package/src/components/tool-select.tsx +27 -9
- package/src/index.ts +78 -134
- package/src/models/settings-model.ts +41 -27
- package/src/providers/built-in-providers.ts +38 -19
- package/src/providers/models.ts +3 -3
- package/src/providers/provider-registry.ts +4 -8
- package/src/tokens.ts +5 -6
- package/src/tools/commands.ts +40 -53
- package/src/widgets/ai-settings.tsx +153 -84
- package/src/widgets/main-area-chat.ts +8 -2
- package/src/widgets/provider-config-dialog.tsx +54 -41
- package/style/base.css +24 -73
- package/lib/mcp/browser.d.ts +0 -68
- package/lib/mcp/browser.js +0 -138
- package/lib/tools/file.d.ts +0 -36
- package/lib/tools/file.js +0 -351
- package/lib/tools/notebook.d.ts +0 -40
- package/lib/tools/notebook.js +0 -779
- package/src/mcp/browser.ts +0 -220
- package/src/tools/file.ts +0 -438
- package/src/tools/notebook.ts +0 -986
package/README.md
CHANGED

````diff
@@ -1,9 +1,10 @@
 # jupyterlite-ai
 
 [](https://github.com/jupyterlite/ai/actions/workflows/build.yml)
+[](https://jupyterlite-ai.readthedocs.io/en/latest/?badge=latest)
 [](https://jupyterlite.github.io/ai/lab/index.html)
 
-AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite
+AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite.
 
 [a screencast showing the Jupyterlite AI extension in JupyterLite](https://github.com/user-attachments/assets/e33d7d84-53ca-4835-a034-b6757476c98b)
 
@@ -11,14 +12,12 @@ AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite ✨
 
 - JupyterLab >= 4.4.0 or Notebook >= 7.4.0
 
-##
+## Try it in your browser
 
 You can try the extension in your browser using JupyterLite:
 
 [](https://jupyterlite.github.io/ai/lab/index.html)
 
-See the [Usage](#usage) section below for more information on how to provide your API key.
-
 ## Install
 
 To install the extension, execute:
@@ -33,217 +32,9 @@ To install requirements (JupyterLab, JupyterLite and Notebook):
 pip install jupyterlite-ai[jupyter]
 ```
 
-##
-
-> [!NOTE]
-> This documentation applies to the upcoming **0.9.0** release.
-> For the latest stable version, please refer to the [0.8.x branch](https://github.com/jupyterlite/ai/tree/0.8.x).
-
-AI providers typically require using an API key to access their models.
-
-The process is different for each provider, so you may refer to their documentation to learn how to generate new API keys.
-
-### Using a provider with an API key (e.g. Anthropic, MistralAI, OpenAI)
-
-1. Open the AI settings and
-2. Click on "Add a new provider"
-3. Enter the details for the provider
-4. In the chat, select the new provider
-
-
-
-### Using a generic OpenAI-compatible provider
-
-The Generic provider allows you to connect to any OpenAI-compatible API endpoint, including local servers like Ollama and LiteLLM.
-
-1. In JupyterLab, open the AI settings panel and go to the **Providers** section
-2. Click on "Add a new provider"
-3. Select the **Generic (OpenAI-compatible)** provider
-4. Configure the following settings:
-   - **Base URL**: The base URL of your API endpoint (suggestions are provided for common local servers)
-   - **Model**: The model name to use
-   - **API Key**: Your API key (if required by the provider)
-
-### Using Ollama
-
-[Ollama](https://ollama.com/) allows you to run open-weight LLMs locally on your machine.
-
-#### Setting up Ollama
-
-1. Install Ollama following the instructions at <https://ollama.com/download>
-2. Pull a model, for example:
-
-   ```bash
-   ollama pull llama3.2
-   ```
-
-3. Start the Ollama server (it typically runs on `http://localhost:11434`)
-
-#### Configuring `jupyterlite-ai` to use Ollama
-
-1. In JupyterLab, open the AI settings panel and go to the **Providers** section
-2. Click on "Add a new provider"
-3. Select the **Generic (OpenAI-compatible)** provider
-4. Configure the following settings:
-   - **Base URL**: Select `http://localhost:11434/v1` from the suggestions (or enter manually)
-   - **Model**: The model name you pulled (e.g., `llama3.2`)
-   - **API Key**: Leave empty (not required for Ollama)
-
-### Using LiteLLM Proxy
-
-[LiteLLM Proxy](https://docs.litellm.ai/docs/simple_proxy) is an OpenAI-compatible proxy server that allows you to call 100+ LLMs through a unified interface.
-
-Using LiteLLM Proxy with jupyterlite-ai provides flexibility to switch between different AI providers (OpenAI, Anthropic, Google, Azure, local models, etc.) without changing your JupyterLite configuration. It's particularly useful for enterprise deployments where the proxy can be hosted within private infrastructure to manage external API calls and keep API keys server-side.
-
-#### Setting up LiteLLM Proxy
-
-1. Install LiteLLM:
-
-   Follow the instructions at <https://docs.litellm.ai/docs/simple_proxy>.
-
-2. Create a `litellm_config.yaml` file with your model configuration:
-
-   ```yaml
-   model_list:
-     - model_name: gpt-5
-       litellm_params:
-         model: gpt-5
-         api_key: os.environ/OPENAI_API_KEY
-
-     - model_name: claude-sonnet
-       litellm_params:
-         model: claude-sonnet-4-5-20250929
-         api_key: os.environ/ANTHROPIC_API_KEY
-   ```
-
-3. Start the proxy server, for example:
-
-   ```bash
-   litellm --config litellm_config.yaml
-   ```
-
-   The proxy will start on `http://0.0.0.0:4000` by default.
-
-#### Configuring `jupyterlite-ai` to use LiteLLM Proxy
-
-Configure the [Generic provider (OpenAI-compatible)](#using-a-generic-openai-compatible-provider) with the following settings:
-
-- **Base URL**: `http://0.0.0.0:4000` (or your proxy server URL)
-- **Model**: The model name from your `litellm_config.yaml` (e.g., `gpt-5`, `claude-sonnet`)
-- **API Key (optional)**: If the LiteLLM Proxy server requires an API key, provide it here.
-
-> [!IMPORTANT]
-> The API key must be configured on the LiteLLM Proxy server (in the `litellm_config.yaml` file). Providing an API key via the AI provider settings UI will not have any effect, as the proxy server handles authentication with the upstream AI providers.
-
-> [!NOTE]
-> For more information about LiteLLM Proxy configuration, see the [LiteLLM documentation](https://docs.litellm.ai/docs/simple_proxy).
-
-## Custom Providers
-
-`jupyterlite-ai` supports custom AI providers through its provider registry system. Third-party providers can be registered programmatically in a JupyterLab extension.
-
-Providers are based on the [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction), which provides a unified interface for working with different AI models.
-
-### Registering a Custom Provider
-
-#### Example: Registering a custom OpenAI-compatible provider
-
-```typescript
-import {
-  JupyterFrontEnd,
-  JupyterFrontEndPlugin
-} from '@jupyterlab/application';
-import { IProviderRegistry } from '@jupyterlite/ai';
-import { createOpenAI } from '@ai-sdk/openai';
-
-const plugin: JupyterFrontEndPlugin<void> = {
-  id: 'my-extension:custom-provider',
-  autoStart: true,
-  requires: [IProviderRegistry],
-  activate: (app: JupyterFrontEnd, registry: IProviderRegistry) => {
-    const providerInfo = {
-      id: 'my-custom-provider',
-      name: 'My Custom Provider',
-      apiKeyRequirement: 'required' as const,
-      defaultModels: ['my-model'],
-      supportsBaseURL: true,
-      factory: (options: {
-        apiKey: string;
-        baseURL?: string;
-        model?: string;
-      }) => {
-        const provider = createOpenAI({
-          apiKey: options.apiKey,
-          baseURL: options.baseURL || 'https://api.example.com/v1'
-        });
-        return provider(options.model || 'my-model');
-      }
-    };
-
-    registry.registerProvider(providerInfo);
-  }
-};
-```
-
-The provider configuration object requires the following properties:
-
-- `id`: Unique identifier for the provider
-- `name`: Display name shown in the settings UI
-- `apiKeyRequirement`: Whether an API key is `'required'`, `'optional'`, or `'none'`
-- `defaultModels`: Array of model names to show in the settings
-- `supportsBaseURL`: Whether the provider supports a custom base URL
-- `factory`: Function that creates and returns a language model (the registry automatically wraps it for chat usage)
-
-#### Example: Using a custom fetch function
-
-You can provide a custom `fetch` function to the provider, which is useful for adding custom headers, handling authentication, or routing requests through a proxy:
-
-```typescript
-factory: (options: { apiKey: string; baseURL?: string; model?: string }) => {
-  const provider = createOpenAI({
-    apiKey: options.apiKey,
-    baseURL: options.baseURL || 'https://api.example.com/v1',
-    fetch: async (url, init) => {
-      // Custom fetch implementation
-      const modifiedInit = {
-        ...init,
-        headers: {
-          ...init?.headers,
-          'X-Custom-Header': 'custom-value'
-        }
-      };
-      return fetch(url, modifiedInit);
-    }
-  });
-  return provider(options.model || 'my-model');
-};
-```
-
-## API key management
-
-To avoid storing the API keys in the settings, `jupyterlite-ai` uses [jupyter-secrets-manager](https://github.com/jupyterlab-contrib/jupyter-secrets-manager) by default.
-
-The secrets manager get the API keys from a connector in a secure way.\
-The default connector of the secrets manager is _in memory_, which means that **the API keys are reset when reloading the page**.
-
-To prevent the keys from being reset on reload, there are two options:
-
-1. use a connector that fetches the keys on a remote server (using secure rest API, or web socket)
-
-   This is the recommended method, as it ensures the security of the keys and makes them accessible only to logged-in users. \
-   But it requires some frontend and backend deployments:
-
-   - a server that can store and send the keys on demand
-   - a way to get authenticated to the server
-   - a frontend extension providing the connector, able to connect to the server side
-
-2. disable the use of the secrets manager from the AI settings panel
+## Documentation
 
-
-> The API keys will be stored in plain text using the settings system of Jupyterlab
->
-> - using Jupyterlab, the settings are stored in a [directory](https://jupyterlab.readthedocs.io/en/stable/user/directories.html#jupyterlab-user-settings-directory) on the server
-> - using Jupyterlite, the settings are stored in the [browser](https://jupyterlite.readthedocs.io/en/latest/howto/configure/storage.html#configure-the-browser-storage)
+For detailed usage instructions, including how to configure AI providers, see the [documentation](https://jupyterlite-ai.readthedocs.io/).
 
 ## Uninstall
 
````
package/lib/agent.d.ts
CHANGED

```diff
@@ -1,9 +1,10 @@
 import { ISignal } from '@lumino/signaling';
+import { type Tool } from 'ai';
 import { ISecretsManager } from 'jupyter-secrets-manager';
-import { BrowserMCPServerStreamableHttp } from './mcp/browser';
 import { AISettingsModel } from './models/settings-model';
 import type { IProviderRegistry } from './tokens';
 import { ITool, IToolRegistry, ITokenUsage } from './tokens';
+type ToolMap = Record<string, Tool>;
 export declare namespace AgentManagerFactory {
     interface IOptions {
         /**
@@ -33,15 +34,19 @@ export declare class AgentManagerFactory {
      * @returns True if the server is connected, false otherwise
      */
     isMCPServerConnected(serverName: string): boolean;
+    /**
+     * Gets the MCP tools from connected servers
+     */
+    getMCPTools(): Promise<ToolMap>;
     /**
      * Handles settings changes and reinitializes the agent.
      */
     private _onSettingsChanged;
     /**
-     * Initializes MCP (Model Context Protocol)
-     * Closes existing
+     * Initializes MCP (Model Context Protocol) clients based on current settings.
+     * Closes existing clients and connects to enabled servers from configuration.
      */
-    private
+    private _initializeMCPClients;
     /**
      * Initializes the AI agent with current settings and tools.
      * Sets up the agent with model configuration, tools, and MCP servers.
@@ -50,7 +55,7 @@ export declare class AgentManagerFactory {
     private _agentManagers;
     private _settingsModel;
     private _secretsManager?;
-    private
+    private _mcpClients;
     private _mcpConnectionChanged;
     private _isInitializing;
 }
@@ -81,19 +86,15 @@ export interface IAgentEventTypeMap {
         output: string;
         isError: boolean;
     };
-
-
+    tool_approval_request: {
+        approvalId: string;
+        toolCallId: string;
         toolName: string;
-
-        callId?: string;
+        args: unknown;
     };
-
-
-
-        interruptionId: string;
-        toolName: string;
-        toolInput: string;
-    }>;
+    tool_approval_resolved: {
+        approvalId: string;
+        approved: boolean;
     };
     error: {
         error: Error;
@@ -138,7 +139,7 @@ export interface IAgentManagerOptions {
 /**
  * Manages the AI agent lifecycle and execution loop.
  * Provides agent initialization, tool management, MCP server integration,
- * and handles the complete agent execution cycle
+ * and handles the complete agent execution cycle.
  * Emits events for UI updates instead of directly manipulating the chat interface.
  */
 export declare class AgentManager {
@@ -174,10 +175,10 @@ export declare class AgentManager {
      */
     setSelectedTools(toolNames: string[]): void;
     /**
-     * Gets the currently selected tools as
-     * @returns
+     * Gets the currently selected tools as a record.
+     * @returns Record of selected tools
      */
-    get selectedAgentTools(): ITool
+    get selectedAgentTools(): Record<string, ITool>;
     /**
      * Checks if the current configuration is valid for agent operations.
      * Uses the provider registry to determine if an API key is required.
@@ -186,7 +187,6 @@ export declare class AgentManager {
     hasValidConfig(): boolean;
     /**
      * Clears conversation history and resets agent state.
-     * Removes all conversation history, pending approvals, and interrupted state.
      */
     clearHistory(): void;
     /**
@@ -194,70 +194,63 @@ export declare class AgentManager {
      */
     stopStreaming(): void;
     /**
-     *
-     *
-     * @param
+     * Approves a pending tool call.
+     * @param approvalId The approval ID to approve
+     * @param reason Optional reason for approval
      */
-
+    approveToolCall(approvalId: string, reason?: string): void;
     /**
-     *
-     * @param
+     * Rejects a pending tool call.
+     * @param approvalId The approval ID to reject
+     * @param reason Optional reason for rejection
      */
-
+    rejectToolCall(approvalId: string, reason?: string): void;
     /**
-     *
-     *
-
-    rejectToolCall(interruptionId: string): Promise<void>;
-    /**
-     * Approves all tools in a group by group ID.
-     * @param groupId The group ID containing the tool calls
-     * @param interruptionIds Array of interruption IDs to approve
+     * Generates AI response to user message using the agent.
+     * Handles the complete execution cycle including tool calls.
+     * @param message The user message to respond to (may include processed attachment content)
      */
-
+    generateResponse(message: string): Promise<void>;
     /**
-     *
-     * @param groupId The group ID containing the tool calls
-     * @param interruptionIds Array of interruption IDs to reject
+     * Updates token usage statistics.
      */
-
+    private _updateTokenUsage;
     /**
      * Initializes the AI agent with current settings and tools.
-     * Sets up the agent with model configuration, tools, and MCP
+     * Sets up the agent with model configuration, tools, and MCP tools.
      */
-    initializeAgent: (
+    initializeAgent: (mcpTools?: ToolMap) => Promise<void>;
     /**
-     * Processes the result
+     * Processes the stream result from agent execution.
      * Handles message streaming, tool calls, and emits appropriate events.
-     * @param result The
+     * @param result The stream result from agent execution
+     * @returns Processing result including approval info if applicable
      */
-    private
+    private _processStreamResult;
     /**
-     *
-     * @param input The tool input string to format
-     * @returns Pretty-printed JSON string
+     * Emits a message_complete event.
      */
-    private
+    private _emitMessageComplete;
     /**
-     * Handles
-     * @param modelEvent The model event containing tool call information
+     * Handles tool-result stream parts.
      */
-    private
+    private _handleToolResult;
     /**
-     * Handles tool
-     * @param event The tool output event containing result information
+     * Handles tool-approval-request stream parts.
      */
-    private
+    private _handleApprovalRequest;
     /**
-     *
-     * @param
+     * Waits for user approval of a tool call.
+     * @param approvalId The approval ID to wait for
+     * @returns Promise that resolves to true if approved, false if rejected
      */
-    private
+    private _waitForApproval;
     /**
-     *
-     * @param
+     * Formats tool input for display by pretty-printing JSON strings.
+     * @param input The tool input string to format
+     * @returns Pretty-printed JSON string
      */
-    private
+    private _formatToolInput;
     /**
      * Checks if the current provider supports tool calling.
      * @returns True if the provider supports tool calling, false otherwise
@@ -280,16 +273,15 @@ export declare class AgentManager {
     private _secretsManager?;
     private _selectedToolNames;
     private _agent;
-    private _runner;
     private _history;
-    private
+    private _mcpTools;
     private _isInitializing;
     private _controller;
-    private _pendingApprovals;
-    private _interruptedState;
     private _agentEvent;
     private _tokenUsage;
     private _tokenUsageChanged;
     private _activeProvider;
     private _activeProviderChanged;
+    private _pendingApprovals;
 }
+export {};
```
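The new `AgentManager` declarations replace the interruption-based flow (`interruptionId`, `_interruptedState`) with promise-based approvals: `_waitForApproval` parks the agent loop until the UI calls `approveToolCall` or `rejectToolCall` with the matching `approvalId`. A minimal self-contained sketch of that pattern is shown below; it is not the package's actual implementation, and `ApprovalGate`/`demo` are illustrative names only.

```typescript
// Sketch (assumed pattern, not package source) of a promise-based
// tool-approval gate, as implied by the new agent.d.ts declarations.

type ApprovalResolver = (approved: boolean) => void;

class ApprovalGate {
  // Pending approvals keyed by approvalId (cf. the private _pendingApprovals field).
  private pending = new Map<string, ApprovalResolver>();

  // The agent loop awaits this when a tool-approval-request stream part
  // arrives (cf. the private _waitForApproval method).
  waitForApproval(approvalId: string): Promise<boolean> {
    return new Promise<boolean>(resolve => {
      this.pending.set(approvalId, resolve);
    });
  }

  // Called from the UI when the user approves (cf. approveToolCall).
  approveToolCall(approvalId: string): void {
    this.pending.get(approvalId)?.(true);
    this.pending.delete(approvalId);
  }

  // Called from the UI when the user rejects (cf. rejectToolCall).
  rejectToolCall(approvalId: string): void {
    this.pending.get(approvalId)?.(false);
    this.pending.delete(approvalId);
  }
}

// Usage: the agent parks on the promise until the user decides.
async function demo(): Promise<boolean> {
  const gate = new ApprovalGate();
  const decision = gate.waitForApproval('approval-1');
  gate.approveToolCall('approval-1'); // e.g. the user clicked "Approve"
  return decision;
}
```

This shape also explains the paired `tool_approval_request` / `tool_approval_resolved` events in `IAgentEventTypeMap`: the first is emitted when the gate opens, the second when either resolver runs.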