@jupyterlite/ai 0.9.0-a4 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -52,13 +52,25 @@ The process is different for each provider, so you may refer to their documentat
  
  ![screenshot showing the dialog to add a new provider](https://github.com/user-attachments/assets/823c71c6-5807-44c8-80b6-2e59379a65d5)
  
+ ### Using a generic OpenAI-compatible provider
+
+ The Generic provider allows you to connect to any OpenAI-compatible API endpoint, including local servers like Ollama and LiteLLM.
+
+ 1. In JupyterLab, open the AI settings panel and go to the **Providers** section
+ 2. Click on "Add a new provider"
+ 3. Select the **Generic (OpenAI-compatible)** provider
+ 4. Configure the following settings:
+    - **Base URL**: The base URL of your API endpoint (suggestions are provided for common local servers)
+    - **Model**: The model name to use
+    - **API Key**: Your API key (if required by the provider)
+
  ### Using Ollama
  
  [Ollama](https://ollama.com/) allows you to run open-weight LLMs locally on your machine.
  
  #### Setting up Ollama
  
- 1. Install Ollama following the instructions at https://ollama.com/download
+ 1. Install Ollama following the instructions at <https://ollama.com/download>
  2. Pull a model, for example:
  
  ```bash
@@ -71,21 +83,11 @@ ollama pull llama3.2
  
  1. In JupyterLab, open the AI settings panel and go to the **Providers** section
  2. Click on "Add a new provider"
- 3. Select the **Ollama** provider
+ 3. Select the **Generic (OpenAI-compatible)** provider
  4. Configure the following settings:
+    - **Base URL**: Select `http://localhost:11434/v1` from the suggestions (or enter manually)
     - **Model**: The model name you pulled (e.g., `llama3.2`)
-
- ### Using a generic OpenAI-compatible provider
-
- The Generic provider allows you to connect to any OpenAI-compatible API endpoint.
-
- 1. In JupyterLab, open the AI settings panel and go to the **Providers** section
- 2. Click on "Add a new provider"
- 3. Select the **Generic** provider
- 4. Configure the following settings:
-    - **Base URL**: The base URL of your API endpoint
-    - **Model**: The model name to use
-    - **API Key**: Your API key (if required by the provider)
+    - **API Key**: Leave empty (not required for Ollama)
  
  ### Using LiteLLM Proxy
  
@@ -97,7 +99,7 @@ Using LiteLLM Proxy with jupyterlite-ai provides flexibility to switch between d
  
  1. Install LiteLLM:
  
- Follow the instructions at https://docs.litellm.ai/docs/simple_proxy.
+ Follow the instructions at <https://docs.litellm.ai/docs/simple_proxy>.
  
  2. Create a `litellm_config.yaml` file with your model configuration:
  
@@ -144,7 +146,7 @@ Providers are based on the [Vercel AI SDK](https://sdk.vercel.ai/docs/introducti
  
  ### Registering a Custom Provider
  
- **Example: Registering a custom OpenAI-compatible provider**
+ #### Example: Registering a custom OpenAI-compatible provider
  
  ```typescript
  import {
@@ -192,7 +194,7 @@ The provider configuration object requires the following properties:
  - `supportsBaseURL`: Whether the provider supports a custom base URL
  - `factory`: Function that creates and returns a language model (the registry automatically wraps it for chat usage)
  
- **Example: Using a custom fetch function**
+ #### Example: Using a custom fetch function
  
  You can provide a custom `fetch` function to the provider, which is useful for adding custom headers, handling authentication, or routing requests through a proxy:
  
@@ -253,75 +255,4 @@ pip uninstall jupyterlite-ai
  
  ## Contributing
  
- ### Development install
-
- Note: You will need NodeJS to build the extension package.
-
- The `jlpm` command is JupyterLab's pinned version of
- [yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use
- `yarn` or `npm` in lieu of `jlpm` below.
-
- ```bash
- # Clone the repo to your local environment
- # Change directory to the jupyterlite_ai directory
- # Install package in development mode
- pip install -e "."
- # Link your development version of the extension with JupyterLab
- jupyter labextension develop . --overwrite
- # Rebuild extension Typescript source after making changes
- jlpm build
- ```
-
- You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.
-
- ```bash
- # Watch the source directory in one terminal, automatically rebuilding when needed
- jlpm watch
- # Run JupyterLab in another terminal
- jupyter lab
- ```
-
- With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).
-
- By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:
-
- ```bash
- jupyter lab build --minimize=False
- ```
-
- ### Running UI tests
-
- The UI tests use Playwright and can be configured with environment variables:
-
- - `PWVIDEO`: Controls video recording during tests (default: `retain-on-failure`)
-   - `on`: Record video for all tests
-   - `off`: Do not record video
-   - `retain-on-failure`: Only keep videos for failed tests
- - `PWSLOWMO`: Adds a delay (in milliseconds) between Playwright actions for debugging (default: `0`)
-
- Example usage:
-
- ```bash
- # Record all test videos
- PWVIDEO=on jlpm playwright test
-
- # Slow down test execution by 500ms per action
- PWSLOWMO=500 jlpm playwright test
-
- # Combine both options
- PWVIDEO=on PWSLOWMO=1000 jlpm playwright test
- ```
-
- ### Development uninstall
-
- ```bash
- pip uninstall jupyterlite-ai
- ```
-
- In development mode, you will also need to remove the symlink created by `jupyter labextension develop`
- command. To find its location, you can run `jupyter labextension list` to figure out where the `labextensions`
- folder is located. Then you can remove the symlink named `@jupyterlite/ai` within that folder.
-
- ### Packaging the extension
-
- See [RELEASE](RELEASE.md)
+ See [CONTRIBUTING](CONTRIBUTING.md)
package/lib/chat-model.js CHANGED
@@ -524,7 +524,91 @@ ${toolsList}
  const code = cell.source || '';
  const cellType = cell.cell_type;
  const lang = cellType === 'code' ? kernelLang : cellType;
- return `**Cell [${cellInfo.id}] (${cellType}):**\n\`\`\`${lang}\n${code}\n\`\`\``;
+ const DISPLAY_PRIORITY = [
+     'application/vnd.jupyter.widget-view+json',
+     'application/javascript',
+     'text/html',
+     'image/svg+xml',
+     'image/png',
+     'image/jpeg',
+     'text/markdown',
+     'text/latex',
+     'text/plain'
+ ];
+ function extractDisplay(data) {
+     for (const mime of DISPLAY_PRIORITY) {
+         if (!(mime in data)) {
+             continue;
+         }
+         const value = data[mime];
+         if (!value) {
+             continue;
+         }
+         switch (mime) {
+             case 'application/vnd.jupyter.widget-view+json':
+                 return `Widget: ${value.model_id ?? 'unknown model'}`;
+             case 'image/png':
+                 return `![image](data:image/png;base64,${value.slice(0, 100)}...)`;
+             case 'image/jpeg':
+                 return `![image](data:image/jpeg;base64,${value.slice(0, 100)}...)`;
+             case 'image/svg+xml':
+                 return String(value).slice(0, 500) + '...\n[svg truncated]';
+             case 'text/html':
+                 return (String(value).slice(0, 1000) +
+                     (String(value).length > 1000 ? '\n...[truncated]' : ''));
+             case 'text/markdown':
+             case 'text/latex':
+             case 'text/plain': {
+                 let text = Array.isArray(value)
+                     ? value.join('')
+                     : String(value);
+                 if (text.length > 2000) {
+                     text = text.slice(0, 2000) + '\n...[truncated]';
+                 }
+                 return text;
+             }
+             default:
+                 return JSON.stringify(value).slice(0, 2000);
+         }
+     }
+     return JSON.stringify(data).slice(0, 2000);
+ }
+ let outputs = '';
+ if (cellType === 'code' && Array.isArray(cell.outputs)) {
+     outputs = cell.outputs
+         .map((output) => {
+             if (output.output_type === 'stream') {
+                 return output.text;
+             }
+             else if (output.output_type === 'error') {
+                 const err = output;
+                 return `${err.ename}: ${err.evalue}\n${(err.traceback || []).join('\n')}`;
+             }
+             else if (output.output_type === 'execute_result' ||
+                 output.output_type === 'display_data') {
+                 const data = output.data;
+                 if (!data) {
+                     return '';
+                 }
+                 try {
+                     return extractDisplay(data);
+                 }
+                 catch (e) {
+                     console.error('Cannot extract cell output', e);
+                     return '';
+                 }
+             }
+             return '';
+         })
+         .filter(Boolean)
+         .join('\n---\n');
+     if (outputs.length > 2000) {
+         outputs = outputs.slice(0, 2000) + '\n...[truncated]';
+     }
+ }
+ return (`**Cell [${cellInfo.id}] (${cellType}):**\n` +
+     `\`\`\`${lang}\n${code}\n\`\`\`` +
+     (outputs ? `\n**Outputs:**\n\`\`\`text\n${outputs}\n\`\`\`` : ''));
  })
  .filter(Boolean)
  .join('\n\n');
@@ -0,0 +1,20 @@
+ import { AISettingsModel } from '../models/settings-model';
+ import { ReactWidget } from '@jupyterlab/ui-components';
+ /**
+  * The completion status props.
+  */
+ interface ICompletionStatusProps {
+     /**
+      * The settings model.
+      */
+     settingsModel: AISettingsModel;
+ }
+ /**
+  * The completion status widget that will be added to the status bar.
+  */
+ export declare class CompletionStatusWidget extends ReactWidget {
+     constructor(options: ICompletionStatusProps);
+     render(): JSX.Element;
+     private _props;
+ }
+ export {};
@@ -0,0 +1,51 @@
+ import React, { useEffect, useState } from 'react';
+ import { ReactWidget } from '@jupyterlab/ui-components';
+ import { jupyternautIcon } from '../icons';
+ const COMPLETION_STATUS_CLASS = 'jp-ai-completion-status';
+ const COMPLETION_DISABLED_CLASS = 'jp-ai-completion-disabled';
+ /**
+  * The completion status component.
+  */
+ function CompletionStatus(props) {
+     const [disabled, setDisabled] = useState(true);
+     const [title, setTitle] = useState('');
+     /**
+      * Handle changes in the settings.
+      */
+     useEffect(() => {
+         const stateChanged = (model) => {
+             if (model.config.useSameProviderForChatAndCompleter) {
+                 setDisabled(false);
+                 setTitle(`Completion using ${model.getDefaultProvider()?.model}`);
+             }
+             else if (model.config.activeCompleterProvider) {
+                 setDisabled(false);
+                 setTitle(`Completion using ${model.getProvider(model.config.activeCompleterProvider)?.model}`);
+             }
+             else {
+                 setDisabled(true);
+                 setTitle('No completion');
+             }
+         };
+         props.settingsModel.stateChanged.connect(stateChanged);
+         stateChanged(props.settingsModel);
+         return () => {
+             props.settingsModel.stateChanged.disconnect(stateChanged);
+         };
+     }, [props.settingsModel]);
+     return (React.createElement(jupyternautIcon.react, { className: disabled ? COMPLETION_DISABLED_CLASS : '', top: '2px', width: '16px', stylesheet: 'statusBar', title: title }));
+ }
+ /**
+  * The completion status widget that will be added to the status bar.
+  */
+ export class CompletionStatusWidget extends ReactWidget {
+     constructor(options) {
+         super();
+         this.addClass(COMPLETION_STATUS_CLASS);
+         this._props = options;
+     }
+     render() {
+         return React.createElement(CompletionStatus, { ...this._props });
+     }
+     _props;
+ }
@@ -1,4 +1,5 @@
  export * from './clear-button';
+ export * from './completion-status';
  export * from './model-select';
  export * from './stop-button';
  export * from './token-usage-display';
@@ -1,4 +1,5 @@
  export * from './clear-button';
+ export * from './completion-status';
  export * from './model-select';
  export * from './stop-button';
  export * from './token-usage-display';
package/lib/index.js CHANGED
@@ -8,6 +8,7 @@ import { INotebookTracker } from '@jupyterlab/notebook';
  import { IRenderMimeRegistry } from '@jupyterlab/rendermime';
  import { IKernelSpecManager } from '@jupyterlab/services';
  import { ISettingRegistry } from '@jupyterlab/settingregistry';
+ import { IStatusBar } from '@jupyterlab/statusbar';
  import { settingsIcon, Toolbar, ToolbarButton } from '@jupyterlab/ui-components';
  import { ISecretsManager, SecretsManager } from 'jupyter-secrets-manager';
  import { PromiseDelegate, UUID } from '@lumino/coreutils';
@@ -16,9 +17,9 @@ import { ProviderRegistry } from './providers/provider-registry';
  import { ApprovalButtons } from './approval-buttons';
  import { ChatModelRegistry } from './chat-model-registry';
  import { CommandIds, IAgentManagerFactory, IProviderRegistry, IToolRegistry, SECRETS_NAMESPACE, IAISettingsModel, IChatModelRegistry, IDiffManager } from './tokens';
- import { anthropicProvider, googleProvider, mistralProvider, openaiProvider, ollamaProvider, genericProvider } from './providers/built-in-providers';
+ import { anthropicProvider, googleProvider, mistralProvider, openaiProvider, genericProvider } from './providers/built-in-providers';
  import { AICompletionProvider } from './completion';
- import { clearItem, createModelSelectItem, createToolSelectItem, stopItem, TokenUsageWidget } from './components';
+ import { clearItem, createModelSelectItem, createToolSelectItem, stopItem, CompletionStatusWidget, TokenUsageWidget } from './components';
  import { AISettingsModel } from './models/settings-model';
  import { DiffManager } from './diff-manager';
  import { ToolRegistry } from './tools/tool-registry';
@@ -87,18 +88,6 @@ const openaiProviderPlugin = {
      providerRegistry.registerProvider(openaiProvider);
    }
  };
- /**
-  * Ollama provider plugin
-  */
- const ollamaProviderPlugin = {
-     id: '@jupyterlite/ai:ollama-provider',
-     description: 'Register Ollama provider',
-     autoStart: true,
-     requires: [IProviderRegistry],
-     activate: (app, providerRegistry) => {
-         providerRegistry.registerProvider(ollamaProvider);
-     }
- };
  /**
   * Generic provider plugin
   */
@@ -480,6 +469,7 @@ const agentManagerFactory = SecretsManager.sign(SECRETS_NAMESPACE, token => ({
  });
  settingsWidget.id = 'jupyterlite-ai-settings';
  settingsWidget.title.icon = settingsIcon;
+ settingsWidget.title.iconClass = 'jp-ai-settings-icon';
  // Build the completion provider
  if (completionManager) {
  const completionProvider = new AICompletionProvider({
@@ -500,6 +490,7 @@ const agentManagerFactory = SecretsManager.sign(SECRETS_NAMESPACE, token => ({
  label: 'AI Settings',
  caption: 'Configure AI providers and behavior',
  icon: settingsIcon,
+ iconClass: 'jp-ai-settings-icon',
  execute: () => {
  // Check if the widget already exists in shell
  let widget = Array.from(app.shell.widgets('main')).find(w => w.id === 'jupyterlite-ai-settings');
@@ -644,13 +635,30 @@ const inputToolbarFactory = {
  };
  }
  };
+ const completionStatus = {
+     id: '@jupyterlite/ai:completion-status',
+     description: 'The completion status displayed in the status bar',
+     autoStart: true,
+     requires: [IAISettingsModel],
+     optional: [IStatusBar],
+     activate: (app, settingsModel, statusBar) => {
+         if (!statusBar) {
+             return;
+         }
+         const item = new CompletionStatusWidget({ settingsModel });
+         statusBar?.registerStatusItem('completionState', {
+             item,
+             align: 'right',
+             rank: 10
+         });
+     }
+ };
  export default [
      providerRegistryPlugin,
      anthropicProviderPlugin,
      googleProviderPlugin,
      mistralProviderPlugin,
      openaiProviderPlugin,
-     ollamaProviderPlugin,
      genericProviderPlugin,
      settingsModel,
      diffManager,
@@ -658,7 +666,8 @@ export default [
      plugin,
      toolRegistry,
      agentManagerFactory,
-     inputToolbarFactory
+     inputToolbarFactory,
+     completionStatus
  ];
  // Export extension points for other extensions to use
  export * from './tokens';
@@ -166,7 +166,7 @@ Rules:
  }
  return this._config.activeCompleterProvider
      ? this.getProvider(this._config.activeCompleterProvider)
-     : this.getDefaultProvider();
+     : undefined;
  }
  async addProvider(providerConfig) {
  const id = `${providerConfig.provider}-${Date.now()}`;
@@ -219,6 +219,11 @@ Rules:
  return;
  }
  Object.assign(provider, updates);
+ Object.keys(provider).forEach(key => {
+     if (key !== 'id' && updates[key] === undefined) {
+         delete provider[key];
+     }
+ });
  await this.saveSetting('providers', this._config.providers);
  }
  async setActiveProvider(id) {
@@ -298,6 +303,9 @@ Rules:
  if (value !== undefined) {
      await this._settings.set(key, value);
  }
+ else {
+     await this._settings.remove(key);
+ }
  }
  }
  catch (error) {
@@ -15,10 +15,6 @@ export declare const mistralProvider: IProviderInfo;
   * OpenAI provider
   */
  export declare const openaiProvider: IProviderInfo;
- /**
-  * Ollama provider
-  */
- export declare const ollamaProvider: IProviderInfo;
  /**
   * Generic OpenAI-compatible provider
   */
@@ -2,7 +2,7 @@ import { createAnthropic } from '@ai-sdk/anthropic';
  import { createGoogleGenerativeAI } from '@ai-sdk/google';
  import { createMistral } from '@ai-sdk/mistral';
  import { createOpenAI } from '@ai-sdk/openai';
- import { createOllama } from 'ollama-ai-provider-v2';
+ import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
  /**
   * Anthropic provider
   */
@@ -194,25 +194,6 @@ export const openaiProvider = {
      return openai(modelName);
    }
  };
- /**
-  * Ollama provider
-  */
- export const ollamaProvider = {
-     id: 'ollama',
-     name: 'Ollama',
-     apiKeyRequirement: 'none',
-     defaultModels: [],
-     supportsBaseURL: true,
-     supportsHeaders: true,
-     factory: (options) => {
-         const ollama = createOllama({
-             baseURL: options.baseURL || 'http://localhost:11434/api',
-             ...(options.headers && { headers: options.headers })
-         });
-         const modelName = options.model || 'phi3';
-         return ollama(modelName);
-     }
- };
  /**
   * Generic OpenAI-compatible provider
   */
@@ -225,13 +206,24 @@ export const genericProvider = {
  supportsHeaders: true,
  supportsToolCalling: true,
  description: 'Uses /chat/completions endpoint',
+ baseUrls: [
+     {
+         url: 'http://localhost:4000',
+         description: 'Default for local LiteLLM server'
+     },
+     {
+         url: 'http://localhost:11434/v1',
+         description: 'Default for local Ollama server'
+     }
+ ],
  factory: (options) => {
- const openai = createOpenAI({
+ const openaiCompatible = createOpenAICompatible({
+     name: options.provider,
  apiKey: options.apiKey || 'dummy',
- ...(options.baseURL && { baseURL: options.baseURL }),
+ baseURL: options.baseURL ?? '',
  ...(options.headers && { headers: options.headers })
  });
  const modelName = options.model || 'gpt-4o';
- return openai(modelName);
+ return openaiCompatible(modelName);
  }
  };
package/lib/tokens.d.ts CHANGED
@@ -130,6 +130,13 @@ export interface IProviderInfo {
   * Optional description shown in the UI
   */
  description?: string;
+ /**
+  * Optional URL suggestions
+  */
+ baseUrls?: {
+     url: string;
+     description?: string;
+ }[];
  /**
   * Factory function for creating language models
   */
@@ -353,6 +353,7 @@ const AISettingsComponent = ({ model, agentManagerFactory, themeManager, provide
  overflow: 'auto',
  p: 2,
  pb: 4,
+ boxSizing: 'border-box',
  fontSize: '0.9rem'
  } },
  React.createElement(Box, { sx: { mb: 2, display: 'flex', alignItems: 'center', gap: 2 } },
@@ -376,9 +377,9 @@ const AISettingsComponent = ({ model, agentManagerFactory, themeManager, provide
  }), color: "primary" }), label: "Use same provider for chat and completions" }),
  !config.useSameProviderForChatAndCompleter && (React.createElement(FormControl, { fullWidth: true },
  React.createElement(InputLabel, null, "Completion Provider"),
- React.createElement(Select, { value: config.activeCompleterProvider || '', label: "Completion Provider", onChange: e => model.setActiveCompleterProvider(e.target.value || undefined) },
+ React.createElement(Select, { value: config.activeCompleterProvider || '', label: "Completion Provider", className: "jp-ai-completion-provider-select", onChange: e => model.setActiveCompleterProvider(e.target.value || undefined) },
  React.createElement(MenuItem, { value: "" },
- React.createElement("em", null, "Use chat provider")),
+ React.createElement("em", null, "No completion")),
  config.providers.map(provider => (React.createElement(MenuItem, { key: provider.id, value: provider.id }, provider.name)))))))))),
  React.createElement(Card, { elevation: 2 },
  React.createElement(CardContent, null,
@@ -444,7 +445,7 @@ const AISettingsComponent = ({ model, agentManagerFactory, themeManager, provide
  useSecretsManager: e.target.checked
  }), color: "primary", sx: { alignSelf: 'flex-start' } }), label: React.createElement("div", null,
  React.createElement("span", null, "Use the secrets manager to manage API keys"),
- config.useSecretsManager && (React.createElement(Alert, { severity: "warning", icon: React.createElement(Error, null), sx: { mb: 2 } }, "The secrets will be stored in plain text in settings"))) })))),
+ !config.useSecretsManager && (React.createElement(Alert, { severity: "warning", icon: React.createElement(Error, null), sx: { mb: 2 } }, "The secrets are stored in plain text in settings"))) })))),
  activeTab === 1 && (React.createElement(Card, { elevation: 2 },
  React.createElement(CardContent, null,
  React.createElement(Typography, { variant: "h6", component: "h2", gutterBottom: true }, "Behavior Settings"),
@@ -1,7 +1,7 @@
  import ExpandMore from '@mui/icons-material/ExpandMore';
  import Visibility from '@mui/icons-material/Visibility';
  import VisibilityOff from '@mui/icons-material/VisibilityOff';
- import { Accordion, AccordionDetails, AccordionSummary, Box, Button, Chip, Dialog, DialogActions, DialogContent, DialogTitle, FormControl, FormControlLabel, IconButton, InputAdornment, InputLabel, MenuItem, Select, Slider, Switch, TextField, Typography } from '@mui/material';
+ import { Accordion, AccordionDetails, AccordionSummary, Autocomplete, Box, Button, Chip, Dialog, DialogActions, DialogContent, DialogTitle, FormControl, FormControlLabel, IconButton, InputAdornment, InputLabel, MenuItem, Select, Slider, Switch, TextField, Typography } from '@mui/material';
  import React from 'react';
  /**
   * Default parameter values for provider configuration
@@ -28,9 +28,10 @@ export const ProviderConfigDialog = ({ open, onClose, onSave, initialConfig, mod
  label: info.name,
  models: info.defaultModels,
  apiKeyRequirement: info.apiKeyRequirement,
- allowCustomModel: id === 'ollama' || id === 'generic', // Ollama and Generic allow custom models
+ allowCustomModel: id === 'generic', // Generic allows custom models
  supportsBaseURL: info.supportsBaseURL,
- description: info.description
+ description: info.description,
+ baseUrls: info.baseUrls
  };
  });
  }, [providerRegistry]);
@@ -124,11 +125,17 @@ export const ProviderConfigDialog = ({ open, onClose, onSave, initialConfig, mod
  endAdornment: (React.createElement(InputAdornment, { position: "end" },
  React.createElement(IconButton, { onClick: () => setShowApiKey(!showApiKey), edge: "end" }, showApiKey ? React.createElement(VisibilityOff, null) : React.createElement(Visibility, null))))
  } })),
- selectedProvider?.supportsBaseURL && (React.createElement(TextField, { fullWidth: true, label: "Base URL (Optional)", value: baseURL, onChange: e => setBaseURL(e.target.value), placeholder: provider === 'ollama'
-     ? 'http://localhost:11434/api'
-     : 'Custom API endpoint', helperText: provider === 'ollama'
-     ? 'Ollama server endpoint'
-     : 'Custom API base URL (e.g., for LiteLLM proxy). Leave empty to use default provider endpoint.' })),
+ selectedProvider?.supportsBaseURL && (React.createElement(Autocomplete, { freeSolo: true, fullWidth: true, options: (selectedProvider.baseUrls ?? []).map(option => option.url), value: baseURL || '', onChange: (_, value) => {
+     if (value && typeof value === 'string') {
+         setBaseURL(value);
+     }
+ }, inputValue: baseURL || '', renderOption: (props, option) => {
+     const urlOption = (selectedProvider.baseUrls ?? []).find(u => u.url === option);
+     return (React.createElement(Box, { component: "li", ...props, key: option },
+         React.createElement(Box, null,
+             React.createElement(Typography, { variant: "body2" }, option),
+             urlOption?.description && (React.createElement(Typography, { variant: "caption", color: "text.secondary" }, urlOption.description)))));
+ }, renderInput: params => (React.createElement(TextField, { ...params, fullWidth: true, label: "Base URL", placeholder: "https://api.example.com/v1", onChange: e => setBaseURL(e.target.value) })), clearOnBlur: false })),
  React.createElement(Accordion, { expanded: expandedAdvanced, onChange: (_, isExpanded) => setExpandedAdvanced(isExpanded), sx: {
  mt: 2,
  bgcolor: 'transparent',
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@jupyterlite/ai",
- "version": "0.9.0-a4",
+ "version": "0.9.0",
  "description": "AI code completions and chat for JupyterLite",
  "keywords": [
  "jupyter",
@@ -58,6 +58,7 @@
  "@ai-sdk/google": "^2.0.19",
  "@ai-sdk/mistral": "^2.0.17",
  "@ai-sdk/openai": "^2.0.44",
+ "@ai-sdk/openai-compatible": "^1.0.26",
  "@jupyter/chat": "^0.18.2",
  "@jupyterlab/application": "^4.0.0",
  "@jupyterlab/apputils": "^4.5.6",
@@ -71,6 +72,7 @@
  "@jupyterlab/rendermime": "^4.4.6",
  "@jupyterlab/services": "^7.4.6",
  "@jupyterlab/settingregistry": "^4.0.0",
+ "@jupyterlab/statusbar": "^4.4.6",
  "@jupyterlab/ui-components": "^4.4.6",
  "@lumino/commands": "^2.3.2",
  "@lumino/coreutils": "^2.2.1",
@@ -86,7 +88,6 @@
  "@openai/agents-extensions": "^0.1.5",
  "ai": "^5.0.60",
  "jupyter-secrets-manager": "^0.4.0",
- "ollama-ai-provider-v2": "^1.4.1",
  "zod": "^3.25.76"
  },
  "devDependencies": {
package/src/chat-model.ts CHANGED
@@ -24,6 +24,8 @@ import { AISettingsModel } from './models/settings-model';
  
  import { ITokenUsage } from './tokens';
  
+ import * as nbformat from '@jupyterlab/nbformat';
+
  /**
   * AI Chat Model implementation that provides chat functionality with OpenAI agents,
   * tool integration, and MCP server support.
@@ -646,7 +648,107 @@ ${toolsList}
  const cellType = cell.cell_type;
  const lang = cellType === 'code' ? kernelLang : cellType;
  
- return `**Cell [${cellInfo.id}] (${cellType}):**\n\`\`\`${lang}\n${code}\n\`\`\``;
+ const DISPLAY_PRIORITY = [
+     'application/vnd.jupyter.widget-view+json',
+     'application/javascript',
+     'text/html',
+     'image/svg+xml',
+     'image/png',
+     'image/jpeg',
+     'text/markdown',
+     'text/latex',
+     'text/plain'
+ ];
+
+ function extractDisplay(data: any): string {
+     for (const mime of DISPLAY_PRIORITY) {
+         if (!(mime in data)) {
+             continue;
+         }
+
+         const value = data[mime];
+         if (!value) {
+             continue;
+         }
+
+         switch (mime) {
+             case 'application/vnd.jupyter.widget-view+json':
+                 return `Widget: ${(value as any).model_id ?? 'unknown model'}`;
+
+             case 'image/png':
+                 return `![image](data:image/png;base64,${value.slice(0, 100)}...)`;
+
+             case 'image/jpeg':
+                 return `![image](data:image/jpeg;base64,${value.slice(0, 100)}...)`;
+
+             case 'image/svg+xml':
+                 return String(value).slice(0, 500) + '...\n[svg truncated]';
+
+             case 'text/html':
+                 return (
+                     String(value).slice(0, 1000) +
+                     (String(value).length > 1000 ? '\n...[truncated]' : '')
+                 );
+
+             case 'text/markdown':
+             case 'text/latex':
+             case 'text/plain': {
+                 let text = Array.isArray(value)
+                     ? value.join('')
+                     : String(value);
+                 if (text.length > 2000) {
+                     text = text.slice(0, 2000) + '\n...[truncated]';
+                 }
+                 return text;
+             }
+
+             default:
+                 return JSON.stringify(value).slice(0, 2000);
+         }
+     }
+
+     return JSON.stringify(data).slice(0, 2000);
+ }
+
+ let outputs = '';
+ if (cellType === 'code' && Array.isArray(cell.outputs)) {
+     outputs = cell.outputs
+         .map((output: nbformat.IOutput) => {
+             if (output.output_type === 'stream') {
+                 return (output as nbformat.IStream).text;
+             } else if (output.output_type === 'error') {
+                 const err = output as nbformat.IError;
+                 return `${err.ename}: ${err.evalue}\n${(err.traceback || []).join('\n')}`;
+             } else if (
+                 output.output_type === 'execute_result' ||
+                 output.output_type === 'display_data'
+             ) {
+                 const data = (output as nbformat.IDisplayData).data;
+                 if (!data) {
+                     return '';
+                 }
+                 try {
+                     return extractDisplay(data);
+                 } catch (e) {
+                     console.error('Cannot extract cell output', e);
+                     return '';
+                 }
+             }
+             return '';
+         })
+         .filter(Boolean)
+         .join('\n---\n');
+
+     if (outputs.length > 2000) {
+         outputs = outputs.slice(0, 2000) + '\n...[truncated]';
+     }
+ }
+
+ return (
+     `**Cell [${cellInfo.id}] (${cellType}):**\n` +
+     `\`\`\`${lang}\n${code}\n\`\`\`` +
+     (outputs ? `\n**Outputs:**\n\`\`\`text\n${outputs}\n\`\`\`` : '')
+ );
  })
  .filter(Boolean)
  .join('\n\n');
@@ -0,0 +1,79 @@
+import React, { useEffect, useState } from 'react';
+import { AISettingsModel } from '../models/settings-model';
+import { ReactWidget } from '@jupyterlab/ui-components';
+import { jupyternautIcon } from '../icons';
+
+const COMPLETION_STATUS_CLASS = 'jp-ai-completion-status';
+const COMPLETION_DISABLED_CLASS = 'jp-ai-completion-disabled';
+
+/**
+ * The completion status props.
+ */
+interface ICompletionStatusProps {
+  /**
+   * The settings model.
+   */
+  settingsModel: AISettingsModel;
+}
+
+/**
+ * The completion status component.
+ */
+function CompletionStatus(props: ICompletionStatusProps): JSX.Element {
+  const [disabled, setDisabled] = useState<boolean>(true);
+  const [title, setTitle] = useState<string>('');
+
+  /**
+   * Handle changes in the settings.
+   */
+  useEffect(() => {
+    const stateChanged = (model: AISettingsModel) => {
+      if (model.config.useSameProviderForChatAndCompleter) {
+        setDisabled(false);
+        setTitle(`Completion using ${model.getDefaultProvider()?.model}`);
+      } else if (model.config.activeCompleterProvider) {
+        setDisabled(false);
+        setTitle(
+          `Completion using ${model.getProvider(model.config.activeCompleterProvider)?.model}`
+        );
+      } else {
+        setDisabled(true);
+        setTitle('No completion');
+      }
+    };
+
+    props.settingsModel.stateChanged.connect(stateChanged);
+
+    stateChanged(props.settingsModel);
+    return () => {
+      props.settingsModel.stateChanged.disconnect(stateChanged);
+    };
+  }, [props.settingsModel]);
+
+  return (
+    <jupyternautIcon.react
+      className={disabled ? COMPLETION_DISABLED_CLASS : ''}
+      top={'2px'}
+      width={'16px'}
+      stylesheet={'statusBar'}
+      title={title}
+    />
+  );
+}
+
+/**
+ * The completion status widget that will be added to the status bar.
+ */
+export class CompletionStatusWidget extends ReactWidget {
+  constructor(options: ICompletionStatusProps) {
+    super();
+    this.addClass(COMPLETION_STATUS_CLASS);
+    this._props = options;
+  }
+
+  render(): JSX.Element {
+    return <CompletionStatus {...this._props} />;
+  }
+
+  private _props: ICompletionStatusProps;
+}
@@ -1,4 +1,5 @@
 export * from './clear-button';
+export * from './completion-status';
 export * from './model-select';
 export * from './stop-button';
 export * from './token-usage-display';
package/src/index.ts CHANGED
@@ -36,6 +36,8 @@ import { IKernelSpecManager, KernelSpec } from '@jupyterlab/services';
 
 import { ISettingRegistry } from '@jupyterlab/settingregistry';
 
+import { IStatusBar } from '@jupyterlab/statusbar';
+
 import {
   settingsIcon,
   Toolbar,
@@ -72,7 +74,6 @@ import {
   googleProvider,
   mistralProvider,
   openaiProvider,
-  ollamaProvider,
   genericProvider
 } from './providers/built-in-providers';
 
@@ -83,6 +84,7 @@ import {
   createModelSelectItem,
   createToolSelectItem,
   stopItem,
+  CompletionStatusWidget,
   TokenUsageWidget
 } from './components';
 
@@ -189,19 +191,6 @@ const openaiProviderPlugin: JupyterFrontEndPlugin<void> = {
   }
 };
 
-/**
- * Ollama provider plugin
- */
-const ollamaProviderPlugin: JupyterFrontEndPlugin<void> = {
-  id: '@jupyterlite/ai:ollama-provider',
-  description: 'Register Ollama provider',
-  autoStart: true,
-  requires: [IProviderRegistry],
-  activate: (app: JupyterFrontEnd, providerRegistry: IProviderRegistry) => {
-    providerRegistry.registerProvider(ollamaProvider);
-  }
-};
-
 /**
  * Generic provider plugin
  */
@@ -698,6 +687,7 @@ const agentManagerFactory: JupyterFrontEndPlugin<AgentManagerFactory> =
       });
       settingsWidget.id = 'jupyterlite-ai-settings';
       settingsWidget.title.icon = settingsIcon;
+      settingsWidget.title.iconClass = 'jp-ai-settings-icon';
 
       // Build the completion provider
      if (completionManager) {
@@ -723,6 +713,7 @@ const agentManagerFactory: JupyterFrontEndPlugin<AgentManagerFactory> =
        label: 'AI Settings',
        caption: 'Configure AI providers and behavior',
        icon: settingsIcon,
+        iconClass: 'jp-ai-settings-icon',
        execute: () => {
          // Check if the widget already exists in shell
          let widget = Array.from(app.shell.widgets('main')).find(
@@ -930,13 +921,35 @@ const inputToolbarFactory: JupyterFrontEndPlugin<IInputToolbarRegistryFactory> =
   }
 };
 
+const completionStatus: JupyterFrontEndPlugin<void> = {
+  id: '@jupyterlite/ai:completion-status',
+  description: 'The completion status displayed in the status bar',
+  autoStart: true,
+  requires: [IAISettingsModel],
+  optional: [IStatusBar],
+  activate: (
+    app: JupyterFrontEnd,
+    settingsModel: AISettingsModel,
+    statusBar: IStatusBar | null
+  ) => {
+    if (!statusBar) {
+      return;
+    }
+    const item = new CompletionStatusWidget({ settingsModel });
+    statusBar?.registerStatusItem('completionState', {
+      item,
+      align: 'right',
+      rank: 10
+    });
+  }
+};
+
 export default [
   providerRegistryPlugin,
   anthropicProviderPlugin,
   googleProviderPlugin,
   mistralProviderPlugin,
   openaiProviderPlugin,
-  ollamaProviderPlugin,
   genericProviderPlugin,
   settingsModel,
   diffManager,
@@ -944,7 +957,8 @@ export default [
   plugin,
   toolRegistry,
   agentManagerFactory,
-  inputToolbarFactory
+  inputToolbarFactory,
+  completionStatus
 ];
 
 // Export extension points for other extensions to use
@@ -241,7 +241,7 @@ Rules:
     }
     return this._config.activeCompleterProvider
       ? this.getProvider(this._config.activeCompleterProvider)
-      : this.getDefaultProvider();
+      : undefined;
   }
 
   async addProvider(
@@ -311,6 +311,11 @@ Rules:
     }
 
     Object.assign(provider, updates);
+    Object.keys(provider).forEach(key => {
+      if (key !== 'id' && updates[key] === undefined) {
+        delete provider[key];
+      }
+    });
     await this.saveSetting('providers', this._config.providers);
   }
 
@@ -416,6 +421,8 @@ Rules:
       // Only save the specific setting that changed
       if (value !== undefined) {
         await this._settings.set(key, value as any);
+      } else {
+        await this._settings.remove(key);
       }
     }
   } catch (error) {
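The `updateProvider` hunk above merges `updates` onto the stored provider and then prunes every key (except `id`) that the update no longer supplies, so cleared fields do not linger in saved settings. A standalone sketch of that merge-then-prune step (plain TypeScript; `applyUpdate` and the record shape are ours, not part of the package):

```typescript
// Merge updates onto a provider record, then drop any key (except 'id')
// that the update left undefined -- mirroring the updateProvider change.
type ProviderRecord = { id: string; [key: string]: unknown };

function applyUpdate(
  provider: ProviderRecord,
  updates: Record<string, unknown>
): ProviderRecord {
  Object.assign(provider, updates);
  Object.keys(provider).forEach(key => {
    if (key !== 'id' && updates[key] === undefined) {
      delete provider[key];
    }
  });
  return provider;
}

const record: ProviderRecord = {
  id: 'p1',
  model: 'llama3.2',
  baseURL: 'http://localhost:11434/v1'
};
applyUpdate(record, { model: 'phi3' });
// record is now { id: 'p1', model: 'phi3' } -- baseURL was pruned
```

Without the prune pass, a plain `Object.assign` would keep stale keys from the previous configuration even after the user cleared them in the dialog.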
@@ -2,7 +2,7 @@ import { createAnthropic } from '@ai-sdk/anthropic';
 import { createGoogleGenerativeAI } from '@ai-sdk/google';
 import { createMistral } from '@ai-sdk/mistral';
 import { createOpenAI } from '@ai-sdk/openai';
-import { createOllama } from 'ollama-ai-provider-v2';
+import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
 
 import type { IProviderInfo } from '../tokens';
 import type { IModelOptions } from './models';
@@ -202,26 +202,6 @@ export const openaiProvider: IProviderInfo = {
   }
 };
 
-/**
- * Ollama provider
- */
-export const ollamaProvider: IProviderInfo = {
-  id: 'ollama',
-  name: 'Ollama',
-  apiKeyRequirement: 'none',
-  defaultModels: [],
-  supportsBaseURL: true,
-  supportsHeaders: true,
-  factory: (options: IModelOptions) => {
-    const ollama = createOllama({
-      baseURL: options.baseURL || 'http://localhost:11434/api',
-      ...(options.headers && { headers: options.headers })
-    });
-    const modelName = options.model || 'phi3';
-    return ollama(modelName);
-  }
-};
-
 /**
  * Generic OpenAI-compatible provider
  */
@@ -234,13 +214,24 @@ export const genericProvider: IProviderInfo = {
   supportsHeaders: true,
   supportsToolCalling: true,
   description: 'Uses /chat/completions endpoint',
+  baseUrls: [
+    {
+      url: 'http://localhost:4000',
+      description: 'Default for local LiteLLM server'
+    },
+    {
+      url: 'http://localhost:11434/v1',
+      description: 'Default for local Ollama server'
+    }
+  ],
   factory: (options: IModelOptions) => {
-    const openai = createOpenAI({
+    const openaiCompatible = createOpenAICompatible({
+      name: options.provider,
       apiKey: options.apiKey || 'dummy',
-      ...(options.baseURL && { baseURL: options.baseURL }),
+      baseURL: options.baseURL ?? '',
       ...(options.headers && { headers: options.headers })
     });
     const modelName = options.model || 'gpt-4o';
-    return openai(modelName);
+    return openaiCompatible(modelName);
   }
 };
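The rewritten generic factory above falls back to `'dummy'` for the API key, `''` for the base URL, and `'gpt-4o'` for the model before handing the options to `createOpenAICompatible`. A minimal sketch of that option resolution, decoupled from the SDK (the helper name is ours, not part of the package):

```typescript
// Resolve the defaults the generic provider factory applies (sketch only).
interface IGenericOptions {
  apiKey?: string;
  baseURL?: string;
  model?: string;
}

function resolveGenericOptions(options: IGenericOptions) {
  return {
    apiKey: options.apiKey || 'dummy',
    baseURL: options.baseURL ?? '',
    model: options.model || 'gpt-4o'
  };
}

// e.g. pointing at a local Ollama server, with no key and no model set:
const resolved = resolveGenericOptions({
  baseURL: 'http://localhost:11434/v1'
});
// resolved -> { apiKey: 'dummy', baseURL: 'http://localhost:11434/v1', model: 'gpt-4o' }
```

The `'dummy'` key keeps OpenAI-compatible clients happy when the local server (such as Ollama) does not require authentication.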
package/src/tokens.ts CHANGED
@@ -159,6 +159,11 @@ export interface IProviderInfo {
    */
   description?: string;
 
+  /**
+   * Optional URL suggestions
+   */
+  baseUrls?: { url: string; description?: string }[];
+
   /**
    * Factory function for creating language models
    */
@@ -527,6 +527,7 @@ const AISettingsComponent: React.FC<IAISettingsComponentProps> = ({
         overflow: 'auto',
         p: 2,
         pb: 4,
+        boxSizing: 'border-box',
         fontSize: '0.9rem'
       }}
     >
@@ -600,6 +601,7 @@ const AISettingsComponent: React.FC<IAISettingsComponentProps> = ({
           <Select
             value={config.activeCompleterProvider || ''}
             label="Completion Provider"
+            className="jp-ai-completion-provider-select"
             onChange={e =>
               model.setActiveCompleterProvider(
                 e.target.value || undefined
@@ -607,7 +609,7 @@ const AISettingsComponent: React.FC<IAISettingsComponentProps> = ({
             }
           >
             <MenuItem value="">
-              <em>Use chat provider</em>
+              <em>No completion</em>
             </MenuItem>
             {config.providers.map(provider => (
               <MenuItem key={provider.id} value={provider.id}>
@@ -793,9 +795,9 @@ const AISettingsComponent: React.FC<IAISettingsComponentProps> = ({
           label={
             <div>
               <span>Use the secrets manager to manage API keys</span>
-              {config.useSecretsManager && (
+              {!config.useSecretsManager && (
                 <Alert severity="warning" icon={<Error />} sx={{ mb: 2 }}>
-                  The secrets will be stored in plain text in settings
+                  The secrets are stored in plain text in settings
                 </Alert>
               )}
             </div>
@@ -5,6 +5,7 @@ import {
   Accordion,
   AccordionDetails,
   AccordionSummary,
+  Autocomplete,
   Box,
   Button,
   Chip,
@@ -83,9 +84,10 @@ export const ProviderConfigDialog: React.FC<IProviderConfigDialogProps> = ({
       label: info.name,
       models: info.defaultModels,
       apiKeyRequirement: info.apiKeyRequirement,
-      allowCustomModel: id === 'ollama' || id === 'generic', // Ollama and Generic allow custom models
+      allowCustomModel: id === 'generic', // Generic allows custom models
       supportsBaseURL: info.supportsBaseURL,
-      description: info.description
+      description: info.description,
+      baseUrls: info.baseUrls
     };
   });
 }, [providerRegistry]);
@@ -279,21 +281,46 @@ export const ProviderConfigDialog: React.FC<IProviderConfigDialogProps> = ({
       )}
 
       {selectedProvider?.supportsBaseURL && (
-        <TextField
+        <Autocomplete
+          freeSolo
           fullWidth
-          label="Base URL (Optional)"
-          value={baseURL}
-          onChange={e => setBaseURL(e.target.value)}
-          placeholder={
-            provider === 'ollama'
-              ? 'http://localhost:11434/api'
-              : 'Custom API endpoint'
-          }
-          helperText={
-            provider === 'ollama'
-              ? 'Ollama server endpoint'
-              : 'Custom API base URL (e.g., for LiteLLM proxy). Leave empty to use default provider endpoint.'
-          }
+          options={(selectedProvider.baseUrls ?? []).map(
+            option => option.url
+          )}
+          value={baseURL || ''}
+          onChange={(_, value) => {
+            if (value && typeof value === 'string') {
+              setBaseURL(value);
+            }
+          }}
+          inputValue={baseURL || ''}
+          renderOption={(props, option) => {
+            const urlOption = (selectedProvider.baseUrls ?? []).find(
+              u => u.url === option
+            );
+            return (
+              <Box component="li" {...props} key={option}>
+                <Box>
+                  <Typography variant="body2">{option}</Typography>
+                  {urlOption?.description && (
+                    <Typography variant="caption" color="text.secondary">
+                      {urlOption.description}
+                    </Typography>
+                  )}
+                </Box>
+              </Box>
+            );
+          }}
+          renderInput={params => (
+            <TextField
+              {...params}
+              fullWidth
+              label="Base URL"
+              placeholder="https://api.example.com/v1"
+              onChange={e => setBaseURL(e.target.value)}
+            />
+          )}
+          clearOnBlur={false}
         />
       )}
 
package/style/base.css CHANGED
@@ -371,6 +371,11 @@
   transform: rotate(180deg);
 }
 
+.jp-ai-settings-icon {
+  align-items: center;
+  display: flex;
+}
+
 .jp-chat-sidepanel .jp-chat-add span.jp-ToolbarButtonComponent-label {
   display: none;
 }
@@ -379,3 +384,12 @@
   stroke: var(--jp-inverse-layout-color3);
   stroke-width: 2;
 }
+
+/* Disabled color for the completion status */
+.jp-ai-completion-status .jp-ai-completion-disabled circle {
+  fill: var(--jp-layout-color3);
+}
+
+.jp-ai-completion-status .jp-ai-completion-disabled path {
+  fill: var(--jp-layout-color2);
+}