@jupyterlite/ai 0.2.0 → 0.3.0

Files changed (44)
  1. package/README.md +48 -9
  2. package/lib/_provider-settings/anthropic.json +70 -0
  3. package/lib/_provider-settings/chromeAI.json +21 -0
  4. package/lib/_provider-settings/mistralAI.json +75 -0
  5. package/lib/_provider-settings/openAI.json +668 -0
  6. package/lib/chat-handler.d.ts +12 -0
  7. package/lib/chat-handler.js +70 -21
  8. package/lib/completion-provider.d.ts +3 -3
  9. package/lib/icons.d.ts +2 -0
  10. package/lib/icons.js +15 -0
  11. package/lib/index.d.ts +2 -1
  12. package/lib/index.js +61 -6
  13. package/lib/llm-models/anthropic-completer.d.ts +19 -0
  14. package/lib/llm-models/anthropic-completer.js +57 -0
  15. package/lib/llm-models/base-completer.d.ts +6 -2
  16. package/lib/llm-models/chrome-completer.d.ts +19 -0
  17. package/lib/llm-models/chrome-completer.js +67 -0
  18. package/lib/llm-models/codestral-completer.d.ts +9 -8
  19. package/lib/llm-models/codestral-completer.js +37 -54
  20. package/lib/llm-models/openai-completer.d.ts +19 -0
  21. package/lib/llm-models/openai-completer.js +51 -0
  22. package/lib/llm-models/utils.d.ts +1 -0
  23. package/lib/llm-models/utils.js +57 -0
  24. package/lib/provider.d.ts +11 -0
  25. package/lib/provider.js +26 -0
  26. package/lib/slash-commands.d.ts +16 -0
  27. package/lib/slash-commands.js +25 -0
  28. package/package.json +23 -104
  29. package/schema/ai-provider.json +4 -8
  30. package/schema/chat.json +8 -0
  31. package/src/chat-handler.ts +91 -34
  32. package/src/completion-provider.ts +3 -3
  33. package/src/icons.ts +18 -0
  34. package/src/index.ts +67 -5
  35. package/src/llm-models/anthropic-completer.ts +75 -0
  36. package/src/llm-models/base-completer.ts +7 -2
  37. package/src/llm-models/chrome-completer.ts +88 -0
  38. package/src/llm-models/codestral-completer.ts +43 -69
  39. package/src/llm-models/openai-completer.ts +67 -0
  40. package/src/llm-models/svg.d.ts +9 -0
  41. package/src/llm-models/utils.ts +49 -0
  42. package/src/provider.ts +38 -0
  43. package/src/slash-commands.tsx +55 -0
  44. package/style/icons/jupyternaut-lite.svg +7 -0
package/README.md CHANGED
@@ -3,9 +3,9 @@
 [![Github Actions Status](https://github.com/jupyterlite/ai/workflows/Build/badge.svg)](https://github.com/jupyterlite/ai/actions/workflows/build.yml)
 [![lite-badge](https://jupyterlite.rtfd.io/en/latest/_static/badge.svg)](https://jupyterlite.github.io/ai/lab/index.html)
 
-AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite, powered by MistralAI
+AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite ✨
 
-[a screencast showing the Codestral extension in JupyterLite](https://github.com/jupyterlite/ai/assets/591645/855c4e3e-3a63-4868-8052-5c9909922c21)
+[a screencast showing the Jupyterlite AI extension in JupyterLite](https://github.com/jupyterlite/ai/assets/591645/855c4e3e-3a63-4868-8052-5c9909922c21)
 
 ## Requirements
 
@@ -14,12 +14,7 @@ AI code completions and chat for JupyterLab, Notebook 7 and JupyterLite, powered
 > To enable more AI providers in JupyterLab and Jupyter Notebook, we recommend using the [Jupyter AI](https://github.com/jupyterlab/jupyter-ai) extension directly.
 > At the moment Jupyter AI is not compatible with JupyterLite, but might be to some extent in the future.
 
-- JupyterLab >= 4.1.0 or Notebook >= 7.1.0
-
-> [!WARNING]
-> This extension is still very much experimental. It is not an official MistralAI extension.
-> It is exploring the integration of the MistralAI API with JupyterLab, which can also be used in [JupyterLite](https://jupyterlite.readthedocs.io/).
-> For a more complete AI extension for JupyterLab, see [Jupyter AI](https://github.com/jupyterlab/jupyter-ai).
+- JupyterLab >= 4.4.0a0 or Notebook >= 7.4.0a0
 
 ## ✨ Try it in your browser ✨
 
@@ -37,13 +32,29 @@ To install the extension, execute:
 pip install jupyterlite-ai
 ```
 
+To install the requirements (JupyterLab, JupyterLite and Notebook), there is an optional dependencies extra:
+
+```bash
+pip install jupyterlite-ai[jupyter]
+```
+
 # Usage
 
+AI providers typically require an API key to access their models.
+
+The process is different for each provider, so refer to their documentation to learn how to generate new API keys, if they are not covered in the sections below.
+
+## Using MistralAI
+
+> [!WARNING]
+> This extension is still very much experimental. It is not an official MistralAI extension.
+
 1. Go to https://console.mistral.ai/api-keys/ and create an API key.
 
 ![Screenshot showing how to create an API key](./img/1-api-key.png)
 
-2. Open the JupyterLab settings and go to the Codestral section to enter the API key
+2. Open the JupyterLab settings and go to the **AI providers** section to select the `MistralAI`
+   provider and enter the API key (required).
 
 ![Screenshot showing how to add the API key to the settings](./img/2-jupyterlab-settings.png)
 
@@ -51,6 +62,34 @@ pip install jupyterlite-ai
 
 ![Screenshot showing how to use the chat](./img/3-usage.png)
 
+## Using ChromeAI
+
+> [!WARNING]
+> Support for ChromeAI is still experimental and only available in Google Chrome.
+
+You can check whether ChromeAI is enabled in your browser by going to the following URL: https://chromeai.org/
+
+Enable the proper flags in Google Chrome:
+
+- chrome://flags/#prompt-api-for-gemini-nano
+  - Select: `Enabled`
+- chrome://flags/#optimization-guide-on-device-model
+  - Select: `Enabled BypassPrefRequirement`
+- chrome://components
+  - Click `Check for Update` on Optimization Guide On Device Model to download the model
+- [Optional] chrome://flags/#text-safety-classifier
+
+![a screenshot showing how to enable the ChromeAI flag in Google Chrome](https://github.com/user-attachments/assets/d48f46cc-52ee-4ce5-9eaf-c763cdbee04c)
+
+Then restart Chrome for these changes to take effect.
+
+> [!WARNING]
+> On first use, Chrome will download the on-device model, which can be as large as 22GB (according to their docs and at the time of writing).
+> During the download, ChromeAI may not be available via the extension.
+
+> [!NOTE]
+> For more information about Chrome Built-in AI: https://developer.chrome.com/docs/ai/get-started
+
 ## Uninstall
 
 To remove the extension, execute:
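The new **AI providers** settings section described above can also be pre-seeded without the UI, via JupyterLab's `overrides.json` mechanism. A hypothetical sketch (the plugin id and property names here are assumptions, not confirmed by this diff; check the extension's `schema/ai-provider.json` for the real ids):

```json
{
  "@jupyterlite/ai:ai-provider": {
    "provider": "MistralAI",
    "apiKey": "YOUR-MISTRAL-API-KEY"
  }
}
```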
package/lib/_provider-settings/anthropic.json ADDED
@@ -0,0 +1,70 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "type": "object",
+  "properties": {
+    "temperature": {
+      "type": "number",
+      "description": "Amount of randomness injected into the response. Ranges from 0 to 1. Use temp closer to 0 for analytical / multiple choice, and temp closer to 1 for creative and generative tasks."
+    },
+    "topK": {
+      "type": "number",
+      "description": "Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Defaults to -1, which disables it."
+    },
+    "topP": {
+      "type": "number",
+      "description": "Does nucleus sampling, in which we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. Defaults to -1, which disables it. Note that you should either alter temperature or top_p, but not both."
+    },
+    "maxTokens": {
+      "type": "number",
+      "description": "A maximum number of tokens to generate before stopping."
+    },
+    "maxTokensToSample": {
+      "type": "number",
+      "description": "A maximum number of tokens to generate before stopping.",
+      "deprecated": "Use \"maxTokens\" instead."
+    },
+    "stopSequences": {
+      "type": "array",
+      "items": {
+        "type": "string"
+      },
+      "description": "A list of strings upon which to stop generating. You probably want `[\"\\n\\nHuman:\"]`, as that's the cue for the next turn in the dialog agent."
+    },
+    "streaming": {
+      "type": "boolean",
+      "description": "Whether to stream the results or not"
+    },
+    "anthropicApiKey": {
+      "type": "string",
+      "description": "Anthropic API key"
+    },
+    "apiKey": {
+      "type": "string",
+      "description": "Anthropic API key"
+    },
+    "anthropicApiUrl": {
+      "type": "string",
+      "description": "Anthropic API URL"
+    },
+    "modelName": {
+      "type": "string",
+      "deprecated": "Use \"model\" instead"
+    },
+    "model": {
+      "type": "string",
+      "description": "Model name to use"
+    },
+    "invocationKwargs": {
+      "type": "object",
+      "description": "Holds any additional parameters that are valid to pass to {@link https://console.anthropic.com/docs/api/reference | `anthropic.messages`} that are not explicitly specified on this class."
+    },
+    "streamUsage": {
+      "type": "boolean",
+      "description": "Whether or not to include token usage data in streamed chunks.",
+      "default": false
+    }
+  },
+  "additionalProperties": false,
+  "description": "Input to AnthropicChat class.",
+  "definitions": {}
+}
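Several of these provider schemas carry deprecated aliases alongside their replacements (`modelName` for `model`, `maxTokensToSample` for `maxTokens` above). A consumer of such settings might fold the aliases into canonical keys before handing them to a client. A minimal sketch, not part of the package:

```python
# Illustrative only: fold deprecated alias keys from the provider-settings
# schemas into their canonical replacements.
DEPRECATED_ALIASES = {
    "modelName": "model",
    "maxTokensToSample": "maxTokens",
}

def normalize_settings(settings: dict) -> dict:
    """Return a copy with deprecated keys renamed to their replacements.

    The non-deprecated key wins when both are present.
    """
    result = {k: v for k, v in settings.items() if k not in DEPRECATED_ALIASES}
    for old, new in DEPRECATED_ALIASES.items():
        if old in settings and new not in result:
            result[new] = settings[old]
    return result

print(normalize_settings({"modelName": "claude-3-haiku", "temperature": 0.2}))
# -> {'temperature': 0.2, 'model': 'claude-3-haiku'}
```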
package/lib/_provider-settings/chromeAI.json ADDED
@@ -0,0 +1,21 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "type": "object",
+  "properties": {
+    "concurrency": {
+      "type": "number",
+      "deprecated": "Use `maxConcurrency` instead"
+    },
+    "topK": {
+      "type": "number"
+    },
+    "temperature": {
+      "type": "number"
+    },
+    "systemPrompt": {
+      "type": "string"
+    }
+  },
+  "additionalProperties": false,
+  "definitions": {}
+}
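For reference, a settings object that would validate against this ChromeAI schema might look like the following (the values are illustrative, not defaults taken from the package):

```json
{
  "temperature": 0.7,
  "topK": 3,
  "systemPrompt": "You are a helpful coding assistant."
}
```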
package/lib/_provider-settings/mistralAI.json ADDED
@@ -0,0 +1,75 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "type": "object",
+  "properties": {
+    "streamUsage": {
+      "type": "boolean",
+      "description": "Whether or not to include token usage in the stream.",
+      "default": true
+    },
+    "disableStreaming": {
+      "type": "boolean",
+      "description": "Whether to disable streaming.\n\nIf streaming is bypassed, then `stream()` will defer to `invoke()`.\n\n- If true, will always bypass streaming case.\n- If false (default), will always use streaming case if available."
+    },
+    "apiKey": {
+      "type": "string",
+      "description": "The API key to use.",
+      "default": ""
+    },
+    "modelName": {
+      "type": "string",
+      "description": "The name of the model to use. Alias for `model`",
+      "default": "mistral-small-latest"
+    },
+    "model": {
+      "type": "string",
+      "description": "The name of the model to use.",
+      "default": "mistral-small-latest"
+    },
+    "endpoint": {
+      "type": "string",
+      "description": "Override the default endpoint."
+    },
+    "temperature": {
+      "type": "number",
+      "description": "What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
+      "default": 0.7
+    },
+    "topP": {
+      "type": "number",
+      "description": "Nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Should be between 0 and 1.",
+      "default": 1
+    },
+    "maxTokens": {
+      "type": "number",
+      "description": "The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length."
+    },
+    "streaming": {
+      "type": "boolean",
+      "description": "Whether or not to stream the response.",
+      "default": false
+    },
+    "safeMode": {
+      "type": "boolean",
+      "description": "Whether to inject a safety prompt before all conversations.",
+      "default": false,
+      "deprecated": "use safePrompt instead"
+    },
+    "safePrompt": {
+      "type": "boolean",
+      "description": "Whether to inject a safety prompt before all conversations.",
+      "default": false
+    },
+    "randomSeed": {
+      "type": "number",
+      "description": "The seed to use for random sampling. If set, different calls will generate deterministic results. Alias for `seed`"
+    },
+    "seed": {
+      "type": "number",
+      "description": "The seed to use for random sampling. If set, different calls will generate deterministic results."
+    }
+  },
+  "additionalProperties": false,
+  "description": "Input to chat model class.",
+  "definitions": {}
+}
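As with the other providers, a concrete MistralAI settings object conforming to this schema might look like this (illustrative values; `codestral-latest` is an assumed model name, not a default taken from this schema):

```json
{
  "apiKey": "YOUR-MISTRAL-API-KEY",
  "model": "codestral-latest",
  "temperature": 0.7,
  "streaming": true
}
```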