@lobehub/chat 0.148.5 → 0.148.6

This diff shows the changes between package versions as published to a supported public registry. It is provided for informational purposes only.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,39 @@
 
  # Changelog
 
+ ### [Version 0.148.6](https://github.com/lobehub/lobe-chat/compare/v0.148.5...v0.148.6)
+
+ <sup>Released on **2024-04-22**</sup>
+
+ #### 🐛 Bug Fixes
+
+ - **misc**: Add Windows Phone, iPadOS, BlackBerry OS, Linux OS and Chrome OS sync icons.
+
+ #### 💄 Styles
+
+ - **misc**: Support more model Icons: dbrx, command-r, openchat, rwkv, Bert-vits2, Stable Diffusion, WizardLM, adobe firefly, skylark.
+
+ <br/>
+
+ <details>
+ <summary><kbd>Improvements and Fixes</kbd></summary>
+
+ #### What's fixed
+
+ - **misc**: Add Windows Phone, iPadOS, BlackBerry OS, Linux OS and Chrome OS sync icons, closes [#2139](https://github.com/lobehub/lobe-chat/issues/2139) ([8ed1f07](https://github.com/lobehub/lobe-chat/commit/8ed1f07))
+
+ #### Styles
+
+ - **misc**: Support more model Icons: dbrx, command-r, openchat, rwkv, Bert-vits2, Stable Diffusion, WizardLM, adobe firefly, skylark, closes [#2107](https://github.com/lobehub/lobe-chat/issues/2107) ([4268d8b](https://github.com/lobehub/lobe-chat/commit/4268d8b))
+
+ </details>
+
+ <div align="right">
+
+ [![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)
+
+ </div>
+
  ### [Version 0.148.5](https://github.com/lobehub/lobe-chat/compare/v0.148.4...v0.148.5)
 
  <sup>Released on **2024-04-22**</sup>
package/CONTRIBUTING.md CHANGED
@@ -43,7 +43,7 @@ Choose a meaningful branch name related to your work. It makes collaboration eas
  🧙‍♀️ Time to work your magic! Write your code, fix bugs, or add new features. Be sure to follow our project's coding style. You can check if your code adheres to our style using:
 
  ```bash
- yarn lint
+ pnpm lint
  ```
 
  This adds a bit of enchantment to your coding experience! ✨
@@ -37,8 +37,67 @@ For more information on using Ollama in LobeChat, please refer to [Ollama Usage]
  ## Accessing Ollama from Non-Local Locations
 
  When you first initiate Ollama, it is configured to allow access only from the local machine. To enable access from other domains and set up port listening, you will need to adjust the environment variables `OLLAMA_ORIGINS` and `OLLAMA_HOST` accordingly.
- ```
- set OLLAMA_ORIGINS=*
- set OLLAMA_HOST=:11434
- ```
+
+ ### Ollama Environment Variables
+
+ `OLLAMA_HOST` The host:port to bind to (default "127.0.0.1:11434")
+ `OLLAMA_ORIGINS` A comma-separated list of allowed origins.
+ `OLLAMA_MODELS` The path to the models directory (default "~/.ollama/models")
+ `OLLAMA_KEEP_ALIVE` The duration that models stay loaded in memory (default "5m")
+ `OLLAMA_DEBUG` Set to 1 to enable additional debug logging
+
+ ### Setting environment variables on Windows
+
+ On Windows, Ollama inherits your user and system environment variables.
+
+ 1. First, quit Ollama by clicking its tray icon and selecting Quit.
+ 2. Open the system environment variables editor from the Control Panel.
+ 3. Edit or create variables for your user account, such as `OLLAMA_HOST` and `OLLAMA_ORIGINS`.
+ 4. Click OK/Apply to save.
+ 5. Restart Ollama.
+
+ ### Setting environment variables on Mac
+
+ If Ollama is run as a macOS application, environment variables should be set using `launchctl`:
+
+ 1. For each environment variable, call `launchctl setenv`.
+
+ ```bash
+ launchctl setenv OLLAMA_HOST "0.0.0.0"
+ launchctl setenv OLLAMA_ORIGINS "*"
+ ```
+
+ 2. Restart the Ollama application.
+
+ ### Setting environment variables on Linux
+
+ If Ollama is run as a systemd service, environment variables should be set using `systemctl`:
+
+ 1. Edit the systemd service by calling `sudo systemctl edit ollama.service`.
+
+ ```bash
+ sudo systemctl edit ollama.service
+ ```
+
+ 2. For each environment variable, add an `Environment` line under the `[Service]` section:
+
+ ```ini
+ [Service]
+ Environment="OLLAMA_HOST=0.0.0.0"
+ Environment="OLLAMA_ORIGINS=*"
+ ```
+
+ 3. Save and exit.
+ 4. Reload `systemd` and restart Ollama:
+
+ ```bash
+ sudo systemctl daemon-reload
+ sudo systemctl restart ollama
+ ```
+
+ ### Setting environment variables on Docker
+
+ If Ollama is run as a Docker container, you can add the environment variables to the `docker run` command.
+
  For further guidance on configuration, consult the [Ollama Official Documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
@@ -35,8 +35,69 @@ docker run -d -p 3210:3210 -e OLLAMA_PROXY_URL=http://host.docker.internal:11434
  ## Accessing Ollama from Non-Local Locations
 
  Because Ollama restricts access to the local machine by default at startup, cross-origin access and port listening require additional settings for the environment variables `OLLAMA_ORIGINS` and `OLLAMA_HOST`.
- ```
- set OLLAMA_ORIGINS=*
- set OLLAMA_HOST=:11434
- ```
+
+ ### Ollama Environment Variables
+
+ `OLLAMA_HOST` The host:port to bind to (default "127.0.0.1:11434")
+ `OLLAMA_ORIGINS` A comma-separated list of allowed origins.
+ `OLLAMA_MODELS` The path to the models directory (default "~/.ollama/models")
+ `OLLAMA_KEEP_ALIVE` How long models stay loaded in memory (default "5m")
+ `OLLAMA_DEBUG` Set to 1 to enable additional debug logging
+
+ ### Setting environment variables on Windows
+
+ On Windows, Ollama inherits your user and system environment variables.
+
+ 1. First, quit Ollama from its icon in the Windows taskbar.
+ 2. Edit the system environment variables from the Control Panel.
+ 3. Edit or create Ollama's environment variables for your user account, such as `OLLAMA_HOST` and `OLLAMA_ORIGINS`.
+ 4. Click `OK/Apply` to save.
+ 5. Run `Ollama` again.
+
+ ### Setting environment variables on Mac
+
+ If Ollama is run as a macOS application, you need to set environment variables using `launchctl`:
+
+ 1. For each environment variable, call `launchctl setenv`.
+
+ ```bash
+ launchctl setenv OLLAMA_HOST "0.0.0.0"
+ launchctl setenv OLLAMA_ORIGINS "*"
+ ```
+
+ 2. Restart the Ollama application.
+
+ ### Setting environment variables on Linux
+
+ If Ollama is run as a systemd service, environment variables should be set using `systemctl`:
+
+ 1. Edit the systemd service by calling `sudo systemctl edit ollama.service`.
+
+ ```bash
+ sudo systemctl edit ollama.service
+ ```
+
+ 2. For each environment variable, add an `Environment` line under the `[Service]` section:
+
+ ```ini
+ [Service]
+ Environment="OLLAMA_HOST=0.0.0.0"
+ Environment="OLLAMA_ORIGINS=*"
+ ```
+
+ 3. Save and exit.
+ 4. Reload `systemd` and restart Ollama:
+
+ ```bash
+ sudo systemctl daemon-reload
+ sudo systemctl restart ollama
+ ```
+
+ ### Setting environment variables on Docker
+
+ If Ollama is run as a Docker container, you can add the environment variables to the `docker run` command.
+
  For detailed configuration, refer to the [Ollama Official Documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server).
+
@@ -25,8 +25,15 @@ We understand the importance of providing a seamless experience for users in tod
 
  If you are unfamiliar with the installation process of PWA, you can follow the steps below to add LobeChat as a desktop app (also applicable to mobile devices):
 
+ ## Running on Chrome / Edge
+
+ <Callout type={'important'}>
+ On macOS, a PWA installed via Chrome requires Chrome to be running; otherwise Chrome will open automatically before launching the PWA.
+ </Callout>
+
  <Steps>
- ### Run Chrome or Edge browser on your computer
+
+ ### Run Chrome or Edge browser on your computer
 
  ### Visit the LobeChat webpage
 
@@ -35,3 +42,32 @@ If you are unfamiliar with the installation process of PWA, you can follow the s
  ### Follow the on-screen instructions to complete the PWA installation
 
  </Steps>
+
+ ## Running on Safari
+
+ Safari PWA support requires macOS Ventura or later. A PWA installed via Safari does not require Safari to be running; you can open the PWA app directly.
+
+ <Steps>
+
+ ### Run Safari browser on your computer
+
+ ### Visit the LobeChat webpage
+
+ ### In the top right corner of the address bar, click the <kbd>Share</kbd> icon
+
+ ### Click <kbd>Add to Dock</kbd>
+
+ ### Follow the on-screen instructions to complete the PWA installation
+
+ </Steps>
+
+ <Callout type={'tip'}>
+ The LobeChat PWA installs with a black-background icon by default; you can press <kbd>cmd</kbd> + <kbd>i</kbd> and paste the image below to replace it with a white-background version.
+ </Callout>
+
+ <Image
+ alt={'PWA White Icon'}
+ borderless
+ cover
+ src={'https://github.com/lobehub/lobe-chat/assets/36695271/16ce82cb-49be-4d4d-ac86-4403a1536917'}
+ />
@@ -25,8 +25,15 @@ tags:
 
  If you are not familiar with the PWA installation process, you can follow the steps below to add LobeChat as a desktop app (also applicable to mobile devices):
 
+ ## Running on Chrome / Edge
+
+ <Callout type={'important'}>
+ On macOS, a PWA installed via Chrome requires Chrome to be running; otherwise Chrome will open automatically and then launch the PWA.
+ </Callout>
+
  <Steps>
- ### Run Chrome or Edge browser on your computer
+
+ ### Run Chrome or Edge browser on your computer
 
  ### Visit the LobeChat webpage
 
@@ -35,3 +42,32 @@ tags:
  ### Follow the on-screen instructions to complete the PWA installation
 
  </Steps>
+
+ ## Running on Safari
+
+ Safari PWA support requires macOS Ventura or later. A PWA installed via Safari does not require Safari to be running; you can open the PWA app directly.
+
+ <Steps>
+
+ ### Run Safari browser on your computer
+
+ ### Visit the LobeChat webpage
+
+ ### In the top right corner of the address bar, click the <kbd>Share</kbd> icon
+
+ ### Click <kbd>Add to Dock</kbd>
+
+ ### Follow the on-screen instructions to complete the PWA installation
+
+ </Steps>
+
+ <Callout type={'tip'}>
+ The default LobeChat PWA icon has a black background; you can press <kbd>cmd</kbd> + <kbd>i</kbd> and paste the image below to replace it with a white-background version.
+ </Callout>
+
+ <Image
+ alt={'PWA White Icon'}
+ borderless
+ cover
+ src={'https://github.com/lobehub/lobe-chat/assets/36695271/16ce82cb-49be-4d4d-ac86-4403a1536917'}
+ />
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@lobehub/chat",
- "version": "0.148.5",
+ "version": "0.148.6",
  "description": "Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal, and extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.",
  "keywords": [
  "framework",
@@ -1,4 +1,4 @@
- import { SiAndroid, SiApple, SiWindows11 } from '@icons-pack/react-simple-icons';
+ import { SiAndroid, SiApple, SiBlackberry, SiGooglechrome, SiLinux, SiWindows11 } from '@icons-pack/react-simple-icons';
  import { memo } from 'react';
 
  // TODO: remove the ignores once simple icons fixes its types
@@ -7,14 +7,23 @@ const SystemIcon = memo<{ title?: string }>(({ title }) => {
  if (!title) return;
 
  // @ts-ignore
- if (['Mac OS', 'iOS'].includes(title)) return <SiApple size={32} />;
+ if (['Mac OS', 'iOS', 'iPadOS'].includes(title)) return <SiApple size={32} />;
 
  // @ts-ignore
- if (title === 'Windows') return <SiWindows11 size={32} />;
+ if (['Windows'].includes(title)) return <SiWindows11 size={32} />;
 
  // @ts-ignore
  if (title === 'Android') return <SiAndroid size={32} />;
 
+ // @ts-ignore
+ if (['BlackBerry'].includes(title)) return <SiBlackberry size={32} />;
+
+ // @ts-ignore
+ if (title === 'Linux') return <SiLinux size={32} />;
+
+ // @ts-ignore
+ if (title === 'Chrome OS') return <SiGooglechrome size={32} />;
+
  return null;
  });
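The `if` chain above boils down to a lookup from a sync device's OS title to an icon. A minimal data-driven sketch of the same mapping (the `titleToIcon` helper and its return convention are hypothetical; the icon names are the `@icons-pack/react-simple-icons` exports used above):

```typescript
// Hypothetical sketch of the OS-title → icon mapping added in this diff.
// Keys mirror the titles matched above; values name the icon components.
const OS_ICONS: Record<string, string> = {
  'Android': 'SiAndroid',
  'BlackBerry': 'SiBlackberry',
  'Chrome OS': 'SiGooglechrome',
  'Linux': 'SiLinux',
  'Mac OS': 'SiApple',
  'Windows': 'SiWindows11',
  'iOS': 'SiApple',
  'iPadOS': 'SiApple',
};

// Returns the icon component name for a given OS title, or undefined
// (the real component renders null) when the OS is not recognized.
function titleToIcon(title?: string): string | undefined {
  return title ? OS_ICONS[title] : undefined;
}
```

Grouping the Apple platforms under shared entries keeps additions like iPadOS a one-row change rather than a new branch.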
 
@@ -1,9 +1,15 @@
  import {
+ Adobe,
+ Ai21,
  Aws,
+ Azure,
  Baichuan,
+ ByteDance,
  ChatGLM,
  Claude,
  Cohere,
+ Dbrx,
+ FishAudio,
  Gemini,
  Gemma,
  Hunyuan,
@@ -13,9 +19,12 @@ import {
  Mistral,
  Moonshot,
  OpenAI,
+ OpenChat,
  OpenRouter,
  Perplexity,
+ Rwkv,
  Spark,
+ Stability,
  Tongyi,
  Wenxin,
  Yi,
@@ -33,33 +42,63 @@ const ModelIcon = memo<ModelProviderIconProps>(({ model: originModel, size = 12
  // lower case the origin model so to better match more model id case
  const model = originModel.toLowerCase();
 
+ // currently supported models, maybe not in its own provider
  if (model.includes('gpt-3')) return <OpenAI.Avatar size={size} type={'gpt3'} />;
  if (model.includes('gpt-4')) return <OpenAI.Avatar size={size} type={'gpt4'} />;
- if (model.startsWith('glm') || model.includes('chatglm')) return <ChatGLM.Avatar size={size} />;
+ if (model.startsWith('glm') ||
+ model.includes('chatglm'))
+ return <ChatGLM.Avatar size={size} />;
  if (model.includes('claude')) return <Claude.Avatar size={size} />;
  if (model.includes('titan')) return <Aws.Avatar size={size} />;
  if (model.includes('llama')) return <Meta.Avatar size={size} />;
  if (model.includes('llava')) return <LLaVA.Avatar size={size} />;
  if (model.includes('gemini')) return <Gemini.Avatar size={size} />;
  if (model.includes('gemma')) return <Gemma.Avatar size={size} />;
+ if (model.includes('moonshot')) return <Moonshot.Avatar size={size} />;
  if (model.includes('qwen')) return <Tongyi.Avatar background={Tongyi.colorPrimary} size={size} />;
  if (model.includes('minmax')) return <Minimax.Avatar size={size} />;
- if (model.includes('moonshot')) return <Moonshot.Avatar size={size} />;
- if (model.includes('baichuan'))
- return <Baichuan.Avatar background={Baichuan.colorPrimary} size={size} />;
-
- if (model.includes('mistral') || model.includes('mixtral')) return <Mistral.Avatar size={size} />;
-
- if (model.includes('pplx') || model.includes('sonar')) return <Perplexity.Avatar size={size} />;
-
+ if (model.includes('mistral') ||
+ model.includes('mixtral'))
+ return <Mistral.Avatar size={size} />;
+ if (model.includes('pplx') ||
+ model.includes('sonar'))
+ return <Perplexity.Avatar size={size} />;
  if (model.includes('yi-')) return <Yi.Avatar size={size} />;
- if (model.includes('openrouter')) return <OpenRouter.Avatar size={size} />;
- if (model.includes('command')) return <Cohere.Color size={size} />;
+ if (model.startsWith('openrouter')) return <OpenRouter.Avatar size={size} />; // only for Cinematika and Auto
+ if (model.startsWith('openchat')) return <OpenChat.Avatar size={size} />;
+ if (model.includes('command')) return <Cohere.Avatar size={size} />;
+ if (model.includes('dbrx')) return <Dbrx.Avatar size={size} />;
 
- if (model.includes('ernie')) return <Wenxin.Avatar size={size} />;
+ // below: To be supported in providers, move up if supported
+ if (model.includes('baichuan'))
+ return <Baichuan.Avatar background={Baichuan.colorPrimary} size={size} />;
+ if (model.includes('rwkv')) return <Rwkv.Avatar size={size} />;
+ if (model.includes('ernie'))
+ return <Wenxin.Avatar size={size} />;
  if (model.includes('spark')) return <Spark.Avatar size={size} />;
  if (model.includes('hunyuan')) return <Hunyuan.Avatar size={size} />;
- if (model.includes('abab')) return <Minimax.Avatar size={size} />;
+ // ref https://github.com/fishaudio/Bert-VITS2/blob/master/train_ms.py#L702
+ if (model.startsWith('d_') ||
+ model.startsWith('g_') || model.startsWith('wd_'))
+ return <FishAudio.Avatar size={size} />;
+ if (model.includes('skylark')) return <ByteDance.Avatar size={size} />;
+
+ if (
+ model.includes('stable-diffusion') ||
+ model.includes('stable-video') ||
+ model.includes('stable-cascade') ||
+ model.includes('sdxl') ||
+ model.includes('stablelm') ||
+ model.startsWith('stable-') ||
+ model.startsWith('sd3')
+ )
+ return <Stability.Avatar size={size} />;
+
+ if (model.includes('wizardlm')) return <Azure.Avatar size={size} />;
+ if (model.includes('firefly')) return <Adobe.Avatar size={size} />;
+ if (model.includes('jamba') ||
+ model.includes('j2-'))
+ return <Ai21.Avatar size={size} />;
  });
 
  export default ModelIcon;
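The reordered checks above implement a two-tier, order-sensitive match over the lower-cased model id: providers that are already wired up come first, pending providers follow, and each rule is either a substring (`includes`) or prefix (`startsWith`) test. A reduced sketch of that strategy (the `matchProvider` function, rule table, and provider strings are illustrative, not the component's actual API):

```typescript
// Hypothetical reduction of the matching logic above: each rule pairs a
// predicate on the lower-cased model id with a provider name.
// Order matters: earlier rules win, mirroring the if-chain in the diff.
type Rule = [test: (m: string) => boolean, provider: string];

const RULES: Rule[] = [
  // tier 1: providers already supported
  [(m) => m.includes('gpt-3') || m.includes('gpt-4'), 'openai'],
  [(m) => m.startsWith('glm') || m.includes('chatglm'), 'chatglm'],
  [(m) => m.startsWith('openchat'), 'openchat'],
  [(m) => m.includes('command'), 'cohere'],
  [(m) => m.includes('dbrx'), 'dbrx'],
  // tier 2: providers pending support, moved up once wired in
  [(m) => m.includes('rwkv'), 'rwkv'],
  [(m) => m.includes('skylark'), 'bytedance'],
  [(m) => m.startsWith('stable-') || m.includes('sdxl') || m.startsWith('sd3'), 'stability'],
  [(m) => m.includes('wizardlm'), 'azure'],
  [(m) => m.includes('firefly'), 'adobe'],
];

// Returns the provider whose icon should render, or undefined for no match.
function matchProvider(originModel: string): string | undefined {
  const model = originModel.toLowerCase(); // case-insensitive, as in the diff
  return RULES.find(([test]) => test(model))?.[1];
}
```

Because earlier rules win, promoting a provider from the pending tier to the supported tier is just a reordering of rows, which is what this diff does for `moonshot` and `baichuan`.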
@@ -1,18 +1,32 @@
  import {
+ AdobeFirefly,
+ Ai21,
  Aws,
+ Azure,
  Baichuan,
+ ByteDance,
  ChatGLM,
  Claude,
+ Cohere,
+ Dbrx,
+ FishAudio,
  Gemini,
  Gemma,
+ Hunyuan,
  LLaVA,
  Meta,
  Minimax,
  Mistral,
  Moonshot,
  OpenAI,
+ OpenChat,
+ OpenRouter,
  Perplexity,
+ Rwkv,
+ Spark,
+ Stability,
  Tongyi,
+ Wenxin,
  ZeroOne,
  } from '@lobehub/icons';
  import { memo } from 'react';
@@ -25,8 +39,9 @@ interface ModelIconProps {
  const ModelIcon = memo<ModelIconProps>(({ model, size = 12 }) => {
  if (!model) return;
 
+ // currently supported models, maybe not in its own provider
  if (model.startsWith('gpt')) return <OpenAI size={size} />;
- if (model.startsWith('glm')) return <ChatGLM size={size} />;
+ if (model.startsWith('glm') || model.includes('chatglm')) return <ChatGLM size={size} />;
  if (model.includes('claude')) return <Claude size={size} />;
  if (model.includes('titan')) return <Aws size={size} />;
  if (model.includes('llama')) return <Meta size={size} />;
@@ -36,10 +51,39 @@ const ModelIcon = memo<ModelIconProps>(({ model, size = 12 }) => {
  if (model.includes('moonshot')) return <Moonshot size={size} />;
  if (model.includes('qwen')) return <Tongyi size={size} />;
  if (model.includes('minmax')) return <Minimax size={size} />;
- if (model.includes('baichuan')) return <Baichuan size={size} />;
  if (model.includes('mistral') || model.includes('mixtral')) return <Mistral size={size} />;
- if (model.includes('pplx')) return <Perplexity size={size} />;
- if (model.startsWith('yi-')) return <ZeroOne size={size} />;
+ if (model.includes('pplx') || model.includes('sonar')) return <Perplexity size={size} />;
+ if (model.includes('yi-')) return <ZeroOne size={size} />;
+ if (model.startsWith('openrouter')) return <OpenRouter size={size} />; // only for Cinematika and Auto
+ if (model.startsWith('openchat')) return <OpenChat size={size} />;
+ if (model.includes('command')) return <Cohere size={size} />;
+ if (model.includes('dbrx')) return <Dbrx size={size} />;
+
+ // below: To be supported in providers, move up if supported
+ if (model.includes('baichuan')) return <Baichuan size={size} />;
+ if (model.includes('rwkv')) return <Rwkv size={size} />;
+ if (model.includes('ernie')) return <Wenxin size={size} />;
+ if (model.includes('spark')) return <Spark size={size} />;
+ if (model.includes('hunyuan')) return <Hunyuan size={size} />;
+ // ref https://github.com/fishaudio/Bert-VITS2/blob/master/train_ms.py#L702
+ if (model.startsWith('d_') || model.startsWith('g_') || model.startsWith('wd_'))
+ return <FishAudio size={size} />;
+ if (model.includes('skylark')) return <ByteDance size={size} />;
+
+ if (
+ model.includes('stable-diffusion') ||
+ model.includes('stable-video') ||
+ model.includes('stable-cascade') ||
+ model.includes('sdxl') ||
+ model.includes('stablelm') ||
+ model.startsWith('stable-') ||
+ model.startsWith('sd3')
+ )
+ return <Stability size={size} />;
+
+ if (model.includes('wizardlm')) return <Azure size={size} />;
+ if (model.includes('firefly')) return <AdobeFirefly size={size} />;
+ if (model.includes('jamba') || model.includes('j2-')) return <Ai21 size={size} />;
  });
 
  export default ModelIcon;