@lobehub/chat 1.44.3 → 1.45.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (320)
  1. package/.remarkrc.mdx.js +6 -0
  2. package/CHANGELOG.md +50 -0
  3. package/changelog/v1.json +18 -0
  4. package/docs/changelog/2023-09-09-plugin-system.mdx +5 -0
  5. package/docs/changelog/2023-09-09-plugin-system.zh-CN.mdx +5 -0
  6. package/docs/changelog/2023-11-14-gpt4-vision.mdx +6 -0
  7. package/docs/changelog/2023-11-14-gpt4-vision.zh-CN.mdx +6 -0
  8. package/docs/changelog/2023-11-19-tts-stt.mdx +6 -0
  9. package/docs/changelog/2023-11-19-tts-stt.zh-CN.mdx +7 -0
  10. package/docs/changelog/2023-12-22-dalle-3.mdx +6 -0
  11. package/docs/changelog/2023-12-22-dalle-3.zh-CN.mdx +4 -0
  12. package/docs/changelog/2024-02-08-sso-oauth.mdx +6 -0
  13. package/docs/changelog/2024-02-08-sso-oauth.zh-CN.mdx +6 -0
  14. package/docs/changelog/2024-02-14-ollama.mdx +6 -0
  15. package/docs/changelog/2024-02-14-ollama.zh-CN.mdx +5 -0
  16. package/docs/changelog/2024-06-19-lobe-chat-v1.mdx +6 -0
  17. package/docs/changelog/2024-06-19-lobe-chat-v1.zh-CN.mdx +5 -0
  18. package/docs/changelog/2024-07-19-gpt-4o-mini.mdx +5 -0
  19. package/docs/changelog/2024-07-19-gpt-4o-mini.zh-CN.mdx +4 -0
  20. package/docs/changelog/2024-08-02-lobe-chat-database-docker.mdx +6 -0
  21. package/docs/changelog/2024-08-02-lobe-chat-database-docker.zh-CN.mdx +5 -0
  22. package/docs/changelog/2024-08-21-file-upload-and-knowledge-base.mdx +6 -0
  23. package/docs/changelog/2024-08-21-file-upload-and-knowledge-base.zh-CN.mdx +5 -0
  24. package/docs/changelog/2024-09-13-openai-o1-models.mdx +6 -0
  25. package/docs/changelog/2024-09-13-openai-o1-models.zh-CN.mdx +6 -0
  26. package/docs/changelog/2024-09-20-artifacts.mdx +6 -0
  27. package/docs/changelog/2024-09-20-artifacts.zh-CN.mdx +6 -0
  28. package/docs/changelog/2024-10-27-pin-assistant.mdx +5 -0
  29. package/docs/changelog/2024-10-27-pin-assistant.zh-CN.mdx +4 -0
  30. package/docs/changelog/2024-11-06-share-text-json.mdx +4 -0
  31. package/docs/changelog/2024-11-06-share-text-json.zh-CN.mdx +4 -0
  32. package/docs/changelog/2024-11-25-november-providers.mdx +7 -0
  33. package/docs/changelog/2024-11-25-november-providers.zh-CN.mdx +7 -0
  34. package/docs/changelog/2024-11-27-forkable-chat.mdx +4 -0
  35. package/docs/changelog/2024-11-27-forkable-chat.zh-CN.mdx +5 -0
  36. package/docs/changelog/2025-01-03-user-profile.mdx +5 -0
  37. package/docs/changelog/2025-01-03-user-profile.zh-CN.mdx +4 -1
  38. package/docs/self-hosting/advanced/auth/clerk.mdx +25 -41
  39. package/docs/self-hosting/advanced/auth/clerk.zh-CN.mdx +23 -37
  40. package/docs/self-hosting/advanced/auth/next-auth/auth0.mdx +31 -58
  41. package/docs/self-hosting/advanced/auth/next-auth/auth0.zh-CN.mdx +30 -57
  42. package/docs/self-hosting/advanced/auth/next-auth/authelia.mdx +38 -38
  43. package/docs/self-hosting/advanced/auth/next-auth/authelia.zh-CN.mdx +37 -38
  44. package/docs/self-hosting/advanced/auth/next-auth/authentik.mdx +26 -31
  45. package/docs/self-hosting/advanced/auth/next-auth/authentik.zh-CN.mdx +25 -30
  46. package/docs/self-hosting/advanced/auth/next-auth/casdoor.mdx +74 -75
  47. package/docs/self-hosting/advanced/auth/next-auth/casdoor.zh-CN.mdx +72 -73
  48. package/docs/self-hosting/advanced/auth/next-auth/cloudflare-zero-trust.mdx +24 -25
  49. package/docs/self-hosting/advanced/auth/next-auth/cloudflare-zero-trust.zh-CN.mdx +23 -24
  50. package/docs/self-hosting/advanced/auth/next-auth/github.mdx +46 -73
  51. package/docs/self-hosting/advanced/auth/next-auth/github.zh-CN.mdx +43 -70
  52. package/docs/self-hosting/advanced/auth/next-auth/logto.mdx +28 -37
  53. package/docs/self-hosting/advanced/auth/next-auth/logto.zh-CN.mdx +28 -37
  54. package/docs/self-hosting/advanced/auth/next-auth/microsoft-entra-id.mdx +36 -49
  55. package/docs/self-hosting/advanced/auth/next-auth/microsoft-entra-id.zh-CN.mdx +30 -43
  56. package/docs/self-hosting/advanced/auth/next-auth/wechat.mdx +13 -14
  57. package/docs/self-hosting/advanced/auth/next-auth/wechat.zh-CN.mdx +14 -15
  58. package/docs/self-hosting/advanced/auth/next-auth/zitadel.mdx +35 -69
  59. package/docs/self-hosting/advanced/auth/next-auth/zitadel.zh-CN.mdx +34 -68
  60. package/docs/self-hosting/advanced/auth.mdx +14 -13
  61. package/docs/self-hosting/advanced/auth.zh-CN.mdx +15 -14
  62. package/docs/self-hosting/advanced/feature-flags.zh-CN.mdx +15 -15
  63. package/docs/self-hosting/advanced/knowledge-base.mdx +14 -3
  64. package/docs/self-hosting/advanced/knowledge-base.zh-CN.mdx +12 -3
  65. package/docs/self-hosting/advanced/model-list.zh-CN.mdx +5 -5
  66. package/docs/self-hosting/advanced/s3/cloudflare-r2.mdx +52 -81
  67. package/docs/self-hosting/advanced/s3/cloudflare-r2.zh-CN.mdx +51 -80
  68. package/docs/self-hosting/advanced/s3/tencent-cloud.mdx +20 -34
  69. package/docs/self-hosting/advanced/s3/tencent-cloud.zh-CN.mdx +28 -43
  70. package/docs/self-hosting/advanced/s3.mdx +30 -34
  71. package/docs/self-hosting/advanced/s3.zh-CN.mdx +28 -33
  72. package/docs/self-hosting/advanced/settings-url-share.mdx +6 -6
  73. package/docs/self-hosting/advanced/settings-url-share.zh-CN.mdx +19 -19
  74. package/docs/self-hosting/advanced/upstream-sync.mdx +73 -89
  75. package/docs/self-hosting/advanced/upstream-sync.zh-CN.mdx +71 -87
  76. package/docs/self-hosting/advanced/webrtc.mdx +14 -21
  77. package/docs/self-hosting/advanced/webrtc.zh-CN.mdx +19 -26
  78. package/docs/self-hosting/environment-variables/analytics.zh-CN.mdx +0 -3
  79. package/docs/self-hosting/environment-variables/auth.zh-CN.mdx +1 -1
  80. package/docs/self-hosting/environment-variables/basic.mdx +13 -13
  81. package/docs/self-hosting/environment-variables/basic.zh-CN.mdx +15 -15
  82. package/docs/self-hosting/environment-variables/model-provider.mdx +1 -1
  83. package/docs/self-hosting/environment-variables/model-provider.zh-CN.mdx +1 -1
  84. package/docs/self-hosting/environment-variables/s3.mdx +3 -4
  85. package/docs/self-hosting/environment-variables/s3.zh-CN.mdx +5 -7
  86. package/docs/self-hosting/environment-variables.mdx +8 -4
  87. package/docs/self-hosting/environment-variables.zh-CN.mdx +4 -0
  88. package/docs/self-hosting/examples/azure-openai.mdx +9 -12
  89. package/docs/self-hosting/examples/azure-openai.zh-CN.mdx +8 -11
  90. package/docs/self-hosting/examples/ollama.mdx +8 -7
  91. package/docs/self-hosting/examples/ollama.zh-CN.mdx +8 -7
  92. package/docs/self-hosting/platform/alibaba-cloud.mdx +5 -7
  93. package/docs/self-hosting/platform/alibaba-cloud.zh-CN.mdx +5 -7
  94. package/docs/self-hosting/platform/btpanel.mdx +3 -2
  95. package/docs/self-hosting/platform/btpanel.zh-CN.mdx +3 -3
  96. package/docs/self-hosting/platform/docker-compose.mdx +75 -85
  97. package/docs/self-hosting/platform/docker-compose.zh-CN.mdx +75 -85
  98. package/docs/self-hosting/platform/docker.mdx +87 -92
  99. package/docs/self-hosting/platform/docker.zh-CN.mdx +96 -115
  100. package/docs/self-hosting/platform/netlify.mdx +44 -94
  101. package/docs/self-hosting/platform/netlify.zh-CN.mdx +40 -90
  102. package/docs/self-hosting/platform/railway.mdx +6 -7
  103. package/docs/self-hosting/platform/railway.zh-CN.mdx +6 -7
  104. package/docs/self-hosting/platform/repocloud.mdx +6 -7
  105. package/docs/self-hosting/platform/repocloud.zh-CN.mdx +6 -7
  106. package/docs/self-hosting/platform/sealos.mdx +6 -7
  107. package/docs/self-hosting/platform/sealos.zh-CN.mdx +6 -7
  108. package/docs/self-hosting/platform/vercel.mdx +7 -8
  109. package/docs/self-hosting/platform/vercel.zh-CN.mdx +7 -8
  110. package/docs/self-hosting/platform/zeabur.mdx +29 -32
  111. package/docs/self-hosting/platform/zeabur.zh-CN.mdx +29 -32
  112. package/docs/self-hosting/server-database/docker-compose.mdx +44 -71
  113. package/docs/self-hosting/server-database/docker-compose.zh-CN.mdx +44 -71
  114. package/docs/self-hosting/server-database/docker.mdx +84 -88
  115. package/docs/self-hosting/server-database/docker.zh-CN.mdx +87 -91
  116. package/docs/self-hosting/server-database/dokploy.mdx +18 -1
  117. package/docs/self-hosting/server-database/dokploy.zh-CN.mdx +84 -68
  118. package/docs/self-hosting/server-database/repocloud.mdx +7 -9
  119. package/docs/self-hosting/server-database/repocloud.zh-CN.mdx +9 -11
  120. package/docs/self-hosting/server-database/vercel.mdx +158 -243
  121. package/docs/self-hosting/server-database/vercel.zh-CN.mdx +137 -205
  122. package/docs/self-hosting/server-database/zeabur.mdx +21 -23
  123. package/docs/self-hosting/server-database/zeabur.zh-CN.mdx +20 -22
  124. package/docs/self-hosting/server-database.mdx +34 -36
  125. package/docs/self-hosting/server-database.zh-CN.mdx +34 -37
  126. package/docs/self-hosting/start.mdx +1 -4
  127. package/docs/self-hosting/start.zh-CN.mdx +1 -1
  128. package/docs/usage/agents/agent-organization.mdx +5 -21
  129. package/docs/usage/agents/agent-organization.zh-CN.mdx +5 -21
  130. package/docs/usage/agents/concepts.mdx +4 -4
  131. package/docs/usage/agents/concepts.zh-CN.mdx +4 -4
  132. package/docs/usage/agents/custom-agent.mdx +2 -2
  133. package/docs/usage/agents/custom-agent.zh-CN.mdx +2 -2
  134. package/docs/usage/agents/model.mdx +4 -4
  135. package/docs/usage/agents/model.zh-CN.mdx +6 -6
  136. package/docs/usage/agents/prompt.mdx +5 -6
  137. package/docs/usage/agents/prompt.zh-CN.mdx +5 -6
  138. package/docs/usage/agents/topics.mdx +2 -2
  139. package/docs/usage/agents/topics.zh-CN.mdx +2 -2
  140. package/docs/usage/features/agent-market.mdx +2 -2
  141. package/docs/usage/features/agent-market.zh-CN.mdx +2 -2
  142. package/docs/usage/features/auth.mdx +1 -5
  143. package/docs/usage/features/auth.zh-CN.mdx +1 -5
  144. package/docs/usage/features/database.mdx +1 -5
  145. package/docs/usage/features/database.zh-CN.mdx +1 -5
  146. package/docs/usage/features/local-llm.mdx +2 -6
  147. package/docs/usage/features/local-llm.zh-CN.mdx +2 -6
  148. package/docs/usage/features/mobile.mdx +1 -5
  149. package/docs/usage/features/mobile.zh-CN.mdx +1 -5
  150. package/docs/usage/features/multi-ai-providers.mdx +3 -11
  151. package/docs/usage/features/multi-ai-providers.zh-CN.mdx +3 -11
  152. package/docs/usage/features/plugin-system.mdx +9 -10
  153. package/docs/usage/features/plugin-system.zh-CN.mdx +9 -10
  154. package/docs/usage/features/pwa.mdx +11 -25
  155. package/docs/usage/features/pwa.zh-CN.mdx +11 -25
  156. package/docs/usage/features/text-to-image.mdx +2 -2
  157. package/docs/usage/features/text-to-image.zh-CN.mdx +2 -2
  158. package/docs/usage/features/theme.mdx +1 -6
  159. package/docs/usage/features/theme.zh-CN.mdx +1 -6
  160. package/docs/usage/features/tts.mdx +3 -7
  161. package/docs/usage/features/tts.zh-CN.mdx +3 -7
  162. package/docs/usage/features/vision.mdx +2 -2
  163. package/docs/usage/features/vision.zh-CN.mdx +2 -2
  164. package/docs/usage/foundation/basic.mdx +7 -18
  165. package/docs/usage/foundation/basic.zh-CN.mdx +6 -16
  166. package/docs/usage/foundation/share.mdx +3 -13
  167. package/docs/usage/foundation/share.zh-CN.mdx +3 -13
  168. package/docs/usage/foundation/text2image.mdx +3 -12
  169. package/docs/usage/foundation/text2image.zh-CN.mdx +3 -12
  170. package/docs/usage/foundation/translate.mdx +3 -13
  171. package/docs/usage/foundation/translate.zh-CN.mdx +3 -13
  172. package/docs/usage/foundation/tts-stt.mdx +3 -12
  173. package/docs/usage/foundation/tts-stt.zh-CN.mdx +3 -12
  174. package/docs/usage/foundation/vision.mdx +4 -16
  175. package/docs/usage/foundation/vision.zh-CN.mdx +4 -16
  176. package/docs/usage/plugins/basic-usage.mdx +7 -30
  177. package/docs/usage/plugins/basic-usage.zh-CN.mdx +7 -30
  178. package/docs/usage/plugins/development.mdx +30 -78
  179. package/docs/usage/plugins/development.zh-CN.mdx +31 -79
  180. package/docs/usage/plugins/store.mdx +2 -10
  181. package/docs/usage/plugins/store.zh-CN.mdx +2 -10
  182. package/docs/usage/providers/ai21.mdx +17 -33
  183. package/docs/usage/providers/ai21.zh-CN.mdx +17 -33
  184. package/docs/usage/providers/ai360.mdx +17 -33
  185. package/docs/usage/providers/ai360.zh-CN.mdx +20 -36
  186. package/docs/usage/providers/anthropic.mdx +23 -45
  187. package/docs/usage/providers/anthropic.zh-CN.mdx +22 -44
  188. package/docs/usage/providers/azure.mdx +21 -51
  189. package/docs/usage/providers/azure.zh-CN.mdx +19 -48
  190. package/docs/usage/providers/baichuan.mdx +16 -34
  191. package/docs/usage/providers/baichuan.zh-CN.mdx +15 -33
  192. package/docs/usage/providers/bedrock.mdx +38 -87
  193. package/docs/usage/providers/bedrock.zh-CN.mdx +37 -86
  194. package/docs/usage/providers/cloudflare.mdx +25 -48
  195. package/docs/usage/providers/cloudflare.zh-CN.mdx +24 -45
  196. package/docs/usage/providers/deepseek.mdx +25 -51
  197. package/docs/usage/providers/deepseek.zh-CN.mdx +24 -50
  198. package/docs/usage/providers/fireworksai.mdx +23 -43
  199. package/docs/usage/providers/fireworksai.zh-CN.mdx +21 -41
  200. package/docs/usage/providers/gemini.mdx +20 -46
  201. package/docs/usage/providers/gemini.zh-CN.mdx +20 -46
  202. package/docs/usage/providers/giteeai.mdx +24 -45
  203. package/docs/usage/providers/giteeai.zh-CN.mdx +22 -43
  204. package/docs/usage/providers/github.mdx +19 -45
  205. package/docs/usage/providers/github.zh-CN.mdx +19 -44
  206. package/docs/usage/providers/groq.mdx +12 -29
  207. package/docs/usage/providers/groq.zh-CN.mdx +11 -28
  208. package/docs/usage/providers/hunyuan.mdx +19 -39
  209. package/docs/usage/providers/hunyuan.zh-CN.mdx +18 -38
  210. package/docs/usage/providers/internlm.mdx +21 -38
  211. package/docs/usage/providers/internlm.zh-CN.mdx +19 -36
  212. package/docs/usage/providers/minimax.mdx +24 -50
  213. package/docs/usage/providers/minimax.zh-CN.mdx +22 -48
  214. package/docs/usage/providers/mistral.mdx +21 -39
  215. package/docs/usage/providers/mistral.zh-CN.mdx +20 -38
  216. package/docs/usage/providers/moonshot.mdx +20 -38
  217. package/docs/usage/providers/moonshot.zh-CN.mdx +19 -37
  218. package/docs/usage/providers/novita.mdx +20 -43
  219. package/docs/usage/providers/novita.zh-CN.mdx +19 -42
  220. package/docs/usage/providers/ollama/gemma.mdx +12 -29
  221. package/docs/usage/providers/ollama/gemma.zh-CN.mdx +12 -30
  222. package/docs/usage/providers/ollama/qwen.mdx +17 -32
  223. package/docs/usage/providers/ollama/qwen.zh-CN.mdx +12 -27
  224. package/docs/usage/providers/ollama.mdx +67 -99
  225. package/docs/usage/providers/ollama.zh-CN.mdx +67 -99
  226. package/docs/usage/providers/openai.mdx +42 -56
  227. package/docs/usage/providers/openai.zh-CN.mdx +39 -52
  228. package/docs/usage/providers/openrouter.mdx +48 -84
  229. package/docs/usage/providers/openrouter.zh-CN.mdx +31 -67
  230. package/docs/usage/providers/perplexity.mdx +16 -34
  231. package/docs/usage/providers/perplexity.zh-CN.mdx +16 -34
  232. package/docs/usage/providers/qwen.mdx +26 -52
  233. package/docs/usage/providers/qwen.zh-CN.mdx +25 -51
  234. package/docs/usage/providers/sensenova.mdx +24 -45
  235. package/docs/usage/providers/sensenova.zh-CN.mdx +22 -43
  236. package/docs/usage/providers/siliconcloud.mdx +17 -33
  237. package/docs/usage/providers/siliconcloud.zh-CN.mdx +17 -33
  238. package/docs/usage/providers/spark.mdx +20 -40
  239. package/docs/usage/providers/spark.zh-CN.mdx +19 -39
  240. package/docs/usage/providers/stepfun.mdx +17 -35
  241. package/docs/usage/providers/stepfun.zh-CN.mdx +17 -35
  242. package/docs/usage/providers/taichu.mdx +16 -34
  243. package/docs/usage/providers/taichu.zh-CN.mdx +15 -33
  244. package/docs/usage/providers/togetherai.mdx +18 -40
  245. package/docs/usage/providers/togetherai.zh-CN.mdx +18 -40
  246. package/docs/usage/providers/upstage.mdx +18 -34
  247. package/docs/usage/providers/upstage.zh-CN.mdx +17 -33
  248. package/docs/usage/providers/wenxin.mdx +22 -42
  249. package/docs/usage/providers/wenxin.zh-CN.mdx +20 -40
  250. package/docs/usage/providers/xai.mdx +21 -38
  251. package/docs/usage/providers/xai.zh-CN.mdx +20 -37
  252. package/docs/usage/providers/zeroone.mdx +22 -48
  253. package/docs/usage/providers/zeroone.zh-CN.mdx +22 -48
  254. package/docs/usage/providers/zhipu.mdx +17 -35
  255. package/docs/usage/providers/zhipu.zh-CN.mdx +18 -34
  256. package/docs/usage/providers.mdx +1 -6
  257. package/docs/usage/providers.zh-CN.mdx +1 -6
  258. package/docs/usage/start.mdx +4 -18
  259. package/docs/usage/start.zh-CN.mdx +2 -19
  260. package/docs/usage/tools-calling/anthropic.mdx +18 -51
  261. package/docs/usage/tools-calling/anthropic.zh-CN.mdx +22 -55
  262. package/docs/usage/tools-calling/google.mdx +16 -23
  263. package/docs/usage/tools-calling/google.zh-CN.mdx +17 -24
  264. package/docs/usage/tools-calling/groq.mdx +9 -0
  265. package/docs/usage/tools-calling/groq.zh-CN.mdx +44 -70
  266. package/docs/usage/tools-calling/moonshot.mdx +9 -0
  267. package/docs/usage/tools-calling/openai.mdx +19 -44
  268. package/docs/usage/tools-calling/openai.zh-CN.mdx +20 -45
  269. package/docs/usage/tools-calling.mdx +9 -0
  270. package/docs/usage/tools-calling.zh-CN.mdx +60 -68
  271. package/next.config.ts +1 -0
  272. package/package.json +48 -41
  273. package/scripts/mdxWorkflow/index.ts +7 -0
  274. package/src/app/(main)/(mobile)/me/(home)/features/Header.tsx +2 -1
  275. package/src/app/(main)/(mobile)/me/data/features/Header.tsx +1 -1
  276. package/src/app/(main)/(mobile)/me/profile/features/Header.tsx +1 -1
  277. package/src/app/(main)/(mobile)/me/settings/features/Header.tsx +1 -1
  278. package/src/app/(main)/@nav/_layout/Mobile.tsx +2 -1
  279. package/src/app/(main)/chat/(workspace)/@topic/features/SystemRole/SystemRoleContent.tsx +2 -1
  280. package/src/app/(main)/chat/(workspace)/@topic/features/TopicListContent/ByTimeMode/GroupItem.tsx +2 -2
  281. package/src/app/(main)/chat/(workspace)/_layout/Desktop/ChatHeader/Main.tsx +2 -1
  282. package/src/app/(main)/chat/(workspace)/_layout/Desktop/ChatHeader/index.tsx +1 -1
  283. package/src/app/(main)/chat/(workspace)/_layout/Mobile/ChatHeader/ChatHeaderTitle.tsx +2 -1
  284. package/src/app/(main)/chat/(workspace)/_layout/Mobile/ChatHeader/index.tsx +1 -1
  285. package/src/app/(main)/chat/@session/_layout/Mobile/SessionHeader.tsx +2 -1
  286. package/src/app/(main)/chat/settings/_layout/Desktop/Header.tsx +1 -1
  287. package/src/app/(main)/chat/settings/_layout/Mobile/Header.tsx +1 -1
  288. package/src/app/(main)/discover/(detail)/provider/[slug]/features/InfoSidebar/SuggestionItem.tsx +8 -6
  289. package/src/app/(main)/discover/(list)/_layout/Desktop/Nav.tsx +1 -1
  290. package/src/app/(main)/discover/(list)/_layout/Mobile/Header.tsx +2 -1
  291. package/src/app/(main)/discover/_layout/Desktop/Header.tsx +1 -1
  292. package/src/app/(main)/discover/components/VirtuosoGridList/index.tsx +9 -5
  293. package/src/app/(main)/discover/search/_layout/Mobile/Header.tsx +1 -1
  294. package/src/app/(main)/files/(content)/@menu/features/KnowledgeBase/EmptyStatus.tsx +21 -13
  295. package/src/app/(main)/repos/[id]/evals/evaluation/EvaluationList/index.tsx +1 -1
  296. package/src/app/(main)/settings/sync/features/DeviceInfo/SystemIcon.tsx +2 -0
  297. package/src/components/Branding/ProductLogo/Custom.tsx +19 -20
  298. package/src/components/BrowserIcon/index.tsx +19 -30
  299. package/src/components/BubblesLoading/index.tsx +31 -23
  300. package/src/components/FunctionModal/createModalHooks.ts +6 -3
  301. package/src/components/StopLoading.tsx +10 -7
  302. package/src/features/ChatInput/Desktop/InputArea/index.tsx +2 -2
  303. package/src/features/InitClientDB/EnableModal.tsx +2 -2
  304. package/src/features/InitClientDB/{PGliteSVG.tsx → PGliteIcon.tsx} +17 -11
  305. package/src/features/ShareModal/ShareImage/index.tsx +32 -22
  306. package/src/features/ShareModal/ShareJSON/Preview.tsx +2 -2
  307. package/src/features/ShareModal/ShareJSON/index.tsx +49 -37
  308. package/src/features/ShareModal/ShareText/Preview.tsx +4 -1
  309. package/src/features/ShareModal/ShareText/index.tsx +49 -38
  310. package/src/features/ShareModal/index.tsx +1 -1
  311. package/src/features/ShareModal/style.ts +30 -0
  312. package/src/utils/colorUtils.ts +1 -1
  313. package/src/components/BrowserIcon/components/Brave.tsx +0 -56
  314. package/src/components/BrowserIcon/components/Chrome.tsx +0 -14
  315. package/src/components/BrowserIcon/components/Chromium.tsx +0 -14
  316. package/src/components/BrowserIcon/components/Edge.tsx +0 -36
  317. package/src/components/BrowserIcon/components/Firefox.tsx +0 -38
  318. package/src/components/BrowserIcon/components/Opera.tsx +0 -19
  319. package/src/components/BrowserIcon/components/Safari.tsx +0 -23
  320. package/src/components/BrowserIcon/components/Samsung.tsx +0 -21
package/docs/usage/providers/ollama/gemma.mdx

@@ -14,11 +14,7 @@ tags:

# Using Google Gemma Model

- <Image
- alt={'Using Gemma in LobeChat'}
- cover
- src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'}
- />
+ <Image alt={'Using Gemma in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'} />

[Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to provide a more general and flexible model for various natural language processing tasks. Now, with the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.

@@ -27,42 +23,29 @@ This document will guide you on how to use Google Gemma in LobeChat:

<Steps>
### Install Ollama locally

- First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).
+ First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).

- ### Pull Google Gemma model to local using Ollama
+ ### Pull Google Gemma model to local using Ollama

- After installing Ollama, you can install the Google Gemma model using the following command, using the 7b model as an example:
+ After installing Ollama, you can install the Google Gemma model using the following command, using the 7b model as an example:

- ```bash
- ollama pull gemma
- ```
+ ```bash
+ ollama pull gemma
+ ```

- <Image
- alt={'Pulling Gemma model using Ollama'}
- height={473}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
- width={791}
- />
+ <Image alt={'Pulling Gemma model using Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'} width={791} />

- ### Select Gemma model
+ ### Select Gemma model

- In the session page, open the model panel and then select the Gemma model.
+ In the session page, open the model panel and then select the Gemma model.

- <Image
- alt={'Selecting Gemma model in the model selection panel'}
- height={629}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'}
- width={791}
- />
+ <Image alt={'Selecting Gemma model in the model selection panel'} height={629} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'} width={791} />

<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
LobeChat.
-
- </Callout>
+ </Callout>
</Steps>

Now, you can start conversing with the local Gemma model using LobeChat.
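Taken together, the gemma.mdx steps above reduce to a short shell session. A minimal sketch of that workflow (the `gemma:7b` tag and the sample prompt are illustrative assumptions, not content from this diff):

```bash
# Pull the 7B Gemma weights via Ollama; the doc's example uses the bare "gemma" tag
ollama pull gemma:7b

# Verify the download completed
ollama list

# Smoke-test the model from the terminal before selecting it in LobeChat
ollama run gemma:7b "Say hello in one sentence."
```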
package/docs/usage/providers/ollama/gemma.zh-CN.mdx

@@ -13,12 +13,7 @@ tags:

# 使用 Google Gemma 模型

- <Image
- alt={'在 LobeChat 中使用 Gemma'}
- cover
- rounded
- src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'}
- />
+ <Image alt={'在 LobeChat 中使用 Gemma'} cover rounded src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'} />

[Gemma](https://blog.google/technology/developers/gemma-open-models/) 是 Google 开源的一款大语言模型(LLM),旨在提供一个更加通用、灵活的模型用于各种自然语言处理任务。现在,通过 LobeChat 与 [Ollama](https://ollama.com/) 的集成,你可以轻松地在 LobeChat 中使用 Google Gemma。

@@ -27,41 +22,28 @@ tags:

<Steps>
### 本地安装 Ollama

- 首先,你需要安装 Ollama,安装过程请查阅 [Ollama 使用文件](/zh/docs/usage/providers/ollama)。
+ 首先,你需要安装 Ollama,安装过程请查阅 [Ollama 使用文件](/zh/docs/usage/providers/ollama)。

- ### 用 Ollama 拉取 Google Gemma 模型到本地
+ ### 用 Ollama 拉取 Google Gemma 模型到本地

- 在安装完成 Ollama 后,你可以通过以下命令安装 Google Gemma 模型,以 7b 模型为例:
+ 在安装完成 Ollama 后,你可以通过以下命令安装 Google Gemma 模型,以 7b 模型为例:

- ```bash
- ollama pull gemma
- ```
+ ```bash
+ ollama pull gemma
+ ```

- <Image
- alt={'使用 Ollama 拉取 Gemma 模型'}
- height={473}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
- width={791}
- />
+ <Image alt={'使用 Ollama 拉取 Gemma 模型'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'} width={791} />

- ### 选择 Gemma 模型
+ ### 选择 Gemma 模型

- 在会话页面中,选择模型面板打开,然后选择 Gemma 模型。
+ 在会话页面中,选择模型面板打开,然后选择 Gemma 模型。

- <Image
- alt={'模型选择面板中选择 Gemma 模型'}
- height={629}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'}
- width={791}
- />
+ <Image alt={'模型选择面板中选择 Gemma 模型'} height={629} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'} width={791} />

<Callout type={'info'}>
如果你没有在模型选择面板中看到 Ollama 服务商,请查阅 [与 Ollama
集成](/zh/docs/self-hosting/examples/ollama) 了解如何在 LobeChat 中开启 Ollama 服务商。
-
- </Callout>
+ </Callout>
</Steps>

接下来,你就可以使用 LobeChat 与本地 Gemma 模型对话了。
package/docs/usage/providers/ollama/qwen.mdx

@@ -11,11 +11,7 @@ tags:

# Using the Local Qwen Model

- <Image
- alt={'Using Qwen in LobeChat'}
- cover
- src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'}
- />
+ <Image alt={'Using Qwen in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'} />

[Qwen](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially defined as a constantly evolving AI large model, and it achieves more accurate Chinese recognition capabilities through more training set content.

@@ -26,44 +22,33 @@ Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you

<Steps>
## Local Installation of Ollama

- First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).
+ First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).

- ## Pull the Qwen Model to Local with Ollama
+ ## Pull the Qwen Model to Local with Ollama

- After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:
+ After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:

- ```bash
- ollama pull qwen:14b
- ```
+ ```bash
+ ollama pull qwen:14b
+ ```

- <Callout type={'info'}>
- The local version of Qwen provides different model sizes to choose from. Please refer to the
- [Qwen's Ollama integration page](https://ollama.com/library/qwen) to understand how to choose the
- model size.
- </Callout>
+ <Callout type={'info'}>
+ The local version of Qwen provides different model sizes to choose from. Please refer to the
+ [Qwen's Ollama integration page](https://ollama.com/library/qwen) to understand how to choose the
+ model size.
+ </Callout>

- <Image
- alt={'Use Ollama Pull Qwen Model'}
- height={473}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'}
- />
+ <Image alt={'Use Ollama Pull Qwen Model'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

- ### Select the Qwen Model
+ ### Select the Qwen Model

- In the LobeChat conversation page, open the model selection panel, and then select the Qwen model.
+ In the LobeChat conversation page, open the model selection panel, and then select the Qwen model.

- <Image
- alt={'Choose Qwen Model'}
- height={430}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'}
- />
+ <Image alt={'Choose Qwen Model'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integration with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
-
- </Callout>
+ </Callout>
</Steps>

Next, you can have a conversation with the local Qwen model in LobeChat.
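The qwen.mdx callout above points to the Ollama library page for choosing a model size. A minimal sketch of trying a smaller variant when 14b exceeds your hardware (the specific tags are assumptions drawn from that page, not from this diff):

```bash
# The doc's example pulls the 14B variant
ollama pull qwen:14b

# Smaller tags trade capability for lower memory use
ollama pull qwen:4b

# Compare the installed models and their sizes before picking one in LobeChat
ollama list
```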
package/docs/usage/providers/ollama/qwen.zh-CN.mdx

@@ -11,11 +11,7 @@ tags:

# 使用本地通义千问 Qwen 模型

- <Image
- alt={'在 LobeChat 中使用 Qwen'}
- cover
- src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'}
- />
+ <Image alt={'在 LobeChat 中使用 Qwen'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'} />

[通义千问](https://github.com/QwenLM/Qwen1.5) 是阿里云开源的一款大语言模型(LLM),官方定义是一个不断进化的 AI 大模型,并通过更多的训练集内容达到更精准的中文识别能力。

@@ -28,39 +24,28 @@ tags:

<Steps>
### 本地安装 Ollama

- 首先,你需要安装 Ollama,安装过程请查阅 [Ollama 使用文件](/zh/docs/usage/providers/ollama)。
+ 首先,你需要安装 Ollama,安装过程请查阅 [Ollama 使用文件](/zh/docs/usage/providers/ollama)。

- ### 用 Ollama 拉取 Qwen 模型到本地
+ ### 用 Ollama 拉取 Qwen 模型到本地

- 在安装完成 Ollama 后,你可以通过以下命令安装 Qwen 模型,以 14b 模型为例:
+ 在安装完成 Ollama 后,你可以通过以下命令安装 Qwen 模型,以 14b 模型为例:

- ```bash
- ollama pull qwen:14b
- ```
+ ```bash
+ ollama pull qwen:14b
+ ```

- <Image
- alt={'使用 Ollama 拉取 Qwen 模型'}
- height={473}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'}
- />
+ <Image alt={'使用 Ollama 拉取 Qwen 模型'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

- ### 选择 Qwen 模型
+ ### 选择 Qwen 模型

- 在会话页面中,选择模型面板打开,然后选择 Qwen 模型。
+ 在会话页面中,选择模型面板打开,然后选择 Qwen 模型。

- <Image
- alt={'模型选择面板中选择 Qwen 模型'}
- height={430}
- inStep
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'}
- />
+ <Image alt={'模型选择面板中选择 Qwen 模型'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

<Callout type={'info'}>
如果你没有在模型选择面板中看到 Ollama 服务商,请查阅 [与 Ollama
集成](/zh/docs/self-hosting/examples/ollama) 了解如何在 LobeChat 中开启 Ollama 服务商。
-
- </Callout>
+ </Callout>
</Steps>

接下来,你就可以使用 LobeChat 与本地 Qwen 模型对话了。
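The ollama.mdx hunks below center on setting `OLLAMA_ORIGINS` so LobeChat can reach Ollama across origins. A hedged way to confirm the setting took effect after restarting Ollama (`/api/tags` is Ollama's model-list endpoint; the `http://localhost:3210` origin assumes LobeChat's default port):

```bash
# Request the model list while presenting a browser-style Origin header;
# with OLLAMA_ORIGINS="*" the response headers should include Access-Control-Allow-Origin
curl -i -H "Origin: http://localhost:3210" http://localhost:11434/api/tags
```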
package/docs/usage/providers/ollama.mdx

@@ -13,151 +13,130 @@ tags:

# Using Ollama in LobeChat

- <Image
- alt={'Using Ollama in LobeChat'}
- borderless
- cover
- src={'https://github.com/lobehub/lobe-chat/assets/17870709/f579b39b-e771-402c-a1d1-620e57a10c75'}
- />
+ <Image alt={'Using Ollama in LobeChat'} borderless cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/f579b39b-e771-402c-a1d1-620e57a10c75'} />

Ollama is a powerful framework for running large language models (LLMs) locally, supporting various language models including Llama 2, Mistral, and more. Now, LobeChat supports integration with Ollama, meaning you can easily enhance your application by using the language models provided by Ollama in LobeChat.

This document will guide you on how to use Ollama in LobeChat:

- <Video
- alt="demonstration of using Ollama in LobeChat"
- height={580}
- src="https://github.com/lobehub/lobe-chat/assets/28616219/c32b56db-c6a1-4876-9bc3-acbd37ec0c0c"
- />
+ <Video alt="demonstration of using Ollama in LobeChat" height={580} src="https://github.com/lobehub/lobe-chat/assets/28616219/c32b56db-c6a1-4876-9bc3-acbd37ec0c0c" />

## Using Ollama on macOS

<Steps>
+ ### Local Installation of Ollama

- ### Local Installation of Ollama
+ [Download Ollama for macOS](https://ollama.com/download?utm_source=lobehub\&utm_medium=docs\&utm_campaign=download-macos) and unzip/install it.

- [Download Ollama for macOS](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-macos) and unzip/install it.
+ ### Configure Ollama for Cross-Origin Access

- ### Configure Ollama for Cross-Origin Access
+ Due to Ollama's default configuration, which restricts access to local only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. Use `launchctl` to set the environment variable:

- Due to Ollama's default configuration, which restricts access to local only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. Use `launchctl` to set the environment variable:
+ ```bash
+ launchctl setenv OLLAMA_ORIGINS "*"
+ ```

- ```bash
- launchctl setenv OLLAMA_ORIGINS "*"
- ```
-
- After setting up, restart the Ollama application.
+ After setting up, restart the Ollama application.

- ### Conversing with Local Large Models in LobeChat
+ ### Conversing with Local Large Models in LobeChat

- Now, you can start conversing with the local LLM in LobeChat.
-
- <Image
- alt="Chat with llama3 in LobeChat"
- height="573"
- src="https://github.com/lobehub/lobe-chat/assets/28616219/7f9a9a9f-fd91-4f59-aac9-3f26c6d49a1e"
- />
+ Now, you can start conversing with the local LLM in LobeChat.

+ <Image alt="Chat with llama3 in LobeChat" height="573" src="https://github.com/lobehub/lobe-chat/assets/28616219/7f9a9a9f-fd91-4f59-aac9-3f26c6d49a1e" />
</Steps>

## Using Ollama on Windows

<Steps>
+ ### Local Installation of Ollama

- ### Local Installation of Ollama
-
- [Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-windows) and install it.
-
- ### Configure Ollama for Cross-Origin Access
+ [Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub\&utm_medium=docs\&utm_campaign=download-windows) and install it.

- Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.
+ ### Configure Ollama for Cross-Origin Access

- On Windows, Ollama inherits your user and system environment variables.
+ Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.

- 1. First, exit the Ollama program by clicking on it in the Windows taskbar.
- 2. Edit system environment variables from the Control Panel.
- 3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
- 4. Click `OK/Apply` to save and restart the system.
- 5. Run `Ollama` again.
+ On Windows, Ollama inherits your user and system environment variables.

- ### Conversing with Local Large Models in LobeChat
+ 1. First, exit the Ollama program by clicking on it in the Windows taskbar.
+ 2. Edit system environment variables from the Control Panel.
+ 3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
+ 4. Click `OK/Apply` to save and restart the system.
+ 5. Run `Ollama` again.

- Now, you can start conversing with the local LLM in LobeChat.
+ ### Conversing with Local Large Models in LobeChat

+ Now, you can start conversing with the local LLM in LobeChat.
</Steps>

## Using Ollama on Linux

<Steps>
+ ### Local Installation of Ollama

- ### Local Installation of Ollama
+ Install using the following command:

- Install using the following command:
+ ```bash
+ curl -fsSL https://ollama.com/install.sh | sh
+ ```

- ```bash
- curl -fsSL https://ollama.com/install.sh | sh
- ```
-
- Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).
+ Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).

- ### Configure Ollama for Cross-Origin Access
+ ### Configure Ollama for Cross-Origin Access

- Due to Ollama's default configuration, which allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:
+ Due to Ollama's default configuration, which allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:

- 1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:
-
- ```bash
- sudo systemctl edit ollama.service
- ```
+ 1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:

- 2. Add `Environment` under `[Service]` for each environment variable:
+ ```bash
+ sudo systemctl edit ollama.service
+ ```

- ```bash
- [Service]
- Environment="OLLAMA_HOST=0.0.0.0"
- Environment="OLLAMA_ORIGINS=*"
- ```
+ 2. Add `Environment` under `[Service]` for each environment variable:

- 3. Save and exit.
- 4. Reload `systemd` and restart Ollama:
+ ```bash
+ [Service]
+ Environment="OLLAMA_HOST=0.0.0.0"
+ Environment="OLLAMA_ORIGINS=*"
+ ```

- ```bash
- sudo systemctl daemon-reload
- sudo systemctl restart ollama
- ```
+ 3. Save and exit.
+ 4. Reload `systemd` and restart Ollama:

- ### Conversing with Local Large Models in LobeChat
+ ```bash
+ sudo systemctl daemon-reload
+ sudo systemctl restart ollama
+ ```

- Now, you can start conversing with the local LLM in LobeChat.
+ ### Conversing with Local Large Models in LobeChat

+ Now, you can start conversing with the local LLM in LobeChat.
</Steps>

## Deploying Ollama using Docker

<Steps>
+ ### Pulling Ollama Image

- ### Pulling Ollama Image
+ If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:

- If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:
+ ```bash
+ docker pull ollama/ollama
+ ```

- ```bash
- docker pull ollama/ollama
- ```
-
- ### Configure Ollama for Cross-Origin Access
-
- Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.
+ ### Configure Ollama for Cross-Origin Access

- If Ollama runs as a Docker container, you can add the environment variable to the `docker run` command.
+ Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.

- ```bash
- docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
- ```
+ If Ollama runs as a Docker container, you can add the environment variable to the `docker run` command.

- ### Conversing with Local Large Models in LobeChat
+ ```bash
+ docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
+ ```

- Now, you can start conversing with the local LLM in LobeChat.
+ ### Conversing with Local Large Models in LobeChat

+ Now, you can start conversing with the local LLM in LobeChat.
</Steps>

## Installing Ollama Models

@@ -168,11 +147,7 @@ Ollama supports various models, which you can view in the [Ollama Library](https

In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for conversation, we will prompt you to download that model.

- <Image
- alt="LobeChat guide your to install Ollama model"
- height="460"
- src="https://github.com/lobehub/lobe-chat/assets/28616219/4e81decc-776c-43b8-9a54-dfb43e9f601a"
- />
+ <Image alt="LobeChat guide your to install Ollama model" height="460" src="https://github.com/lobehub/lobe-chat/assets/28616219/4e81decc-776c-43b8-9a54-dfb43e9f601a" />

Once downloaded, you can start conversing.

@@ -184,20 +159,13 @@ Alternatively, you can install models by executing the following command in the

ollama pull llama3
```

- <Video
- height="524"
- src="https://github.com/lobehub/lobe-chat/assets/28616219/95828c11-0ae5-4dfa-84ed-854124e927a6"
- />
+ <Video height="524" src="https://github.com/lobehub/lobe-chat/assets/28616219/95828c11-0ae5-4dfa-84ed-854124e927a6" />

## Custom Configuration

You can find Ollama's configuration options in `Settings` -> `Language Models`, where you can configure Ollama's proxy, model names, etc.

- <Image
- alt={'Ollama Provider Settings'}
- height={274}
- src={'https://github.com/lobehub/lobe-chat/assets/28616219/54b3696b-5b13-4761-8c1b-1e664867b2dd'}
- />
+ <Image alt={'Ollama Provider Settings'} height={274} src={'https://github.com/lobehub/lobe-chat/assets/28616219/54b3696b-5b13-4761-8c1b-1e664867b2dd'} />

<Callout type={'info'}>
Visit [Integrating with Ollama](/docs/self-hosting/examples/ollama) to learn how to deploy