@lobehub/chat 1.44.3 → 1.45.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.remarkrc.mdx.js +6 -0
- package/CHANGELOG.md +25 -0
- package/changelog/v1.json +9 -0
- package/docs/changelog/2023-09-09-plugin-system.mdx +5 -0
- package/docs/changelog/2023-09-09-plugin-system.zh-CN.mdx +5 -0
- package/docs/changelog/2023-11-14-gpt4-vision.mdx +6 -0
- package/docs/changelog/2023-11-14-gpt4-vision.zh-CN.mdx +6 -0
- package/docs/changelog/2023-11-19-tts-stt.mdx +6 -0
- package/docs/changelog/2023-11-19-tts-stt.zh-CN.mdx +7 -0
- package/docs/changelog/2023-12-22-dalle-3.mdx +6 -0
- package/docs/changelog/2023-12-22-dalle-3.zh-CN.mdx +4 -0
- package/docs/changelog/2024-02-08-sso-oauth.mdx +6 -0
- package/docs/changelog/2024-02-08-sso-oauth.zh-CN.mdx +6 -0
- package/docs/changelog/2024-02-14-ollama.mdx +6 -0
- package/docs/changelog/2024-02-14-ollama.zh-CN.mdx +5 -0
- package/docs/changelog/2024-06-19-lobe-chat-v1.mdx +6 -0
- package/docs/changelog/2024-06-19-lobe-chat-v1.zh-CN.mdx +5 -0
- package/docs/changelog/2024-07-19-gpt-4o-mini.mdx +5 -0
- package/docs/changelog/2024-07-19-gpt-4o-mini.zh-CN.mdx +4 -0
- package/docs/changelog/2024-08-02-lobe-chat-database-docker.mdx +6 -0
- package/docs/changelog/2024-08-02-lobe-chat-database-docker.zh-CN.mdx +5 -0
- package/docs/changelog/2024-08-21-file-upload-and-knowledge-base.mdx +6 -0
- package/docs/changelog/2024-08-21-file-upload-and-knowledge-base.zh-CN.mdx +5 -0
- package/docs/changelog/2024-09-13-openai-o1-models.mdx +6 -0
- package/docs/changelog/2024-09-13-openai-o1-models.zh-CN.mdx +6 -0
- package/docs/changelog/2024-09-20-artifacts.mdx +6 -0
- package/docs/changelog/2024-09-20-artifacts.zh-CN.mdx +6 -0
- package/docs/changelog/2024-10-27-pin-assistant.mdx +5 -0
- package/docs/changelog/2024-10-27-pin-assistant.zh-CN.mdx +4 -0
- package/docs/changelog/2024-11-06-share-text-json.mdx +4 -0
- package/docs/changelog/2024-11-06-share-text-json.zh-CN.mdx +4 -0
- package/docs/changelog/2024-11-25-november-providers.mdx +7 -0
- package/docs/changelog/2024-11-25-november-providers.zh-CN.mdx +7 -0
- package/docs/changelog/2024-11-27-forkable-chat.mdx +4 -0
- package/docs/changelog/2024-11-27-forkable-chat.zh-CN.mdx +5 -0
- package/docs/changelog/2025-01-03-user-profile.mdx +5 -0
- package/docs/changelog/2025-01-03-user-profile.zh-CN.mdx +4 -1
- package/docs/self-hosting/advanced/auth/clerk.mdx +25 -41
- package/docs/self-hosting/advanced/auth/clerk.zh-CN.mdx +23 -37
- package/docs/self-hosting/advanced/auth/next-auth/auth0.mdx +31 -58
- package/docs/self-hosting/advanced/auth/next-auth/auth0.zh-CN.mdx +30 -57
- package/docs/self-hosting/advanced/auth/next-auth/authelia.mdx +38 -38
- package/docs/self-hosting/advanced/auth/next-auth/authelia.zh-CN.mdx +37 -38
- package/docs/self-hosting/advanced/auth/next-auth/authentik.mdx +26 -31
- package/docs/self-hosting/advanced/auth/next-auth/authentik.zh-CN.mdx +25 -30
- package/docs/self-hosting/advanced/auth/next-auth/casdoor.mdx +74 -75
- package/docs/self-hosting/advanced/auth/next-auth/casdoor.zh-CN.mdx +72 -73
- package/docs/self-hosting/advanced/auth/next-auth/cloudflare-zero-trust.mdx +24 -25
- package/docs/self-hosting/advanced/auth/next-auth/cloudflare-zero-trust.zh-CN.mdx +23 -24
- package/docs/self-hosting/advanced/auth/next-auth/github.mdx +46 -73
- package/docs/self-hosting/advanced/auth/next-auth/github.zh-CN.mdx +43 -70
- package/docs/self-hosting/advanced/auth/next-auth/logto.mdx +28 -37
- package/docs/self-hosting/advanced/auth/next-auth/logto.zh-CN.mdx +28 -37
- package/docs/self-hosting/advanced/auth/next-auth/microsoft-entra-id.mdx +36 -49
- package/docs/self-hosting/advanced/auth/next-auth/microsoft-entra-id.zh-CN.mdx +30 -43
- package/docs/self-hosting/advanced/auth/next-auth/wechat.mdx +13 -14
- package/docs/self-hosting/advanced/auth/next-auth/wechat.zh-CN.mdx +14 -15
- package/docs/self-hosting/advanced/auth/next-auth/zitadel.mdx +35 -69
- package/docs/self-hosting/advanced/auth/next-auth/zitadel.zh-CN.mdx +34 -68
- package/docs/self-hosting/advanced/auth.mdx +14 -13
- package/docs/self-hosting/advanced/auth.zh-CN.mdx +15 -14
- package/docs/self-hosting/advanced/feature-flags.zh-CN.mdx +15 -15
- package/docs/self-hosting/advanced/knowledge-base.mdx +14 -3
- package/docs/self-hosting/advanced/knowledge-base.zh-CN.mdx +12 -3
- package/docs/self-hosting/advanced/model-list.zh-CN.mdx +5 -5
- package/docs/self-hosting/advanced/s3/cloudflare-r2.mdx +52 -81
- package/docs/self-hosting/advanced/s3/cloudflare-r2.zh-CN.mdx +51 -80
- package/docs/self-hosting/advanced/s3/tencent-cloud.mdx +20 -34
- package/docs/self-hosting/advanced/s3/tencent-cloud.zh-CN.mdx +28 -43
- package/docs/self-hosting/advanced/s3.mdx +30 -34
- package/docs/self-hosting/advanced/s3.zh-CN.mdx +28 -33
- package/docs/self-hosting/advanced/settings-url-share.mdx +6 -6
- package/docs/self-hosting/advanced/settings-url-share.zh-CN.mdx +19 -19
- package/docs/self-hosting/advanced/upstream-sync.mdx +73 -89
- package/docs/self-hosting/advanced/upstream-sync.zh-CN.mdx +71 -87
- package/docs/self-hosting/advanced/webrtc.mdx +14 -21
- package/docs/self-hosting/advanced/webrtc.zh-CN.mdx +19 -26
- package/docs/self-hosting/environment-variables/analytics.zh-CN.mdx +0 -3
- package/docs/self-hosting/environment-variables/auth.zh-CN.mdx +1 -1
- package/docs/self-hosting/environment-variables/basic.mdx +13 -13
- package/docs/self-hosting/environment-variables/basic.zh-CN.mdx +15 -15
- package/docs/self-hosting/environment-variables/model-provider.mdx +1 -1
- package/docs/self-hosting/environment-variables/model-provider.zh-CN.mdx +1 -1
- package/docs/self-hosting/environment-variables/s3.mdx +3 -4
- package/docs/self-hosting/environment-variables/s3.zh-CN.mdx +5 -7
- package/docs/self-hosting/environment-variables.mdx +8 -4
- package/docs/self-hosting/environment-variables.zh-CN.mdx +4 -0
- package/docs/self-hosting/examples/azure-openai.mdx +9 -12
- package/docs/self-hosting/examples/azure-openai.zh-CN.mdx +8 -11
- package/docs/self-hosting/examples/ollama.mdx +8 -7
- package/docs/self-hosting/examples/ollama.zh-CN.mdx +8 -7
- package/docs/self-hosting/platform/alibaba-cloud.mdx +5 -7
- package/docs/self-hosting/platform/alibaba-cloud.zh-CN.mdx +5 -7
- package/docs/self-hosting/platform/btpanel.mdx +3 -2
- package/docs/self-hosting/platform/btpanel.zh-CN.mdx +3 -3
- package/docs/self-hosting/platform/docker-compose.mdx +75 -85
- package/docs/self-hosting/platform/docker-compose.zh-CN.mdx +75 -85
- package/docs/self-hosting/platform/docker.mdx +87 -92
- package/docs/self-hosting/platform/docker.zh-CN.mdx +96 -115
- package/docs/self-hosting/platform/netlify.mdx +44 -94
- package/docs/self-hosting/platform/netlify.zh-CN.mdx +40 -90
- package/docs/self-hosting/platform/railway.mdx +6 -7
- package/docs/self-hosting/platform/railway.zh-CN.mdx +6 -7
- package/docs/self-hosting/platform/repocloud.mdx +6 -7
- package/docs/self-hosting/platform/repocloud.zh-CN.mdx +6 -7
- package/docs/self-hosting/platform/sealos.mdx +6 -7
- package/docs/self-hosting/platform/sealos.zh-CN.mdx +6 -7
- package/docs/self-hosting/platform/vercel.mdx +7 -8
- package/docs/self-hosting/platform/vercel.zh-CN.mdx +7 -8
- package/docs/self-hosting/platform/zeabur.mdx +29 -32
- package/docs/self-hosting/platform/zeabur.zh-CN.mdx +29 -32
- package/docs/self-hosting/server-database/docker-compose.mdx +44 -71
- package/docs/self-hosting/server-database/docker-compose.zh-CN.mdx +44 -71
- package/docs/self-hosting/server-database/docker.mdx +84 -88
- package/docs/self-hosting/server-database/docker.zh-CN.mdx +87 -91
- package/docs/self-hosting/server-database/dokploy.mdx +18 -1
- package/docs/self-hosting/server-database/dokploy.zh-CN.mdx +84 -68
- package/docs/self-hosting/server-database/repocloud.mdx +7 -9
- package/docs/self-hosting/server-database/repocloud.zh-CN.mdx +9 -11
- package/docs/self-hosting/server-database/vercel.mdx +158 -243
- package/docs/self-hosting/server-database/vercel.zh-CN.mdx +137 -205
- package/docs/self-hosting/server-database/zeabur.mdx +21 -23
- package/docs/self-hosting/server-database/zeabur.zh-CN.mdx +20 -22
- package/docs/self-hosting/server-database.mdx +34 -36
- package/docs/self-hosting/server-database.zh-CN.mdx +34 -37
- package/docs/self-hosting/start.mdx +1 -4
- package/docs/self-hosting/start.zh-CN.mdx +1 -1
- package/docs/usage/agents/agent-organization.mdx +5 -21
- package/docs/usage/agents/agent-organization.zh-CN.mdx +5 -21
- package/docs/usage/agents/concepts.mdx +4 -4
- package/docs/usage/agents/concepts.zh-CN.mdx +4 -4
- package/docs/usage/agents/custom-agent.mdx +2 -2
- package/docs/usage/agents/custom-agent.zh-CN.mdx +2 -2
- package/docs/usage/agents/model.mdx +4 -4
- package/docs/usage/agents/model.zh-CN.mdx +6 -6
- package/docs/usage/agents/prompt.mdx +5 -6
- package/docs/usage/agents/prompt.zh-CN.mdx +5 -6
- package/docs/usage/agents/topics.mdx +2 -2
- package/docs/usage/agents/topics.zh-CN.mdx +2 -2
- package/docs/usage/features/agent-market.mdx +2 -2
- package/docs/usage/features/agent-market.zh-CN.mdx +2 -2
- package/docs/usage/features/auth.mdx +1 -5
- package/docs/usage/features/auth.zh-CN.mdx +1 -5
- package/docs/usage/features/database.mdx +1 -5
- package/docs/usage/features/database.zh-CN.mdx +1 -5
- package/docs/usage/features/local-llm.mdx +2 -6
- package/docs/usage/features/local-llm.zh-CN.mdx +2 -6
- package/docs/usage/features/mobile.mdx +1 -5
- package/docs/usage/features/mobile.zh-CN.mdx +1 -5
- package/docs/usage/features/multi-ai-providers.mdx +3 -11
- package/docs/usage/features/multi-ai-providers.zh-CN.mdx +3 -11
- package/docs/usage/features/plugin-system.mdx +9 -10
- package/docs/usage/features/plugin-system.zh-CN.mdx +9 -10
- package/docs/usage/features/pwa.mdx +11 -25
- package/docs/usage/features/pwa.zh-CN.mdx +11 -25
- package/docs/usage/features/text-to-image.mdx +2 -2
- package/docs/usage/features/text-to-image.zh-CN.mdx +2 -2
- package/docs/usage/features/theme.mdx +1 -6
- package/docs/usage/features/theme.zh-CN.mdx +1 -6
- package/docs/usage/features/tts.mdx +3 -7
- package/docs/usage/features/tts.zh-CN.mdx +3 -7
- package/docs/usage/features/vision.mdx +2 -2
- package/docs/usage/features/vision.zh-CN.mdx +2 -2
- package/docs/usage/foundation/basic.mdx +7 -18
- package/docs/usage/foundation/basic.zh-CN.mdx +6 -16
- package/docs/usage/foundation/share.mdx +3 -13
- package/docs/usage/foundation/share.zh-CN.mdx +3 -13
- package/docs/usage/foundation/text2image.mdx +3 -12
- package/docs/usage/foundation/text2image.zh-CN.mdx +3 -12
- package/docs/usage/foundation/translate.mdx +3 -13
- package/docs/usage/foundation/translate.zh-CN.mdx +3 -13
- package/docs/usage/foundation/tts-stt.mdx +3 -12
- package/docs/usage/foundation/tts-stt.zh-CN.mdx +3 -12
- package/docs/usage/foundation/vision.mdx +4 -16
- package/docs/usage/foundation/vision.zh-CN.mdx +4 -16
- package/docs/usage/plugins/basic-usage.mdx +7 -30
- package/docs/usage/plugins/basic-usage.zh-CN.mdx +7 -30
- package/docs/usage/plugins/development.mdx +30 -78
- package/docs/usage/plugins/development.zh-CN.mdx +31 -79
- package/docs/usage/plugins/store.mdx +2 -10
- package/docs/usage/plugins/store.zh-CN.mdx +2 -10
- package/docs/usage/providers/ai21.mdx +17 -33
- package/docs/usage/providers/ai21.zh-CN.mdx +17 -33
- package/docs/usage/providers/ai360.mdx +17 -33
- package/docs/usage/providers/ai360.zh-CN.mdx +20 -36
- package/docs/usage/providers/anthropic.mdx +23 -45
- package/docs/usage/providers/anthropic.zh-CN.mdx +22 -44
- package/docs/usage/providers/azure.mdx +21 -51
- package/docs/usage/providers/azure.zh-CN.mdx +19 -48
- package/docs/usage/providers/baichuan.mdx +16 -34
- package/docs/usage/providers/baichuan.zh-CN.mdx +15 -33
- package/docs/usage/providers/bedrock.mdx +38 -87
- package/docs/usage/providers/bedrock.zh-CN.mdx +37 -86
- package/docs/usage/providers/cloudflare.mdx +25 -48
- package/docs/usage/providers/cloudflare.zh-CN.mdx +24 -45
- package/docs/usage/providers/deepseek.mdx +25 -51
- package/docs/usage/providers/deepseek.zh-CN.mdx +24 -50
- package/docs/usage/providers/fireworksai.mdx +23 -43
- package/docs/usage/providers/fireworksai.zh-CN.mdx +21 -41
- package/docs/usage/providers/gemini.mdx +20 -46
- package/docs/usage/providers/gemini.zh-CN.mdx +20 -46
- package/docs/usage/providers/giteeai.mdx +24 -45
- package/docs/usage/providers/giteeai.zh-CN.mdx +22 -43
- package/docs/usage/providers/github.mdx +19 -45
- package/docs/usage/providers/github.zh-CN.mdx +19 -44
- package/docs/usage/providers/groq.mdx +12 -29
- package/docs/usage/providers/groq.zh-CN.mdx +11 -28
- package/docs/usage/providers/hunyuan.mdx +19 -39
- package/docs/usage/providers/hunyuan.zh-CN.mdx +18 -38
- package/docs/usage/providers/internlm.mdx +21 -38
- package/docs/usage/providers/internlm.zh-CN.mdx +19 -36
- package/docs/usage/providers/minimax.mdx +24 -50
- package/docs/usage/providers/minimax.zh-CN.mdx +22 -48
- package/docs/usage/providers/mistral.mdx +21 -39
- package/docs/usage/providers/mistral.zh-CN.mdx +20 -38
- package/docs/usage/providers/moonshot.mdx +20 -38
- package/docs/usage/providers/moonshot.zh-CN.mdx +19 -37
- package/docs/usage/providers/novita.mdx +20 -43
- package/docs/usage/providers/novita.zh-CN.mdx +19 -42
- package/docs/usage/providers/ollama/gemma.mdx +12 -29
- package/docs/usage/providers/ollama/gemma.zh-CN.mdx +12 -30
- package/docs/usage/providers/ollama/qwen.mdx +17 -32
- package/docs/usage/providers/ollama/qwen.zh-CN.mdx +12 -27
- package/docs/usage/providers/ollama.mdx +67 -99
- package/docs/usage/providers/ollama.zh-CN.mdx +67 -99
- package/docs/usage/providers/openai.mdx +42 -56
- package/docs/usage/providers/openai.zh-CN.mdx +39 -52
- package/docs/usage/providers/openrouter.mdx +48 -84
- package/docs/usage/providers/openrouter.zh-CN.mdx +31 -67
- package/docs/usage/providers/perplexity.mdx +16 -34
- package/docs/usage/providers/perplexity.zh-CN.mdx +16 -34
- package/docs/usage/providers/qwen.mdx +26 -52
- package/docs/usage/providers/qwen.zh-CN.mdx +25 -51
- package/docs/usage/providers/sensenova.mdx +24 -45
- package/docs/usage/providers/sensenova.zh-CN.mdx +22 -43
- package/docs/usage/providers/siliconcloud.mdx +17 -33
- package/docs/usage/providers/siliconcloud.zh-CN.mdx +17 -33
- package/docs/usage/providers/spark.mdx +20 -40
- package/docs/usage/providers/spark.zh-CN.mdx +19 -39
- package/docs/usage/providers/stepfun.mdx +17 -35
- package/docs/usage/providers/stepfun.zh-CN.mdx +17 -35
- package/docs/usage/providers/taichu.mdx +16 -34
- package/docs/usage/providers/taichu.zh-CN.mdx +15 -33
- package/docs/usage/providers/togetherai.mdx +18 -40
- package/docs/usage/providers/togetherai.zh-CN.mdx +18 -40
- package/docs/usage/providers/upstage.mdx +18 -34
- package/docs/usage/providers/upstage.zh-CN.mdx +17 -33
- package/docs/usage/providers/wenxin.mdx +22 -42
- package/docs/usage/providers/wenxin.zh-CN.mdx +20 -40
- package/docs/usage/providers/xai.mdx +21 -38
- package/docs/usage/providers/xai.zh-CN.mdx +20 -37
- package/docs/usage/providers/zeroone.mdx +22 -48
- package/docs/usage/providers/zeroone.zh-CN.mdx +22 -48
- package/docs/usage/providers/zhipu.mdx +17 -35
- package/docs/usage/providers/zhipu.zh-CN.mdx +18 -34
- package/docs/usage/providers.mdx +1 -6
- package/docs/usage/providers.zh-CN.mdx +1 -6
- package/docs/usage/start.mdx +4 -18
- package/docs/usage/start.zh-CN.mdx +2 -19
- package/docs/usage/tools-calling/anthropic.mdx +18 -51
- package/docs/usage/tools-calling/anthropic.zh-CN.mdx +22 -55
- package/docs/usage/tools-calling/google.mdx +16 -23
- package/docs/usage/tools-calling/google.zh-CN.mdx +17 -24
- package/docs/usage/tools-calling/groq.mdx +9 -0
- package/docs/usage/tools-calling/groq.zh-CN.mdx +44 -70
- package/docs/usage/tools-calling/moonshot.mdx +9 -0
- package/docs/usage/tools-calling/openai.mdx +19 -44
- package/docs/usage/tools-calling/openai.zh-CN.mdx +20 -45
- package/docs/usage/tools-calling.mdx +9 -0
- package/docs/usage/tools-calling.zh-CN.mdx +60 -68
- package/package.json +42 -41
- package/scripts/mdxWorkflow/index.ts +7 -0
- package/src/app/(main)/(mobile)/me/(home)/features/Header.tsx +2 -1
- package/src/app/(main)/(mobile)/me/data/features/Header.tsx +1 -1
- package/src/app/(main)/(mobile)/me/profile/features/Header.tsx +1 -1
- package/src/app/(main)/(mobile)/me/settings/features/Header.tsx +1 -1
- package/src/app/(main)/@nav/_layout/Mobile.tsx +2 -1
- package/src/app/(main)/chat/(workspace)/@topic/features/SystemRole/SystemRoleContent.tsx +2 -1
- package/src/app/(main)/chat/(workspace)/@topic/features/TopicListContent/ByTimeMode/GroupItem.tsx +2 -2
- package/src/app/(main)/chat/(workspace)/_layout/Desktop/ChatHeader/Main.tsx +2 -1
- package/src/app/(main)/chat/(workspace)/_layout/Desktop/ChatHeader/index.tsx +1 -1
- package/src/app/(main)/chat/(workspace)/_layout/Mobile/ChatHeader/ChatHeaderTitle.tsx +2 -1
- package/src/app/(main)/chat/(workspace)/_layout/Mobile/ChatHeader/index.tsx +1 -1
- package/src/app/(main)/chat/@session/_layout/Mobile/SessionHeader.tsx +2 -1
- package/src/app/(main)/chat/settings/_layout/Desktop/Header.tsx +1 -1
- package/src/app/(main)/chat/settings/_layout/Mobile/Header.tsx +1 -1
- package/src/app/(main)/discover/(detail)/provider/[slug]/features/InfoSidebar/SuggestionItem.tsx +8 -6
- package/src/app/(main)/discover/(list)/_layout/Desktop/Nav.tsx +1 -1
- package/src/app/(main)/discover/(list)/_layout/Mobile/Header.tsx +2 -1
- package/src/app/(main)/discover/_layout/Desktop/Header.tsx +1 -1
- package/src/app/(main)/discover/components/VirtuosoGridList/index.tsx +9 -5
- package/src/app/(main)/discover/search/_layout/Mobile/Header.tsx +1 -1
- package/src/app/(main)/files/(content)/@menu/features/KnowledgeBase/EmptyStatus.tsx +21 -13
- package/src/app/(main)/repos/[id]/evals/evaluation/EvaluationList/index.tsx +1 -1
- package/src/app/(main)/settings/sync/features/DeviceInfo/SystemIcon.tsx +2 -0
- package/src/components/Branding/ProductLogo/Custom.tsx +19 -20
- package/src/components/BrowserIcon/index.tsx +19 -30
- package/src/components/BubblesLoading/index.tsx +31 -23
- package/src/components/FunctionModal/createModalHooks.ts +6 -3
- package/src/components/StopLoading.tsx +10 -7
- package/src/features/ChatInput/Desktop/InputArea/index.tsx +2 -2
- package/src/features/InitClientDB/EnableModal.tsx +2 -2
- package/src/features/InitClientDB/{PGliteSVG.tsx → PGliteIcon.tsx} +17 -11
- package/src/features/ShareModal/ShareImage/index.tsx +32 -22
- package/src/features/ShareModal/ShareJSON/Preview.tsx +2 -2
- package/src/features/ShareModal/ShareJSON/index.tsx +49 -37
- package/src/features/ShareModal/ShareText/Preview.tsx +4 -1
- package/src/features/ShareModal/ShareText/index.tsx +49 -38
- package/src/features/ShareModal/index.tsx +1 -1
- package/src/features/ShareModal/style.ts +30 -0
- package/src/utils/colorUtils.ts +1 -1
- package/src/components/BrowserIcon/components/Brave.tsx +0 -56
- package/src/components/BrowserIcon/components/Chrome.tsx +0 -14
- package/src/components/BrowserIcon/components/Chromium.tsx +0 -14
- package/src/components/BrowserIcon/components/Edge.tsx +0 -36
- package/src/components/BrowserIcon/components/Firefox.tsx +0 -38
- package/src/components/BrowserIcon/components/Opera.tsx +0 -19
- package/src/components/BrowserIcon/components/Safari.tsx +0 -23
- package/src/components/BrowserIcon/components/Samsung.tsx +0 -21
package/docs/usage/providers/ollama/gemma.mdx

@@ -14,11 +14,7 @@ tags:

 # Using Google Gemma Model

-<Image
-  alt={'Using Gemma in LobeChat'}
-  cover
-  src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'}
-/>
+<Image alt={'Using Gemma in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'} />

 [Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to provide a more general and flexible model for various natural language processing tasks. Now, with the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.

@@ -27,42 +23,29 @@ This document will guide you on how to use Google Gemma in LobeChat:
 <Steps>
 ### Install Ollama locally

-First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).
+First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/docs/usage/providers/ollama).

-### Pull Google Gemma model to local using Ollama
+### Pull Google Gemma model to local using Ollama

-After installing Ollama, you can install the Google Gemma model using the following command, using the 7b model as an example:
+After installing Ollama, you can install the Google Gemma model using the following command, using the 7b model as an example:

-```bash
-ollama pull gemma
-```
+```bash
+ollama pull gemma
+```

-<Image
-  alt={'Pulling Gemma model using Ollama'}
-  height={473}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
-  width={791}
-/>
+<Image alt={'Pulling Gemma model using Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'} width={791} />

-### Select Gemma model
+### Select Gemma model

-In the session page, open the model panel and then select the Gemma model.
+In the session page, open the model panel and then select the Gemma model.

-<Image
-  alt={'Selecting Gemma model in the model selection panel'}
-  height={629}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'}
-  width={791}
-/>
+<Image alt={'Selecting Gemma model in the model selection panel'} height={629} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/c91d0c18-a21f-41f6-b5cc-94d29faeb797'} width={791} />

 <Callout type={'info'}>
 If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
 with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
 LobeChat.
-
-</Callout>
+</Callout>
 </Steps>

 Now, you can start conversing with the local Gemma model using LobeChat.
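The pull step in the hunk above can be sanity-checked from the terminal before switching to LobeChat; a minimal sketch, assuming the default `gemma` tag from the Ollama library:

```bash
# list locally installed models; gemma should appear once the pull completes
ollama list

# send a one-off prompt to confirm the model loads and responds
ollama run gemma "Reply with one short sentence."
```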
package/docs/usage/providers/ollama/gemma.zh-CN.mdx

@@ -13,12 +13,7 @@ tags:

 # Using the Google Gemma Model

-<Image
-  alt={'Using Gemma in LobeChat'}
-  cover
-  rounded
-  src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'}
-/>
+<Image alt={'Using Gemma in LobeChat'} cover rounded src={'https://github.com/lobehub/lobe-chat/assets/17870709/65d2dd2a-fdcf-4f3f-a6af-4ed5164a510d'} />

 [Gemma](https://blog.google/technology/developers/gemma-open-models/) is an open-source large language model (LLM) from Google, designed to provide a more general and flexible model for various natural language processing tasks. Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you can easily use Google Gemma in LobeChat.

@@ -27,41 +22,28 @@ tags:
 <Steps>
 ### Install Ollama locally

-First, you need to install Ollama. For the installation process, please see the [Ollama usage documentation](/zh/docs/usage/providers/ollama).
+First, you need to install Ollama. For the installation process, please see the [Ollama usage documentation](/zh/docs/usage/providers/ollama).

-### Pull the Google Gemma model locally with Ollama
+### Pull the Google Gemma model locally with Ollama

-After installing Ollama, you can install the Google Gemma model with the following command, taking the 7b model as an example:
+After installing Ollama, you can install the Google Gemma model with the following command, taking the 7b model as an example:

-```bash
-ollama pull gemma
-```
+```bash
+ollama pull gemma
+```

-<Image
-  alt={'Pulling the Gemma model with Ollama'}
-  height={473}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'}
-  width={791}
-/>
+<Image alt={'Pulling the Gemma model with Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/7049a811-a08b-45d3-8491-970f579c2ebd'} width={791} />

-### Select the Gemma model
+### Select the Gemma model

-On the conversation page, open the model selection panel and choose the Gemma model.
+On the conversation page, open the model selection panel and choose the Gemma model.

-<Image
-  alt={'Selecting the Gemma model in the model selection panel'}
-  height={629}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'}
-  width={791}
-/>
+<Image alt={'Selecting the Gemma model in the model selection panel'} height={629} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/69414c79-642e-4323-9641-bfa43a74fcc8'} width={791} />

 <Callout type={'info'}>
 If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
 with Ollama](/zh/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
-
-</Callout>
+</Callout>
 </Steps>

 Next, you can chat with the local Gemma model in LobeChat.
package/docs/usage/providers/ollama/qwen.mdx

@@ -11,11 +11,7 @@ tags:

 # Using the Local Qwen Model

-<Image
-  alt={'Using Qwen in LobeChat'}
-  cover
-  src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'}
-/>
+<Image alt={'Using Qwen in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'} />

 [Qwen](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially defined as a constantly evolving AI large model, and it achieves more accurate Chinese recognition capabilities through more training set content.

@@ -26,44 +22,33 @@ Now, through the integration of LobeChat and [Ollama](https://ollama.com/), you
 <Steps>
 ## Local Installation of Ollama

-First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).
+First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/docs/usage/providers/ollama).

-## Pull the Qwen Model to Local with Ollama
+## Pull the Qwen Model to Local with Ollama

-After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:
+After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:

-```bash
-ollama pull qwen:14b
-```
+```bash
+ollama pull qwen:14b
+```

-<Callout type={'info'}>
-
-
-
-</Callout>
+<Callout type={'info'}>
+The local version of Qwen provides different model sizes to choose from. Please refer to the
+[Qwen's Ollama integration page](https://ollama.com/library/qwen) to understand how to choose the
+model size.
+</Callout>

-<Image
-  alt={'Use Ollama Pull Qwen Model'}
-  height={473}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'}
-/>
+<Image alt={'Use Ollama Pull Qwen Model'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

-### Select the Qwen Model
+### Select the Qwen Model

-In the LobeChat conversation page, open the model selection panel, and then select the Qwen model.
+In the LobeChat conversation page, open the model selection panel, and then select the Qwen model.

-<Image
-  alt={'Choose Qwen Model'}
-  height={430}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'}
-/>
+<Image alt={'Choose Qwen Model'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

 <Callout type={'info'}>
 If you do not see the Ollama provider in the model selection panel, please refer to [Integration with Ollama](/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
-
-</Callout>
+</Callout>
 </Steps>

 Next, you can have a conversation with the local Qwen model in LobeChat.
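The callout added in this hunk points out that Qwen ships in several sizes; a sketch of picking a smaller tag when `qwen:14b` exceeds local memory (tag names as listed on the Ollama library page linked above):

```bash
# pull a smaller Qwen variant for machines with limited RAM/VRAM
ollama pull qwen:7b

# confirm which tags are installed locally
ollama list
```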
package/docs/usage/providers/ollama/qwen.zh-CN.mdx

@@ -11,11 +11,7 @@ tags:

 # Using the Local Tongyi Qianwen (Qwen) Model

-<Image
-  alt={'Using Qwen in LobeChat'}
-  cover
-  src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'}
-/>
+<Image alt={'Using Qwen in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/b4a01219-e7b1-48a0-888c-f0271b18e3a6'} />

 [Tongyi Qianwen (Qwen)](https://github.com/QwenLM/Qwen1.5) is a large language model (LLM) open-sourced by Alibaba Cloud. It is officially described as a constantly evolving AI large model that achieves more accurate Chinese recognition through additional training data.

@@ -28,39 +24,28 @@ tags:
 <Steps>
 ### Install Ollama locally

-First, you need to install Ollama. For the installation process, please see the [Ollama usage documentation](/zh/docs/usage/providers/ollama).
+First, you need to install Ollama. For the installation process, please see the [Ollama usage documentation](/zh/docs/usage/providers/ollama).

-### Pull the Qwen model locally with Ollama
+### Pull the Qwen model locally with Ollama

-After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:
+After installing Ollama, you can install the Qwen model with the following command, taking the 14b model as an example:

-```bash
-ollama pull qwen:14b
-```
+```bash
+ollama pull qwen:14b
+```

-<Image
-  alt={'Pulling the Qwen model with Ollama'}
-  height={473}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'}
-/>
+<Image alt={'Pulling the Qwen model with Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

-### Select the Qwen model
+### Select the Qwen model

-On the conversation page, open the model selection panel and choose the Qwen model.
+On the conversation page, open the model selection panel and choose the Qwen model.

-<Image
-  alt={'Selecting the Qwen model in the model selection panel'}
-  height={430}
-  inStep
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'}
-/>
+<Image alt={'Selecting the Qwen model in the model selection panel'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

 <Callout type={'info'}>
 If you do not see the Ollama provider in the model selection panel, please refer to [Integrating
 with Ollama](/zh/docs/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
-
-</Callout>
+</Callout>
 </Steps>

 Next, you can chat with the local Qwen model in LobeChat.
package/docs/usage/providers/ollama.mdx

@@ -13,151 +13,130 @@ tags:

 # Using Ollama in LobeChat

-<Image
-  alt={'Using Ollama in LobeChat'}
-  borderless
-  cover
-  src={'https://github.com/lobehub/lobe-chat/assets/17870709/f579b39b-e771-402c-a1d1-620e57a10c75'}
-/>
+<Image alt={'Using Ollama in LobeChat'} borderless cover src={'https://github.com/lobehub/lobe-chat/assets/17870709/f579b39b-e771-402c-a1d1-620e57a10c75'} />

 Ollama is a powerful framework for running large language models (LLMs) locally, supporting various language models including Llama 2, Mistral, and more. Now, LobeChat supports integration with Ollama, meaning you can easily enhance your application by using the language models provided by Ollama in LobeChat.

 This document will guide you on how to use Ollama in LobeChat:

-<Video
-  alt="demonstration of using Ollama in LobeChat"
-  height={580}
-  src="https://github.com/lobehub/lobe-chat/assets/28616219/c32b56db-c6a1-4876-9bc3-acbd37ec0c0c"
-/>
+<Video alt="demonstration of using Ollama in LobeChat" height={580} src="https://github.com/lobehub/lobe-chat/assets/28616219/c32b56db-c6a1-4876-9bc3-acbd37ec0c0c" />

 ## Using Ollama on macOS

 <Steps>
+### Local Installation of Ollama

-
+[Download Ollama for macOS](https://ollama.com/download?utm_source=lobehub\&utm_medium=docs\&utm_campaign=download-macos) and unzip/install it.

-
+### Configure Ollama for Cross-Origin Access

-
+Due to Ollama's default configuration, which restricts access to local only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. Use `launchctl` to set the environment variable:

-
+```bash
+launchctl setenv OLLAMA_ORIGINS "*"
+```

-
-launchctl setenv OLLAMA_ORIGINS "*"
-```
-
-After setting up, restart the Ollama application.
+After setting up, restart the Ollama application.

-### Conversing with Local Large Models in LobeChat
+### Conversing with Local Large Models in LobeChat

-Now, you can start conversing with the local LLM in LobeChat.
-
-<Image
-  alt="Chat with llama3 in LobeChat"
-  height="573"
-  src="https://github.com/lobehub/lobe-chat/assets/28616219/7f9a9a9f-fd91-4f59-aac9-3f26c6d49a1e"
-/>
+Now, you can start conversing with the local LLM in LobeChat.

+<Image alt="Chat with llama3 in LobeChat" height="573" src="https://github.com/lobehub/lobe-chat/assets/28616219/7f9a9a9f-fd91-4f59-aac9-3f26c6d49a1e" />
 </Steps>

 ## Using Ollama on Windows

 <Steps>
+### Local Installation of Ollama

-
-
-[Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub&utm_medium=docs&utm_campaign=download-windows) and install it.
-
-### Configure Ollama for Cross-Origin Access
+[Download Ollama for Windows](https://ollama.com/download?utm_source=lobehub\&utm_medium=docs\&utm_campaign=download-windows) and install it.

-
+### Configure Ollama for Cross-Origin Access

-
+Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.

-
-2. Edit system environment variables from the Control Panel.
-3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
-4. Click `OK/Apply` to save and restart the system.
-5. Run `Ollama` again.
+On Windows, Ollama inherits your user and system environment variables.

-
+1. First, exit the Ollama program by clicking on it in the Windows taskbar.
+2. Edit system environment variables from the Control Panel.
+3. Edit or create the Ollama environment variable `OLLAMA_ORIGINS` for your user account, setting the value to `*`.
+4. Click `OK/Apply` to save and restart the system.
+5. Run `Ollama` again.

-
+### Conversing with Local Large Models in LobeChat

+Now, you can start conversing with the local LLM in LobeChat.
 </Steps>

 ## Using Ollama on Linux

 <Steps>
+### Local Installation of Ollama

-
+Install using the following command:

-
+```bash
+curl -fsSL https://ollama.com/install.sh | sh
+```

-
-curl -fsSL https://ollama.com/install.sh | sh
-```
-
-Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).
+Alternatively, you can refer to the [Linux manual installation guide](https://github.com/ollama/ollama/blob/main/docs/linux.md).

-### Configure Ollama for Cross-Origin Access
+### Configure Ollama for Cross-Origin Access

-Due to Ollama's default configuration, which allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:
+Due to Ollama's default configuration, which allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is required for cross-origin access and port listening. If Ollama runs as a systemd service, use `systemctl` to set the environment variable:

-1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:
-
-```bash
-sudo systemctl edit ollama.service
-```
+1. Edit the systemd service by calling `sudo systemctl edit ollama.service`:

-
+```bash
+sudo systemctl edit ollama.service
+```

-
-[Service]
-Environment="OLLAMA_HOST=0.0.0.0"
-Environment="OLLAMA_ORIGINS=*"
-```
+2. Add `Environment` under `[Service]` for each environment variable:

-
-
+```bash
+[Service]
+Environment="OLLAMA_HOST=0.0.0.0"
+Environment="OLLAMA_ORIGINS=*"
+```

-
-sudo systemctl restart ollama
-```
+3. Save and exit.
+4. Reload `systemd` and restart Ollama:

-
+```bash
+sudo systemctl daemon-reload
+sudo systemctl restart ollama
+```

-
+### Conversing with Local Large Models in LobeChat

+Now, you can start conversing with the local LLM in LobeChat.
 </Steps>

 ## Deploying Ollama using Docker

 <Steps>
+### Pulling Ollama Image

-
+If you prefer using Docker, Ollama provides an official Docker image that you can pull using the following command:

-
+```bash
+docker pull ollama/ollama
+```

-
-docker pull ollama/ollama
-```
-
-### Configure Ollama for Cross-Origin Access
-
-Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.
+### Configure Ollama for Cross-Origin Access

-
+Since Ollama's default configuration allows local access only, additional environment variable setting `OLLAMA_ORIGINS` is needed for cross-origin access and port listening.

-
-docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
-```
+If Ollama runs as a Docker container, you can add the environment variable to the `docker run` command.

-
+```bash
+docker run -d --gpus=all -v ollama:/root/.ollama -e OLLAMA_ORIGINS="*" -p 11434:11434 --name ollama ollama/ollama
+```

-
+### Conversing with Local Large Models in LobeChat

+Now, you can start conversing with the local LLM in LobeChat.
 </Steps>

 ## Installing Ollama Models
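Each cross-origin variant in the hunk above (macOS `launchctl`, Windows environment variables, Linux systemd, Docker `-e`) can be checked the same way after Ollama restarts; a sketch, assuming the default port 11434 and `OLLAMA_ORIGINS="*"`:

```bash
# confirm the server is reachable; Ollama replies "Ollama is running"
curl http://localhost:11434

# simulate a browser request from another origin; with OLLAMA_ORIGINS="*"
# this should return the local model list instead of a 403
curl -H "Origin: https://example.com" http://localhost:11434/api/tags
```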
@@ -168,11 +147,7 @@ Ollama supports various models, which you can view in the [Ollama Library](https

 In LobeChat, we have enabled some common large language models by default, such as llama3, Gemma, Mistral, etc. When you select a model for conversation, we will prompt you to download that model.

-<Image
-  alt="LobeChat guide your to install Ollama model"
-  height="460"
-  src="https://github.com/lobehub/lobe-chat/assets/28616219/4e81decc-776c-43b8-9a54-dfb43e9f601a"
-/>
+<Image alt="LobeChat guide your to install Ollama model" height="460" src="https://github.com/lobehub/lobe-chat/assets/28616219/4e81decc-776c-43b8-9a54-dfb43e9f601a" />

 Once downloaded, you can start conversing.

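Besides the in-app download prompt described above (and the `ollama pull` CLI shown in the next hunk), models can also be fetched through Ollama's REST API; a sketch against the default endpoint:

```bash
# ask the Ollama server to download a model; progress streams back as JSON lines
curl http://localhost:11434/api/pull -d '{"name": "llama3"}'
```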
@@ -184,20 +159,13 @@ Alternatively, you can install models by executing the following command in the
 ollama pull llama3
 ```

-<Video
-  height="524"
-  src="https://github.com/lobehub/lobe-chat/assets/28616219/95828c11-0ae5-4dfa-84ed-854124e927a6"
-/>
+<Video height="524" src="https://github.com/lobehub/lobe-chat/assets/28616219/95828c11-0ae5-4dfa-84ed-854124e927a6" />

 ## Custom Configuration

 You can find Ollama's configuration options in `Settings` -> `Language Models`, where you can configure Ollama's proxy, model names, etc.

-<Image
-  alt={'Ollama Provider Settings'}
-  height={274}
-  src={'https://github.com/lobehub/lobe-chat/assets/28616219/54b3696b-5b13-4761-8c1b-1e664867b2dd'}
-/>
+<Image alt={'Ollama Provider Settings'} height={274} src={'https://github.com/lobehub/lobe-chat/assets/28616219/54b3696b-5b13-4761-8c1b-1e664867b2dd'} />

 <Callout type={'info'}>
 Visit [Integrating with Ollama](/docs/self-hosting/examples/ollama) to learn how to deploy
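For self-hosted deployments, the proxy address mentioned in the custom-configuration section can also be preset via LobeChat's `OLLAMA_PROXY_URL` environment variable; a sketch, assuming the official Docker image and an Ollama instance on the Docker host:

```bash
# point a self-hosted LobeChat at an Ollama server running on the host machine
docker run -d -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://host.docker.internal:11434 \
  lobehub/lobe-chat
```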
|